
Automate the four stages of a growth experiment; keep humans on alignment

By Amol Avasare · Head of Growth, Anthropic (ex-Mercury, MasterClass) · 2026-04-05 · podcast · Anthropic growth, CASH, and the squeezed PM

Tier A · TL;DR
Automate the four stages of a growth experiment; keep humans on alignment

Claim

Run growth experimentation through a four-stage substrate driven by Claude: identify the opportunity, build the change, test against a quality + brand bar, ship and analyze. The fifth stage (cross-functional alignment) stays human, and that is the lasting bottleneck.

Mechanism

Most growth experimentation consists of loosely coupled steps that already have rich playbooks: ideation, implementation, QA, analysis. A capable model can drive each step end-to-end against a written brand and quality bar, with current win rates around that of a "junior PM 2–3 years in." The expensive human input is no longer building the experiment; it is the political and aesthetic work of getting six people in a room to agree on what to ship.

Conditions

Holds when:

The quality and brand bar is codified in writing, model capability is at or above the Opus 4.5 level that made the substrate viable, and the org has bought in to model-driven shipping.

Fails when:

The brand bar is immature or unwritten, model access is missing, or org buy-in is absent; automating stages 1–4 then produces fast slop.

Evidence

"Identify opportunities → build the feature → test against quality + brand bar → ship + analyze."

"We will have AGI and it will still be impossible to get six people in a room to align."

The team is led by Alexey Komissarouk inside Anthropic. Today's win rate is described as that of a "junior PM 2–3 years in." The substrate wasn't viable before Opus 4.5; it is now. The need for human-in-the-loop review is decreasing weekly.

· Amol Avasare on Lenny's Podcast, 2026-04-05

Signals

Counter-evidence

Operators outside frontier labs may not have the brand-bar maturity, the model access, or the org buy-in to run this. Without the codified guardrails, automating stages 1–4 produces fast slop. The win is not in the automation; it is in the prerequisite of having explicit quality definitions.

Cross-references
