
Competitive analysis playbook

Annotated pipeline · 10 steps across 2 phases

The Competitive Intel System

Synthesizes the CIA 4-phase cycle, Crayon's battlecard structure, and Dunford's positioning research. Output is a competitive package that helps sales win, product prioritize, and marketing sharpen positioning. The quality bar is not comprehensiveness. It is whether the analysis changes a decision within 30 days of delivery.

Steps

01 Scope the landscape. Anchor on a business question, not a competitor list.

Before you gather anything, define what decision this analysis needs to inform. CI without a business question is data collection. It is not intelligence.

Ask: which three decisions does this analysis need to enable in the next 30 days? Typical examples: which competitor to name-drop in demos, how to handle a specific objection, whether to compete or avoid in enterprise deals. If a section of the analysis does not connect to one of those three decisions, cut it. State the questions at the top of the deliverable and check against them at the end.

Then tier your competitors by revenue impact, not by marketing noise. Pull three months of CRM data on competitive mentions in won and lost deals. Ask sales which five competitors come up most. Ask prospects during interviews who else they evaluated. Cross-reference.

Competitors appearing in all three sources are Tier 1. A competitor that is loud on LinkedIn but rarely surfaces in actual deals is Tier 2 or Tier 3.
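The cross-reference above can be sketched as a small script. This is a hypothetical illustration, not a real tool: the competitor names and source lists are invented placeholders.

```python
# Hypothetical sketch of the three-source tiering cross-reference:
# a competitor named in all three sources is Tier 1, in two sources
# Tier 2, in one source Tier 3. All names below are placeholders.

def tier_competitors(crm_mentions, sales_top_mentions, prospect_mentions):
    sources = [set(crm_mentions), set(sales_top_mentions), set(prospect_mentions)]
    tiers = {}
    for name in set().union(*sources):
        hits = sum(name in s for s in sources)
        tiers[name] = {3: "Tier 1", 2: "Tier 2"}.get(hits, "Tier 3")
    # "Do nothing" is not a vendor, but it belongs on the list.
    tiers["Status quo"] = "Track in win/loss"
    return tiers

tiers = tier_competitors(
    crm_mentions=["AcmeCo", "Initech", "Globex"],
    sales_top_mentions=["AcmeCo", "Globex", "Umbrella"],
    prospect_mentions=["AcmeCo", "Initech"],
)
```

The point of the sketch is the rule, not the code: tier assignment is mechanical once the three source lists exist; the judgment is in gathering them.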

One thing most competitive tier lists miss: include the status quo. When you survey buyers who did not buy, 40 to 60% report they made no purchase decision at all. They were not comparing you to a competitor. They were not convinced the problem was worth solving.

"When you survey buyers who didn't buy, 40–60% say they made no purchase decision at all."

— April Dunford, Lenny's Podcast, 2026-04-28


That means "do nothing" belongs in your competitive tier list. Not as a named competitor, but as a real option the buyer weighs. The analysis must account for it.

Tiering model:

| Tier | Definition | Action |
|---|---|---|
| Tier 1 | Appear in 80% of competitive deals. | Full profile, battlecard, ongoing monitoring. |
| Tier 2 | Noisy but low impact. Occasional deals. | Brief profile, basic talking points, periodic check. |
| Tier 3 | Niche. Rarely encountered. | One paragraph. Revisit if frequency increases. |
| Status quo | "We'll handle it manually." "We'll wait." | Frame in win/loss analysis. Inform the pitch setup. |

02 Centralize intel that already exists inside your organization.

Before conducting external research, harvest what your team already has. People are gathering competitive intelligence without naming it that. They just do not share it anywhere useful.

Sales reps hear objections, competitor name-drops, and buyer-side reframes every week. Customer success hears why customers switched from competitors and what competitor features customers now ask about. Product has a rough landscape view. CRM has deal notes with competitor mentions. Most organizations have this scattered across Slack threads, deal notes, and memory.

Your first step is to put it in one place.

The most important source is the sales floor. Reps are the leading indicator for positioning problems. They hear the language of failure months before any dashboard reflects it.

"Your sales team knows months before anyone else when a position is failing."

— April Dunford, LinkedIn (April 2026)


When you centralize this intel, you are also building the feedback loop you will need to keep analysis current. Set up a competitive Slack channel now. The channel does two things: it is where you publish insights going forward, and it is where reps drop what they hear. Neither works without the other.

Interview 3 to 5 top-performing reps specifically about competitive deals. Ask: which competitors come up most? Where do you feel least confident? What's the objection you hear that you cannot answer well? The answer to the third question is where to invest first.

03 Research competitors externally. Know what you are looking for before you start.

Primary research means hands-on. Sign up for free trials of Tier-1 competitors. Not to steal UX. To experience the promise they make to new users on day one and whether the product delivers it. A gap between the demo and the trial experience is a sales angle.

Then go where competitors talk to buyers instead of selling to them. G2, Capterra, TrustRadius. Read the 2- and 3-star reviews specifically. Those contain the specific objections your sales team will face and the specific promises the competitor made that fell short. One negative review with detail is worth ten testimonials.

Track reviews by source. G2 reviews, support forums, and field rep reports all have different credibility and utility. Keep them separate.

Secondary research amplifies primary. Job postings reveal capabilities competitors are building in the next 12 to 18 months. A cluster of three data-engineering hires signals a BI play before any announcement. BuiltWith and Wappalyzer expose the tech stack if you are doing displacement sales. Analyst reports give you the category language they are adopting.

Stop when you have one concrete answer per dimension in your differentiation matrix. Research hits diminishing returns early; when it does, move to analysis. Partial intel shared fast is more useful than complete intel shared late.

04 Analyze win/loss. CRM data tells you who. Interviews tell you why.

Pull competitive deal data from the CRM. Calculate three numbers: your head-to-head win rate against each Tier-1 competitor, how frequently each competitor appears in your pipeline, and how much of your total pipeline is genuinely competitive. These tell you where to focus and where to investigate.
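The three numbers can be sketched from an exported deal list. This is a hypothetical shape, not a CRM integration: the field names (`competitor`, `outcome`) and the deals are invented; adapt them to your export.

```python
# Hypothetical sketch of the three CRM numbers: per-competitor
# head-to-head win rate, pipeline frequency, and the share of total
# pipeline that is competitive. Field names are placeholders.
from collections import defaultdict

def competitive_metrics(deals):
    wins, appearances = defaultdict(int), defaultdict(int)
    for deal in deals:
        comp = deal["competitor"]
        if comp is not None:
            appearances[comp] += 1
            if deal["outcome"] == "won":
                wins[comp] += 1
    per_competitor = {
        c: {
            "head_to_head_win_rate": wins[c] / appearances[c],
            "pipeline_frequency": appearances[c] / len(deals),
        }
        for c in appearances
    }
    competitive_share = sum(appearances.values()) / len(deals)
    return per_competitor, competitive_share

deals = [
    {"competitor": "AcmeCo", "outcome": "won"},
    {"competitor": "AcmeCo", "outcome": "lost"},
    {"competitor": "Initech", "outcome": "lost"},
    {"competitor": None, "outcome": "won"},  # non-competitive deal
]
per_competitor, competitive_share = competitive_metrics(deals)
```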

But CRM data alone is not win/loss analysis. Disposition codes tell you who you lost to. Interviews tell you why. And the why often surprises you.

Interview 5 to 10 recent prospects who evaluated you alongside a competitor. Include both won and lost deals. The won deals matter: customers who chose you will tell you what they liked about the competitor during evaluation, what almost changed their mind, and what the competitor said that fell flat. That is competitive intelligence.

The most important question to ask in a loss interview: did they make a purchase decision at all? If they say "we decided to wait" or "we decided to handle it differently," that is a status-quo loss, not a competitive loss. The positioning and the pitch fix are different in each case. Misattributing status-quo losses to competitor wins leads to the wrong remedies.

Separate your analysis: competitive losses (they chose Competitor X) vs. status-quo losses (they chose nothing). Invest in different responses to each.
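The split is simple to operationalize once interviews record whether the buyer chose anyone at all. A minimal sketch, with invented interview records:

```python
# Hypothetical sketch of the loss split: a loss counts as competitive
# only if the buyer actually chose a vendor; "we decided to wait" or
# "we'll handle it manually" is a status-quo loss.

def split_losses(loss_interviews):
    competitive = [i for i in loss_interviews if i["chose"] is not None]
    status_quo = [i for i in loss_interviews if i["chose"] is None]
    return competitive, status_quo, len(status_quo) / len(loss_interviews)

losses = [
    {"deal": "D-101", "chose": "AcmeCo"},
    {"deal": "D-102", "chose": None},  # "we decided to wait"
    {"deal": "D-103", "chose": None},  # "we'll handle it in spreadsheets"
    {"deal": "D-104", "chose": "Initech"},
]
competitive, status_quo, status_quo_rate = split_losses(losses)
```

The status-quo rate is one of the deliverables (output 6), so it is worth computing explicitly rather than leaving it implicit in disposition codes.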

05 Build competitor profiles. Honest about strengths. Specific at the capability level.

For each Tier-1 competitor, build a profile that answers: what do they do, who do they serve, where do they win, and where do they lose. The profile is your internal analytical document. It is not the same as a battlecard.

The most dangerous failure in a competitor profile is dishonesty about their strengths. One wrong claim about a competitor's pricing or capability permanently breaks seller trust in everything you produce.

"All it takes is one piece of bad intel to lose the trust of your sellers." — Crayon battlecard research

Acknowledge competitor strengths honestly. The stronger your concession, the more your differentiation claims carry weight. This is not a positioning strategy. It is an epistemic practice. Your salespeople will fact-check you in live conversations.

Use Ayo Omojola's three-check filter on every differentiator you plan to include:

  1. Is it actually different from what competitors offer?
  2. Is it actually better on a measurable axis?
  3. Does it matter viscerally to the buyer's job?

"Being different is not enough. Being better is not enough. It has to be better in a way that matters to the end user."

— Ayo Omojola, Lenny's Podcast, 2026-04-28


A property that passes the first two checks but not the third is a feature comparison, not a differentiator. Drop it from the profile's attack angles and note it separately.
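The three-check filter is effectively a conjunction: fail any check and the claim is parked, not shipped. A minimal sketch, with invented candidate claims:

```python
# Hypothetical sketch of the three-check filter: a claim must be
# different AND measurably better AND matter to the buyer to count as
# a differentiator; otherwise it is parked as a feature comparison.

def apply_three_checks(candidates):
    differentiators, feature_comparisons = [], []
    for c in candidates:
        if c["different"] and c["better"] and c["matters_to_buyer"]:
            differentiators.append(c["claim"])
        else:
            feature_comparisons.append(c["claim"])
    return differentiators, feature_comparisons

candidates = [
    {"claim": "Live in one afternoon vs. a six-week implementation",
     "different": True, "better": True, "matters_to_buyer": True},
    {"claim": "Supports more export formats",
     "different": True, "better": True, "matters_to_buyer": False},
]
differentiators, feature_comparisons = apply_three_checks(candidates)
```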

06 Build a differentiation matrix. Use buyer criteria, not feature inventories.

The dimensions in your matrix must come from win/loss interviews and sales conversations. Not from your product team's feature list. Ask buyers: what were the top three factors in your decision? Those are your dimensions.

A 50-row feature comparison grid answers a question buyers are not asking. They are asking whether your product handles their specific situation better than the alternatives they are considering. Features are evidence for that claim. They are not the claim itself.

Limit to 6 to 8 dimensions. More than that and the matrix becomes a checklist nobody uses. For each dimension, include a proof point: a customer quote, a specific metric, a deal reference. Without evidence, the matrix is claims, not analysis.
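Both constraints (dimension count, proof per dimension) are checkable before the matrix ships. A hypothetical lint sketch; the matrix contents below are invented:

```python
# Hypothetical lint for the differentiation matrix: 6 to 8 buyer
# dimensions, each claim backed by a proof point. Contents invented.

def lint_matrix(matrix):
    problems = []
    if not 6 <= len(matrix) <= 8:
        problems.append(f"{len(matrix)} dimensions; aim for 6 to 8")
    for dimension, row in matrix.items():
        if not row.get("proof"):
            problems.append(f"'{dimension}' has a claim but no proof point")
    return problems

matrix = {
    "Integration depth": {"claim": "Native two-way sync",
                          "proof": "Customer quote, deal D-231"},
    "Time to value": {"claim": "Live in one afternoon", "proof": ""},
}
problems = lint_matrix(matrix)
```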

Gartner's sameness research documents a specific failure mode here: buyers default to perceiving comparable features across competing products. You have to test that your differentiators actually read as differentiated to external audiences, not just to your team.

"Buyer skepticism and the perception of 'sameness' will limit differentiation efforts if product marketers do not test their differentiators with external audiences."

— Gartner, Market Guide for B2B Message Testing Solutions (G00823537), 2025


Before the matrix ships, run at least one external test: a prospect conversation, a quick panel, or a 5-second test on the homepage. If the differentiators land internally but not externally, you have positioned for yourselves, not for buyers.

07 Produce a SWOT per Tier-1 competitor and for the landscape overall.

The per-competitor SWOT surfaces where you win, where you lose, and where market shifts create opportunity or risk. The landscape SWOT finds the common denominators: which weaknesses multiple competitors exploit, which trends strengthen competitors collectively.

Keep each SWOT tight. Strengths and weaknesses are internal (relative to the competitor). Opportunities and threats are external (market movements). The most useful SWOT entries are specific enough to act on. "Our integration story is weak vs. Competitor X among RevOps buyers" is actionable. "Integration is a weakness" is not.

08 Build battlecards. Curate inputs, don't just author documents.

Battlecards are the most important competitive deliverable. They are not the competitor profile. The profile is your analytical document. The battlecard is the seller-facing summary designed for use in a live deal.

The traditional model treats battlecards as documents: a PMM writes one, a sales rep reads it, it goes stale. The better model treats battlecards as curated intel that updates when the inputs update.

"AI can transform static battle cards into dynamic tools that provide real-time competitive insights directly within seller workflows. Traditional competitive battle cards, often stored in digital content repositories, quickly become obsolete in dynamic markets."

— Gartner, Innovation Insight: Rethinking Battle Cards in the Age of AI (G00832921), 2025


The practical implication: your job as the PMM shifts from authoring the document to curating the inputs. The inputs are win/loss interviews, competitor product updates, and field rep reports. When those update, the battlecard updates. Not quarterly. On event.

A minimum viable battlecard has three sections:

Why we win. Three differentiators, each with a customer story or a deal reference. Validated by top-performing sellers before distribution. "We believe" is weaker than "We heard from customers."

Competitor strengths with responses. The real strengths. Not the ones your team is comfortable acknowledging. Include copy-paste language sellers can use verbatim. Acknowledge the strength, then pivot: "You're right that Competitor X is stronger on A. The question is whether A matters more than B for your situation."

Landmines. Two to three questions that expose competitor gaps. Framed as questions the buyer should ask the competitor in their evaluation. "Ask them about [specific scenario] and watch what happens" is more useful than "their X feature is weak."

Three rules: Accurate (one wrong claim ends trust permanently). Brief (sellers will not read long cards in a deal). Consistent (same format across all competitors so reps can navigate fast).
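Two of the three rules are mechanically checkable: consistency (every card has the same three sections) and brevity (a rough word cap). Accuracy still requires a human reviewer. A hypothetical lint sketch; the section names and cap are invented conventions:

```python
# Hypothetical battlecard lint: "consistent" = the same three sections
# on every card, "brief" = a rough word ceiling. "Accurate" cannot be
# linted; it needs a reviewer. Section names and cap are invented.

REQUIRED_SECTIONS = ("why_we_win", "strengths_with_responses", "landmines")
MAX_WORDS = 400  # arbitrary ceiling; tune to what sellers actually read

def lint_battlecard(card):
    issues = [f"missing section: {s}" for s in REQUIRED_SECTIONS
              if s not in card]
    word_count = sum(len(str(text).split()) for text in card.values())
    if word_count > MAX_WORDS:
        issues.append(f"{word_count} words; trim below {MAX_WORDS}")
    return issues

card = {
    "why_we_win": "Three differentiators, each with a deal reference.",
    "strengths_with_responses": "They are stronger on reporting; pivot to depth.",
}
issues = lint_battlecard(card)
```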

09 Write strategic recommendations. Specific enough to act on.

Translate the analysis into one recommendation per audience:

For sales: which competitors to prioritize for depositioning (high win rate, double down), which to investigate (low win rate, find the root cause before investing further). Include specific talk-track adjustments and where to plant landmines.

For product: which capability gaps are appearing in loss reasons. Which investments competitors are making based on job postings and product updates.

For marketing: positioning adjustments based on competitive shifts. Comparison page opportunities. Content gaps the competition is filling.

"Improve competitive messaging" is not a recommendation. "Reposition against Competitor X on integration depth, using these three proof points, because that dimension drives 40% of loss reasons in enterprise deals" is.

10 Enable and distribute. Build the rep-input loop, not just the distribution deck.

A battlecard nobody opens is the same as no battlecard. Embed in CRM at the opportunity level when the competitor is tagged. Bookmark in the competitive Slack channel. Brief the team live, with roleplay, before publishing.

The more important step is building the feedback loop going forward. Reps hear positioning failures in real time. Build a channel for that signal to reach you.

Set up a simple ritual: weekly competitive objection log. Reps paste one objection or one competitor claim they heard that week. You review Friday, update the battlecard if needed. That loop catches positioning drift months before any pipeline metric shows it.

Define the refresh cadence before you ship:

| Activity | Frequency |
|---|---|
| Competitor website and product check | Weekly |
| Battlecard refresh | Monthly or on event |
| Win/loss interview batch | Quarterly |
| Full competitive analysis update | Quarterly |
| Tier re-evaluation | Semi-annually |

The calendar cadence is a floor, not the trigger. Real battlecard refreshes happen event-driven: a competitor ships something, a market shift occurs, sales asks for updated intel. If nobody is asking for updates, check adoption first. Stale content is a smaller problem than content nobody uses.

Tiering model

| Tier | Definition | Action | Typical count |
|---|---|---|---|
| Tier 1 | 80% of competitive deals. Sales mentions constantly. | Full profile, battlecard, ongoing monitoring. | 3 to 5 |
| Tier 2 | Noisy but low revenue impact. | Brief profile, basic talking points, periodic check. | 3 to 5 |
| Tier 3 | Niche. Rarely encountered. | One paragraph. Revisit if frequency increases. | All others |

Honest comparison page template

When publishing a competitor-vs-you page, the most credible structure:

  1. Disarm skepticism. Acknowledge you have a stake.
  2. Concede competitor strengths honestly. The more you concede, the more your claims are believed.
  3. Reframe the buying criteria. The criteria themselves are the real argument.
  4. Stack your differentiation under the new criteria.
  5. Unify under a shared problem narrative.

Tone by competitor position: market leader requires maximum deference; direct competitor allows more aggression; adjacent category calls for confident repositioning.

Outputs

  1. Competitor tier list with rationale.
  2. Tier-1 competitor profiles.
  3. Differentiation matrix (6 to 8 buyer-relevant dimensions).
  4. SWOT per Tier-1 competitor and landscape-wide.
  5. Battlecards per Tier-1 competitor.
  6. Win/loss summary with patterns, including status-quo loss rate.
  7. Strategic recommendations per audience (sales, product, marketing).
  8. Monitoring cadence with named owner.