Most B2B marketing leaders think about content in terms of volume and topics. Fewer think about it in terms of buyer questions answered. That distinction turns out to matter more than almost any other factor in AI visibility.
What the Threshold Is
The 100-question threshold is a diagnostic benchmark TPG uses to assess AI content readiness. It measures how many specific buyer questions a company has answered directly in publicly accessible, ungated content. Not brand narratives, not category overviews, not thought leadership essays. Direct answers to the questions a buyer types into ChatGPT or Perplexity.
Companies that have answered 100 or more buyer questions in structured, ungated content consistently outperform companies below that threshold on AXO diagnostic scores. The relationship is not perfectly linear, but it is consistent across industries and company sizes.
Companies below 20 answered questions almost always score below 30 on AXO. Companies above 80 answered questions rarely score below 50.
Why This Metric and Not Others
Content volume doesn't predict AI visibility. Publishing frequency doesn't predict it. Domain authority doesn't predict it. The signal that predicts AI citation is whether a company has published a direct, specific, retrievable answer to the question a buyer is asking.
AI tools synthesize answers by finding the most directly relevant, clearly structured source for a given query. A buyer question matched against a direct published answer is the highest-probability path to citation. A buyer question matched against a general brand essay is a near-zero probability path.
How to Map Your 100 Questions
The 100-question audit is not creative work. It's structural.
Start with your buyer personas. If you have three major buyer personas, each asking roughly ten to fifteen distinct questions, that is 30 to 40 questions to map before you even get to the journey-stage dimension.
For each persona, identify: What problem are they trying to solve? What information do they need to justify a purchase to their board? What concerns do they have about vendors in your category? What comparisons are they running? What does implementation look like for them?
Then layer in journey stage. An unaware buyer asks different questions than a buyer in active evaluation. A buyer who just saw you on a comparison list asks different questions than one who came through a referral.
Map that matrix. You'll find you have somewhere between 80 and 150 meaningful questions. Your goal is to have a direct, specific, ungated answer published for each of them.
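If it helps to make the mapping concrete, here is a minimal sketch of that matrix as a simple coverage tracker. The personas, journey stages, question categories, and example questions below are hypothetical placeholders, not TPG's taxonomy; the point is that the audit is a grid you can enumerate and count against the 100-question threshold.

```python
from itertools import product

# Hypothetical personas, journey stages, and question categories.
# Substitute your own; these names are illustrative, not TPG's taxonomy.
PERSONAS = ["CMO", "Demand Gen Director", "Marketing Ops Lead"]
STAGES = ["unaware", "problem-aware", "active evaluation", "post-shortlist"]
CATEGORIES = ["problem", "board justification", "vendor concerns",
              "comparisons", "implementation"]

# Each cell of the matrix holds the specific buyer questions it generates,
# paired with the URL of the direct, ungated answer (or None if unanswered).
matrix = {cell: [] for cell in product(PERSONAS, STAGES, CATEGORIES)}

def add_question(persona, stage, category, question, answer_url=None):
    """Register a buyer question and, if one exists, its published answer."""
    matrix[(persona, stage, category)].append((question, answer_url))

# Example entries (illustrative questions and a placeholder URL).
add_question("CMO", "active evaluation", "comparisons",
             "How does vendor X compare to vendor Y on implementation time?",
             answer_url="https://example.com/x-vs-y-implementation")
add_question("Marketing Ops Lead", "problem-aware", "implementation",
             "How long does a typical marketing automation migration take?")

questions = [q for cell in matrix.values() for q in cell]
answered = sum(1 for _, url in questions if url)
print(f"{len(questions)} questions mapped, {answered} with direct answers")
gap = max(0, 100 - answered)
print(f"{gap} more direct answers needed to reach the 100-question threshold")
```

A tracker like this also makes the gaps visible by persona and stage, which is where prioritization decisions get made.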
What Direct Answers Look Like
The standard is simple but often violated in practice. A direct answer starts by restating the question, answers it in the first paragraph without preamble, includes specific data or examples where available, and is structured for extraction: headers, short paragraphs, FAQ sections.
"At TPG, we believe that marketing and sales alignment is foundational to revenue growth..." is not a direct answer. It is a brand perspective.
"B2B marketing and sales alignment increases pipeline conversion rates by an average of 38% according to a Marketo study of 500 enterprise companies. The three most common alignment failures we see are [specific list]..." is a direct answer.
The AI cites the second one. It skips the first.
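If you want a rough, automated first pass over existing pages, the sketch below encodes those structural criteria as simple heuristics. These checks are illustrative approximations written for this article, not TPG's AXO scoring: they only test whether a page echoes the question early, answers in the first paragraph, includes specifics, and is broken into short, scannable sections.

```python
import re

def looks_like_direct_answer(question: str, page_text: str) -> dict:
    """Rough heuristics for the 'direct answer' standard described above."""
    paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    first_para = paragraphs[0] if paragraphs else ""

    # Does the opening paragraph echo the question's key terms?
    key_terms = {w.lower() for w in re.findall(r"[a-zA-Z]{4,}", question)}
    first_terms = {w.lower() for w in re.findall(r"[a-zA-Z]{4,}", first_para)}
    restates_question = len(key_terms & first_terms) >= max(1, len(key_terms) // 2)

    return {
        "restates_question": restates_question,
        # Answers up front rather than after a long preamble.
        "answers_in_first_paragraph": 0 < len(first_para.split()) <= 120,
        # Specific data: numbers or percentages somewhere on the page.
        "includes_specifics": bool(re.search(r"\d", page_text)),
        # Structured for extraction: multiple short paragraphs or sections.
        "short_paragraphs": all(len(p.split()) <= 150 for p in paragraphs),
        "has_multiple_sections": len(paragraphs) >= 3,
    }

if __name__ == "__main__":
    sample = ("B2B marketing and sales alignment increases pipeline conversion "
              "rates by an average of 38% in one study of enterprise companies.\n\n"
              "The three most common alignment failures are ...\n\n"
              "Implementation typically follows three phases ...")
    print(looks_like_direct_answer(
        "How much does marketing and sales alignment improve pipeline conversion?",
        sample))
```

Heuristics like these are a starting point for auditing pages you already have, not a substitute for reading each answer against the buyer question it is supposed to serve.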
FAQ
How do you find out where you stand against the 100-question threshold? TPG's AXO diagnostic maps your current question coverage by buyer persona and identifies the specific gaps with the highest pipeline impact. Start at pedowitzgroup.com/ai-assessment.