A deterministic diagnostic logic tree for a probabilistic ranking system.
Core Principle
SEO performance issues are very often fixed by bottleneck removal. Every site that fails to rank has a binding constraint — a specific layer it cannot pass. Everything downstream of that constraint is irrelevant until the constraint is cleared.
The clinical method: test layers in order, stop at the first binding constraint, allocate all capital to clearing that constraint before moving downstream. This prevents the most common form of SEO and AEO capital destruction — optimizing Layers 4 through 7 while the site has an unresolved Layer 2 access problem.
The framework is probabilistic in output — rankings are not guaranteed — but the diagnostic is deterministic. The layers are not opinions. They are testable, falsifiable conditions. Either the content passes, or it does not.
60-Minute Triage Reference
Run this before committing to a full diagnostic. If you can clear all checks in 60 minutes without a failure, skip to Layer 7 and Layer 8 — your binding constraint is likely competition or satisfaction.
| Layer | Name | Clinical Question | Quick Check |
|---|---|---|---|
| Pre-layer | Investment Screen | Right channel? Right page type? Right query? | Channel fit → business model classification → Demand × SERP risk × Difficulty |
| Diagnostic Layers | | | |
| Layer 1 | Eligibility | Can we rank at all? | DR gap to SERP median, penalty check, YMYL risk assessment |
| Layer 2 | Access | Can crawlers reach and render it? | Raw HTTP body check, robots.txt audit, crawl coverage report |
| Layer 3 | Representation | Does the page say what it does? | Title / H1 / schema audit, freshness signal audit |
| Layer 4 | Retrieval — Lexical | Does vocabulary match target queries? | Keyword presence in high-weight positions, variant coverage |
| Layer 5 | Retrieval — Semantic | Is the topic model complete? | Entity coverage vs. top-ranking pages, internal link cluster |
| Layer 6 | Retrieval — Generative | Is this AI-retrievable? | AI crawler access, structured answer presence, primary data sourcing |
| Layer 7 | Competition | Can we win the ranking battle? | Authority gap, SERP feature concentration, differentiation gap |
| Layer 8 | Satisfaction | Does the user stay and convert? | Pogo-sticking rate, time-on-page, task completion, return visits |
Does Investing in AEO/SEO Make Sense for Your Business Right Now?
Not a diagnostic layer. This runs before the framework — a strategic filter that determines whether the diagnostic is the right investment at all, and against which pages and queries to apply it. Each screen must clear before the next runs.
Channel Qualification
Is organic search the right acquisition channel for this business?
Evaluate
Is organic search a meaningful acquisition channel for this business model at all? Some B2B and service business models capture demand primarily through referral, directories, or paid channels — organic is not always the right investment.
Failure Signature
A business where organic search is structurally not the right acquisition channel. If this screen fails, neither page type selection nor query-level analysis matters. Stop here.
Page Category Allocation
Are the right page types matched to how demand is structured in this vertical?
Evaluate
- Business model classification: local service, e-commerce, B2B SaaS, publisher — each has a distinct organic acquisition surface and page type requirement.
- Page type alignment: are the proposed content investments matched to how demand is structured in this vertical?
Failure Signature
A local service business building informational content clusters instead of location-service pages — the wrong page type for the business model, regardless of how well those pages are optimized. Wrong page type at Screen 1 makes query-level screening irrelevant.
Query-Level Expected Value
Does the expected return justify the investment required?
Evaluate (in sequence)
1. Realistic rank ceiling — given current domain authority, SERP composition, and incumbent topical authority, what is the best achievable position? If the gap to incumbents is large, the ceiling may be positions 6-8, not 1-3.
2. SERP structure risk — before estimating any return, characterize what an organic position actually yields on this SERP. Is the organic click floor acceptable even in the best-case position? Some SERPs are structurally suppressed — ads, AIO, dominant aggregators, the map pack, and feature units collectively reduce organic share to the point where even position 1 captures marginal traffic. Others remain relatively clean. This is a categorical judgment, not a number: is this a SERP where organic clicks are a realistic outcome, or one where the investment case depends primarily on generative surface inclusion or brand visibility rather than direct click capture?
3. Generative surface potential — separately from traditional CTR, is AIO or LLM citation achievable for this query class? This changes the investment case if traditional CTR is suppressed but generative inclusion is realistic.
4. Commercial conversion likelihood — does the traffic this query class generates convert, or does it attract users with no intent to buy?
5. Expected return vs. effort — combine the above (see the sketch after this list). If everything goes right, does the outcome justify the investment required to get there?
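As a rough illustration of how the five screens combine, the sketch below multiplies demand, click share, and conversion assumptions into a payback estimate. Every number in it is a placeholder to replace with your own research; none are framework constants.

```python
# Illustrative expected-value screen for one query class.
# Every number below is a placeholder assumption, not a framework constant.

monthly_searches = 2400        # demand estimate for the query class
ctr_at_ceiling = 0.04          # organic CTR at the realistic rank ceiling, after SERP-feature suppression
generative_visit_share = 0.01  # extra visit share from AIO/LLM citation, if realistic
conversion_rate = 0.015        # commercial conversion likelihood of this traffic
value_per_conversion = 300     # revenue or pipeline value per conversion
content_investment = 12_000    # estimated cost to produce, maintain, and promote the content

monthly_visits = monthly_searches * (ctr_at_ceiling + generative_visit_share)
monthly_value = monthly_visits * conversion_rate * value_per_conversion
payback_months = content_investment / monthly_value if monthly_value else float("inf")

print(f"Estimated visits/month: {monthly_visits:.0f}")
print(f"Estimated value/month:  ${monthly_value:,.0f}")
print(f"Payback period:         {payback_months:.1f} months")
```

With these placeholder numbers the payback works out to roughly 22 months, which is exactly the kind of result this screen exists to surface before any content is produced.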
Failure Signature
Queries that look achievable but don't pay off. The rank ceiling is reachable; the return isn't — because SERP feature suppression, the authority gap, or low conversion intent makes the expected value insufficient regardless of whether the ranking is actually achieved.
Layer Reference
Each layer is a testable condition with a distinct failure mode. Diagnose in order.
Eligibility
Clinical Question
“Is this site eligible to rank for this class of query at all?”
What to Look For
- Domain authority relative to the SERP composition (see the sketch after this list)
- Algorithmic penalty indicators at the site or category level
- Content quality signals at the domain level (average page quality)
- YMYL classification risk and associated trust threshold requirements
- History of manual actions or devaluation patterns
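A minimal sketch of the authority-gap quick check from the triage table, assuming you have already pulled scores for the top ten results from whichever authority metric you use. All values are placeholders.

```python
from statistics import median

# Authority scores for the current top 10 results, collected manually from
# whichever metric you use (DR, DA, etc.). All values are placeholders.
serp_authority = [78, 74, 71, 69, 66, 64, 61, 58, 55, 52]
our_authority = 34

gap = median(serp_authority) - our_authority
print(f"SERP median authority: {median(serp_authority)}")
print(f"Gap to median:         {gap}")
# A large positive gap suggests a low (or zero) ceiling for this query class;
# re-run the check per query class, not once per domain.
```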
Failure Signature
Publishing optimized content into a domain that is effectively penalized at the query category level. The ceiling is zero, not low. No amount of on-page optimization or link building will move content past a constraint that is closed at the eligibility level.
Layer Note
The eligibility ceiling is categorically different from the competition weight at Layer 7. A site can have no eligibility problem but face high competition weight — those require different interventions. Treating a ceiling problem as a weight problem (building links to a penalized domain) is the most expensive mistake in SEO capital allocation.
Access
Clinical Question
“Can the discovery systems — search crawlers and AI agents — reach, render, and index the content?”
What to Look For
- robots.txt rules for Googlebot, GPTBot, PerplexityBot, and other AI agents
- JavaScript rendering dependency — does the page return meaningful content to a raw HTTP request? (see the sketch after this list)
- Crawl budget allocation across the site architecture
- Core Web Vitals and server response time under crawl conditions
- WAF and bot detection rules inadvertently blocking legitimate crawlers
- Canonical signals and noindex directives
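A crude way to approximate the raw HTTP body check: fetch the page without executing JavaScript and measure how much visible text survives. The URL, user agent, and character threshold are illustrative assumptions, not crawler-documented values.

```python
import re
import requests   # assumes the requests package is installed

# Crude Layer 2 check: does the raw HTML body (no JavaScript execution)
# contain enough visible text to represent the page?
URL = "https://example.com/some-page"   # hypothetical URL
MIN_TEXT_CHARS = 1500                   # illustrative threshold

resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)

# Strip scripts, styles, and tags very roughly to estimate visible text volume.
text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", resp.text, flags=re.S | re.I)
text = re.sub(r"<[^>]+>", " ", text)
visible_chars = len(" ".join(text.split()))

print(f"HTTP status: {resp.status_code}")
print(f"Visible text in raw body: {visible_chars} chars")
if resp.status_code != 200 or visible_chars < MIN_TEXT_CHARS:
    print("Possible Layer 2 failure: content may depend on JS rendering or be blocked.")
```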
Failure Signature
Content that exists in the browser but not in the crawl. JS-rendered pages that return empty or near-empty bodies to crawlers. AI crawlers blocked by default WAF rules configured before the AI crawler wave. This is a Layer 2 failure masquerading as a content or competition problem.
Representation
Clinical Question
“Does the content's structural representation match what the page is actually about?”
What to Look For
- Title tag accuracy — does it reflect the actual primary query intent? (see the audit sketch after this list)
- H1 hierarchy and heading structure clarity
- Schema markup correctness, completeness, and appropriate type selection
- Canonical signals pointing to the correct authoritative URL
- Entity disambiguation — is the page's primary topic unambiguous?
- Freshness signal accuracy — is the content being treated as news when it's evergreen?
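One way to run a quick representation audit is to pull the title, H1s, and declared schema types from the live page and compare them by hand against what the page actually is. A sketch, assuming `requests` and `beautifulsoup4` are available; the URL is hypothetical.

```python
import json
import requests                  # assumes requests is installed
from bs4 import BeautifulSoup    # assumes beautifulsoup4 is installed

URL = "https://example.com/product/widget"   # hypothetical URL

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")

title = soup.title.get_text(strip=True) if soup.title else None
h1s = [h.get_text(strip=True) for h in soup.find_all("h1")]

schema_types = []
for block in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(block.string or "")
    except json.JSONDecodeError:
        continue
    items = data if isinstance(data, list) else [data]
    schema_types += [item.get("@type") for item in items if isinstance(item, dict)]

print("Title:       ", title)
print("H1s:         ", h1s)
print("Schema types:", schema_types)
# Review by hand: does the title describe this page rather than the site?
# Is there exactly one H1? Does the schema type match the page type
# (Product for a product page, not Article)?
```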
Failure Signature
Evergreen content carrying freshness signals from a news CMS stack. Product pages using Article schema. Category pages with no schema at all. Pages whose title tags describe the site rather than the page. The representation mismatch is the most common cause of “we have great content but can't rank” — the content is correctly written but incorrectly labeled.
Retrieval — Lexical
Clinical Question
“Does the page's vocabulary match the vocabulary of the target queries?”
What to Look For
- Keyword presence in high-weight positions: title, H1, opening paragraph (see the sketch after this list)
- Query variant and synonym coverage throughout the document
- Entity surface form alignment with how searchers actually refer to the concept
- Natural language density — is the target vocabulary present without stuffing?
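A lexical spot-check can be as simple as testing whether each target variant appears in the title, H1, and opening paragraph. The URL and variant list below are hypothetical; `requests` and `beautifulsoup4` are assumed installed.

```python
import requests                  # assumes requests is installed
from bs4 import BeautifulSoup    # assumes beautifulsoup4 is installed

URL = "https://example.com/guide"                                   # hypothetical URL
VARIANTS = ["payroll software", "payroll system", "payroll tool"]   # illustrative query variants

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
title = (soup.title.get_text(" ", strip=True) if soup.title else "").lower()
h1 = (soup.h1.get_text(" ", strip=True) if soup.h1 else "").lower()
first_p = (soup.p.get_text(" ", strip=True) if soup.p else "").lower()

positions = [("title", title), ("h1", h1), ("opening paragraph", first_p)]
for variant in VARIANTS:
    hits = [name for name, text in positions if variant in text]
    print(f"{variant!r}: {', '.join(hits) if hits else 'absent from all high-weight positions'}")
```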
Failure Signature
Semantic content that uses concept-level language where searchers use product-specific, brand-specific, or colloquial language. Or the inverse: over-optimized copy that uses unnatural keyword density patterns that modern models deprioritize. Lexical retrieval failure is usually an editing problem, not an architecture problem.
Retrieval — Semantic
Clinical Question
“Is the page's topic model sufficient to rank within the relevant semantic cluster?”
What to Look For
- Entity coverage — are the related entities mentioned, linked, and contextualized? (a crude proxy check is sketched after this list)
- Topical completeness relative to the top-ranking documents
- Semantic neighbor pages on the same domain — is there a topical cluster?
- Internal linking structure supporting the semantic relationship between pages
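Real entity coverage analysis needs an NLP pipeline, but a crude vocabulary-overlap proxy against the top-ranking pages can still surface obvious gaps. The sketch below compares raw term sets; the URLs are hypothetical and the five-letter cutoff is an arbitrary assumption.

```python
import re
import requests                  # assumes requests is installed
from bs4 import BeautifulSoup    # assumes beautifulsoup4 is installed

def page_terms(url):
    """Crude term set: lowercase words of five or more letters from the page body."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return set(re.findall(r"[a-z]{5,}", soup.get_text(" ", strip=True).lower()))

OUR_PAGE = "https://example.com/topic"                  # hypothetical URLs
TOP_PAGES = ["https://competitor-a.example/topic",
             "https://competitor-b.example/topic"]

ours = page_terms(OUR_PAGE)
for url in TOP_PAGES:
    theirs = page_terms(url)
    print(f"{url}: we share {len(theirs & ours)}/{len(theirs)} of their terms")
    print("  sample of terms we lack:", sorted(theirs - ours)[:20])
```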
Failure Signature
Thin pages that answer the head term but omit the semantic context that establishes topical authority in the index. Also: orphaned pages with strong on-page signals but no internal linking context — the semantic cluster signal is absent. Semantic retrieval failure is usually a content depth and site architecture problem.
Retrieval — Generative (AEO)
Clinical Question
“Is this content retrievable and citable by AI-powered answer systems?”
What to Look For
- Crawl access for AI agents: GPTBot, PerplexityBot, ClaudeBot, and similar (see the sketch after this list)
- Structured answer formatting — direct question-answer pairs, definition blocks, summary sections
- Citation-friendly sourcing — primary data, original research, attributed claims
- Content rendering without JavaScript dependency
- YMYL trust signals for AI grounding: authorship, methodology transparency, sourcing
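A quick generative-access check combines a robots.txt lookup with a spoofed-user-agent fetch for each AI crawler. Note the caveat in the comments: many WAFs verify crawler IP ranges, so a spoofed request only approximates what the real agent sees. The site URL is hypothetical.

```python
from urllib.robotparser import RobotFileParser
import requests                  # assumes requests is installed

SITE = "https://example.com"                 # hypothetical site
PAGE = f"{SITE}/some-answer-page"
AI_AGENTS = ["GPTBot", "PerplexityBot", "ClaudeBot"]

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

for agent in AI_AGENTS:
    allowed = rp.can_fetch(agent, PAGE)
    # A spoofed user agent only approximates WAF behaviour; many WAFs also
    # verify crawler IP ranges, so treat the status code as a hint, not proof.
    status = requests.get(PAGE, headers={"User-Agent": agent}, timeout=10).status_code
    print(f"{agent}: robots.txt allows={allowed}, response status={status}")
```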
Failure Signature
Content optimized for the 10-blue-links retrieval paradigm that fails in generative contexts because it buries the answer in narrative prose, requires JavaScript to render, or cites secondary sources rather than primary data. A parallel retrieval track — not a branch or upgrade of Layers 4 and 5. Content can pass Layers 4 and 5 while completely failing Layer 6, and vice versa.
Layer Note
Layer 6 is the entry point for the S.A.G.E. optimization model. When generative retrieval is the binding constraint or the target track, S.A.G.E. (Structured Signals / Authority & Authenticity / Generative Access / Experience Depth) is the prescription. See the S.A.G.E. framework for the specific optimization protocol.
Competition
Clinical Question
“Can this page win the ranking competition for this query at this point in time?”
What to Look For
- SERP composition: who holds positions 1-3, what content types, what authority levels
- Authority gap at domain and page level (see the snapshot sketch after this list)
- SERP feature concentration: featured snippets, AI Overviews, local pack, image carousels
- Content differentiation gap — what does the incumbent have that this page lacks?
- Link profile gap: quantity, quality, and topical relevance of inbound links
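Authority gap and feature concentration can be summarized from a manually collected SERP snapshot. Every value in the sketch below is a placeholder, and the point at which the gap becomes prohibitive is a judgment call, not a constant.

```python
# Manually collected SERP snapshot for one query; every value is a placeholder.
serp_top3 = [
    {"pos": 1, "domain": "bigbrand.example",   "authority": 86, "type": "category page"},
    {"pos": 2, "domain": "aggregator.example", "authority": 81, "type": "listicle"},
    {"pos": 3, "domain": "bigbrand2.example",  "authority": 79, "type": "category page"},
]
features_above_organic = ["ads_top", "ai_overview", "people_also_ask"]
our_authority = 42

avg_top3 = sum(r["authority"] for r in serp_top3) / len(serp_top3)
print(f"Authority gap to top-3 average: {avg_top3 - our_authority:.0f}")
print(f"Dominant content types: {sorted({r['type'] for r in serp_top3})}")
print(f"Features above the organic results: {features_above_organic}")
# A large gap plus heavy feature concentration is a weight problem, not a
# ceiling problem: sequence easier queries first or differentiate on depth and data.
```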
Failure Signature
Equal-quality content attempting to displace entrenched incumbents without a differentiation strategy. Competition is a weight problem, not a ceiling problem. A site that clears Layer 1 (eligible) but fails Layer 7 (can't win yet) has a capital allocation question: build more authority first via achievable queries, or differentiate aggressively on content depth and data.
Layer Note
Competition at Layer 7 is where the evidence loop operates. Early wins on achievable queries (low competition weight) compound domain authority over time, reducing the Layer 7 threshold for harder queries. This is not a shortcut — it is how organic authority compounds. Sequencing: win Layer 7 on medium-difficulty queries first, bank the authority priors, then re-enter harder queries.
Satisfaction
Clinical Question
“Does the content satisfy the user's complete search intent and generate the engagement signals that reinforce ranking?”
What to Look For
- Time-on-page and scroll depth patterns across organic sessions
- Pogo-sticking rate — immediate return to SERP after landing (see the proxy sketch after this list)
- Task completion rate: did the user accomplish what they came for?
- Return visit rate from organic entry points
- Conversion rate from organic — the ultimate downstream satisfaction signal
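If your analytics export includes per-session time on page and pages viewed, a pogo-sticking proxy is a few lines of arithmetic. The record shape and the 10-second threshold below are illustrative assumptions, not fields any particular analytics tool guarantees.

```python
# Pogo-sticking proxy from an analytics export; all records are placeholders.
sessions = [
    {"landing_page": "/guide", "seconds_on_page": 8,   "pages_viewed": 1},
    {"landing_page": "/guide", "seconds_on_page": 210, "pages_viewed": 3},
    {"landing_page": "/guide", "seconds_on_page": 5,   "pages_viewed": 1},
]

POGO_SECONDS = 10
pogo = [s for s in sessions
        if s["pages_viewed"] == 1 and s["seconds_on_page"] < POGO_SECONDS]
rate = len(pogo) / len(sessions)
print(f"Pogo-sticking proxy rate for /guide: {rate:.0%}")
# A persistently high rate on a page that ranks is a Layer 8 signal:
# the page earns the click but does not satisfy the intent behind it.
```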
Failure Signature
Content that achieves a ranking but carries persistent pogo-sticking and short session duration. Google's satisfaction signals are direct inputs into quality assessment. Ranking on momentum from historical authority priors while satisfaction signals deteriorate is not a stable state — it is a ranking that will decay as the negative engagement signal compounds.
Layer Coupling and the Evidence Loop
The layers are sequential in diagnostic priority but not in causal isolation. Clearing Layer 2 (access) does not guarantee Layer 3 (representation) works. Each layer has an independent failure mode. The sequence tells you where to look first, not where to stop looking.
Layers 4, 5, and 6 are parallel retrieval tracks, not alternatives. A page can pass all three or any combination. The generative track (Layer 6) requires explicitly different optimization than the traditional retrieval tracks (Layers 4 and 5). Conflating them produces content that wins in one channel and is invisible in the other.
The Evidence Loop
Early wins on achievable queries — queries where Layer 7 (competition) can be cleared quickly — compound domain authority over time. This authority reduces the Layer 7 threshold for harder queries in the same topical cluster.
This is not a hack. It is how organic authority compounds. The evidence loop is the mechanism by which a low-authority challenger can eventually compete for head terms without buying links — by systematically winning tail and mid-tier queries first, building authority priors in the relevant topic cluster, and re-entering harder queries with a stronger baseline.
Banksparency.com is a live demonstration: 9–10K monthly pageviews, no link-building efforts, zero editorial hours. Layers 1–6 are structurally solved by design. Layer 7 (Competition) is the active frontier — query sequencing builds authority priors in the banking data cluster, and data granularity is the differentiation strategy for query windows where a low-authority challenger can compete before achieving authority parity on head terms.
Layer coupling example: A site that fails Layer 1 (eligibility) for competitive head terms may pass Layer 1 for long-tail variants in the same category. Running the diagnostic separately for different query classes reveals which segments are actionable. The diagnostic result is not binary across the whole domain — it is query-class specific.
Practical Notes
Run layers in order. If the binding constraint is Layer 2 (access), building links is capital destruction. If the binding constraint is Layer 1 (eligibility), all downstream work is waste until the eligibility problem is resolved. Sequencing is not optional.
The 60-minute triage is not the full diagnostic. It is a pre-check to identify which layers need deep investigation. A full layer analysis — particularly for Layer 2 (access), Layer 3 (representation), and Layer 7 (competition) — requires dedicated tooling, log file analysis, and comparative SERP research.
Layer 6 is a parallel track, not a branch. Winning at generative retrieval requires separate optimization from traditional retrieval. You can rank on 10 blue links and be completely invisible in AI Overviews — the retrieval mechanisms are different. Treat Layer 6 as a distinct investment, not a side effect of Layers 4 and 5 work.
The investment screen is not part of the diagnostic. Channel qualification, page type alignment, and query-level expected value are economic decisions, not pass/fail tests. Running the diagnostic before clearing the investment screen is how teams optimize the right queries for the wrong page type on the wrong channel.
Layer 8 (satisfaction) is not a vanity metric. Pogo-sticking and session duration are active inputs into Google's quality assessment. A ranking held by historical authority priors while satisfaction signals deteriorate is a ranking in decay. Satisfaction is a layer, not an afterthought.
Want the diagnostic applied to your stack?
The diagnostic identifies the binding constraint. The engagement model depends on what the constraint turns out to be. Some layers require architectural changes. Others require content work. A few require nothing more than a configuration fix.