Citation Briefs
Citation Brief #001 · AEO methodology

How premium agencies get cited by ChatGPT, Perplexity, and Google AI Overviews

The Five-Stage Citation Hierarchy AI answer engines use, the signals that move the needle, and how to engineer for inclusion. Citation, not clicks.

By Jonathan Landman · 12 min read

The 60-second answer

AI answer engines decide what to cite using five stacked signals: entity resolution, source authority, structured extractability, freshness, and recommendation history. Brands that engineer all five compound a citation moat — every accepted citation makes the next one more probable. Classical SEO targets rankings; AEO targets inclusion inside the answer itself.

The work is structural, not promotional. Run the diagnostic, fix the substrate, ship the citation assets, and the engines start recommending you within 30–60 days for entity wins and three months for compounding citation history.

Why classical SEO doesn't get you cited.

Classical SEO was engineered for a discovery model that no longer dominates buyer journeys. The model was: rank in position 1–3, capture the click, convert on-site. The new model is: the engine answers the question, names a brand, and the buyer arrives pre-qualified — or doesn't arrive at all.

Ranking and citation are scored by different functions. A page can rank #1 organically and still not be cited inside the AI Overview that appears above it. A different page — sometimes from a lower-traffic domain — wins the citation because it nailed entity clarity, schema extractability, and named-author authority. The blue link is no longer the unit of victory. The cited slot is.

This is why classical SEO agencies underperform on AEO: their toolchain measures the wrong outcome. Wiele instruments the citation directly — every brief, every engine run, every monthly delta is tracked against a named competitor set inside the Wiele Citation Score™ subscription.

The Five-Stage Citation Hierarchy.

Each stage stacks on the one below it. Skip a stage and the stages above silently underperform. Wiele engineers all five in sequence on every engagement.

01

Entity Resolution

Does the engine know what — and who — you are?

AI engines resolve queries to entities, not strings. If your brand is ambiguous (shared name, weak knowledge graph footprint, no Organization or Person schema, missing sameAs links), the engine cannot deterministically cite you. Entity resolution is the floor; everything above it depends on this stage being clean.
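The entity-resolution substrate described above is usually shipped as JSON-LD. Below is a minimal Python sketch of Organization and Person markup with sameAs links; every name and URL is a placeholder for illustration, not Wiele's actual markup. Note the Person's sameAs pointing at a personal profile, distinct from the brand's:

```python
import json

# Placeholder Organization entity. The sameAs links let the knowledge
# graph reconcile the brand against external profiles.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-agency",
        "https://en.wikipedia.org/wiki/Example_Agency",
    ],
}

# Placeholder Person entity for the founder. Its sameAs points at the
# *personal* profile, so the human and the brand resolve separately.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Founder",
    "jobTitle": "Founder",
    "worksFor": {"@type": "Organization", "name": "Example Agency"},
    "sameAs": ["https://www.linkedin.com/in/jane-founder"],
}

def jsonld_script(data: dict) -> str:
    """Wrap a schema.org payload in the script tag pages embed."""
    return ('<script type="application/ld+json">'
            + json.dumps(data) + "</script>")

print(jsonld_script(person))
```

Keeping the two entities' sameAs sets disjoint is what lets the engine attribute founder-voice writing to the person rather than diluting it into the brand.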

02

Source Authority

Does the engine trust your domain as a primary source?

Authority is a weighted blend of domain trust signals, external mentions (especially Tier-1 publications and authoritative databases), structured citations to your work, and the founder's named-author presence. Authority is slow to build, slow to lose — which is precisely why early movers compound a moat.

03

Structured Extractability

Can the engine pull a clean answer block from your page?

Engines reward pages that hand them an extractable answer: a short, definitive opening paragraph; clear H2/H3 hierarchy; FAQPage and HowTo schema where applicable; tables and lists that map directly to query intent. Pages that hide the answer three scrolls deep get skipped.
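As a concrete illustration of the pre-extracted answer block, here is a sketch that builds FAQPage JSON-LD from plain question-and-answer pairs. The question text is invented for the example:

```python
import json

def faq_page(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs — the
    structured wrapper an engine can lift as a ready-made answer."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

markup = faq_page([
    ("What is AEO?",
     "Answer Engine Optimization: engineering content so AI answer "
     "engines cite it inside the generated answer."),
])
print(json.dumps(markup, indent=2))
```

The same principle applies on-page: the short, definitive answer should sit in the opening paragraph under its H2, mirroring what the schema declares.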

04

Freshness

Is the content recent enough to survive the engine's recency filter?

Most answer engines apply a recency weight, especially on commercial and time-sensitive queries. Pages that ship a clear lastModified, get re-indexed via IndexNow, and demonstrate ongoing update cadence pass the filter. Stale evergreen content quietly drops from the recommendation set even when it ranks classically.
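The IndexNow submission mentioned above is a single JSON POST. A hedged sketch, assuming the shared api.indexnow.org endpoint and a placeholder host and key:

```python
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"  # shared endpoint

def indexnow_payload(host, key, urls):
    """Build the JSON body the IndexNow protocol expects for a bulk
    submission. `key` is the verification key you host as a text file
    at https://<host>/<key>.txt."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": list(urls),
    }

def submit(host, key, urls):
    """POST updated URLs so participating engines re-crawl promptly.
    Network call — run only against a site and key you control."""
    body = json.dumps(indexnow_payload(host, key, urls)).encode()
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    return urllib.request.urlopen(req).status  # 200/202 on acceptance

payload = indexnow_payload("example.com", "abc123",
                           ["https://example.com/brief"])
```

Wiring this into the publish pipeline removes the crawl-cadence lag: the moment a page's lastModified changes, the engines are notified rather than left to discover it.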

05

Recommendation History

Has the engine cited you before — and how does that loop compound?

Engines that have cited a source before are statistically more likely to cite it again. This is the compounding loop and the reason early citation matters disproportionately. Every accepted citation strengthens the next one. Late entrants compete against incumbents who have been compounding for cycles.

What agencies typically miss.

Five recurring failure patterns Wiele sees in pre-engagement Signal Audits across premium agencies and B2B firms:

  • No Person schema for the founder. The Organization is named, but the human to whom the engines should attribute the writing is invisible to the knowledge graph. Founder-voice authority evaporates at the entity-resolution stage.

  • FAQ and HowTo schema missing. Pages carry the right content but skip the structured-data wrappers that hand the engine a pre-extracted answer block. Engines pass over them in favour of competitors who served the answer on a plate.

  • Same sameAs on Person and Organization. Founder Person schema points at the company LinkedIn instead of the personal one. The knowledge graph cannot disambiguate the human from the brand. Two entities collapse into one — and citation attribution gets diluted.

  • No IndexNow distribution. Sites rely on Google and Bing to discover updates on their own crawl cadence, leaving days or weeks of delay during which a fresh page should already be eligible for citation.

  • Generic positioning. The page says everything every other agency says. Engines reward specificity, named frameworks, and founder-voice points of view. Boilerplate is filtered out before it's ever quoted.
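The third failure pattern, identical sameAs on Person and Organization, is mechanically checkable. A quick lint sketch with hypothetical helper and data, not part of any Wiele tooling:

```python
def collapsed_sameas(person, organization):
    """Return the sameAs URLs a Person shares with its Organization.
    Any overlap means the knowledge graph sees one entity, not two —
    exactly the collapse described above."""
    return sorted(set(person.get("sameAs", []))
                  & set(organization.get("sameAs", [])))

# Broken markup: the founder's Person schema reuses the company profile.
person = {"@type": "Person",
          "sameAs": ["https://www.linkedin.com/company/example-agency"]}
org = {"@type": "Organization",
       "sameAs": ["https://www.linkedin.com/company/example-agency"]}

print(collapsed_sameas(person, org))  # non-empty → entities collapsed
```

An empty result from the lint means the human and the brand stay separable in the graph; any overlap flags markup to fix before it dilutes citation attribution.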

How to engineer for citation.

Wiele runs every engagement on a three-stage rhythm. The rhythm is canonical — it's the same engine that powers a Signal Audit on day one and an Authority Engine retainer on month twelve.

  1. Map. Diagnose AI visibility, citation graph, and authority gaps across the prompt surface that matters to your buyer. Output: a baseline citation score against a named competitor set, plus a 30-day implementation roadmap.

    Start with a Signal Audit.

  2. Build. Engineer the content, schema, comparison hubs, founder-voice articles, Citation Briefs, and authority assets that AI engines cite — and humans convert from. Output: a shipping cadence that lifts every stage of the hierarchy.

    See the AI Visibility system.

  3. Compound. Visibility, citations, and demand compound month over month. The asset gets stronger; the moat gets deeper. The Citation Score™ subscription instruments the lift; the Authority Engine retainer executes the cadence.

    See programme shapes.

Methodology & sources.

Citation behaviour observed across longitudinal runs of the Wiele AI Citation Tracker dataset (private, anonymised). Engine-specific guidance cross-referenced with each provider's public documentation:

  • Google Search Central — Structured data (FAQPage, HowTo, Article, Person, Organization)

  • Google AI Overviews — robots-meta-tag opt-in (max-snippet, max-image-preview)

  • IndexNow Protocol — fast notification surface for Bing, Yandex, Seznam, Naver, Yep

  • Schema.org — Person sameAs and Organization sameAs entity reconciliation

  • Wiele Branding Canon 2026-05-14 — CBBE Pyramid, BAV Four Pillars, Brand Value Chain

  • Wiele Advertising Canon 2026-05-14 — Reeves USP, Steel Account Planning, Hackley IMC

The Wiele standard is full methodology transparency. Every claim above is reproducible from public sources or Wiele's instrumented engine-run dataset. Engagement clients receive the named-competitor lift trace inside the Citation Score™ dashboard.

The next step

Start with a Signal Audit.

A diagnostic that maps your citation graph, entity baseline, and authority gaps — plus a 30-day implementation roadmap. The fastest way to know where you stand inside the answer economy.

Wiele Group