Q4 AEO Sprint: Win Citations in Google & AI Search
Google’s AI Mode may soon be the default. Here’s a 12‑week Answer Engine Optimization sprint to earn citations and links inside AI answers across Google, Perplexity, and ChatGPT Search—before peak Q4 hits.

Vicky
Sep 12, 2025
I’m Vicky, AEO strategist at Upcite. I trained at HEC Paris, I pace marathons by negative splits, and I play tennis by taking the ball early. That’s the mindset you need for Q4: disciplined pacing and fast first steps. With Google’s AI Mode likely to become the default experience for many users soon, you don’t have a quarter to “wait and see.” You need an Answer Engine Optimization (AEO) plan you can run in 12 weeks—one that wins you a source slot inside AI answers on Google, Perplexity, and ChatGPT Search.
Why now? Recent reporting indicates Google is preparing to lean harder into AI Mode with prominent placement and a dedicated entry point. Major publishers continue to report steep traffic losses when AI answers appear, and Google acknowledged in a court filing that the “open web is already in rapid decline” (their framing centers on open‑web display ads, but the signal is unmistakable). At the same time, Perplexity has fresh funding and distribution momentum, and ChatGPT Search is adding shopping flows and scale. Q4 demand won’t wait for internal debates. Move.
Below is a practical, how‑to sprint plan: 12 weeks, three phases, clear owners, robust measurement. I’ve included engine‑specific checklists, sample robots.txt, schema patterns, and weekly cadences built for mid‑market and enterprise SEO and growth teams.
The Sprint at a Glance (12 Weeks)
- Phase 1 (Weeks 1–4): Crawl access, baselines, and “evidence blocks” on your top themes.
- Phase 2 (Weeks 5–8): Engine‑specific inclusion plays for Google AI Mode, Perplexity, and ChatGPT Search; PR and research to earn third‑party citations.
- Phase 3 (Weeks 9–12): Measurement, forecasting, and scale—optimize to share of citations, not just rankings.
Time commitment: ~6–8 focused hours/week across SEO lead, content lead, PR/Comms, analytics, and dev.
Phase 1 (Weeks 1–4): Access, Baselines, Evidence
Think of Phase 1 like the first 10K of a marathon: smooth efficiency, no hero moves. Your only goal is to become trivially easy for answer engines to crawl, quote, and attribute—then establish a baseline for inclusion.
1) Open the right crawl doors—and close the wrong ones
Your objective is to be discoverable for search features while retaining control over model training. Update robots.txt to allow the crawlers that power answer inclusion.
- Google: Allow Googlebot (core) and GoogleOther (common crawlers used beyond Search). Use Google-Extended if you want to manage whether your site helps improve Gemini Apps/Vertex AI training—note it doesn’t affect Search inclusion.
- ChatGPT Search: Allow OAI-SearchBot (used for surfacing and linking in ChatGPT Search). You may still choose to block GPTBot if you don’t want training access.
- Perplexity: Allow PerplexityBot for inclusion. Be aware Perplexity also uses a user‑initiated fetcher; treat edge enforcement (IP allow/deny) as policy backup.
Sample robots.txt (adapt to your policy):
# Allow search inclusion, restrict model training
User-agent: Googlebot
Allow: /
User-agent: GoogleOther
Allow: /
User-agent: Google-Extended
Disallow: / # optional: controls use for Gemini/Vertex AI training (not Search)
User-agent: OAI-SearchBot
Allow: /
User-agent: GPTBot
Disallow: / # optional: block OpenAI training while still appearing in ChatGPT Search
User-agent: PerplexityBot
Allow: /
# Default for all other agents: keep sensitive paths out of crawls
User-agent: *
Disallow: /admin/
Disallow: /cart/
Governance note: enforce with WAF rules for declared IP ranges where published; log and throttle non‑declared patterns at the edge.
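To make that governance note concrete: one common pattern for validating crawler traffic (which Google documents for Googlebot; other vendors publish IP lists instead) is a reverse-then-forward DNS check. A minimal sketch, with injectable resolvers so it can be tested offline—the hostname suffixes are assumptions you should confirm against each vendor’s current docs:

```python
import socket

# Hostname suffixes trusted crawlers resolve to (verify against vendor docs;
# Google publishes googlebot.com/google.com for Googlebot).
VERIFIED_SUFFIXES = (".googlebot.com", ".google.com")

def verify_crawler_ip(ip, suffixes=VERIFIED_SUFFIXES,
                      reverse=socket.gethostbyaddr, forward=socket.gethostbyname):
    """Reverse-DNS the IP, check the hostname suffix, then forward-confirm.

    `reverse` and `forward` are injectable so edge-rule tests can run offline.
    """
    try:
        hostname = reverse(ip)[0]
    except OSError:
        return False
    if not hostname.endswith(suffixes):
        return False  # spoofed user-agent with an unrelated reverse record
    try:
        return forward(hostname) == ip  # forward lookup must round-trip to the IP
    except OSError:
        return False
```

Requests that fail this check (or fall outside declared IP ranges) are the ones to log and throttle at the edge.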
2) Establish your inclusion baseline
You need a before/after to prove AEO impact:
- Queries: Select 50–100 revenue‑relevant, information‑heavy queries you must influence (mix of “what/how/compare/alternatives,” across stages).
- Engines: For each query, capture whether you appear as a citation or link inside AI answers in (a) Google AI Mode/AI Overviews, (b) Perplexity, (c) ChatGPT Search. Note position order and label “earned” (third‑party) vs. “owned” (your domain).
- KPIs to track weekly:
- Answer Inclusion Rate (AIR): % of tracked queries where your brand appears in the AI answer (owned or earned).
- Citation Share of Voice (C‑SOV): % share of citations by domain category (owned, earned, partners).
- Referral Lift: Sessions tagged from engine‑specific sources (see Phase 3) and assisted conversions.
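The two presence KPIs above are easy to compute from your weekly audit rows. A minimal sketch, assuming a simple audit format (the field names are illustrative, not a standard); it reports owned C‑SOV only—mapping domains to earned/partner buckets extends the same counting:

```python
from collections import Counter

def inclusion_kpis(audit_rows, own_domain):
    """Compute Answer Inclusion Rate (AIR) and owned citation share-of-voice.

    audit_rows: list of dicts like
      {"query": "...", "engine": "perplexity",
       "citations": ["example.com", "news.com"],  # domains cited in the answer
       "brand_mentioned": True}                   # owned OR earned presence
    """
    tracked = {r["query"] for r in audit_rows}
    included = {r["query"] for r in audit_rows if r["brand_mentioned"]}
    air = len(included) / len(tracked) if tracked else 0.0

    # Share of all citations across tracked queries held by your own domain.
    cites = Counter(d for r in audit_rows for d in r["citations"])
    total = sum(cites.values())
    c_sov = cites[own_domain] / total if total else 0.0
    return {"AIR": round(air, 3), "owned_C_SOV": round(c_sov, 3)}
```

Run it on each week’s audit export and chart the two numbers side by side for the before/after story.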
3) Build “evidence blocks” on your cornerstone pages
Answer engines need quotable, attributable facts. Add compact sections that make extraction and attribution trivial.
On each cornerstone page (guides, comparisons, category pages), implement an evidence block:
- A clear H3: “Key Facts (updated YYYY‑MM‑DD)”
- 3–7 bullet claims with precise numbers, time‑bound, each followed by a short source line (name, year). Keep one idea per sentence.
- A short “Methodology” note where relevant.
- Author line with credentials; organization context; last‑reviewed date.
- JSON‑LD: Article/FAQPage/HowTo as appropriate; include author, datePublished/dateModified, and organization SameAs. If you publish original stats, add a lightweight CreativeWork with citation properties, or a Dataset if you release data.
Example HTML snippet:
<section aria-labelledby="facts">
  <h3 id="facts">Key Facts (updated 2025-09-10)</h3>
  <ul>
    <li>Average implementation time for [product]: 14–21 days (2025 cohort).</li>
    <li>Observed cost reduction across 112 deployments: 18% median after 90 days.</li>
    <li>Security: SOC 2 Type II; single-tenant EU region available.</li>
  </ul>
  <p>Sources: Internal cohort study (2025), Third-party audit (2025).</p>
  <p>Author: Jane Smith, VP Solutions. Last reviewed: 2025-09-10.</p>
</section>
Keep sentences short, nouns concrete, and numbers near the claim. Think of this like setting your feet early on a return of serve—tight form, no wasted motion.
Phase 2 (Weeks 5–8): Engine‑Specific Inclusion Plays
A) Google AI Mode and AI Overviews
Your inclusion levers with Google differ from classic blue-link SEO:
- Match “compound queries.” AI Mode favors multi‑clause questions. Publish pages that explicitly answer the composite: “X vs Y for [industry] in [region], with [constraint].” Repeat the exact phrasing in the first 100 words.
- Multi‑format evidence. Short tables, checklists, and step‑by‑step sections are frequently quoted. Keep them scannable (<75 words per step) and labeled.
- Author and org credibility. Prominent author bios with credentials and a clear “About” section help evaluators and downstream models resolve authority.
- Freshness. Date‑stamped facts and frequent updates support inclusion when engines weigh timeliness.
- Structured data. Beyond Article/FAQPage, add Organization with robust SameAs (official profiles), Product with Offer and AggregateRating (if applicable), and speak plainly in visible copy—don’t rely on schema alone.
- Collections. Build hub pages that assemble your best evidence blocks across a topic. AI answers often sample multiple snippets—make your hub the tidy buffet.
Delivery checklist (2 weeks):
- 10–20 compound‑query pages shipped, each with an evidence block.
- 1 topic hub with concise summaries and anchor links.
- Sitewide author bios and updated organization schema.
B) Perplexity: win “source slots” and sidecar flows
Perplexity leans into citations and often privileges high‑authority earned sources alongside official docs. Your plays:
- Earned media first. Commission a timely, data‑rich study in your category and brief tier‑one outlets and respected analysts. Your brand gets referenced in their coverage; those articles are highly likely to be cited in Perplexity answers.
- Concise explainer pages. Perplexity often cites clear, canonical explainers. Build minimal‑design pages targeting definitional and how‑it‑works intents.
- Allow PerplexityBot and monitor. Ensure you’re not unintentionally blocking inclusion. Consider rate‑limits for user‑initiated fetches at the edge while preserving access for citations.
- Provide lightweight, linkable assets: short schematics, comparison tables, glossary entries—each with unique URLs and descriptive titles.
Delivery checklist (2 weeks):
- One original mini‑study with ready‑to‑quote graphs and a 500–800‑word methods page.
- 8–12 explainers and glossaries, shipped under a /learn/ or /explained/ path.
- PR seeding calendar and brief for spokespeople (offer quotes, not pitches).
C) ChatGPT Search: citations plus shopping
ChatGPT Search surfaces links and often includes a sources panel. It also supports product discovery.
- Allow OAI‑SearchBot. This is separate from training crawlers; keep it unblocked if you want inclusion.
- Evidence blocks plus product feeds. If you sell products, ensure product pages are cleanly structured (Product, Offer, AggregateRating) and consider preparing a simple product feed structure so you’re ready when feed submissions open more broadly.
- “Comparison‑ready” copy. ChatGPT summarizations love succinct pros/cons, specs tables, and grounded price ranges. Add a first‑paragraph summary (40–60 words) that cleanly states who it’s for, key benefits, and constraints.
Delivery checklist (2 weeks):
- Confirm OAI‑SearchBot access and test crawl.
- Ship 6–10 comparison pages with structured tables and clear pros/cons.
- Validate product schema and image alt text; prepare a beta feed spec.
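For the product-schema item, an illustrative Product snippet in the same JSON‑LD style used elsewhere in this guide—every value here is a placeholder to swap for your real data:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "[Product Name]",
  "image": "https://example.com/images/product.jpg",
  "description": "[40–60-word summary: who it's for, key benefits, constraints]",
  "brand": {"@type": "Brand", "name": "[Your Brand]"},
  "offers": {
    "@type": "Offer",
    "price": "499.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "112"}
}
</script>
```

Validate the output with a schema testing tool, and keep the same facts visible in on-page copy—answer engines quote text, not markup.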
Phase 3 (Weeks 9–12): Measure, Forecast, and Scale
Clicks won’t tell the whole story. Measure presence and influence, then forecast impact on assisted conversions and brand lift.
1) Instrument attribution for answer engines
- ChatGPT Search: Ensure analytics recognizes utm_source=chatgpt.com on inbound links. Standardize utm_medium=organic_answer and utm_campaign=aeo_q4 (or your naming).
- Perplexity: Standardize inbound from perplexity.ai and perplexity referrers where present; set a source/medium rule like perplexity.ai / referral and tag to an “Answer Engines” channel.
- Google: For AI answers, direct referrals may be limited. Track brand lift as more non‑branded queries include your brand in co‑mentions; add annotations when your pages start appearing in AI Mode citations.
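The tagging plan above reduces to a small classification rule. A sketch of the channel grouping, assuming the UTM and referrer conventions described here (the host list is illustrative—extend it as engines add referral domains):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical rule set matching the tagging plan; extend per your analytics tool.
ANSWER_ENGINE_HOSTS = {"chatgpt.com", "perplexity.ai", "www.perplexity.ai"}

def classify_session(landing_url, referrer=""):
    """Bucket a session into the 'Answer Engines' channel via UTMs or referrer."""
    params = parse_qs(urlparse(landing_url).query)
    source = params.get("utm_source", [""])[0]
    if source in ANSWER_ENGINE_HOSTS or params.get("utm_medium") == ["organic_answer"]:
        return "Answer Engines"
    if urlparse(referrer).netloc in ANSWER_ENGINE_HOSTS:
        return "Answer Engines"
    return "Other"
```

Mirror the same logic as a custom channel group or segment definition in your analytics tool so reports and the raw data agree.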
2) Define outcome metrics beyond clicks
- Citation Share of Voice (C‑SOV): Of all citations across your tracked queries, what % are yours vs. competitors vs. third‑party coverage that mentions you?
- Assisted conversions: Build a segment for sessions arriving from answer‑engine sources within 7 days of conversion. Attribute a share of revenue to this segment.
- Brand recall proxy: Track co‑occurrence of your brand with two target category terms across web and news monitoring, weekly.
3) Forecasting in an answer‑first world
- Top‑down: For each theme, estimate answer impressions using query volume × observed answer appearance rate × your projected inclusion rate (AIR). Multiply by historical conversion rate on comparable organic traffic to produce conservative assisted‑conversion forecasts.
- Bottom‑up: Run a 4‑week holdout on 10% of new evidence pages (no PR seeding) to separate the lift from earned media vs. owned content.
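The top‑down formula is simple enough to keep in a shared script so everyone forecasts the same way. A sketch with made‑up example numbers (the theme inputs are illustrative, not benchmarks):

```python
def forecast_assisted_conversions(themes):
    """Top-down: query volume x answer appearance rate x projected AIR x CVR.

    Each theme dict needs: monthly_query_volume, answer_rate (share of queries
    that trigger an AI answer), projected_air, and cvr (historical organic CVR
    on comparable traffic). Returns monthly assisted conversions.
    """
    total = 0.0
    for t in themes:
        answer_impressions = t["monthly_query_volume"] * t["answer_rate"]
        included_impressions = answer_impressions * t["projected_air"]
        total += included_impressions * t["cvr"]
    return total

# Illustrative inputs only -- replace with your own query data.
themes = [
    {"monthly_query_volume": 40_000, "answer_rate": 0.35, "projected_air": 0.20, "cvr": 0.01},
    {"monthly_query_volume": 12_000, "answer_rate": 0.50, "projected_air": 0.30, "cvr": 0.02},
]
```

Keeping conservative inputs (observed answer rates, not hoped-for ones) is what makes the resulting number defensible in a Q4 planning review.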
4) Scale what wins
- Double down on formats most cited: short tables, step lists, definitional intros, fresh stats.
- Expand your “research flywheel”: quarterly mini‑studies and one flagship annual report.
- Institutionalize an AEO content QA: any page shipping in priority categories must include an evidence block and a 60‑word summary.
Practical Templates and Examples
“Evidence Block” checklist (copy/paste for your team)
- One sentence per fact; 12–18 words is ideal.
- Time‑bound numbers (e.g., “Q2 2025,” “last 90 days”).
- Name the source in plain text near the claim.
- Author and reviewer attribution on page.
- JSON‑LD updated with dateModified.
“Compound Query” page outline
- H1: The exact multi‑clause question users ask.
- 60‑word executive summary answer.
- Section 1: Short answer with a 3‑row comparison table.
- Section 2: “When X is better than Y” (3 bullets).
- Section 3: “When Y is better than X” (3 bullets).
- Section 4: Steps to decide (4–6 steps, 1 sentence each).
- Evidence block with updated date and sources.
Minimal JSON‑LD example (Article with author and org)
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "[Your Compound Query Headline]",
  "datePublished": "2025-09-10",
  "dateModified": "2025-09-10",
  "author": {"@type": "Person", "name": "[Author Name]", "jobTitle": "[Title]"},
  "publisher": {"@type": "Organization", "name": "[Your Brand]"}
}
</script>
Org Design: who does what each week
- SEO Lead (owner): query set, inclusion audits, evidence block standards.
- Content Lead: compound‑query outlines, tables, summaries.
- PR/Comms: earned media pipeline and briefings, research packaging.
- Analytics: tracking plan, C‑SOV dashboard, assisted conversion model.
- Dev: robots.txt updates, schema deployment, WAF rules for bot governance.
Weekly cadence (30‑minute stand‑up):
- Week 1–4 focus: publish evidence blocks and fix access; baseline audits.
- Week 5–8 focus: engine‑specific pages; ship mini‑study; PR seeding.
- Week 9–12 focus: measurement reviews; iterate on formats with highest citation yield; finalize Q4 forecast.
Risk, Reality, and What to Expect
- Inclusion ≠ clicks. AI answers compress the funnel. Expect more assisted conversions and branded search lift than direct sessions.
- Earned beats owned (at first). Answer engines overweight high‑authority third‑party sources. Your fastest path is to be quoted inside those sources while you harden your own.
- Freshness matters. In fast‑moving categories, pages with recent dates and explicit “what changed” notes get cited more often.
- Regulatory drift. With fresh FTC scrutiny on search advertising, ad surfaces may shift through Q4; reallocating some budget to answer‑engine influence is prudent.
As in marathon training, you’re building durable aerobic capacity—not a single PR. The goal is a repeatable AEO habit: structured, quotable content plus an earned‑media engine and measurement loop.
Your 12‑Week Checklist (condensed)
- Weeks 1–2: Update robots.txt, enforce edge rules, ship evidence blocks on 10 cornerstone pages, baseline inclusion.
- Weeks 3–4: Add author bios, org schema, and a topic hub. Draft mini‑study brief.
- Weeks 5–6: Publish 6–10 compound‑query pages for Google AI Mode; seed PR for your study.
- Weeks 7–8: Ship explainers/glossaries for Perplexity; confirm OAI‑SearchBot access and ship comparison pages for ChatGPT Search.
- Weeks 9–10: Turn on attribution for answer engines; build C‑SOV dashboard; run holdout test.
- Weeks 11–12: Forecast assisted conversions; double down on formats with highest citation yield; lock a Q1 research calendar.
Final Word—and Next Step
This quarter, the default user journey is inching from “search and click” to “ask and skim.” Your job is to become the source that gets skimmed—and cited—every time. If you want our sprint templates (baseline tracker, evidence block library, PR briefing pack) or a 90‑minute team workshop, reach out. I’ll help you set the pace and take the ball on the rise.
Sources
- TechRadar, Sept 9, 2025: Reporting on Google preparing to make AI Mode the default with a dedicated entry point and significant usage.
- The Guardian, Sept 6, 2025: Publishers report up to 89% traffic declines tied to AI Overviews/AI Mode.
- The Verge, Sept 9, 2025: Google acknowledges in a court filing that the “open web is already in rapid decline” (framed around open‑web display advertising).
- Google Search Central Docs: GoogleOther and Google‑Extended crawler guidance.
- Perplexity Blog/Docs, Aug–Sept 2025: Comet browser expansion to Enterprise Pro; crawler user‑agents and inclusion guidance.
- Reuters, Sept 10–12, 2025: Perplexity funding at $20B valuation; FTC probes of Google and Amazon search ads; Microsoft–OpenAI MoU.
- OpenAI, Oct 31, 2024–Apr 28, 2025: ChatGPT Search product and publisher guidance; OAI‑SearchBot for inclusion.
- arXiv, Sept 6–10, 2025: New GSEO/CC‑GSEO‑Bench research framing content influence in generative search.
- The Verge, Sept 11, 2025: Roku signals expansion of AI‑generated ads to dramatically increase SMB participation.