Italian Publishers Take Google AI Overviews to Agcom: How Marketers Can Win as Clicks Fall
Italy’s FIEG has asked Agcom to review Google’s AI Overviews, signaling a shift in search economics. Here is a two‑week audit, a citation‑share KPI, and a 30‑day plan to protect and grow traffic.

Vicky
Oct 19, 2025
What just happened, and why it matters
On October 15, 2025, Italy’s newspaper federation FIEG escalated its fight over Google’s AI-generated summaries by filing a formal complaint with the national communications authority Agcom. According to reporting from ANSA, the federation argues that AI Overviews reduce visibility and revenue for publishers and asks regulators to examine the feature under the EU’s Digital Services Act. See ANSA’s coverage of the FIEG complaint to Agcom.
For growth and marketing leaders, this is not just media policy. It is an early signal of the new economics of search. When a search engine places an AI answer at the top of the results, fewer users scroll, fewer users click, and fewer users reach your pages. Whether you run a newsroom, a marketplace, or a SaaS site, your acquisition model is increasingly competing with generated summaries. The logical response is to retool content so it can be selected, cited, and featured by these answer engines rather than bypassed by them.
The strategic shift, from ranking to being cited
Traditional search engine optimization rewarded ranking positions, snippets, and rich results. In an AI answer world, the unit of competition is the citation inside the summary and the follow-up carousel. The winners are the sources that the model trusts to ground its synthesis. That changes incentives in three ways:
- Your most valuable content is not always the long guide that ranks; it is the concise, source-ready passage that gets quoted.
- The key metric is not just position or impressions; it is the share of AI answers that reference your domain.
- Authority builds differently when models select passages at paragraph level rather than pages at domain level.
If you accept those premises, the job shifts from optimizing pages to optimizing paragraphs, questions, and statistics that answer engines can confidently surface.
Expect fewer organic clicks as AI summaries expand
Independent of the regulatory outcome, you should plan for lower click-through rates on queries where AI answers appear. In user behavior data analyzed by the Pew Research Center, Google users were less likely to click through on results pages with an AI summary and rarely clicked the sources cited. See the Pew study on AI clicks.
For marketers, the implication is straightforward. If your forecast assumes historical click curves from top three positions, you will overestimate traffic. Instead, segment your keyword portfolio by AI answer exposure, then re-forecast with conservative click rates for those segments. Think of it as zero-click risk management.
Build an AI Answer Audit in two weeks
You can rapidly assess exposure and opportunities with a pragmatic audit. Here is a step-by-step plan a lean team can execute in two weeks.
- Map your exposure
- Export your top 500 queries by impressions and conversions from analytics and your rank tracker.
- In a clean browser session, test a stratified sample of 150 queries across brand, informational, and transactional intent. Note whether an AI overview appears, how often, and which domains are cited.
- Classify each query as High, Medium, or Low answer exposure based on the frequency of the summary, the placement, and the number of visible source links.
- Quantify traffic risk
- For each exposure band, apply conservative click multipliers. Example starting point: High exposure 0.5x, Medium 0.7x, Low 0.9x of your historical click-through rate. Calibrate with your own data over time.
- Rebuild your traffic forecast and isolate the delta. This is your AI headwind. Share it early with product, revenue, and finance teams so targets are realistic.
- Benchmark your citation share
- For every query where an AI answer appears, record whether your domain is cited. If yes, capture the paragraph or snippet the model used and the position of your link.
- Compute citation share: citations to your domain divided by total citations on the sampled queries. Track this monthly for your priority topics.
- Identify the model’s favored formats
- Note what earns citations, for instance, bulleted definitions, direct Q&A, numbered steps, or a small table of stats. Catalog the patterns by topic.
- Observe which competitor pages are cited and what their passages look like. Often they are short, declarative, source-linked sentences. To speed discovery, consider aligning your short-term updates with this two-week audit agent approach.
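The exposure-banding and traffic-risk math above can be sketched in a few lines. This is a minimal illustration with hypothetical query data; the band thresholds and the 0.5x/0.7x/0.9x multipliers are the starting points suggested above and should be calibrated against your own numbers.

```python
# Hypothetical sample rows: (query, historical monthly clicks,
# share of sampled sessions where an AI overview appeared).
queries = [
    ("what is citation share", 1200, 0.85),
    ("brand x pricing", 3000, 0.30),
    ("brand x login", 9000, 0.05),
]

def exposure_band(ai_answer_frequency: float) -> str:
    """Classify a query by how often an AI overview appears for it."""
    if ai_answer_frequency >= 0.6:
        return "High"
    if ai_answer_frequency >= 0.2:
        return "Medium"
    return "Low"

# Conservative click multipliers per band, per the audit above.
MULTIPLIERS = {"High": 0.5, "Medium": 0.7, "Low": 0.9}

def forecast(rows):
    """Return (historical clicks, risk-adjusted clicks, AI headwind)."""
    historical = sum(clicks for _, clicks, _ in rows)
    adjusted = sum(
        clicks * MULTIPLIERS[exposure_band(freq)] for _, clicks, freq in rows
    )
    return historical, adjusted, historical - adjusted

hist, adj, delta = forecast(queries)
print(f"Historical: {hist}, adjusted: {adj:.0f}, AI headwind: {delta:.0f}")
```

The delta is the "AI headwind" figure to share with product, revenue, and finance.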
Design content for answer engines, not just pages
Once you know where the risk and opportunity sit, reshape templates and writing guidelines so your content is easy to cite. Think in blocks that a model can lift without ambiguity.
- Q&A blocks for specific intents
- Add a short Q&A section to authoritative pages, with each question answered in two or three crisp sentences. Put the highest demand questions first and avoid hedging language.
- Use explicit entities and numbers. For example, instead of “many Italian publishers,” write “the Italian Federation of Newspaper Publishers, FIEG.” Models favor concrete referents.
- Statistics blocks that can carry a story
- Create a Key statistics component with 3 to 7 current numbers, each with a named source and date.
- Structure these stats as standalone sentences so they can be extracted verbatim. Example: “In 2024, organic search generated 41 percent of signups for Product X, down from 52 percent in 2023.”
- Update these blocks on a fixed cadence, for instance quarterly, so you have fresh, time-stamped facts.
- Paragraph-first answers
- Begin major sections with a single, declarative paragraph that answers the main query directly, then expand below. This mirrors how answer engines prioritize a lead summary followed by supporting details.
- Schema as a clarity signal
- While rich result eligibility evolves, machine-readable clarity still helps systems understand intent. Use schema.org types where they are a good fit, such as QAPage or FAQPage for dedicated hubs, HowTo for procedures, and Dataset for data collections.
- Do not force schema on every page. It should reflect the true structure. Misleading markup is a negative trust signal.
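Where FAQPage markup genuinely fits a dedicated hub, the JSON-LD can be generated from your Q&A blocks. A minimal sketch, with placeholder questions and answers; the structure follows the schema.org FAQPage type.

```python
import json

# Placeholder Q&A pairs; substitute the real content of your hub page.
faq = [
    ("What is citation share?",
     "Citation share is the percentage of AI answers in a priority "
     "keyword set that cite your domain at least once."),
    ("How often should statistics blocks be refreshed?",
     "Quarterly, so each fact carries a current, explicit date."),
]

def faq_jsonld(pairs):
    """Build a schema.org FAQPage object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

print(json.dumps(faq_jsonld(faq), indent=2))
```

Emit the result inside a `<script type="application/ld+json">` tag, and only on pages whose visible content actually matches the markup.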
Introduce a new KPI, citation share
Traditional SEO dashboards emphasize impressions, rank, and clicks. Add a new metric that aligns to answer engines: citation share.
- Definition: the percentage of AI answers in your priority keyword set that include at least one citation to your domain.
- How to measure: for now, manual sampling or scripted checks. Log queries, answer presence, cited domains, and quote snippets. Sampling 150 to 300 queries monthly is enough to see movement.
- Targets: set topic-level goals, for example, 40 percent citation share on branded support queries, 20 percent on core category definitions, 10 percent on top-of-funnel comparisons.
- Governance: assign an owner, agree on a refresh cadence, and tie content sprints to citation gaps. For adjacent SERP changes that affect visibility, revisit tactics from this four-week PPC and SEO plan.
Teams can also align new lead capture and reporting with agent conversion KPIs, as outlined in this short guide to agent conversion KPIs.
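The citation-share definition above reduces to a simple ratio over your sampled log. A sketch with hypothetical records, matching the manual-sampling workflow described:

```python
# Hypothetical monthly sample: one record per tested query, logging
# whether an AI answer appeared and which domains it cited.
sample = [
    {"query": "ai overviews publishers", "ai_answer": True,
     "cited_domains": ["example.com", "news.example.org"]},
    {"query": "citation share kpi", "ai_answer": True,
     "cited_domains": ["competitor.com"]},
    {"query": "brand x login", "ai_answer": False, "cited_domains": []},
]

def citation_share(records, domain: str) -> float:
    """Share of AI answers in the sample that cite `domain` at least once."""
    answers = [r for r in records if r["ai_answer"]]
    if not answers:
        return 0.0
    cited = sum(1 for r in answers if domain in r["cited_domains"])
    return cited / len(answers)

print(f"Citation share: {citation_share(sample, 'example.com'):.0%}")
```

Note that queries without an AI answer are excluded from the denominator; they carry no citation opportunity.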
Playbook by query type
Different query classes require different tactics. Use this quick guide to match format to intent.
- Definitions and glossaries
- Build concise, one-paragraph definitions with a single-sentence summary, a one-sentence nuance, and one example. Include a short “also called” line for synonyms to cover alternate phrasings.
- Comparisons and versus queries
- Offer a compact comparison table with 5 to 7 rows and a one-paragraph takeaway. Provide a short scenario recommendation to help the model justify an answer.
- Processes and how-to
- Use a numbered list of 5 to 9 steps, each one sentence long, followed by pitfalls and a time estimate. Add a materials or prerequisites list to increase extractability.
- Regulations and policy
- Anchor to the regulator name and article numbers where possible, then summarize in plain language. Add an “effective date” line at the top to telegraph freshness.
- Statistics roundups
- Curate 5 to 10 authoritative numbers with sources and dates. Place the newest first and group by subtopic so models can assemble coherent paragraphs.
Engineering your page for extractability
Simple page-level changes can improve the odds that answer engines cite you.
- Headings that match queries
- Write H2s and H3s as natural questions and short statements that mirror search intent. Avoid clever wordplay that hides the topic.
- First link, best source policy
- For each key claim, link to the single most authoritative primary source you have. Redundant linking dilutes clarity. Keep outbound links tidy.
- Image captions that carry facts
- Put one clear fact in the caption, with a date or number. Some models pull captions as convenient summaries.
- Tables with headers and units
- Use simple tables with explicit units and clear headers. Avoid merged cells. Models do better when the structure is consistent.
- Clean, stable URLs
- Do not rotate URLs for seasonal content. If you must update, use on-page “last updated” time stamps and maintain redirects.
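Several of these checks can be automated. As a rough sketch, the snippet below pulls H2/H3 headings from a page with the standard-library `html.parser` and flags headings that neither ask a question nor carry enough words to signal a topic. The heuristics and the sample HTML are illustrative only; tune the rules to your own templates.

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect the text of H2 and H3 headings on a page."""

    def __init__(self):
        super().__init__()
        self._current = None
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.headings.append(data.strip())

def flag_unclear(headings):
    """Flag headings that are not questions and are suspiciously short."""
    return [h for h in headings if "?" not in h and len(h.split()) < 3]

# Hypothetical page fragment: one clear question, one opaque heading.
page = "<h2>What is citation share?</h2><h3>Magic beans</h3>"
audit = HeadingAudit()
audit.feed(page)
print(flag_unclear(audit.headings))
```

Run this over your top pages to produce a worklist of headings to rewrite as natural questions or short statements.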
Measuring the fallout without waiting for tools to catch up
Most analytics stacks do not yet break out impressions and clicks specifically tied to AI answers. Until platforms expose this directly, approximate the impact with triangulation.
- Before and after cohorts
- Create cohorts of queries where you observed AI answers appear. Track CTR, average position, and session depth trends before and after.
- Session enders
- Watch for rising rates of single-page sessions from search and shorter session duration on pages that align to high exposure topics. These can be secondary signals of zero-click behavior.
- SERP feature logs
- Maintain a weekly log of which features, including AI answers, appear for your priority keywords. Over a quarter, this becomes a reliable leading indicator.
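The before-and-after cohort comparison can be computed directly from your rank-tracker export. A minimal sketch with hypothetical CTR figures, comparing the cohort's average CTR before and after AI answers first appeared for those queries:

```python
from statistics import mean

# Hypothetical CTRs for a cohort of queries, keyed by query, measured
# before and after an AI answer was first observed on their SERPs.
before = {"ai overviews news": 0.21, "publisher clicks study": 0.18}
after = {"ai overviews news": 0.11, "publisher clicks study": 0.12}

def ctr_delta(before_ctr, after_ctr):
    """Average CTR before, after, and the relative change for shared queries."""
    shared = before_ctr.keys() & after_ctr.keys()
    b = mean(before_ctr[q] for q in shared)
    a = mean(after_ctr[q] for q in shared)
    return b, a, (a - b) / b

b, a, rel = ctr_delta(before, after)
print(f"CTR before {b:.2f}, after {a:.2f}, change {rel:.0%}")
```

A persistent negative relative change on the AI-answer cohort, absent on a control cohort, is the triangulation signal to watch.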
What the FIEG complaint signals about policy and product
FIEG’s filing highlights three long-running tensions that marketers should anticipate in product roadmaps and legal constraints.
- Visibility vs platform convenience
- Platforms want to reduce friction by answering in place. Publishers and brands need attention on their own properties to convert and monetize. Expect continued experiments that balance both, like expanded source carousels or opt-in commercial units.
- Fair use vs value transfer
- The models ground their answers in web content. The debate is how much value must flow back to the sources. Watch for frameworks that prioritize original reporting and fresh data in citations, which would reward those who invest in new information.
- Safety and accuracy
- Regulators focus on false or harmful summaries. That increases the premium on clear, unambiguous passages that can serve as safe citations, especially on sensitive topics.
Practical governance for your AI answer program
To make this real, treat AI answer optimization as a cross-functional program with defined owners and cadences.
- Roles and responsibilities
- SEO lead: owns the audit, targets, and measurement.
- Content lead: owns templates, briefs, and quality.
- Data lead: owns sampling, scripts, and dashboards.
- Legal and comms: review high-risk topics for wording and claims.
- Cadence
- Monthly, refresh the audit sample and update citation share. Quarterly, refresh statistics blocks and review schema usage. Twice a year, refactor top-performing articles to lift extractability.
- Definition of done for a page
- Contains one Q&A block with at least five questions, a statistics block with dated sources, a paragraph-first summary, and headings that map to top intents.
Forecasting and budgeting in a world of lower CTR
Your budget should assume that a portion of informational demand will not convert into sessions. Plan for that in three ways.
- Protect conversions
- Push more conversion paths into your pages that do win clicks, such as embedded demos, lead magnets, and calculators. Every visit must work harder.
- Diversify acquisition
- Reallocate a slice of search budget to channels that complement AI answers, for instance curated newsletters, partnerships with vertical aggregators, and targeted paid units on answer surfaces where available. Revisit bidding and content priorities using this four-week PPC and SEO plan.
- Monetize citations
- When your brand is repeatedly cited on a high value topic, use those paragraphs in paid creative and outreach. Citations are credibility signals that can lower acquisition costs elsewhere.
A 30-day sprint to win inclusion
If you need to move fast, use this focused plan.
Week 1
- Run the AI Answer Audit, set citation share baselines, pick three topics where you can realistically gain citations in 30 days.
Week 2
- Draft Q&A and statistics blocks for each target topic. Source two or three fresh numbers per topic with clear dates and attributions. Tighten lead paragraphs.
Week 3
- Publish updates, add schema where relevant, and submit for crawling. Brief social and partnerships for amplification to accelerate discovery.
Week 4
- Resample queries, log citations, compare against baseline, and plan next month’s targets. Share early wins and refine templates.
The bottom line
Regulatory actions, like FIEG’s complaint to Agcom, show that AI answers are reshaping the economics of search. You cannot control when or how answer engines expand, but you can control how citeable your content is and how rigorously you track inclusion. Expect fewer clicks when AI summaries appear. Counter that by auditing affected queries, adding Q&A and statistics blocks that models can lift with confidence, and adopting citation share as a primary KPI.
Actionable next steps
- Choose 150 priority queries and measure AI answer presence and citations this week.
- Add a Q&A and statistics block to your top five pages by revenue influence within two weeks.
- Set quarterly targets for citation share by topic and instrument a simple sampling workflow to track it.
- Socialize the new forecast with finance and product so plans reflect the AI headwind.