Google Disables num=100: Reset SEO Baselines and Rebuild Rank Reporting
Google quietly stopped honoring the num=100 results parameter, scrambling rank trackers and skewing familiar KPIs. Here is how growth leaders can reset baselines, rebuild rank reporting with paginated sampling, and add answer engine metrics so forecasts and accountability stay intact.

Vicky
Oct 1, 2025
Breaking change, broken dashboards
Sometime between September 10 and 14, 2025, Google stopped honoring the long‑standing num=100 URL parameter that forced 100 organic results per page. The industry first saw it as a glitch, then Google clarified that the parameter is not something the company formally supports. That single statement means the shortcut many rank trackers used for a decade is gone, and the ripple effects are now hitting executive dashboards. See Google’s statement as reported by Search Engine Land in mid September 2025 for the canonical account of the change, including timing and context around rank‑checking instability: Google confirms results‑per‑page parameter is unsupported.
If your weekly scorecard suddenly showed fewer impressions, fewer ranking keywords, and a jump in average position, you are not alone. This is not a sudden collapse in demand, and it is not a magic ranking windfall. It is a measurement regime change.
Why num=100 mattered so much
For years, many rank trackers and internal scripts relied on num=100 to collect 100 results in one request. That enabled efficient, stable sampling of deeper rankings. Without it, tools must paginate, collect results page by page, and reconcile positions across more requests. The practical outcomes are simple to grasp:
- Fewer easy snapshots of the top 100, so fewer long‑tail positions get captured in a single crawl
- More request volume per query, so cost and complexity go up for vendors and in‑house scripts
- More variability in what gets seen on any given day, especially beyond page 1 and 2
Most leadership dashboards, especially those rolled up from third‑party trackers, were built on the assumption that consistent 100‑result pages existed. That assumption no longer holds.
The KPI whiplash you are seeing, explained with data
Search Engine Land summarized one of the first broad datasets after the change. Across hundreds of properties, impressions fell for most sites and the number of unique ranking terms declined, while the share of top positions appeared to improve. The simplest explanation is that fewer deep positions are being counted, so averages skew upward and impression counts shrink. Review the analysis for headline stats and methodological notes: 77 percent of sites lost keyword visibility.
Two implications matter for executive reporting:
- Lower impressions and keyword counts do not automatically mean lost demand. They may reflect fewer deep positions being observed.
- Rising average position does not automatically mean you jumped in the rankings. It may reflect a smaller denominator of counted queries.
Reset the baseline, do not chase the noise
Treat mid September 2025 as a measurement epoch change. If you keep your old baselines, you will reverse‑engineer the wrong causes and make bad bets. The most effective step you can take this week is to declare a baseline reset with a crisp memo and a dated annotation across all dashboards.
- Freeze your pre‑change period for comparisons. For example, set January 1 to September 10, 2025 as pre‑change and September 15 onward as post‑change, as in the tagging sketch after this list.
- Add a visible annotation on your analytics and BI tools with the exact date this change affected your reporting. Require teams to specify which side of the baseline they are referencing in every slide or report.
- Recast OKRs that rely on search impressions or keyword counts as directional, not absolute, for Q4 2025. Tie bonuses and goals to traffic, leads, and revenue rather than proxy volume metrics that were distorted by the change.
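To make the reset auditable in your BI layer, here is a minimal pandas sketch for tagging rows with the measurement epoch. The file and column names are assumptions; adjust them to your own warehouse schema.

```python
import pandas as pd

# Hypothetical daily Search Console export; names are assumptions.
df = pd.read_csv("gsc_daily_metrics.csv", parse_dates=["date"])

PRE_START = pd.Timestamp("2025-01-01")
PRE_END = pd.Timestamp("2025-09-10")
POST_START = pd.Timestamp("2025-09-15")

def label_epoch(date: pd.Timestamp) -> str:
    # Tag each row so every chart and pivot carries the baseline split.
    if PRE_START <= date <= PRE_END:
        return "pre-change"
    if date >= POST_START:
        return "post-change"
    return "transition"  # September 11 to 14, exclude from comparisons

df["epoch"] = df["date"].apply(label_epoch)

# Never mix epochs in the same aggregate.
print(df.groupby("epoch")[["impressions", "clicks"]].sum())
```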
Messaging you can send to stakeholders today:
- “We are resetting SEO baseline metrics as of September 15 due to a change in how Google serves and counts results. Traffic and conversions remain the primary truth. Expect lower impression totals and fewer counted keywords without evidence of demand loss.”
Rebuild rank tracking with paginated sampling
You can still track rank distribution and movement, but you must rebuild how you sample and aggregate data. Here is a pragmatic method marketing leaders can deploy with internal analysts or vendors.
- Define tiers of intent
- Tier A, revenue pages and high‑intent landing pages
- Tier B, strategic mid‑funnel pages
- Tier C, long‑tail and emerging content
- Create query panels per tier
- 100 to 250 core queries for Tier A
- 250 to 500 for Tier B
- 1,000 to 3,000 for Tier C
- Paginated collection plan
- Track page 1 daily for Tier A and B, since commercial outcomes are most sensitive there
- Track to page 3 weekly for Tier B, and to page 5 every two weeks for Tier C
- Use rotating samples. For Tier C, split the panel into five cohorts and crawl one cohort each weekday to smooth infrastructure load and reduce variability
- Normalize positions across pages
- When paginating, record both absolute position and page number. This enables you to calculate page‑relative CTR models accurately, even as collection windows change
- Aggregate into stable metrics
- Build a Rank Exposure Index. Example, assign weights of 1.0 for positions 1 to 3, 0.6 for positions 4 to 10, 0.3 for positions 11 to 20, and 0.1 for positions beyond 20. Multiply weights by monthly search volume for each query, then sum by tier. The index moves smoothly even when the exact set of observed positions fluctuates, as shown in the sketch after this list
- Set collection SLAs
- Lock daily collection windows. If you crawl between 06:00 and 09:00 UTC, keep that cadence to reduce diurnal volatility
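To make the normalization, weighting, and cohort rotation concrete, here is a minimal Python sketch. It assumes ten organic results per page and uses the example weights above; every name and number is illustrative rather than a standard.

```python
import hashlib
from collections import defaultdict

RESULTS_PER_PAGE = 10  # assumption; adjust to what your crawler observes

def absolute_position(page: int, rank_on_page: int) -> int:
    """Normalize a paginated observation into an absolute SERP position."""
    return (page - 1) * RESULTS_PER_PAGE + rank_on_page

def exposure_weight(pos: int) -> float:
    """Example weights from above: 1.0, 0.6, 0.3, 0.1 by position bucket."""
    if pos <= 3:
        return 1.0
    if pos <= 10:
        return 0.6
    if pos <= 20:
        return 0.3
    return 0.1

def rank_exposure_index(observations, monthly_volume, tier_of):
    """Sum weight times monthly search volume per tier.

    observations: iterable of (query, page, rank_on_page)
    monthly_volume: dict mapping query to monthly search volume
    tier_of: dict mapping query to "A", "B", or "C"
    """
    index = defaultdict(float)
    for query, page, rank in observations:
        pos = absolute_position(page, rank)
        index[tier_of[query]] += exposure_weight(pos) * monthly_volume.get(query, 0)
    return dict(index)

def weekday_cohort(query: str, cohorts: int = 5) -> int:
    """Deterministic split of a Tier C panel into rotating weekday cohorts."""
    digest = hashlib.md5(query.encode("utf-8")).hexdigest()
    return int(digest, 16) % cohorts  # cohort 0 on Monday, 1 on Tuesday, and so on
```

Feeding the same panel and the same weights into every period is what makes the index comparable. If you change the weights, treat it as another baseline reset.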
This sampling approach does two things a CFO will appreciate. It caps infrastructure commitments, and it yields a defensible, consistent view of exposure that is comparable period to period.
Blend platform data to triangulate truth
With rank snapshots more volatile, triangulation matters. Combine three data planes in your weekly packet.
- Platform telemetry, Google Search Console query and page reports for directional trends, annotated at mid September
- Analytics truth, sessions, assisted conversions, pipeline and revenue, segmented by organic search and by landing page intent tier
- Rank exposure, your paginated sampling metrics and share of voice estimates for competitive benchmarks
When these three planes agree, you have signal. When they diverge, you have a hypothesis to test, not a crisis to explain in a board deck.
Rethink your KPI dictionary
Now is the moment to prune vanity metrics and elevate decision‑useful ones.
- Replace raw keyword counts with unique keywords in the top 20 and in the top 3, counted within a consistent sampling frame
- Replace average position with median position by tier and a weighted position index, since medians are less sensitive to outliers, as in the sketch below
- Replace total impressions with impressions by page bucket, page 1, page 2, pages 3 to 5. If pages 3 to 5 shrink, that is a measurement artifact, not necessarily a demand signal
Document these definitions in your KPI dictionary and require any vendor or internal analyst to adopt them in dashboards and reports.
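As a worked example of these definitions, here is a short pandas sketch. The tidy rank table and its column names are assumptions about your own sampling output, not a fixed schema.

```python
import pandas as pd

# Assumed tidy rank table from the paginated sampler:
# one row per observed query, with tier and absolute position.
ranks = pd.read_csv("weekly_ranks.csv")  # columns: query, tier, position

# Median position by tier, less sensitive to deep outliers than a mean.
median_by_tier = ranks.groupby("tier")["position"].median()

# Unique keywords in the top 20 and the top 3, within one sampling frame.
top20 = ranks[ranks["position"] <= 20].groupby("tier")["query"].nunique()
top3 = ranks[ranks["position"] <= 3].groupby("tier")["query"].nunique()

# Page buckets replace raw totals: page 1, page 2, pages 3 to 5.
bucket = pd.cut(ranks["position"], bins=[0, 10, 20, 50],
                labels=["page 1", "page 2", "pages 3-5"])
keywords_by_bucket = ranks.groupby(bucket, observed=True)["query"].nunique()
```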
Add answer engine metrics, or you will miss the next growth curve
Discovery and consideration now run through answer engines, not only through search results pages. If you measure only classic blue links, you will undercount influence and demand capture.
Track two families of answer engine signals.
- Assistant citations
- Definition, a visible link or mention of your domain or brand in an assistant answer or AI overview
- Where to measure, Google AI Overviews, Bing Copilot in Search, Perplexity, Brave Search AI answers, Arc Search, and other assistants that show sources. To scale Perplexity coverage and quality, see how to optimize Perplexity answer placement.
- How to measure, establish a weekly spot‑check schedule for your Tier A queries. Use screenshot logging for governance, capture cited domains, rank them by frequency, and compute your brand’s citation share per query group
- Referral clicks from assistants
- Definition, session entries that arrive from answer engines or assistant UIs
- Where to find, referrers such as Perplexity, Brave, Bing, and new assistants as they add distinctive referral information. Google often masks referrers, so use UTMs where permitted in deep links you control, for example links in your own documentation that assistants often cite. To expand top‑of‑funnel coverage in Google’s evolving surfaces, learn how to win voice and camera answers.
- How to measure, configure server‑side logging to capture unusual referrers and user agent strings. Map these to a lookup table in your BI environment. Add a weekly trend line for assistant referrals by landing page and query theme, as in the sketch after this list
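Here is a minimal sketch of both families, assuming a citation spot‑check log and a raw referrer field in your server logs. The file names, columns, and referrer patterns are illustrative and deliberately incomplete.

```python
import pandas as pd

# Citation share from weekly spot checks. Assumed log: one row per
# observed citation, with query_group and cited_domain columns.
citations = pd.read_csv("assistant_citations.csv")

OUR_DOMAIN = "example.com"  # replace with your own domain
totals = citations.groupby("query_group").size()
ours = (citations[citations["cited_domain"] == OUR_DOMAIN]
        .groupby("query_group").size())
citation_share = (ours / totals).fillna(0.0)  # share per query group

# Referral classification via a lookup table of substring patterns.
# Patterns are illustrative; Copilot traffic usually arrives under Bing
# referrers, and Google often masks referrers entirely.
ASSISTANT_PATTERNS = {
    "perplexity.ai": "Perplexity",
    "search.brave.com": "Brave",
    "bing.com": "Bing",
}

def classify_referrer(referrer: str) -> str:
    """Map a raw referrer string to an assistant label, else 'other'."""
    ref = (referrer or "").lower()
    for pattern, label in ASSISTANT_PATTERNS.items():
        if pattern in ref:
            return label
    return "other"
```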
Teams using Upcite.ai track both families together, because it is the combination of being cited and being clicked that predicts downstream product signups and revenue.
Forecasting under the new regime
Your previous forecast probably included impression growth and average position gains as leading indicators. Translate that into an exposure‑driven and conversion‑anchored model.
- Exposure input, your Rank Exposure Index by tier and share of citation in answer engines
- CTR curves, maintain two curves per tier, page‑based CTR for classic SERPs and assistant referral rates for answer engines
- Conversion input, landing page conversion rates segmented by traffic source and by content type. As assistant monetization evolves, proactively prepare for ChatGPT answer ads.
- Model output, sessions, pipeline, and revenue. Use scenario analysis with conservative, base, and optimistic exposure growth for each tier, as in the toy model below
Every quarter, refresh the CTR curves using observed outcomes, not vendor defaults. Answer engine interfaces change frequently, and click‑through behavior moves with design changes.
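To show the shape of such a model, here is a toy scenario sketch. Every number is a placeholder to be replaced with your own observed exposure, CTR, and conversion inputs.

```python
# Toy scenario model. Per-tier inputs: Rank Exposure Index, blended
# sessions per exposure point, and conversion rate. All placeholders.
SCENARIOS = {"conservative": 0.95, "base": 1.05, "optimistic": 1.15}

TIERS = {
    # tier: (exposure_index, sessions_per_exposure_point, conversion_rate)
    "A": (120_000, 0.010, 0.040),
    "B": (80_000, 0.008, 0.015),
    "C": (200_000, 0.004, 0.005),
}
ASSISTANT_REFERRALS = 6_000  # monthly sessions from answer engines
ASSISTANT_CVR = 0.030        # refresh quarterly from observed outcomes

for name, growth in SCENARIOS.items():
    sessions = sum(exp * growth * ctr for exp, ctr, _ in TIERS.values())
    sessions += ASSISTANT_REFERRALS
    conversions = sum(exp * growth * ctr * cvr
                      for exp, ctr, cvr in TIERS.values())
    conversions += ASSISTANT_REFERRALS * ASSISTANT_CVR
    print(f"{name}: ~{sessions:,.0f} sessions, ~{conversions:,.0f} conversions")
```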
What to ask from your vendors this quarter
- Roadmap dates for paginated collection, sampling quality guarantees, and how they will represent uncertainty in rank reporting
- Explicit documentation of how they infer positions beyond page 1 and how they normalize across data centers and personalization states
- An answer engine plan, ask how they will report assistant citations, associate citations with downstream clicks, and expose this data in your warehouse
- Data export commitments, you need hourly or daily exports to rebuild your own metrics and to audit their sampling
If a vendor cannot meet these requests, press for interim deliverables. A weekly CSV of ranks by page bucket and a JSON log of assistant citations will carry you through Q4.
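If you take that interim route, agree on a minimal schema up front. The sketch below shows one plausible shape; every file name, column, and field is an assumption to negotiate with the vendor, not an established export format.

```python
import json
import pandas as pd

# Weekly CSV of ranks by page bucket; assumed columns.
ranks = pd.read_csv("vendor_ranks_week.csv")
expected = {"query", "date", "page_bucket", "best_position"}
assert expected <= set(ranks.columns), "vendor schema drifted"

# JSON log of assistant citations; assumed fields per record:
# query, engine, cited_domain, captured_at.
with open("assistant_citations.json") as f:
    citation_log = json.load(f)

# Audit sampling coverage: unique queries observed per bucket per week.
print(ranks.groupby(["date", "page_bucket"])["query"].nunique())
```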
A 30‑60‑90 day plan for growth leaders
30 days, stabilize measurement and communication
- Reset baselines and annotate all dashboards with the mid September change
- Ship a revised KPI dictionary and a one‑page FAQ for executives
- Launch a paginated sampling pilot for Tier A and B queries
- Stand up assistant citation logging for your top 50 commercial queries
60 days, rebuild reporting and forecasting
- Expand query panels and lock collection SLAs
- Replace average position and raw keyword counts with your Rank Exposure Index and top‑bucket counts
- Add assistant referral metrics to your demand report and move two budget decisions through your spend committee using the new model
90 days, institutionalize the new stack
- Automate weekly QA of sampling coverage and assistant citation share
- Migrate executive dashboards to the new KPIs and drop deprecated metrics from recurring meetings
- Present a forecast update to Finance that reconciles traffic, pipeline, and revenue under the new measurement regime
Communicate to Finance and the board with clarity
Use specific dates and clear causal language. Your talking points:
- “On September 18, 2025, Google confirmed it no longer supports the parameter that allowed 100 results on one page. Our measurement changed, not our customers.”
- “We reset our baselines the week of September 15. Since then, our dashboards use a paginated sampling method that is more representative of reality.”
- “We added answer engine metrics, citations and referral clicks, to capture demand that classic SERP metrics miss.”
- “We continue to manage to traffic, pipeline, and revenue goals. We expect impression and keyword counts to be structurally lower going forward.”
Provide a one‑page appendix that defines the new metrics, shows a side‑by‑side view of pre‑change and post‑change trends, and explains why the new approach better maps to outcomes.
A note on governance and experimentation
You will be tempted to increase crawl depth to recreate the old world. Resist the urge to add cost without a plan. Instead, add measurement precision where it matters most. For example, expand Tier A query coverage to page 3 if you have proof that movement on page 2 correlates with meaningful revenue swings. Otherwise, invest those cycles in answer engine tracking or content experiments.
Keep your experimentation backlog focused on questions that matter under the new regime:
- Does improving assistant citation share lead to more branded search and direct traffic within two weeks?
- Do page 2 to page 1 lifts still deliver the same revenue delta as in 2023 and 2024?
- Which content formats are cited most often by assistants for your Tier A queries?
Conclusion, the metric stack is changing, lead the change
Google removing num=100 broke habits, not growth. The winners will be the teams that reset baselines decisively, rebuild rank reporting with paginated sampling, and add answer engine metrics that mirror how people actually discover and decide today. Bring Finance along with dated annotations and new definitions, rebuild exposure metrics that relate to real outcomes, and extend your funnel to include assistant citations and referral clicks. If your dashboards tell the truth and your forecasts tie exposure to revenue, you will maintain accountability and protect growth through the change, not in spite of it.