RSL Robots.txt Licensing: Win AI Answer Visibility With Pay Per Inference Pricing
On September 10, 2025, the Really Simple Licensing (RSL) standard added machine readable terms to robots.txt. Learn how to publish category level pay per inference pricing so answer engines prefer your pages as the cheapest compliant source.

Vicky
Sep 23, 2025
Summary
AI answer engines are shifting from unlicensed scraping to predictable licensed supply. With the September 10, 2025 launch of Really Simple Licensing, you can declare pay per inference terms in robots.txt. If you become the cheapest compliant source for high intent questions in your category, your content is more likely to be retrieved, cited, and included in training under your terms.
What changed on September 10, 2025
- RSL added a simple way to expose licensing and pricing signals alongside crawl rules.
- Answer engines can evaluate cost, usage rights, caching windows, and training allowances before retrieval.
- Retrieval selection can now be modeled as an optimization problem that balances answer quality, latency, and predictable cost.
RSL sits alongside your existing robots.txt directives. It does not replace standard crawl controls. It adds machine readable business terms that engines can compare at request time.
Core idea: win the cheapest compliant source slot
- Engines often prefer compliant sources with clear, predictable pricing.
- If your page is relevant and your per inference price is lowest in a category, you increase the probability of being chosen for retrieval and citation.
- This expands brand exposure in answer boxes while keeping costs measurable for both sides.
For broader surface optimization, pair this with the BAEO playbook for Chrome Gemini so your content is eligible and attractive once pricing tilts selection in your favor.
Definitions you will use
- Inference: a real time model call that consults your content to answer a user query.
- Training inclusion: permission for your content to be used in model training or fine tuning under specified terms.
- Category: a path or pattern grouping of pages that share similar intent and pricing logic.
How to declare terms in robots.txt using RSL style fields
Place structured directives next to your user agent rules. Keep them human readable and machine parseable.
User-agent: ai-answer-bot
Allow: /answers/
RSL-Terms: inference.ppi=0.002 USD; cap.daily=75 USD; cache.max_age=7d; attribution=required
RSL-Categories: /answers/loans/* ppi=0.004 USD; /answers/compare/* ppi=0.0015 USD; /blog/* ppi=0.0005 USD
RSL-Training: allow=false
RSL-Contact: licensing@example.com
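If you build tooling on either the publisher or the engine side, the semicolon delimited fields above are simple to parse. A minimal sketch in Python, assuming the field layout shown in this article (the field names and separators are illustrative, not a finalized spec):

```python
# Sketch: parse the illustrative RSL style fields above into a dict.
# Field names such as RSL-Terms follow this article's examples,
# not a finalized specification.

def parse_rsl_terms(value: str) -> dict:
    """Split 'key=val; key=val' pairs into a dictionary."""
    terms = {}
    for pair in value.split(";"):
        pair = pair.strip()
        if "=" in pair:
            key, _, val = pair.partition("=")
            terms[key.strip()] = val.strip()
    return terms

line = ("RSL-Terms: inference.ppi=0.002 USD; cap.daily=75 USD; "
        "cache.max_age=7d; attribution=required")
field, _, value = line.partition(":")
terms = parse_rsl_terms(value)
print(terms["inference.ppi"])  # 0.002 USD
print(terms["cache.max_age"])  # 7d
```

Keeping the format this regular is what lets engines sort prices across sites without custom scraping logic.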
Tips
- Use consistent units and decimals so engines can sort fairly.
- Include last modified timestamps on target pages to encourage recrawl.
- Understand cache economics with HTTP Cache-Control basics. Use cache windows that balance freshness and cost.
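To keep your declared window and your HTTP caching consistent, you can derive the Cache-Control value from the same cache.max_age setting. A small sketch, assuming a simple s/m/h/d suffix convention for windows (the suffix set is an assumption for illustration):

```python
# Sketch: convert a cache window such as "7d" (the cache.max_age value
# in the example terms) into a standard Cache-Control header value.
# The s/m/h/d suffix convention is an assumption, not part of a spec.

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def cache_control_for(window: str) -> str:
    unit = window[-1]
    amount = int(window[:-1])
    return f"public, max-age={amount * UNIT_SECONDS[unit]}"

print(cache_control_for("7d"))  # public, max-age=604800
```

Serving the same window in both places avoids a mismatch where the engine believes it may cache longer than your HTTP layer allows.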
Pricing strategy for high intent pages
- Map value: tie price to expected commercial value of the intent, not page type alone.
- Estimate token use: model the average tokens retrieved per answer and set a price that remains attractive at typical context sizes.
- Use caps: publish daily or monthly spend caps so engines can plan budgets without surprises.
- Tier by category: charge more where differentiation is highest. Set near zero for awareness content to maximize citations.
- Offer training separately: allow training for low stakes content at a small cost or keep training disabled while allowing inference.
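The mapping from intent value to a price can start as simple arithmetic. A hypothetical planning sketch; every input number below is an assumption you would replace with your own funnel data:

```python
# Sketch: derive a per-inference price ceiling from intent value.
# All numbers are hypothetical planning inputs, not benchmarks.

def max_ppi(value_per_conversion: float,
            click_through_rate: float,
            conversion_rate: float,
            margin_share: float = 0.1) -> float:
    """Price no higher than a small share of the expected value
    one answer engine citation creates for you."""
    expected_value = value_per_conversion * click_through_rate * conversion_rate
    return round(expected_value * margin_share, 4)

# A loans comparison page: $120 per funded referral,
# 2% citation click-through, 5% conversion.
print(max_ppi(120.0, 0.02, 0.05))  # 0.012
```

Running this per category gives you the tiered price list the bullets above describe, with awareness content naturally landing near zero.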
Target the right inventory
- High intent Q and A hubs, calculators, comparison pages, and buying guides.
- Evergreen fact pages where quality and freshness matter.
- Pages with strong structured data and clear headings for chunking.
If your strategy includes browser based answer surfaces, review Session SEO for cross tab answers to align site architecture with session level retrieval.
Technical checklist
- Publish robots.txt with RSL directives and validate against a test crawler.
- Include last modified headers or tags on target pages.
- Provide lightweight snippets and canonical tags to reduce duplication risk.
- Mark non negotiable pages with Disallow or use a deliberately high price to throttle access.
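As a quick validation step, Python's standard library robots.txt parser ignores fields it does not recognize, so you can confirm that adding RSL style lines leaves your Allow and Disallow rules intact:

```python
# Sketch: confirm crawl rules still parse once RSL style lines are
# added. urllib.robotparser skips unrecognized fields, so the check
# covers Allow/Disallow behavior only, not the RSL terms themselves.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ai-answer-bot
Allow: /answers/
Disallow: /
RSL-Terms: inference.ppi=0.002 USD; cap.daily=75 USD
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())
print(parser.can_fetch("ai-answer-bot", "/answers/loans/rates"))  # True
print(parser.can_fetch("ai-answer-bot", "/private/report"))       # False
```

A real RSL aware crawler would also read the terms; this only proves the added lines do not break standard crawl control parsing.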
Measurement plan
- Track user agent level hits and associate each hit with category and inferred price.
- Monitor share of answer: the percent of engine answers that cite your brand for targeted queries.
- Watch training usage signals if engines provide transparency events or receipts.
- Compare before and after launch periods to quantify uplift.
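Share of answer can be computed from whatever citation data you manage to collect. A sketch with a hypothetical log shape; replace the answers list with your own monitoring output:

```python
# Sketch: compute "share of answer" from observed engine answers and
# the domains each one cited. The data shape is hypothetical; adapt
# it to whatever logs or transparency reports you actually receive.

answers = [
    {"query": "best personal loan rates", "cited": ["example.com"]},
    {"query": "loan vs line of credit", "cited": ["rival.com"]},
    {"query": "compare savings accounts", "cited": ["example.com"]},
]

def share_of_answer(answers: list, domain: str) -> float:
    cited = sum(1 for a in answers if domain in a["cited"])
    return cited / len(answers)

print(f"{share_of_answer(answers, 'example.com'):.0%}")  # 67%
```

Tracked per category, this is the number your before and after comparison should move.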
Governance and risk controls
- Set separate policies for inference and training to protect premium IP.
- Require attribution where legally feasible and signal it in RSL.
- Publish contact and dispute channels in RSL for takedowns or renegotiation.
- Rotate signed access tokens if your implementation supports authenticated retrieval.
- For brand owned assets and connectors, align with your Private Graph AEO strategy.
Optimization loop
- Start with conservative prices. Lower ppi until you become the preferred source without eroding perceived value.
- Expand categories once you see consistent citations and referral traffic from answer engines that click through to sources.
- A/B test cache windows. Shorter windows can improve freshness ranking but may raise engine costs.
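Before running that test, you can reason about the cost side of cache windows with rough arithmetic. A sketch under one simplifying assumption: with steady demand, each cached page triggers about one billable refresh per window, and all other hits within the window are free:

```python
# Rough engine-side cost model for cache windows.
# Assumption: each cached page is re-billed roughly once per cache
# window while demand is steady; cache hits inside the window are free.

def daily_cost(pages: int, ppi: float, cache_window_days: float) -> float:
    return pages * ppi / cache_window_days

# 500 cached pages at a 0.002 USD per-inference price:
for window in (1, 3, 7):
    print(f"{window}d window: ${daily_cost(500, 0.002, window):.2f}/day")
```

Shortening the window multiplies the engine's spend, which is exactly the tradeoff the bullet above describes: freshness up, engine cost up.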
30 day rollout
- Days 1 to 7: audit Q and A inventory, group into categories, draft legal positions for inference and training.
- Days 8 to 14: implement robots.txt RSL terms, set initial prices and caps, deploy logs and dashboards.
- Days 15 to 21: run competitive checks on comparable pages. Tune pricing to become cheapest compliant source in at least three high intent categories.
- Days 22 to 30: measure citation share, adjust prices and caps, expand to second tier categories.
Signals that your pricing is working
- Increased retrieval frequency from answer engine user agents on targeted paths.
- More consistent citation lines pointing to your pages.
- Stable or rising answer box referrals without traffic cannibalization on core navigational pages.
FAQ
- What counts as an inference if an engine hits multiple pages? Treat each qualifying retrieval session as one billable inference unless your terms define multi hop pricing.
- How do caches affect price? Set cache.max_age. Engines can reuse cached content within that window without a new charge if your terms allow it.
- Can I block training while allowing answers? Yes. Declare RSL-Training allow=false while keeping inference terms enabled.
- What if engines ignore my terms? Keep Disallow on sensitive paths. Use detection and enforcement processes. Engage via the contact channel you publish in RSL.
Key takeaways
- Publish clear per inference pricing now that RSL exists.
- Price by category to match intent value and token cost.
- Aim to be the cheapest compliant source where brand exposure yields measurable returns.
- Monitor and iterate so your RSL aware AEO program compounds over time.