Google Open-Sources an MCP Server for Google Ads: Launch a Two-Week Read-Only Audit Agent
Google quietly released an open-source Model Context Protocol server for the Google Ads API, giving growth teams a safe way to let AI read account data. Use this two-week plan to auto-audit campaigns, stand up a KPI watchlist, and measure time to insight and optimization lift.

Vicky
Oct 17, 2025
What just shipped, and why it matters
Google released an open-source server that lets large language models connect to the Google Ads API through the Model Context Protocol. For growth and marketing teams, this connector turns read-only ad data into something your AI assistants can analyze on demand. See the official Google Ads MCP server on GitHub.
Within Google’s marketing solutions organization, engineers open-sourced a Google Ads MCP server that you can run locally or in a sandbox. It ships under an open-source license, carries a clear disclaimer that it is not an officially supported Google product, and includes instructions for connecting via the Gemini CLI.
The short version for busy marketers
- You can let an AI assistant inspect Google Ads structure and performance without granting edit permissions.
- You can schedule a daily audit, get a prioritized list of issues, and track how fast insights appear versus today’s workflow.
- You can define a KPI watchlist that the agent checks each morning and trigger notifications when thresholds are breached.
If you follow Google’s broader AI moves, this complements the recent WPP and Google AI partnership.
MCP in plain English
Model Context Protocol is a standard for exposing tools and data sources to AI models with clear resources, functions, and auth. Think of it as a single port that connects your assistant to many systems. A marketer can pull yesterday’s search term report, budget pacing, and asset-level metrics in a consistent way. For a primer, read Anthropic’s Model Context Protocol overview.
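To make the “single port” idea concrete, here is a minimal sketch of what an MCP client does under the hood, using the official MCP Python SDK. The server launch command and the tool name run_gaql are placeholders of our own; check the repository’s README for the actual command and tool names the server exposes.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the MCP server as a subprocess (placeholder command;
    # use the launch command from the repository's README).
    server = StdioServerParameters(command="python", args=["ads_mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server exposes: the client learns tools
            # at runtime instead of being hard-coded against one API.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool name; the real server's GAQL tool may differ.
            result = await session.call_tool(
                "run_gaql",
                arguments={"query": "SELECT campaign.name FROM campaign"},
            )
            print(result)

asyncio.run(main())
```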
What this unlocks for growth teams
By running the Google Ads MCP server in read-only mode, you can:
- Centralize campaign audits across accounts and regions without one-off scripts.
- Ask natural-language questions like “which Performance Max asset groups lost impression share due to budget yesterday?” or “which RSAs underperform brand baselines?”
- Watch a handful of KPIs, then explain changes with supporting GAQL evidence.
- Collapse manual effort so hours of spreadsheet pivots become a one-minute prompt with ranked opportunities.
Importantly, edit permissions stay off the table during the pilot. The agent reads data only, which lets you prove value quickly and safely before considering writes.
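Under the hood, questions like these resolve to GAQL. As one illustration, the budget-related impression share question maps to a query along these lines; the field names are genuine Google Ads API fields, but the exact query an agent generates will vary:

```python
# GAQL an agent might run for "which campaigns lost impression share
# to budget yesterday?" -- field names are real Google Ads API fields.
LOST_IS_TO_BUDGET = """
    SELECT
      campaign.name,
      metrics.search_budget_lost_impression_share,
      metrics.cost_micros
    FROM campaign
    WHERE segments.date DURING YESTERDAY
    ORDER BY metrics.search_budget_lost_impression_share DESC
"""
```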
A two-week pilot, step by step
Below is a practical sequence a growth team can run with minimal engineering support. The goal is to measure time to insight and optimization lift, while proving the safety case with read-only access.
Day 0, set up the guardrails
- Access model: provision a Google Ads user with the Read-only role for each test account and generate the API refresh token from that user.
- Credentials: point the server to your google-ads.yaml file and set a login customer ID that scopes access to test accounts to limit exposure.
- Server capabilities: expose only read resources and GAQL query tools. If you later test mutations, set validate-only flags first and require a change-review gate before enabling write scopes.
- Client: pick one assistant for the pilot, such as Gemini CLI or Claude Desktop, then add the MCP server to the client configuration.
- Logging: write all agent queries and responses to an audit log. Store GAQL queries with timestamps, campaign IDs, and top-line metrics. This becomes your evidence base; a minimal sketch of the log format follows.
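One simple way to structure that log is JSON Lines, one record per agent query. The record fields below are our own convention, not part of the repository:

```python
import json
import time

def log_agent_event(path, gaql, campaign_ids, summary_metrics):
    """Append one audit record per agent query (JSON Lines format).

    Field names are our own convention, not the Google Ads MCP server's.
    """
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "gaql": gaql,
        "campaign_ids": campaign_ids,
        "metrics": summary_metrics,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# One line per query: grep-able now, easy to load into pandas later.
log_agent_event(
    "agent_audit.jsonl",
    "SELECT campaign.name FROM campaign",
    [1234567890],
    {"cost_micros": 52_000_000},
)
```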
Week 1, automate the account audit
Objective: have the agent produce a daily audit by 9 a.m. local time that any account manager can skim in five minutes.
Scope the audit to issues that burn budget or delay decisions:
- Budget pacing and lost impression share with weekly and monthly trajectories.
- Conversion tracking health including sudden drops, new disapprovals, or tag misfires inferred from conversion lag shifts.
- Search term volatility with new themes, exact matches that slipped, and negatives colliding with top converters.
- Asset and creative coverage highlighting RSAs with poor asset diversity and Performance Max assets below median CTR.
- Bid strategy diagnostics to check drift from target CPA or ROAS and rising first-page CPCs.
- Geography and device skews that move more than a standard deviation from last week’s mix.
Each morning, the agent should output:
- A one-page summary with a risk score per account (a toy scoring sketch follows this list).
- A list of quick wins shippable in under one hour.
- A backlog of deeper analyses, such as auditing PMax asset mapping to product taxonomy.
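There is no standard formula for that risk score; the sketch below is one illustrative way to weight findings. The category weights and field names are made up for the example, so calibrate them to your own accounts:

```python
def account_risk_score(findings):
    """Toy risk score: weight each audit finding by how decision-critical
    its category is, scaled by the share of budget it touches.
    Weights and field names are illustrative, not from the repository."""
    weights = {
        "conversion_tracking": 5,  # broken measurement poisons everything else
        "budget_pacing": 3,
        "search_terms": 2,
        "creative_coverage": 1,
    }
    return sum(
        weights.get(f["category"], 1) * f["budget_share"]
        for f in findings
    )

# A tracking issue touching 40% of spend outweighs a creative gap on 10%:
# account_risk_score([
#     {"category": "conversion_tracking", "budget_share": 0.4},
#     {"category": "creative_coverage", "budget_share": 0.1},
# ]) == 2.1
```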
Use a consistent template so leaders know where to look. Many teams use Upcite.ai to keep an evidence trail, collect screenshots and charts, and auto-generate the daily executive summary.
Week 2, stand up a KPI watchlist
Objective: define the few metrics that matter, then let the agent watch them, explain moves, and alert only when material.
Recommended watchlist (tune values for your model; a code sketch of these rules follows the list):
- Cost per acquisition: breach if 7-day CPA is 15 percent above plan for two consecutive days.
- Return on ad spend: breach if blended ROAS drops 10 percent below target for three days.
- Impression share lost to budget: breach if above 20 percent on non-brand search for two days.
- Top of page rate: breach if below 60 percent on exact brand or below 35 percent on non-brand priority campaigns.
- Click share in Performance Max: breach if it drops 10 percent week over week without a matching rise in average CPC.
- Conversion rate: breach if seven-day CVR falls 20 percent below the 90-day median.
- Budget utilization: breach if month-to-date spend is below 70 percent of plan by the 20th.
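Encoded as data, the watchlist stays reviewable and easy to tune. A minimal sketch with a few of the thresholds above expressed as ratios against their reference (plan, target, or median); the structure and names are our own, not the server’s:

```python
# Each rule: the breach direction as a ratio vs. its reference, and how
# many consecutive days must violate it before the agent alerts.
WATCHLIST = {
    "cpa_vs_plan":       {"breach_above": 1.15, "days": 2},
    "roas_vs_target":    {"breach_below": 0.90, "days": 3},
    "is_lost_to_budget": {"breach_above": 0.20, "days": 2},
    "cvr_vs_90d_median": {"breach_below": 0.80, "days": 1},
}

def breached(daily_values, rule):
    """True when every one of the last N daily values violates the threshold."""
    n = rule["days"]
    recent = daily_values[-n:]
    if len(recent) < n:
        return False
    if "breach_above" in rule:
        return all(v > rule["breach_above"] for v in recent)
    return all(v < rule["breach_below"] for v in recent)

# 7-day CPA has run 18% then 16% over plan for the last two days -> alert:
# breached([1.02, 1.18, 1.16], WATCHLIST["cpa_vs_plan"]) -> True
```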
For each breach, the agent should:
- Explain the move with a few focused charts and tables, such as a CPA time series annotated with bid or budget changes.
- Attribute likely causes, such as auction pressure from named competitors or a shift in device mix.
- Propose a playbook action labeled as reversible or not, with effort estimates in minutes.
For related automation ideas, see our Gemini 2.5 KPI playbook.
How to measure the pilot
Track two outcomes: time to insight and optimization lift.
- Time to insight (TTI): measure minutes from data availability to a decision-grade summary in email or Slack. Baseline the current workflow first. The pilot succeeds if TTI drops by at least 70 percent without loss of accuracy.
- Optimization lift: measure incremental improvement in a target metric such as CPA, ROAS, or impression share. Assign a holdout across similar campaigns or regions. Run agent plus human review on test, business as usual on control, then compare week-over-week changes. If true randomization is hard, use a difference-in-differences comparison with matched controls (sketch below).
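Difference-in-differences reduces to simple arithmetic: the change in the test group minus the change in the matched controls. A minimal sketch:

```python
def did_lift(test_before, test_after, control_before, control_after):
    """Difference-in-differences: change in test minus change in control.
    For a cost metric like CPA, a negative result means the agent group
    improved more than the control group."""
    return (test_after - test_before) - (control_after - control_before)

# Weekly CPA example: test fell 42 -> 36 while control fell 41 -> 40,
# so the agent group improved CPA by 5 more than the control group.
assert did_lift(42, 36, 41, 40) == -5
```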
To maintain trust, publish a brief weekly methods note that documents any changes in agent behavior, new queries, and known blind spots. For a parallel blueprint, study our two-week agent ROI pilot.
Safety, privacy, and governance
Even with read-only access, handle data with layered defenses:
- Credentials and scoping: use a dedicated read-only user per test account, avoid personal accounts, and restrict the login customer to a sandbox hierarchy where possible.
- Network hygiene: run the server in a secure environment, restrict outbound egress if possible, and alert on anomalous query patterns.
- Tool surface area: expose only resource reads and GAQL queries until the pilot ends. When testing mutations, start with the API’s validate_only flag so requests are checked without persisting changes (see the sketch after this list).
- Data minimization: fetch only required fields, truncate result sets, and avoid raw query logs in user channels.
- Human in the loop: require approval for any playbook action with clear reversible steps.
- Vendor policy: treat the server as experimental per the repository disclaimer. Avoid sending personally identifiable information to LLMs and ensure contracts and privacy notices cover this use.
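When you do graduate to writes, validate_only is a native Google Ads API safety net. Here is a sketch with the official google-ads Python client and placeholder IDs; note this dry run goes straight through the client library, separate from the MCP server, which stays read-only during the pilot:

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
campaign_service = client.get_service("CampaignService")

# Build a pause operation against placeholder customer and campaign IDs.
operation = client.get_type("CampaignOperation")
campaign = operation.update
campaign.resource_name = campaign_service.campaign_path("1234567890", "111")
campaign.status = client.enums.CampaignStatusEnum.PAUSED
client.copy_from(
    operation.update_mask,
    protobuf_helpers.field_mask(None, campaign._pb),
)

request = client.get_type("MutateCampaignsRequest")
request.customer_id = "1234567890"
request.operations.append(operation)
request.validate_only = True  # the API validates the change but persists nothing

# Succeeds (or raises GoogleAdsException) without touching the live campaign.
campaign_service.mutate_campaigns(request=request)
```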
Example prompts to copy and paste
- Daily audit: “You are an ads analyst reading Google Ads via MCP. Pull yesterday and the prior 7 days. Report budget pacing, impression share lost to budget, conversion tracking anomalies, search term volatility, creative coverage, and bid strategy health. Rank issues by forecasted CPA impact, include GAQL snippets and a five item quick wins list.”
- KPI watchlist check: “Check the CPA, ROAS, impression share lost to budget, top of page rate, click share for PMax, and conversion rate against these thresholds. If any threshold is breached, produce a three-part explanation with likely causes and a reversible playbook recommendation.”
- Ad hoc diagnosis: “For campaign 1234567890, explain why CPA rose 22 percent week over week. Consider auction insights, query mix, device and geo shifts, asset-level CTR and CVR, and any recent budget or bid strategy edits.”
Architecture in one minute
- MCP client: Gemini CLI or Claude Desktop is the chat surface where you type prompts and receive results.
- Google Ads MCP server: a small Python application that exposes read-only GAQL capabilities as MCP resources and tools.
- Google Ads API: the data source, authenticated with a refresh token tied to a Read only user.
- Storage and logging: capture every query and response for audits and learning.
This reduces glue code because MCP standardizes discovery and invocation. The repository includes example client configurations for Gemini, so setup is mostly copy and paste.
Setup checklist
- Get credentials: developer token, client ID and secret, refresh token for a read-only user, and a login customer ID.
- Clone the repository, install Python 3.12 and a package manager like uv or pipx, then point the server’s environment to your credentials file.
- Register the server with your client. In Gemini CLI, add the server block and restart. In Claude Desktop, add the server through MCP settings.
- Run a smoke test (see the sketch after this checklist): list campaigns, pull seven-day metrics for one campaign, and confirm that no mutation endpoints are exposed.
- Schedule the agent to run the daily audit at 7 a.m. and the KPI watchlist at 8 a.m. Route output to Slack and email with a consistent subject line.
- Stand up a holdout: pick similar campaigns to exclude from the agent’s recommendations and tag them clearly.
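For the smoke test, the canonical “list campaigns” query from the Google Ads API documentation works directly against the same credentials the server uses; the customer ID below is a placeholder:

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# The canonical "list campaigns" GAQL from the Google Ads API docs.
query = """
    SELECT campaign.id, campaign.name
    FROM campaign
    ORDER BY campaign.id
"""

# Placeholder customer ID -- use your test account's ID without dashes.
stream = ga_service.search_stream(customer_id="1234567890", query=query)
for batch in stream:
    for row in batch.results:
        print(row.campaign.id, row.campaign.name)
```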
What good looks like after two weeks
- TTI falls from hours to minutes, for example from 180 minutes of manual reporting to under 20 minutes including human review.
- You can point to at least three quick wins shipped in under an hour with quantified impact.
- Weekly reviews shift from data wrangling to decisions because the agent arrives with evidence and a clear playbook.
- Leaders gain confidence due to an audit log of every query and recommendation.
Bottom line and next steps
MCP gives you a pragmatic way to let AI read what it needs from Google Ads under your control. The open-source server reduces setup friction, you can trial it without risky automations, and you can measure value in two weeks.
Action plan:
- Spin up the server in a sandbox today, wire a single client, and confirm read-only queries work using the GitHub repository.
- Run the two-week pilot described above, measure TTI and optimization lift, and publish the weekly methods note.
- If results are strong, expand to a second account, then evaluate graduated write capabilities with validate only guardrails and human approval.