Google Ads API v22 Adds AssetGenerationService: A Two-Week Pilot for Performance Max and Demand Gen
Google Ads API v22 introduces AssetGenerationService to create on-brand text and image assets via the API. Use this two-week pilot to lift conversion rate, protect CPA, slow creative fatigue, and turn early wins into a repeatable refresh playbook for Performance Max and Demand Gen.

Vicky
Nov 7, 2025
What just launched, and why it matters
Google Ads API v22 introduced AssetGenerationService, a generative AI service for producing performance ad assets. In plain terms, developers can now request copy and images from Google, then drop them directly into campaigns. The service is currently in a closed beta, but the signal is clear: Google wants creative velocity to be a programmatic capability, not only a manual task. You should plan for this capability even if you are waiting on access. Google confirmed the v22 release on October 15, 2025, calling out generative asset support as a headline feature in its official blog announcement. See the details in the post, Announcing v22 of the Google Ads API. For specifics, review the v22 release notes.
For marketing leaders running Performance Max and Demand Generation, the timing is ideal. Many teams already centralize assets inside Asset Groups and use automated enhancements. AssetGenerationService raises the ceiling by filling asset gaps and refreshing creative on a predictable cadence. For context on how large marketers operationalize AI, see our analysis of the WPP and Google five-year AI deal.
What AssetGenerationService does
At a high level, the service provides two core methods: GenerateText and GenerateImages. Inputs can include your final URL, free-form prompts, keywords, and existing context from your account, such as an Asset Group. The output is a set of suggested headlines, descriptions, and visual variants that you can review, approve, and attach to campaigns.
The v22 release notes describe the capability as a closed beta, with new error handling via AssetGenerationError types and options to generate images by recontextualizing product visuals or by using URL and prompt inputs.
Three practical implications follow from this design:
- You own the workflow. The API returns suggestions; your systems decide what to keep, label, and test.
- Safety and policy fit into the same pipeline. The service returns structured errors and policy signals that you can route into QA.
- Creative refresh becomes measurable. Because assets are generated and attached by API, you can track them as cohorts in reporting and compare them to human-written baselines.
Where this changes creative operations
Performance Max and Demand Generation benefit from breadth of assets and steady refresh. In PMax, Asset Groups combine multiple headlines, descriptions, images, and videos into dynamic combinations. In Demand Gen, assets feed YouTube and Discover placements, where strong visuals and fresh hooks matter. AssetGenerationService lets you do the following at scale:
- Fill missing fields. If a product landing page lacks benefit-focused copy, GenerateText can draft options that match the page content.
- Produce variants on demand. When an offer or message begins to fatigue, you can request new angles tied to the same product and audience.
- Standardize naming and labeling. Generated assets can be labeled with the method, prompt, date, and campaign to power later analysis.
If you are formalizing creative operations, compare this approach with our Canva creative operating system playbook.
The two-week pilot, at a glance
You can validate value fast with a clean, low-risk design. The pilot below assumes you run both Performance Max and Demand Gen, with basic conversion tracking already in place.
Goals
- Improve conversion rate without raising cost per acquisition.
- Detect and slow creative fatigue earlier.
- Produce a documented, AI-assisted refresh playbook that your team can repeat.
Primary metrics
- Conversion rate by campaign and asset.
- Cost per acquisition by campaign and asset.
- Fatigue indicators: day-over-day decline in impression share for top combinations, a drop in click-through rate, or a lift in frequency without a matching lift in conversions.
Secondary metrics
- Time to first viable variant from brief to live.
- Asset approval rate: the percentage of generated assets that pass internal and platform policy review.
Week 0 prep, 60 to 90 minutes
Before Day 1, do three housekeeping tasks:
- Confirm access and libraries
- Ensure your developer team has v22 client libraries installed and authenticated in your environment.
- If you are not allowlisted for AssetGenerationService yet, plan the pilot steps that do not require generation, such as the measurement scaffolding, and ask your Google team for access. You can also pre-build prompts and workflows so you can start the day you are admitted.
- Define guardrails
- Brand voice: a five-sentence guidance document that defines tone, banned phrases, and claims that require legal review.
- Safety: a two-rule checklist that rejects unsubstantiated superiority claims and health or finance claims that trigger policy.
- Visual rules: logo placement, background color defaults, and alt text requirements for accessibility.
- Label taxonomy and governance
- Decide on a label convention, such as GENAI_v22_PROMPTA_2025_10_15; a helper sketch follows this list.
- Create a shared sheet or dashboard for prompts, sample outputs, and approvals.
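A tiny helper keeps the convention consistent across tools. A minimal sketch in Python, assuming the convention above:

from datetime import date

def genai_label(prompt_id: str, generated_on: date | None = None) -> str:
    # Builds labels like GENAI_v22_PROMPTA_2025_10_15 from a prompt ID
    # and the generation date, matching the convention above.
    d = generated_on or date.today()
    return f"GENAI_v22_{prompt_id.upper()}_{d.strftime('%Y_%m_%d')}"

# genai_label("promptA", date(2025, 10, 15)) == "GENAI_v22_PROMPTA_2025_10_15"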
Week 1, generate and go live
Day 1 to 2, prompts and first batch
- Select two PMax campaigns and one Demand Gen campaign with stable spend and clear conversion goals.
- For each, pick one Asset Group with room for new assets. Target a baseline set of 5 headlines, 3 descriptions, and 4 images.
- Draft two prompts per campaign. Example prompt for an apparel brand: “Write 30-character headline options for breathable running shirts with UPF protection, focus on hot weather comfort, keep reading level at grade 7.”
Technical step, request assets
Below is an illustrative sketch that mirrors common patterns in Google’s Python client library. The service is in closed beta, so treat the request and field names as assumptions, not production-ready code.
# Illustrative sketch: assumes the google-ads Python client configured
# for v22. AssetGenerationService is in closed beta, so the exact
# request and field names may differ from what ships.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage(version="v22")
asset_gen_service = client.get_service("AssetGenerationService")

# Text generation: headlines and descriptions grounded in the landing
# page, free-form prompts, and existing Asset Group context.
text_request = client.get_type("GenerateTextRequest")
text_request.customer_id = "1234567890"
text_request.final_url = "https://www.example.com/running-shirts"
text_request.prompts.extend(["breathable running shirts", "hot weather comfort"])
text_request.asset_group = "customers/1234567890/assetGroups/111"
text_response = asset_gen_service.generate_text(request=text_request)

# Image generation: recontextualize existing product visuals with a
# scene prompt tied to the same URL.
image_request = client.get_type("GenerateImagesRequest")
image_request.customer_id = "1234567890"
image_request.final_url = "https://www.example.com/running-shirts"
image_request.prompts.append("athlete outdoors, sunrise, lightweight shirt")
image_request.recontextualize_product_images = True
image_response = asset_gen_service.generate_images(request=image_request)

# Persist approved assets with AssetService, then link them via
# AssetGroupAssetService (see the sketch below).
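Persisting and linking use stable, generally available services, so this half of the pipeline can be built before beta access arrives. A minimal sketch for one approved headline, assuming the same client as above and placeholder resource IDs:

# Create the approved text asset with AssetService (generally available).
asset_service = client.get_service("AssetService")
asset_op = client.get_type("AssetOperation")
asset_op.create.text_asset.text = "Stay cool on every summer run"
asset_op.create.name = "GENAI_v22_PROMPTA_2025_10_15_headline_01"
response = asset_service.mutate_assets(
    customer_id="1234567890", operations=[asset_op]
)
asset_resource_name = response.results[0].resource_name

# Link the asset to the target Asset Group as a headline.
aga_service = client.get_service("AssetGroupAssetService")
link_op = client.get_type("AssetGroupAssetOperation")
link_op.create.asset = asset_resource_name
link_op.create.asset_group = "customers/1234567890/assetGroups/111"
link_op.create.field_type = client.enums.AssetFieldTypeEnum.HEADLINE
aga_service.mutate_asset_group_assets(
    customer_id="1234567890", operations=[link_op]
)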
Action checklist
- Human in the loop: route outputs to reviewers, apply your guardrails, and reject anything that violates brand or policy.
- Persist approved assets using AssetService, then attach to the target Asset Group using AssetGroupAssetService.
- Apply labels and store prompt text in a notes field or an external system so you can audit later.
Traffic allocation
- Keep it simple. Set a per-campaign cap so that generated assets account for 20 to 30 percent of eligible combinations in Week 1. This avoids shocking the auction while you watch early performance.
Measurement setup
- Create a daily query, using GoogleAdsService, that pulls conversion rate, cost per acquisition, and click-through rate by asset and by asset group. Include impression share and top combination metadata when available. Store the snapshot in your data warehouse.
- Add a fatigue detector: if an asset’s click-through rate drops more than 20 percent over a rolling 3-day window while impressions rise, flag it for replacement, as sketched below.
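The fatigue detector is simple enough to run against the daily snapshot directly. A sketch in pandas, assuming one row per asset per day with date, asset_id, impressions, and clicks columns:

import pandas as pd

def flag_fatigued_assets(df: pd.DataFrame) -> list[str]:
    # df has one row per asset per day: date, asset_id, impressions, clicks.
    flagged = []
    for asset_id, history in df.sort_values("date").groupby("asset_id"):
        history = history.assign(ctr=history["clicks"] / history["impressions"])
        recent, prior = history.tail(3), history.iloc[:-3].tail(3)
        if len(prior) < 3:
            continue  # not enough history to compare windows
        ctr_drop = 1 - recent["ctr"].mean() / prior["ctr"].mean()
        impressions_up = recent["impressions"].sum() > prior["impressions"].sum()
        if ctr_drop > 0.20 and impressions_up:
            flagged.append(asset_id)  # CTR down >20% while impressions rise
    return flagged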
Output of Week 1
- At least 3 approved headlines and 2 images per pilot campaign in rotation.
- Baseline performance for the holdout assets versus the generated variants.
Week 2, iterate and scale
Day 8 to 10, prune and replace
- Remove the bottom quartile of generated assets by conversion rate, or by a blended score that includes click-through rate and conversion volume.
- Generate one more batch with adjusted prompts. Example adjustments:
- Add a seasonal angle, such as gifts or holiday shipping deadlines.
- Switch from benefit-led to outcome-led framing, for example “stay cool on mile 10.”
- Tighten constraints, for example “30 characters max, include action verb, avoid brand name.”
Day 11 to 14, expand exposure
- Increase allocation to 40 to 50 percent of eligible combinations where generated assets beat the baseline on conversion rate or cost per acquisition.
- In Demand Gen, build one new ad variation per ad group that uses the top-performing generated image, then test two new primary texts driven by GenerateText outputs.
- In Performance Max, ensure the winning headlines appear in at least two Asset Groups so that combinations can explore new audiences tied to your signals.
Decision gates
- If cost per acquisition is down or flat and conversion rate is up, keep scaling. If cost per acquisition rises more than 10 percent, roll back to Week 1 settings and dig into search terms, placements, and audience signals before proceeding.
- Fatigue rule: if any asset shows a three-day decay of more than 25 percent in click-through rate with a flat conversion rate, replace it with a fresh variant. A small sketch of these gates follows this list.
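The gates reduce to a few comparisons; a sketch, with the baseline values taken from your pre-pilot averages:

def scaling_decision(cpa_now: float, cpa_base: float,
                     cvr_now: float, cvr_base: float) -> str:
    # Encodes the gates above: scale when CPA is flat or down and
    # conversion rate is up; roll back when CPA rises more than 10
    # percent; otherwise hold and investigate.
    if cpa_now > cpa_base * 1.10:
        return "rollback"
    if cpa_now <= cpa_base and cvr_now > cvr_base:
        return "scale"
    return "hold"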
Lifecycle and CRM teams can adapt the cadence outlined here using our MoEngage Merlin AI agents pilot.
How to evaluate success
Use three lenses to judge the pilot.
- Performance lens
- Conversion rate lift at the asset level compared to the holdout set.
- Cost per acquisition stability, within a 5 to 10 percent band of the pre-pilot average.
- Share of spend that flows to generated assets after your pruning step.
- Process lens
- Time to review and approve each batch.
- Percentage of generated assets that pass brand guardrails on the first try.
- Number of policy rejections. Keep this near zero by writing prompts that avoid prohibited claims and by enforcing internal rules.
- Durability lens
- Fatigue resistance: how long a generated asset stays in the top half of combinations.
- Reusability: whether an image that wins in Demand Gen also helps PMax.
Governance, policy, and quality
Keep this section short and firm.
- Human review is non-negotiable. Treat generated outputs as drafts until a human approves.
- Respect data boundaries. Do not include user-level data in prompts. Use public page content, product feeds, and brand-approved copy.
- Archive everything. Save the prompt, the generated text or image, and the reviewer decision. You may need this for audits.
- Accessibility matters. Provide alt text for images and avoid text that relies on color contrast alone.
What good prompts look like
Prompt craft is a performance lever. Use these patterns:
- Problem-benefit angle: “Write five 30-character headlines that highlight staying cool on long runs, avoid brand name, include an action verb.”
- Specificity: “Give three descriptions under 60 characters that focus on UPF protection and quick dry fabric.”
- Constraint plus tone: “Generate two headlines suitable for a safety conscious brand, friendly but direct, avoid superlatives.”
Keep prompts short, measurable, and free of forbidden claims. Create a library and reuse what works; a minimal sketch follows.
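A prompt library can be as simple as named templates with slots. A sketch, where the angle names and slot variables are illustrative:

PROMPT_LIBRARY = {
    "problem_benefit": (
        "Write five 30-character headlines that highlight {benefit}, "
        "avoid brand name, include an action verb."
    ),
    "specificity": (
        "Give three descriptions under 60 characters that focus on {features}."
    ),
    "constraint_tone": (
        "Generate two headlines suitable for a {brand_trait} brand, "
        "{tone}, avoid superlatives."
    ),
}

def build_prompt(angle: str, **slots: str) -> str:
    return PROMPT_LIBRARY[angle].format(**slots)

# build_prompt("problem_benefit", benefit="staying cool on long runs")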
Reporting queries you will actually use
Ask your team to automate three queries as part of the pilot.
- Asset performance by label
- Pull impressions, clicks, conversions, conversion rate, and cost per acquisition by asset label so that you can aggregate all GENAI_v22 assets across campaigns; see the query sketch after this list.
- Asset group combination insights
- Use top combination views to see which pairings of headline and image are winning. This is the fastest way to identify synergy rather than isolated success.
- Fatigue alert
- A scheduled job that compares each asset’s rolling three-day click-through rate to its prior seven-day average. If the ratio drops below 0.75 while frequency rises, push a Slack alert to the owner.
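For the label query above, here is a GAQL sketch using GoogleAdsService search. It assumes asset-level metrics are queryable on asset_group_asset in your account and that the naming convention is stored in asset.name, as in the earlier sketch; adjust fields to what your API version exposes:

googleads_service = client.get_service("GoogleAdsService")
query = """
    SELECT
      segments.date,
      asset.id,
      asset.name,
      asset_group_asset.field_type,
      metrics.impressions,
      metrics.clicks,
      metrics.conversions,
      metrics.cost_micros
    FROM asset_group_asset
    WHERE asset.name LIKE 'GENAI_v22%'
      AND segments.date DURING LAST_7_DAYS
"""
for row in googleads_service.search(customer_id="1234567890", query=query):
    clicks = row.metrics.clicks
    cvr = row.metrics.conversions / clicks if clicks else 0.0
    print(row.asset.name, row.segments.date, cvr)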
Risks, limits, and how to mitigate them
- Policy risk. Generative models can overclaim. Mitigation: hard-code banned phrases, run a vocabulary filter, and add legal review for sensitive categories.
- Duplicates and near duplicates. Image generators often produce similar results. Mitigation: compute a simple perceptual hash on images and reject near duplicates on ingest, as sketched after this list.
- Offer drift. Models may ignore a specific price or disclaimer. Mitigation: add those details to the prompt and validate with a regex or rule-based check before approval.
- Access limits. AssetGenerationService is a closed beta in v22. Mitigation: build the pipeline now and switch the generator on when you gain access. Meanwhile, treat your current human-written refresh as if it came from the same pipeline so you can compare apples to apples later.
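The near-duplicate check from the list above is a few lines with the imagehash package (one option among several perceptual hash libraries). A sketch, assuming generated images land on disk before review:

import imagehash
from PIL import Image

_seen_hashes: list[imagehash.ImageHash] = []

def is_near_duplicate(image_path: str, max_distance: int = 5) -> bool:
    # Perceptual hash: a small Hamming distance means the images look
    # nearly identical, even if the pixels differ.
    candidate = imagehash.phash(Image.open(image_path))
    for seen in _seen_hashes:
        if candidate - seen <= max_distance:
            return True  # reject on ingest
    _seen_hashes.append(candidate)
    return False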
Building the AI assisted refresh playbook
By the end of two weeks, you should have a first version of a playbook that covers:
- When to refresh, for example, every 14 days or when the fatigue alert triggers.
- What to refresh, priority given to Asset Groups where cost per acquisition is above target or where share of combinations is concentrated in one or two creatives.
- How to refresh, prompt templates, brand guardrails, approval steps, and traffic allocation rules.
- How to measure, a standard report that compares generated versus baseline assets on conversion rate, cost per acquisition, and time in top combinations.
Store this playbook where campaign managers can find it. Update it monthly as the model and your results evolve.
Where Upcite.ai fits
Teams use Upcite.ai to centralize creative experiments, keep prompt and asset labels consistent, and auto-assemble performance dashboards that compare generated and baseline assets. For many organizations, the biggest win is operational: reducing the cycle time from a brief to a live test while maintaining a human approval trail.
Rollout checklist after the pilot
If the pilot hits your performance and process targets, roll forward with caution and clarity.
- Scale to more Asset Groups, but keep a 50 percent cap on generated combinations until you see stable cost per acquisition for two more weeks.
- Expand prompts to new angles only after you document what worked in the pilot.
- Share learnings with creative and brand partners so that model patterns inform upcoming shoots and scripts.
The bottom line
Google Ads API v22 puts generative asset creation inside the same workflow you already use to build and report on campaigns. AssetGenerationService will not replace your brand voice or your best ideas, but it can keep your asset rotation fresh so those ideas show up in more auctions, more often. Run the two-week pilot described above, judge it on conversion rate, cost per acquisition, and fatigue, then publish your refresh playbook. The teams that operationalize generative assets now will enjoy fewer creative bottlenecks and faster test cycles in Q4 and beyond.