Brandon Ervin, Director of Product Management for Google Search Ads, recently discussed campaign consolidation, AI Max, and what advertiser control looks like in 2026 on Google’s Ads Decoded podcast. The conversation was serious and informed, and reflected a product team that understands advertiser concerns and is actively working to address them.
But the podcast is also incomplete. The gap between what Google said and what advertisers actually experience from their sales organization is large enough to warrant a direct response.
Ervin’s team is doing genuinely good work, but the platform’s structural incentives haven’t changed. Google’s evolving product is creating problems faster than it can solve them. Performance is now judged by economic standards, and that reshapes how a search ads audit should be performed.
Recent improvements to Google Search Ads
The recent improvements are genuine:
- Brand exclusions in Performance Max and Demand Gen.
- Site visitor and customer exclusions from PMax campaigns.
- Network-level reporting within bundled campaigns.
- Improved search term visibility.
- Brand and geo controls inside AI Max at the ad group level.
- Semantic modeling that doesn’t anchor on campaign or ad group IDs, reducing learning period risk during consolidation.
These are meaningful. They are also solutions to issues introduced by bundling, opacity, and aggressive automation rollout.
These products have been mercilessly shopped to advertisers since 2021, and the controls that make them usable arrived years after the sales push began.
The ability to separate brand from non-brand traffic inside PMax/AI Max should not be framed as innovation. It restores a fundamental distinction that previously existed by default. The ability to see network performance inside a bundled campaign is not an expansion of control. It restores visibility that was removed.
An audit must ask whether new tools are genuinely expanding control or merely reintroducing baseline transparency.
Table stakes: What everyone agrees on
Before the real audit begins, the fundamentals. These are uncontroversial and should already be in place:
- Run full ad extensions (sitelinks, callouts, structured snippets, image, call).
- Use automated bidding with intentional target-setting and conversion action selection (I recognize there are still holdouts here, but that seems crazy to me).
- Maintain negative keyword lists.
- Write ads relevant to the queries they serve.
- Audit automatically created assets for accuracy and brand safety.
- Cut Search Partners and Display expansion from Search campaigns.
- Separate brand and generic campaigns using brand controls.
- Exclude site visitors and past customers from prospecting campaigns where appropriate.
- Import offline conversion data (MQLs, SQLs, revenue, CLV, repeat rate) to feed the algorithm downstream signals.
- Weight conversion values by actual downstream conversion rates.
- Account for mobile vs. desktop performance gaps.
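The offline-import and value-weighting items above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical lead stages, close rates, and deal value; your CRM's actual stage names and rates would replace them:

```python
# Weight upstream conversion values by downstream close rates before
# importing them into Google Ads. All stage names, close rates, and the
# deal value below are hypothetical placeholders.
STAGE_CLOSE_RATE = {"form_fill": 0.05, "mql": 0.15, "sql": 0.40}
AVG_DEAL_VALUE = 10_000  # assumed average closed-won revenue

def weighted_conversion_value(stage: str) -> float:
    """Expected revenue for a lead at the given pipeline stage."""
    return STAGE_CLOSE_RATE[stage] * AVG_DEAL_VALUE

for stage in STAGE_CLOSE_RATE:
    print(f"{stage}: ${weighted_conversion_value(stage):,.0f}")
```

Passing these expected values instead of a flat per-lead value lets the bidding algorithm distinguish a form fill from a sales-qualified lead.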
Those are table stakes. The real audit begins after that.
What a 2026 search audit must focus on
With the prevalence of AI, advertisers need to focus on reconstructing economic visibility in systems designed around aggregation and automation.
Signal architecture
In the podcast, Ervin says “control still exists, it just looks different.” Ad controls (where, when, and to whom ads appear) still matter, but they are changing, and some would argue for the worse.
The old ad controls — exact match, manual bids, network selection, and device modifiers — gave advertisers direct influence over where ads appeared and what they paid.
The new controls, however, are indirect: they live in data quality, density, and selectivity. These inputs influence the algorithm, but the algorithm makes the final call.
An audit should focus on three questions:
- Quality: Are you importing revenue, pipeline stage, or qualified lead status, or only surface conversions?
- Density: Is there enough high-quality data for the model to learn from, or is it sparse and noisy?
- Selectivity: Are you intentionally limiting what Google can see, or are you passing everything indiscriminately?
With these new tactics, you might pass only net-new customers or only high-value customers. Most of the time, though, it is better to pass the densest and most predictive conversion set.
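The density-versus-selectivity trade-off above can be sketched as a simple rule: prefer the selective, high-value subset only when it is still dense enough for the model to learn from. The threshold and value cutoff here are illustrative assumptions, not recommendations:

```python
# Choose which conversion events to pass to the platform.
# min_weekly_volume and the $1,000 value cutoff are hypothetical;
# tune both against your own account's data density.
def select_conversions(events, min_weekly_volume=50):
    high_value = [e for e in events if e["predicted_value"] >= 1000]
    # Fall back to the full conversion set when the selective subset
    # is too sparse to train on reliably.
    return high_value if len(high_value) >= min_weekly_volume else events
```

The point is the fallback: selectivity without density starves the model, so the broad set wins whenever the filtered set gets thin.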
Incrementality
Google optimizes toward reported conversions, not incremental conversions. Brand search often captures existing demand. Retargeting often captures users already in motion. PMax/AI Max frequently blends these signals.
Ervin was asked: Are AI-driven campaigns over-indexing on warm brand traffic to inflate blended ROAS (return on ad spend)?
He doesn’t dispute the problem, but points to partial solutions: using brand controls, theming your account better, and running multi-campaign A/B tests.
If incrementality is not measured, automation amplifies non-incremental signals.
Marginal returns
Google reports a blended cost-per-action (CPA). For example, the first $50K of spend might return a $30 CPA, while the next $50K returns a $120 CPA.
With automation, money is spent until the blended metric falls within tolerance, meaning the last dollar is not spent efficiently. The vast majority of advertisers are bidding far beyond what they should be and have no idea it is happening.
An audit must:
- Plot spend against incremental conversions.
- Estimate marginal CPA at each spend tier.
- Identify diminishing return curves.
- Compare marginal CPA to lifetime value.
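The marginal CPA estimate in the audit steps above is simple arithmetic once spend and conversions are cumulated by tier. This sketch uses illustrative numbers consistent with the $30/$120 example earlier; the tier boundaries are assumptions:

```python
# Estimate marginal CPA per spend tier from cumulative spend and
# cumulative conversions. Numbers are illustrative: ~$30 blended CPA
# on the first $50K, ~$120 on the next $50K.
tiers = [(50_000, 1667), (100_000, 2084)]  # (cumulative spend, cumulative conversions)

prev_spend, prev_conv = 0, 0
for spend, conv in tiers:
    marginal_cpa = (spend - prev_spend) / (conv - prev_conv)
    print(f"${prev_spend:,}-${spend:,}: marginal CPA ${marginal_cpa:.0f}")
    prev_spend, prev_conv = spend, conv
```

Note that the blended CPA at $100K here is still roughly $48, which looks acceptable even though every dollar in the second tier is being spent at $120 per conversion. That gap is exactly what the blended metric hides.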
A lower target makes the algorithm more selective, competing in fewer high-value auctions. Google doesn’t suggest this, because it would mean less spend overall and lower bids across its auctions.
Query resolution and ability to lower targets
On the podcast, Ervin acknowledges that some AI Max matches can “look a little wonky” and says his team is working on exposing the model’s reasoning.
Query mapping has gotten meaningfully worse over the past several years: queries landing in the wrong ad groups, matching to keywords with different intent, and broad match pulling in traffic unrelated to the keyword.
AI Max has accelerated this — there’s been an increase in the volume of irrelevant queries flowing through AI Max campaigns, with no connection to the advertiser’s business or keywords in the account.
Meanwhile, Google’s recommendations consistently push toward broad matching and large themed ad groups.
The issue is not whether broad match works, but whether high-value intent is being diluted in larger, broader ad groups. Fewer ad groups mean advertisers cannot effectively or meaningfully lower targets without a massive structural negative keyword schema, so performance differences have to be large enough to justify the new structure.
An audit should:
- Extract full search term reports.
- Classify queries by intent tier.
- Compare CPA and lifetime value by query type.
- Quantify irrelevant or weakly related matches.
- Measure performance drift across match types.
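The classification and CPA comparison steps above can be automated from a search term export. This is a minimal sketch with made-up keyword rules and tier names; real audits would use richer classifiers and your own intent taxonomy:

```python
from collections import defaultdict

# Classify search terms into intent tiers with simple substring rules.
# The markers and tier names below are illustrative assumptions.
TIER_RULES = [
    ("high", ("buy", "pricing", "quote", "demo")),
    ("mid", ("best", "vs", "review", "comparison")),
]

def intent_tier(term: str) -> str:
    for tier, markers in TIER_RULES:
        if any(m in term.lower() for m in markers):
            return tier
    return "low"

def cpa_by_tier(rows):
    """rows: dicts with 'search_term', 'cost', 'conversions' keys,
    e.g. parsed from a search terms report export."""
    agg = defaultdict(lambda: {"cost": 0.0, "conv": 0.0})
    for r in rows:
        bucket = agg[intent_tier(r["search_term"])]
        bucket["cost"] += float(r["cost"])
        bucket["conv"] += float(r["conversions"])
    return {tier: (b["cost"] / b["conv"] if b["conv"] else None)
            for tier, b in agg.items()}
```

A tier with spend but no conversions (CPA of `None` here) is exactly the weakly related match volume the audit is trying to quantify.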
Network economics
Performance Max and Demand Gen bundle multiple networks into single campaigns, but offer limited visibility into which networks drive results. This makes it hard to cut the underperforming ones. The slow rollout of network-level controls systematically benefits Google’s less competitive inventory.
An audit must:
- Break out performance by network.
- Compare CPA and lifetime value by placement.
- Identify cross-subsidization.
- Determine whether weaker networks are relying on surplus from strong search inventory.
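The cross-subsidization check above reduces to comparing each network's CPA against the blended target. A simple way to express it is per-network surplus: conversions times target CPA, minus cost. The networks and numbers below are illustrative assumptions:

```python
# Surplus relative to a target CPA, per network. Positive surplus means
# the network is subsidizing the blend; negative means it is being
# carried by stronger inventory. All figures are hypothetical.
TARGET_CPA = 60.0
network_stats = {
    "search":  {"cost": 80_000, "conv": 2_000},  # $40 CPA
    "display": {"cost": 30_000, "conv": 250},    # $120 CPA
}

def surplus_by_network(stats, target):
    return {name: s["conv"] * target - s["cost"] for name, s in stats.items()}

print(surplus_by_network(network_stats, TARGET_CPA))
```

In this example the blended CPA sits at $55, under target, so the bundled campaign looks healthy even while display runs at double the target and is funded entirely by search surplus.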
Value redistribution
Combining these elements in your audit will help you succeed in this new world of search ads:
- Non-incremental traffic inflates conversion counts, making performance look better than it is.
- Looser match types expand where ads appear, diluting intent precision and forcing consolidated ad groups with blanket-level targets and bids.
- No clean marginal return visibility means it is much more difficult to find the point of negative return.
- Network bundling hides which channels actually perform.
The cumulative effect is that the surplus value generated by your best inventory and high-intent, high-converting search queries gets redistributed across Google’s weaker inventory (e.g., Display, YouTube, Discover, Gmail, crazy tail queries).
This is how a dwindling supply of valuable search queries ends up inflating the cost-per-click (CPC) of low-quality inventory.
The Ads Decoded episode: Is your campaign structure holding you back in the era of AI?
Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.