Google Shopping · Ecommerce

Spend went up 170%. Revenue went up 227%. Here's why that gap is the whole story.

This home design retailer increased their Google Shopping spend — but revenue grew faster than spend. That gap isn't luck. It's what happens when you fix how Google understands your catalog: every extra dollar deployed on better products, in better auctions, at a better return than before. ROAS didn't suffer from the increased investment — it improved.

+227%
Revenue Growth
+36%
Average Order Value
+21%
ROAS Improvement

The ads were working. Just not on the right products.

If you sell products that come in multiple colours, sizes, or configurations — and most home design retailers do — you probably have hundreds or thousands of individual product listings in Google's system. Each variant (a cushion in sage, the same cushion in terracotta, the same cushion in ivory) gets tracked separately. And that's where this account's Shopping performance quietly started to unravel.

Google Shopping uses "custom labels" to classify products into performance tiers — bestsellers, mid-performers, new arrivals, and so on. These labels tell Google's Smart Bidding algorithm which products deserve competitive bids and which don't. The setup here was labelling each individual variant in isolation. So a product with 12 real sales — spread across 6 colourways — looked like 6 products with 2 sales each. Below the threshold to be considered a top performer. Quietly demoted to second-tier bids.

At the same time, the labels had no concept of product value. A $90 item that sold consistently and a $290 item that sold just as consistently were treated identically. Google had no reason to compete harder for the higher-value customer — it didn't know the difference.

The result: Google was spending the budget. But it was spending it on the cheaper, lower-margin products because those happened to look better in the data. The premium catalog was being systematically under-served.

The core issue

Google was following instructions perfectly. The instructions were wrong. Fix the instructions, and Google does the right thing — almost immediately.

Teach Google two things it didn't know: which products actually sell, and which are worth the most.

The rebuild followed a simple principle: before Google can make good decisions, it needs accurate information. We gave it two new signals it didn't have before — product-level sales performance (not variant-level), and product value (AOV tier).

All sales data was aggregated at the product group level first — combining every colourway, size, and configuration of a product into a single performance score before deciding how to classify it. A product needed 5 genuine sales in the past 60 days to be classified as a top performer. Under that threshold, it builds its history without being prematurely labelled. This alone reclassified a significant portion of the catalog — products that were actually strong sellers, finally visible to the algorithm as such.
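As a minimal sketch, the aggregation step looks like this. The product IDs, conversion counts, and variable names here are illustrative, not the retailer's actual data; the one real rule from the case study is the 5-conversion threshold at the product-group level.

```python
from collections import defaultdict

# Hypothetical variant-level sales rows: (item_group_id, conversions_60d).
# One cushion design in 6 colourways, 2 sales each -- 12 real sales,
# but no single variant clears the threshold on its own.
variant_sales = [
    ("cushion-01", 2), ("cushion-01", 2), ("cushion-01", 2),
    ("cushion-01", 2), ("cushion-01", 2), ("cushion-01", 2),
    ("lamp-07", 1),
]

# Aggregate conversions to the product-group level before classifying.
group_conversions = defaultdict(int)
for group_id, conversions in variant_sales:
    group_conversions[group_id] += conversions

STAR_THRESHOLD = 5  # >= 5 group-level conversions in the 60-day window

labels = {
    group_id: ("star" if total >= STAR_THRESHOLD else "build")
    for group_id, total in group_conversions.items()
}
# cushion-01 totals 12 group-level sales, so it is labelled "star" even
# though each variant only had 2. lamp-07 keeps building history.
```

Under SKU-level labelling, every one of those seven rows would be judged alone and all would miss the threshold; grouped first, the cushion design is correctly promoted.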

The second dimension added value tiers: products with an average order value 1.5× above the account average were tagged separately, so Google knew to compete more aggressively for those customer searches. For the first time, the algorithm had a reason to bid differently on a $290 product versus a $90 one.

The result was a five-tier system that gave every product in the catalog a clearly defined role — and gave Smart Bidding the context it needed to bid accordingly.

Tier 0 — Star
custom_label_0 = star

Classification Criteria

  • ≥ 5 conversions at item_group_id level in 60-day window
  • CVR at or above account median for the category
  • No feed disapprovals or price competitiveness issues
  • AOV label cross-referenced — high-AOV Star products get their own campaign priority

Campaign Assignment

High Priority
ROAS target: 20–25% below current avg
Headroom to scale, not a ceiling

Tier 1 — Proven
custom_label_0 = proven

Classification Criteria

  • 3–4 conversions at item_group_id level in 60-day window
  • CVR trending positively vs prior period
  • Sufficient clicks to establish reliable signal (≥ 40 clicks)

Campaign Assignment

Standard Priority
ROAS target: account standard

Tier 2 — High AOV
custom_label_1 = high_aov

Classification Criteria

  • Product AOV ≥ 1.5× account average (cross-label, not performance-based)
  • Applied in addition to performance label — not instead of it
  • Used to inform ROAS target adjustment on a per-product-group basis

Campaign Assignment

High Priority (if Star/Proven)
ROAS target: adjusted for margin tier

Tier 3 — Build
custom_label_0 = build

Classification Criteria

  • Below 3 conversions at group level in 60-day window
  • Sufficient feed health and price competitiveness to warrant spend
  • Includes new products building conversion history

Campaign Assignment

Low Priority (Catch-All)
ROAS target: account standard
Not a penalty — building signal

Tier 4 — Excluded
custom_label_0 = excluded

Classification Criteria

  • ≥ 20 clicks with 0 conversions in 30-day window
  • No feed issues explaining the zero-conversion rate
  • Price reviewed vs. competitors before exclusion is applied

Campaign Assignment

Excluded
Reviewed monthly — not permanent
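Taken together, the tier rules above can be sketched as a single classification function. This is an illustrative simplification: the function name, parameters, and the `feed_healthy` flag (which stands in for the CVR-vs-median, feed-disapproval, and price-competitiveness checks) are assumptions, not the account's actual implementation.

```python
def classify(conv_60d, conv_30d, clicks_30d, clicks_60d,
             aov, account_avg_aov, feed_healthy=True):
    """Return (custom_label_0, custom_label_1) for one product group.

    Sketch of the five-tier rules described in this case study; the
    CVR and price-competitiveness checks are folded into the
    feed_healthy flag for brevity.
    """
    # Cross-label: the value tier is applied in addition to the
    # performance tier, never instead of it.
    label_1 = "high_aov" if aov >= 1.5 * account_avg_aov else ""

    # Tier 4 -- >= 20 clicks with 0 conversions in the 30-day window.
    if feed_healthy and clicks_30d >= 20 and conv_30d == 0:
        return ("excluded", label_1)
    # Tier 0 -- >= 5 group-level conversions in the 60-day window.
    if feed_healthy and conv_60d >= 5:
        return ("star", label_1)
    # Tier 1 -- 3-4 conversions with enough clicks for a reliable signal.
    if conv_60d >= 3 and clicks_60d >= 40:
        return ("proven", label_1)
    # Tier 3 -- everything else builds history in the catch-all campaign.
    return ("build", label_1)
```

The ordering matters: the exclusion check runs first so that a zero-converting product cannot be rescued by stale 60-day history, and the value label is computed independently so a Build product can still carry `high_aov`.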

Why this matters if you sell products in multiple variants

A single wallpaper design available in 6 colourways and 4 roll sizes is 24 separate listings in Google's system. If each needs to independently hit a sales threshold to be called a "bestseller," most will never qualify — even if the design itself sells brilliantly. This is one of the most common and costly errors in Shopping accounts for home, fashion, and lifestyle retailers.

Fixing this alone reclassified a significant portion of the catalog from under-served to properly prioritised — products Google had been under-bidding on for months, finally getting the attention they deserved.

Nearly a third of the catalog moved from "underperformer" to "top priority."

When we switched to product-group-level classification, the catalog shifted dramatically across tiers. Products that had been sitting in low-priority campaigns — getting cautious bids and minimal impression share — were reclassified as Stars or Proven performers. Products incorrectly flagged as zero-sellers turned out to be variants of strong-performing designs that just hadn't individually accumulated enough clicks under the old rules.

Before — SKU-level labels

  • Star (T0): 12% of catalog
  • Proven (T1): 18%
  • High AOV (T2): 8%
  • Build (T3): 48%
  • Excluded (T4): 14%

After — Product-group labels

  • Star (T0): 29% (+17pp)
  • Proven (T1): 22% (+4pp)
  • High AOV (T2): 14% (+6pp)
  • Build (T3): 31% (−17pp)
  • Excluded (T4): 4% (−10pp)

The most meaningful shift: the Star tier more than doubled in size, from 12% to 29% of the catalog. All those newly-identified top performers started receiving competitive bids for the first time. Google didn't need to be told to spend more money — it just needed to know which products were worth fighting for.

The higher-value products were there all along. Google just couldn't see them.

Four weeks after the new labels went live, the per-tier breakdown told a clear story. The products now classified as Star tier were converting at AOVs well above the account average — and the High AOV tier specifically showed a 41% jump in order value compared to the prior period. These weren't new products or new customers. They were the same catalog, finally being surfaced to the right buyers at the right bid level.

Label Tier      % of Spend   Avg AOV   vs. Prior Period   Conv. Rate   Priority Campaign
Star (T0)       41%          $294      +28%               3.8%         High — Best Sellers
High AOV (T2)   19%          $412      +41%               2.1%         High — Best Sellers
Proven (T1)     26%          $198      +4%                2.6%         Standard
Build (T3)      13%          $142      −8%                1.1%         Catch-All
Excluded (T4)   1%           —         —                  —            Excluded

The lower-tier (Build) products naturally received less spend — and their numbers reflected it. That's the point. Budget that had been spread thinly across the whole catalog was now concentrating where it delivered the most return. Less noise, more signal, better outcomes.

Spend up 170%. Revenue up 227%. ROAS up 21%.

The spend increase was intentional — once the catalog was correctly structured, there were quality auctions available that hadn't been accessible before. But here's the number that matters most: revenue grew faster than spend. For every extra dollar invested, the return was higher than on the dollars already being spent.

Winner #1
+227%
$27,849 → $91,036
Revenue
More than tripled in the same 48-day window. Not from more traffic — from better products winning the right auctions. Same catalog, same customers, completely different outcome.

Winner #2
+36%
$189 → $259 avg. order value
Average Order Value
Every sale is worth more. Higher-value products were always in the catalog — Google just wasn't bidding on them. Fix the labels and it immediately starts surfacing the right items.

Winner #3
+170%
$7,867 → $21,199 spend
Budget Utilisation
Spend increased 170% — but revenue grew 227%. The extra investment didn't dilute returns, it amplified them. ROAS improved from 3.54x to 4.29x as spend scaled, because every additional dollar went into better auctions for higher-value products.

The spend increase didn't hurt ROAS — it improved it. Here's why that matters.

Ad spend increased 170% over the comparison period — from $7,867 to $21,199. For most ecommerce owners, seeing spend nearly triple would trigger immediate concern about ROAS. But this is where the structural change made all the difference: ROAS went from 3.54x to 4.29x. More spend, better returns.

The reason is straightforward. Before the restructure, the catalog was being misread — budget was going into low-value auctions because Google thought those were the best available products. After the restructure, budget went into high-value auctions because Google finally understood which products were worth competing for. The incremental spend — the extra $13,332 — generated $63,187 in incremental revenue. That's an incremental ROAS of roughly 4.74x, better than the baseline 3.54x. The more was invested, the more efficiently it performed.
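The incremental-return arithmetic above is easy to verify from the four headline figures:

```python
# Headline figures from the case study (48-day comparison window).
spend_before, spend_after = 7_867, 21_199
rev_before, rev_after = 27_849, 91_036

roas_before = rev_before / spend_before   # ~3.54x baseline
roas_after = rev_after / spend_after      # ~4.29x after the restructure

# The extra dollars, taken on their own, returned more than the baseline.
incremental_spend = spend_after - spend_before     # $13,332
incremental_revenue = rev_after - rev_before       # $63,187
incremental_roas = incremental_revenue / incremental_spend  # ~4.74x
```

Because the incremental ROAS (~4.74x) exceeds the baseline (3.54x), the blended ROAS rises as spend scales, which is exactly the pattern in the results table.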

This is the outcome you're looking for when you fix Shopping structure: not just better performance at the same spend level, but a platform where increasing investment accelerates returns rather than diluting them. The restructure didn't just improve what the account was doing — it changed the ceiling for what's possible.

Before Restructure

  Ad Spend: $7,867
  Revenue: $27,849
  ROAS: 3.54x
  Avg. Order Value: $189

After Restructure

  Ad Spend: $21,199 (+170%)
  Revenue: $91,036 (+227%)
  ROAS: 4.29x (+21%)
  Avg. Order Value: $259 (+36%)

Revenue grew faster than spend — and that gap is the point. Spend up 170%. Revenue up 227%. ROAS improved despite higher investment, not because of lower spend. The incremental $13,332 in spend generated $63,187 in incremental revenue — an incremental ROAS of ~4.74x, better than the 3.54x baseline. When the structure is right, scaling spend accelerates returns rather than diluting them.

If you sell products in multiple variants, your Shopping campaigns are probably doing this too.

Variant-heavy catalogs — home design, fashion, homewares, outdoor, lifestyle — are the most common place to find this problem. Every product that comes in colours, sizes, or configurations is at risk of being misclassified, under-bid, and quietly overlooked by Google while cheaper, simpler products take its place in front of your customers.

The frustrating part is it often doesn't show up as an obvious problem. Your campaigns run. Your budget gets spent. The ROAS might even look okay. But "okay" is doing a lot of heavy lifting — and there's usually a version of your account that performs significantly better, where increased spend accelerates returns rather than diluting them, once Google understands what it's actually working with.

This retailer didn't need more ad spend. They didn't need a new product range or a redesigned website. They needed Google to understand their catalog correctly — and once it did, it immediately prioritised the right products. The algorithm found the high-value customers almost straight away. It had simply never been told those products existed.

Sound familiar?

If your Shopping campaigns are spending budget but you're not seeing the AOV or ROAS you'd expect — especially if you have a variant-heavy catalog — there's a good chance this is part of what's holding you back. A Shopping audit takes about an hour and will show you exactly what's happening.

Ready to Find What's Costing You?

The Gap Analysis runs the same diagnostic as this case study — across your account, in 5 business days.