Quick Answer: Google Ads supports two attribution models in 2026 — data-driven attribution (DDA) and last-click. The other four models you'll still see written about — first-click, linear, time decay, and position-based — were retired in September 2023, and any conversion action that used them was upgraded to DDA automatically. For most print-on-demand stores DDA is the right default because it redistributes credit toward the upper-funnel campaigns (Performance Max, YouTube, Demand Gen) that last-click systematically under-rewards. But the model choice barely changes the dollar number until your conversion value is sending contribution margin instead of order subtotal — Printify or Printful supplier cost, payment fees, and refund rate make subtotal-based ROAS unreadable for POD. Pick DDA, run Smart Bidding against margin, and use the Model Comparison report to see what the model is actually moving on your account.

What attribution models actually do inside Google Ads

An attribution model is a rule for how Google Ads divides credit for a conversion across the ad interactions that led to it. If a customer sees your Performance Max ad on Tuesday, clicks a generic Search ad on Thursday, then clicks a branded Search ad on Saturday before buying a t-shirt, the model decides whether all the credit goes to branded Search (last-click), spreads evenly across the three (linear, retired), or distributes credit based on what your account's conversion patterns suggest each touch contributed (data-driven).

The model affects three things and nothing else:

  • The Conversions and Conv. value columns inside Google Ads. Same conversions, different credit distribution per campaign — so campaign-level ROAS shifts even though aggregate revenue is identical.
  • Smart Bidding's optimisation signal. Target ROAS, Maximize Conversion Value, and tCPA all read the model's credit assignment as ground truth. A switch quietly changes which campaigns the bidder thinks are working.
  • The Model Comparison report. The one report that exposes the dollar gap between the model you've selected and the alternative.

What the model never affects: the underlying conversions themselves, your shop's revenue, anything outside the Google Ads UI, or any GA4 reporting. GA4 has its own attribution layer with its own settings; Google Ads attribution is purely an internal credit-distribution rule. For the broader picture of how attribution fits inside the Google Ads measurement stack for POD operators, see the complete guide to Google Ads ROAS and attribution for POD.

The two models available in 2026

In September 2023 Google retired four of the six historical attribution models. Today only two are selectable on a Google Ads conversion action: data-driven attribution and last-click. Any conversion action that previously used first-click, linear, time decay, or position-based was migrated to DDA automatically, with no opt-out (Google's official attribution-model documentation is the canonical reference). For the full set of POD-specific attribution articles in this cluster, see the ROAS & attribution hub; for the broader Google Ads picture see the Google Ads topic hub and the complete Google Ads playbook for print-on-demand sellers.

Data-driven attribution (DDA)

DDA distributes credit across every ad touch on a converting path based on a model trained on your account's own conversion data. It compares paths that converted to paths that didn't, learns which interactions were genuinely contributing versus incidental, and weights credit accordingly. The output is fractional: a single conversion might be split 0.18 Performance Max impression, 0.31 generic Search click, 0.51 branded Search click — totalling 1.0 conversion across the path.
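
The fractional split can be made concrete in a few lines. This is an illustrative sketch — the credit fractions below are the hypothetical split from the example above, not output from Google's model, and the $24 order value is made up:

```python
# Hypothetical DDA credit fractions for one converting path (not model output).
path_credit = {
    "Performance Max (impression)": 0.18,
    "Generic Search (click)": 0.31,
    "Branded Search (click)": 0.51,
}

# A converting path always distributes exactly 1.0 conversion across its touches.
assert abs(sum(path_credit.values()) - 1.0) < 1e-9

# With an order value attached, revenue credit splits the same way.
order_value = 24.00  # hypothetical t-shirt subtotal in dollars
revenue_credit = {
    touch: round(frac * order_value, 2) for touch, frac in path_credit.items()
}
# Each campaign's Conv. value column is the sum of these fractions across paths.
```

Under last-click, the same path would instead be `{"Branded Search (click)": 1.0}` — the whole point of the model choice is which of these two dictionaries your reports and your bidder see.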

DDA is the default for new conversion actions in 2026. Eligibility is automatic — Google removed the old 3,000-clicks-and-300-conversions-per-month minimum in 2023, so DDA now runs on accounts of any size. Smaller accounts get a less-trained model that draws on aggregate cross-account patterns until they accumulate their own signal; larger accounts get a model fully trained on their own paths.

What DDA does well on a POD account:

  • Rewards upper-funnel campaigns that last-click hides. Performance Max, YouTube, and Demand Gen rarely close the path on POD apparel — customers see the ad, then come back via branded Search to buy. Last-click credits zero of those conversions to the campaign that initiated the journey; DDA credits a meaningful fraction.
  • De-emphasises branded Search slightly. Branded Search closes nearly everything on a POD account, which makes it look extraordinary under last-click. DDA shifts a portion of that credit to the campaigns that actually generated the brand intent. The branded-Search ROAS goes down on paper while non-branded ROAS goes up — net zero in revenue, but a more honest distribution.
  • Adapts to your account's actual journey shape. If your customers buy on a single touch 70% of the time, DDA gives last-click-like results because the data says single touches dominate. If your customers genuinely browse across campaigns, DDA spreads credit. Same model, account-specific behaviour.

Last-click

Last-click gives 100% of credit for a conversion to the most recent ad interaction. If a customer touched four ads, the fourth gets the full 1.0 conversion and the first three get nothing. It's the simplest model and the easiest to explain to a stakeholder; it's also the model that systematically rewards bottom-funnel intent capture at the expense of upper-funnel demand generation.

When last-click is still the right pick for a POD account:

  • Your account is essentially branded Search and a couple of generic Search campaigns. No Performance Max, no YouTube, no Display, no Demand Gen — last-click and DDA produce nearly identical campaign-level ROAS because there's no upper-funnel campaign whose contribution would otherwise be redistributed.
  • You have an existing reporting deck built on last-click numbers. Switching mid-quarter creates an apparent revenue jump in upper-funnel campaigns that's purely cosmetic — the conversions didn't change, only the credit distribution. If reporting continuity matters for the next two months, defer the switch until quarter-end.
  • You're running an experiment that depends on stable attribution. Mid-experiment model switches break ROAS comparisons before-and-after. Hold the model constant for the experiment window.

The four deprecated models and what they used to do

You'll still see these in older articles and tutorials. They no longer exist as selectable options on a conversion action, but the concepts are useful for understanding why DDA replaced them.

First-click (retired)

Gave 100% of credit to the first ad interaction on a path. The mirror image of last-click, intended to highlight upper-funnel campaigns. Retired because in practice it over-credited the touch that happened to come first regardless of whether it was actually formative — a Display impression three weeks before a branded Search purchase got the same credit as a Display impression that started a serious consideration journey. DDA's data-driven weighting distinguishes those two cases instead of treating them identically. See linear attribution model Google Ads explained for POD sellers for the broader history of multi-touch model retirement.

Linear (retired)

Distributed credit equally across every touch on a path. A four-touch path got 0.25 per touch. Simple to explain, but it assumed equal contribution, which is rarely how journeys work — discovery, consideration, and conversion touches do different jobs. Retired in the same 2023 wave.

Time decay (retired)

Weighted credit toward more recent touches using a half-life decay function. Touches one day before conversion got more credit than touches seven days before. Reasonable for some industries; the data-driven model captures the same recency-weighted pattern when the data justifies it without baking it in as a rule.

Position-based (retired)

40% credit to the first touch, 40% to the last touch, 20% distributed across the middle. The U-shape was meant to recognise that journeys are book-ended by introduction and conversion. Retired alongside the other rule-based models when DDA proved capable of producing a similar shape on accounts whose data actually supported it.

The pattern across all four retirements: rule-based models impose a credit shape regardless of whether your account's data supports it. DDA infers the shape from your data. For most accounts the inferred shape sits somewhere between time decay and last-click, but it's account-specific rather than a one-size-fits-all assumption.

A worked dollar example: DDA vs last-click on a POD account

The single biggest gap in attribution-model coverage online is concrete dollar examples. Here's a realistic POD apparel account at $30K monthly ad spend, comparing what last-click and DDA report for the same underlying conversions.

Account shape:

  • Branded Search: $4K spend, 480 conversions under last-click, $96K reported revenue, ROAS 24.0
  • Generic Search: $14K spend, 220 conversions under last-click, $44K reported revenue, ROAS 3.14
  • Performance Max: $10K spend, 90 conversions under last-click, $18K reported revenue, ROAS 1.80
  • YouTube: $2K spend, 6 conversions under last-click, $1.2K reported revenue, ROAS 0.60

Total: $30K spend, 796 conversions, $159.2K revenue, blended ROAS 5.31. Branded Search looks like a hero; YouTube looks like it's burning money.

Same conversions under DDA (typical redistribution on a multi-touch POD account):

  • Branded Search: 392 conversions credited, $78.4K — ROAS drops to 19.6
  • Generic Search: 224 conversions, $44.8K — ROAS roughly flat at 3.20
  • Performance Max: 144 conversions, $28.8K — ROAS climbs to 2.88
  • YouTube: 36 conversions, $7.2K — ROAS climbs to 3.60

Same total revenue ($159.2K), same total conversions, redistributed credit. Branded Search's reported revenue fell by $17.6K; that revenue moved to Performance Max ($10.8K) and YouTube ($6K), with small adjustments elsewhere. Neither account changed in reality — the numbers shifted because DDA recognised the upper-funnel touches that branded Search was previously claiming credit for under last-click.
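
The redistribution arithmetic above checks out in a few lines — a sketch using the example's figures, with revenue in thousands of dollars:

```python
# Same conversions, two credit distributions (figures from the example above).
spend = {"Branded": 4, "Generic": 14, "PMax": 10, "YouTube": 2}
revenue_last_click = {"Branded": 96.0, "Generic": 44.0, "PMax": 18.0, "YouTube": 1.2}
revenue_dda = {"Branded": 78.4, "Generic": 44.8, "PMax": 28.8, "YouTube": 7.2}

def roas(revenue: dict, spend: dict) -> dict:
    """Campaign-level ROAS = credited revenue / spend, per campaign."""
    return {c: round(revenue[c] / spend[c], 2) for c in spend}

last_click_roas = roas(revenue_last_click, spend)  # Branded 24.0 ... YouTube 0.6
dda_roas = roas(revenue_dda, spend)                # Branded 19.6 ... YouTube 3.6

# The invariant: attribution redistributes credit, never creates or destroys it.
assert round(sum(revenue_last_click.values()), 1) == 159.2
assert round(sum(revenue_dda.values()), 1) == 159.2
```

The final assertion is the part worth internalising: every campaign-level number moved, and the account total did not.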

What this changes:

  • Smart Bidding sees Performance Max and YouTube as more valuable. Target ROAS bidding will allocate slightly more budget headroom to both campaigns, which means more impressions to upper-funnel customers, which compounds.
  • YouTube is no longer a kill candidate. Under last-click YouTube reported a 0.60 ROAS — usually an instant pause. Under DDA it's at 3.60, often a cautious-keep depending on margin tolerance. Same campaign, same impressions, different decision driven entirely by the model.
  • Branded Search loses some of its halo. The 24.0 ROAS that made it look untouchable becomes a 19.6 ROAS that's still excellent but reflects the reality that other campaigns generated the brand intent it captures.

The catch: none of this is profitability. Both numbers measure subtotal-based ROAS — the dollar value of orders before Printify or Printful supplier cost, payment processor fees, shipping subsidy, and refunds. On a POD apparel account those costs typically take 60–70% of subtotal, leaving 30–40% as contribution margin. At a 30% margin, a 3.60 ROAS on subtotal is roughly a 1.1 ROAS on margin, and a 19.6 on subtotal is roughly a 5.9 on margin. The campaign-level decisions only become reliable when conversion value reflects margin — for the value-layer fix, see Google Ads conversions attribution explained for POD sellers.
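
The subtotal-to-margin rescaling is a one-liner worth keeping visible. A sketch — the 30% rate here is an assumption at the low end of the 30–40% range, not a constant; use your actual blended margin:

```python
def margin_roas(subtotal_roas: float, margin_rate: float) -> float:
    """Rescale subtotal-based ROAS to contribution-margin ROAS.

    margin_rate is the share of subtotal left after supplier cost,
    payment fees, shipping subsidy, and refunds (30-40% is typical
    for POD apparel; 0.30 assumed below).
    """
    return round(subtotal_roas * margin_rate, 2)

youtube_margin = margin_roas(3.60, 0.30)   # ~1.08: barely above break-even
branded_margin = margin_roas(19.6, 0.30)   # ~5.88: still clearly profitable
```

A "keep" decision at 3.60 subtotal ROAS is really a break-even decision at 30% margin — which is why the margin rate belongs in the decision, not in a footnote.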

A decision framework for picking your model

Most attribution-model articles end with "it depends." Here's a concrete decision tree that resolves the choice for a POD account in under five minutes.

Question 1: Is your account 80%+ branded Search and Generic Search?

If yes, the model choice barely matters. Pick last-click for reporting simplicity if your stakeholders prefer it; pick DDA if you want to be future-proof for when you add Performance Max or YouTube. Either is defensible. Stop here.

If no, continue.

Question 2: Do you run Performance Max, YouTube, Demand Gen, or Display?

If yes, pick DDA. The model exists specifically to credit those campaign types fairly. Last-click structurally under-rewards them and you'll either pause profitable campaigns based on bad numbers or keep unprofitable ones because you can't see their real cost.

Question 3: Is reporting continuity more important than accuracy for the next quarter?

If yes, defer the switch to quarter-end. Pick a date, document the switch in your reporting deck so the upper-funnel ROAS bump doesn't look like a campaign-performance change, then switch. The key is that the cosmetic revenue increase in upper-funnel campaigns is a credit-redistribution artefact, not an actual change in account performance — explaining that once is easier than explaining it monthly.

Question 4: Are you running an A/B experiment that depends on stable attribution?

If yes, hold the current model until the experiment ends. Mid-experiment model switches make ROAS comparisons unreadable.

For most POD accounts the answer ends up being DDA — because most accounts run at least one upper-funnel campaign type, and the redistribution is genuinely useful. For the focused walkthrough on the singular-keyword version of this question, see attribution model Google Ads explained for POD sellers.
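
The four questions above collapse into a short function. A sketch only — the flag names and return strings are illustrative, and it treats Questions 3 and 4 as timing overrides that hold the current model before the DDA default applies:

```python
def pick_attribution_model(
    search_only: bool,          # Q1: 80%+ branded + generic Search?
    runs_upper_funnel: bool,    # Q2: PMax / YouTube / Demand Gen / Display?
    reporting_freeze: bool,     # Q3: continuity matters this quarter?
    experiment_running: bool,   # Q4: mid-experiment on this account?
) -> str:
    if search_only:
        # Q1: the model choice barely matters on a Search-only account.
        return "either -- last-click for simplicity, DDA to future-proof"
    if reporting_freeze:
        # Q3: defer the switch so the credit-redistribution bump lands at quarter-end.
        return "hold current model, switch to DDA at quarter-end"
    if experiment_running:
        # Q4: keep attribution stable for the experiment window.
        return "hold current model until the experiment ends"
    if runs_upper_funnel:
        # Q2: DDA exists to credit these campaign types fairly.
        return "DDA"
    return "DDA"  # the default most POD accounts land on

# A typical multi-channel POD account with no timing constraints:
assert pick_attribution_model(False, True, False, False) == "DDA"
```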

How attribution feeds Smart Bidding (the part that actually moves spend)

The connection between attribution model and Smart Bidding is the part nobody explains and the part that most affects your account's actual budget allocation.

Smart Bidding strategies — Target ROAS, Maximize Conversion Value, tCPA, Maximize Conversions — read the conversion column for the campaign and optimise toward it. Whatever attribution model is selected on the conversion action is the column the bidder reads. So:

  • If the model is last-click, the bidder thinks branded Search is the highest-ROAS campaign on the account. It bids confidently on branded keywords and cautiously on upper-funnel campaigns whose reported ROAS is lower.
  • If the model is DDA, the bidder sees Performance Max and YouTube as more valuable than last-click suggested. It bids more aggressively on upper-funnel impressions, which generates more upper-funnel touches, which seeds more conversion paths, which DDA then credits.

This is a feedback loop, and it's the actual reason attribution-model selection matters more than the report numbers suggest. The model isn't just a reporting choice — it's the optimisation signal. Switching from last-click to DDA on an account that runs Performance Max typically increases Performance Max budget consumption by 10–25% over the following four to six weeks as the bidder adjusts its valuation upward and the campaign's auction-win rate climbs.

What this means for your decision: pick the model your account's spend allocation should reflect. If you want Smart Bidding to fund upper-funnel campaigns appropriately, DDA is the right input. If you want it to ruthlessly chase last-click intent capture, last-click is the right input. The Smart Bidding feedback loop will move budget over weeks, not days, so allow four to eight weeks for the model switch to stabilise before judging campaign-level ROAS shifts. For more on the bidding-strategy interaction, see Google Ads attribution explained for POD sellers.

How to change the attribution model in Google Ads

The model is set per-conversion-action. In the 2026 interface:

  1. Go to Tools → Measurement → Conversions.
  2. Click the conversion action you want to update — usually Purchase for a POD store.
  3. Click Edit settings.
  4. Scroll to Attribution model.
  5. Choose Data-driven or Last click.
  6. Save.

The change applies retroactively to historical conversions in the report — past data is recomputed under the new model. Your historical ROAS numbers in the Google Ads UI will shift to match the new model's credit distribution. This catches operators off guard: it looks like history changed, but only the credit assignment did.

If you have multiple primary conversion actions (Purchase plus Add to Cart, for instance), set the model on each. Mixed models across primary actions create messy attribution because Smart Bidding sees aggregated value across actions. Pick one model, apply it everywhere.

Reading the Model Comparison report for a POD store

The Model Comparison report at Tools → Measurement → Attribution → Model comparison is the only place inside Google Ads that puts a dollar value on the model gap. It shows campaign-level conversion volume and value side-by-side under both DDA and last-click, with a percentage delta column.

How to read it on a POD account:

  • Set the date range to last 90 days. Default is 30 days, which on a multi-touch POD funnel under-samples paths longer than four weeks.
  • Set the conversion action to Purchase. Don't compare across all conversion actions — secondary actions distort the picture.
  • Look at the % Δ column. Campaigns with positive deltas under DDA are the ones DDA credits more highly than last-click. On most POD accounts these are Performance Max, YouTube, Demand Gen, and Display. Negative deltas usually appear on branded Search.
  • Sum the absolute deltas. The total dollar shift between models tells you how much budget allocation the model choice is influencing. On a small account it's often $2–5K per month; on a large multi-channel account it can exceed $50K per month.

If the deltas are small (under 5% per campaign), the model choice is mostly cosmetic on your account. If the deltas are large (15%+), the model is genuinely shaping which campaigns your bidder funds. The size of the delta tells you how much the question matters before you spend time on it.
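
The delta arithmetic the report surfaces can be sketched with the worked example's figures from earlier in this article — the report computes all of this for you; the point is knowing what the columns mean:

```python
# Conversion value per campaign in dollars: (last-click, DDA).
# Figures match the worked dollar example earlier in this article.
conv_value = {
    "Branded Search":  (96_000, 78_400),
    "Generic Search":  (44_000, 44_800),
    "Performance Max": (18_000, 28_800),
    "YouTube":         (1_200,  7_200),
}

# The % delta column: positive means DDA credits the campaign more than last-click.
pct_delta = {
    c: round((dda - lc) / lc * 100, 1) for c, (lc, dda) in conv_value.items()
}
# Branded Search -18.3, Performance Max +60.0, YouTube +500.0

# Total dollar shift: how much credit the model choice is actually moving.
total_shift = sum(abs(dda - lc) for lc, dda in conv_value.values())
assert total_shift == 35_200  # well past the "mostly cosmetic" threshold
```

On this account the model choice moves $35.2K of monthly credit — firmly in the "the model is shaping which campaigns your bidder funds" territory.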

POD-specific mistakes when choosing an attribution model

Switching the model and immediately judging campaign performance

Smart Bidding takes four to eight weeks to fully adjust to the new model. ROAS shifts in the first two weeks are credit-redistribution artefacts, not performance changes. Operators who switch and then pause Performance Max because "ROAS dropped" pause a campaign whose number was always wrong under last-click and is now correctly higher.

Comparing Google Ads ROAS to GA4 ROAS as if they should match

They never do. Google Ads attribution credits ad interactions only; GA4 attribution credits all marketing channels including organic, direct, email, and referral. Even with both set to data-driven, the channel scope differs, the lookback windows differ, and the conversion definitions can differ. Treat them as two views, not two attempts at the same number. For more, see Google Ads attribution email organic integration explained for POD sellers.

Trusting attribution-model ROAS without fixing conversion value

The single largest mistake on POD accounts. Attribution decides how credit is distributed; conversion value decides what's being credited. If the value Google Ads receives is order subtotal — the default for both Shopify's Google channel and the GA4 → Google Ads import — every ROAS number you read overstates profitability by the supplier-cost-plus-fees ratio. On Printify apparel that's typically 60–70% of subtotal, so a "5.0 ROAS" is a 1.5–2.0 contribution-margin ROAS. The model choice doesn't fix that gap; only sending margin as conversion value does. See Shopify Google Ads ROAS reporting integration explained for POD sellers.
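
The per-order value you'd send instead of subtotal can be sketched like this — the fee rates and supplier cost below are hypothetical placeholders (a common card-processing shape), not your actual numbers:

```python
def margin_conversion_value(
    subtotal: float,
    supplier_cost: float,             # Printify/Printful cost for this SKU
    payment_fee_rate: float = 0.029,  # hypothetical card-processing rate
    fixed_fee: float = 0.30,          # hypothetical per-transaction fee
) -> float:
    """Contribution margin per order: the value Smart Bidding should optimise toward."""
    fees = subtotal * payment_fee_rate + fixed_fee
    return round(subtotal - supplier_cost - fees, 2)

# A hypothetical $24 tee with a $13.50 supplier cost nets about $9.50 of margin --
# roughly 40% of subtotal, at the top of the 30-40% range cited above.
assert margin_conversion_value(24.00, 13.50) == 9.5
```

Send that figure as the conversion value (refund adjustments come later via conversion adjustments) and every ROAS column — under either attribution model — starts measuring something a P&L would recognise.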

Setting different attribution models on different conversion actions

Possible technically, harmful in practice. Smart Bidding aggregates value across primary actions, and mixed-model aggregates produce numbers that don't correspond to either model cleanly. Pick one model and apply it consistently across Purchase, Add to Cart, and any other primary actions you've configured.

Using last-click because "DDA is a black box"

DDA is opaque, but last-click isn't accuracy — it's a different bias. Choosing last-click because it's understandable means choosing the bias that systematically under-rewards upper-funnel campaigns. The right response to DDA's opacity is to use the Model Comparison report to verify the magnitude of the redistribution, not to revert to a model whose limitations are well-known.

Forgetting that the model only affects Google Ads

Switching the Google Ads attribution model doesn't change anything in GA4, Shopify, your accounting, or any other report. The change is bounded to the Google Ads UI and Smart Bidding's internal optimisation. Stakeholders who see only quarterly revenue numbers won't notice; campaign managers will see daily shifts in the columns they monitor.

How Victor reads attribution against live POD margin

The right attribution-model question for a POD operator isn't "which model gives me the best ROAS number" — it's "which campaigns make me money after Printify or Printful cost, payment fees, and refunds." Google Ads can't answer that. The platform doesn't know your supplier cost per SKU, your refund rate by campaign, or your shipping subsidy structure.

Victor — PodVector's AI agent for POD operators — joins your Google Ads attribution data (under DDA or last-click, your choice) against Shopify orders, Printify or Printful supplier invoices, payment-processor fee records, and refund history in BigQuery. When you ask "which Performance Max campaign is net-positive after supplier cost?" or "what's my true contribution-margin ROAS by campaign under DDA?" you get the answer in seconds, against live data, without exporting CSVs from five different tabs. Today Victor explains; tomorrow, Victor adjusts the bid against margin directly.

FAQs

What attribution models does Google Ads support in 2026?

Two: data-driven attribution (DDA) and last-click. The first-click, linear, time decay, and position-based models were retired in September 2023 and any conversion action that used them was migrated to DDA automatically.

Is data-driven attribution better than last-click for POD sellers?

For most POD accounts, yes — because most run at least one upper-funnel campaign type (Performance Max, YouTube, Demand Gen, Display) that last-click systematically under-rewards. DDA redistributes credit to those campaigns based on your account's actual conversion patterns. The exception is accounts running only Search campaigns, where the two models produce nearly identical results.

Why did Google retire first-click, linear, time decay, and position-based attribution?

All four were rule-based — they imposed a credit-distribution shape regardless of what the account's data supported. DDA infers the shape from the data, which in 2023's modelling environment produced more accurate results than rule-based shapes for most accounts. Google's stated rationale was that DDA eligibility was no longer a meaningful constraint, so a single data-driven default served accounts better than a menu of rule-based approximations.

Do I need a minimum number of conversions to use data-driven attribution?

No, not since 2023. The old 3,000-clicks-and-300-conversions-per-month minimum was dropped. Smaller accounts get a less-trained model that draws on aggregate cross-account patterns; larger accounts get a model fully trained on their own data. DDA runs on accounts of any size.

What's the difference between data-driven attribution in Google Ads and in GA4?

Both use data-driven approaches, but the channel scope differs. Google Ads DDA credits ad interactions only — clicks and impressions on Google's network. GA4 DDA credits all marketing channels: organic search, direct, email, referral, paid social, plus Google Ads. The numbers will differ even with both on data-driven; that's correct, not a bug.

Will switching attribution models change my historical ROAS numbers?

Yes. The change applies retroactively to historical conversions inside the Google Ads UI — past data is recomputed under the new model. Your reported historical ROAS will shift to match the new credit distribution. The conversions themselves don't change; only how they're attributed across campaigns.

How long does Smart Bidding take to adjust after an attribution model switch?

Four to eight weeks for full stabilisation. The bidder reads the new credit assignment and gradually shifts budget allocation toward newly favoured campaigns. Don't judge campaign performance during the transition window — ROAS shifts in the first two weeks are mostly credit redistribution rather than real performance changes.

Can I run different attribution models on different conversion actions?

Technically yes, but Smart Bidding aggregates value across primary actions, so mixed-model aggregates produce numbers that don't correspond cleanly to either model. Pick one and apply it everywhere — typically DDA on Purchase, Add to Cart, and any other primary actions.

Does the attribution model affect the conversion value I send to Google Ads?

No. The model decides how credit is distributed across touches; the value being distributed comes from your conversion action's value setting. Whether you send order subtotal, contribution margin, or some other figure is independent of the model. For POD accounts the value choice matters more than the model choice — sending subtotal makes every ROAS number unreadable regardless of attribution model.

Is the Google Ads attribution model the same as the GA4 attribution model?

No. They're separate settings on separate platforms. You can run DDA in Google Ads and last-click in GA4, or vice versa. The choice in one doesn't affect the other. For most POD operators DDA in both is the right default; mixed configurations are common but worth documenting so reports stay reconcilable.


Pick the model. Then read it against margin.

Google Ads gives you DDA or last-click. Neither tells you whether a campaign makes money after Printify or Printful supplier cost, payment fees, and refunds — that requires joining the attribution data against your live Shopify orders and supplier invoices. Victor does it in BigQuery and answers in seconds. Today Victor reads attribution; tomorrow Victor adjusts the bid against margin. Try Victor free.