Quick Answer: The linear attribution model in Google Ads gave equal credit to every ad interaction in a conversion path — a four-touch path to a $32 sale would credit each touch with $8. It was retired in April 2023 along with first click, time decay, and position-based, and is no longer selectable in Google Ads. Conversion actions that used it have been auto-migrated, first to last click and then in most cases to data-driven attribution (DDA). For a print-on-demand seller researching linear in 2026, the practical question isn't "should I use linear" — that option doesn't exist — but "what should I read instead, given that my old guides and dashboards still reference it." This guide walks through what linear actually did, why Google removed it, what intuition from linear is still worth keeping, and the POD-specific layer (Printify and Printful base cost) that no attribution model — linear, last click, or DDA — has ever seen.

What the linear attribution model actually did

Linear attribution was one of five rule-based attribution models Google Ads offered, alongside data-driven attribution, until the 2023 retirement. The rule was the simplest of the multi-touch options: take the conversion's full credit, divide by the number of ad interactions in the path, and assign each touch an equal share. A two-touch path got 50/50. A four-touch path got 25/25/25/25. A ten-touch path got 10% per touch.
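The rule fits in a few lines. A sketch in plain Python — purely illustrative, not anything from the Google Ads API:

```python
def linear_credit(conversion_value, touches):
    """Split a conversion's value equally across every ad interaction
    in the path -- the rule linear attribution applied."""
    if not touches:
        raise ValueError("a conversion path needs at least one touch")
    share = conversion_value / len(touches)
    return {touch: share for touch in touches}

# A four-touch path to a $32 sale: each touch gets $8.
print(linear_credit(32.0, ["pmax", "youtube", "generic_search", "branded_search"]))
```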

"Ad interactions" meant clicks, plus engaged YouTube views once those started counting toward attribution. A path that opened with a Performance Max click on Tuesday, hit a generic Search click on Wednesday, included a YouTube engaged view on Thursday, and closed with a branded Search click on Friday before purchase was a four-touch path. Under linear, each of those four touches received exactly one quarter of the conversion's credit and one quarter of its conversion value.

The conceptual purpose of linear was to push back against last click's "all credit to the most recent touch" assumption. If a buyer's path had four ad interactions before they bought, last click told you only the fourth one mattered. Linear told you all four mattered equally. Neither claim was perfectly correct — most paths have touches that genuinely contributed more than others — but linear was the rule that took the boldest swing at "treat the path as a path, not just an ending."

For a print-on-demand operator who only ever interacted with Google Ads through the default last-click view, linear was a useful mental rebalancing exercise: it forced you to stop assuming branded Search did all the work. For an operator running Performance Max alongside generic Search, linear redistributed credit visibly toward the upper-funnel campaigns that introduced buyers in the first place.

A worked example of linear credit on a POD path

Concretely, here's how linear handled a typical POD conversion path.

A customer is shopping for a personalized birthday t-shirt. Their actual ad interactions across a week look like this:

  • Day 1. Sees a Performance Max product listing while browsing Google for "personalized birthday gifts." Clicks the listing, lands on the store, doesn't buy.
  • Day 3. Watches a 15-second YouTube preroll for the same store as an engaged view (skippable in-stream, watched past 10 seconds).
  • Day 5. Searches "custom birthday t-shirt with name." Clicks the generic Search ad, lands on a different product page, doesn't buy.
  • Day 7. Searches the store's brand name directly. Clicks the branded Search ad and converts on a $32 t-shirt.

That's a four-touch path: PMax click, YouTube engaged view, generic Search click, branded Search click. Conversion value: $32.

Under last click, the branded Search click receives $32 of credit. Every earlier touch receives $0. The Conversions column shows 1 conversion on the branded Search campaign and 0 on PMax, YouTube, and generic Search.

Under linear, each touch receives $8 of credit (one quarter of the conversion value). The Conversions column shows 0.25 conversions on each of the four campaigns. PMax, YouTube, generic Search, and branded Search all show $8 of attributed conversion value.

Under DDA (the default in 2026), the credit weights would be unequal but determined by the data — PMax might get 0.30 ($9.60), the YouTube engaged view 0.10 ($3.20), the generic Search click 0.40 ($12.80), and branded Search 0.20 ($6.40), with the exact fractions varying by what the model has learned about your account's typical paths.
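The three allocations above can be reproduced with a few lines of arithmetic. This is an illustrative sketch — the DDA weights are the made-up example fractions from this section, not output from Google's actual model:

```python
# Credit allocation for the worked four-touch path under the three models
# discussed above. DDA weights here are the illustrative fractions from the
# text, not real model output.
path = ["pmax", "youtube_ev", "generic_search", "branded_search"]
value = 32.0

# Last click: everything to the final touch.
last_click = {t: (value if i == len(path) - 1 else 0.0) for i, t in enumerate(path)}

# Linear: equal shares.
linear = {t: value / len(path) for t in path}

# DDA: unequal, data-determined weights (example figures only).
dda_weights = {"pmax": 0.30, "youtube_ev": 0.10, "generic_search": 0.40, "branded_search": 0.20}
dda = {t: value * w for t, w in dda_weights.items()}

for model, credit in [("last click", last_click), ("linear", linear), ("DDA (illustrative)", dda)]:
    print(model, {t: round(v, 2) for t, v in credit.items()})
```

All three allocations sum to the same $32; only the split differs.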

The linear rule's appeal was its simplicity: no machine learning, no fractional weights to argue about, no "why did this touch get more credit than that one" debate. The downside was that equal credit isn't the same as accurate credit. A YouTube engaged view that briefly held attention on day 3 probably didn't contribute as much to the day-7 purchase as the generic Search click on day 5 that brought the buyer back to the right product page. Linear couldn't tell the difference.

When linear was retired and why

Google Ads retired linear, along with first click, time decay, and position-based, in April 2023. The four rule-based multi-touch models were marked deprecated in late 2022, removed from the model selection dropdown in early 2023, and fully unselectable for both new and existing conversion actions by April 2023. Google's own attribution-model help doc now states explicitly that "the first click, linear, time decay, and position-based attribution models are no longer supported."

The official reason Google gave was that data-driven attribution (DDA) had become available to all advertisers, regardless of conversion volume, with modeled DDA filling in for accounts below the historical DDA threshold. With DDA universally available, the rule-based models were redundant — DDA's machine-learning credit distribution dominates the rule-based options on most paths, and Google didn't want advertisers picking a worse model out of habit or comfort.

The unofficial reason, the one most analysts read between the lines, was that Smart Bidding works dramatically better when the bidder reads from a model whose credit weights reflect actual contribution rather than a fixed formula. Linear's "equal credit per touch" was a fixed formula that systematically over-credited unimportant filler touches and under-credited the touches that actually drove the purchase. A bidder reading from linear bid suboptimally on every campaign. By forcing all advertisers off rule-based models and onto DDA, Google made Smart Bidding visibly more effective across its advertiser base, which in turn validated the Smart Bidding strategies Google was already pushing as the default path.

Either reading lands at the same place: linear, like the other three rule-based models, is gone from the product. There is no path to re-enable it, no advanced setting that brings it back, no support escalation that opens it for select accounts. The two models available in 2026 are DDA (default) and last click. The full background on the migration is covered in Google Ads attribution models explained for POD sellers.

What replaced linear in your account

If your POD account had a conversion action set to linear before April 2023, here's what Google did to it.

The first migration, in spring 2023, moved every linear conversion action to last click as a transitional state. This was a deliberately conservative move — last click is the simplest model, the one most familiar to advertisers, and the one least likely to surprise anyone reading their Conversions column the day after the migration. The change was logged in Tools → Change history → Conversion settings, with a timestamp marking the auto-migration event.

The second migration, rolling through 2023 and 2024, nudged those auto-migrated last-click actions toward data-driven attribution. This happened through in-product prompts asking advertisers to "switch to recommended attribution," automated default-model changes for accounts that hadn't actively touched their attribution settings, and behind-the-scenes flips for new conversion actions that landed on DDA without ever passing through the legacy menu. By the end of 2024, most POD accounts that had used linear before 2023 ended up on DDA, often without the operator explicitly choosing it.

To verify what your account is currently using:

  1. Open Google Ads.
  2. Tools → Goals → Conversions.
  3. Click any conversion action.
  4. Edit settings → Attribution model.

The dropdown today shows two options: Data-driven and Last click. Whatever is currently selected is what Google has been using since the most recent change. The historical timeline of model changes is visible in Tools → Change history → Conversion settings, filtered to that conversion action — useful for reconciling pre-migration data with current data.

The linear intuition still worth keeping

Linear is gone, but two pieces of intuition that linear made visible are still load-bearing for POD operators today.

The first. Multi-touch is the norm, not the exception, in any account that runs more than one campaign type. Performance Max plus generic Search plus branded Search produces three- and four-touch paths almost by default. Most paths in a typical POD account today are not the single-touch "buyer searched the brand and bought" image that last click implicitly assumes. If you skipped over multi-touch attribution entirely because it felt complicated, linear was the entry-level model that taught operators "your paths actually have more touches than your reports show." That lesson predates linear and survives its retirement.

The second. Branded Search systematically over-attributes itself under last click. Buyers who knew about your store from earlier touches return through branded Search to convert because that's the easiest way to find your store again. Last click credits the entire conversion to that branded touch, hiding the work the earlier touches did to make the buyer aware of the store in the first place. Linear's equal-credit rule made this dynamic obvious — under linear, branded Search dropped from 100% credit to 25% credit on a four-touch path, and the missing 75% had to come from somewhere. DDA does the same redistribution today, just with weighted credit instead of equal credit. The takeaway is the same: don't read branded Search ROAS as net-new revenue; read it as the conversion edge of a longer process.

For the deeper walkthrough of these dynamics in current models, the explainer at Google Ads attribution explained for POD sellers covers it; the model-specific deep dive is in Google Ads attribution model explained for POD sellers.

How DDA differs from linear in practice

DDA and linear share the multi-touch frame — both distribute credit across every touchpoint in the path rather than handing all of it to one — but the math underneath is fundamentally different.

Linear used a fixed rule: every touch gets equal credit, regardless of position, channel, ad type, or recency. DDA uses an account-trained machine-learning model that asks, for each touch in each path, how much the conversion outcome changed because that specific touch was present versus absent. The credit weights are unequal, vary path by path, and are calibrated against the actual buying behavior in your account (or, for accounts below the data threshold, against a modeled approximation of advertiser categories like yours).

Practically, this means DDA's credit distribution looks more like linear's than last click's — both spread credit across the path — but DDA's distribution is closer to what's actually happening. A weak filler touch that linear would have credited with 25% of a four-touch path's value might receive 5% under DDA; a pivotal mid-funnel touch that linear would likewise have credited with 25% might receive 50%. The total still sums to the same conversion; the allocation is sharper.

The deeper walkthrough of how DDA computes its weights is in about data-driven attribution Google Ads help explained for POD sellers; the case for why DDA is the right default for a POD account is in data-driven attribution default Google Ads help explained for POD sellers.

How last click differs from linear in practice

Last click is at the opposite end from linear. Linear distributed credit equally across every touch in the path; last click hands the entire conversion to the most recent ad click and ignores every earlier touch. There is no fractional math under last click — touches before the last one receive zero credit and zero conversion value, period.

For the same four-touch POD path described earlier — PMax click, YouTube engaged view, generic Search click, branded Search click — linear would have distributed $8 to each campaign. Last click would have credited the branded Search campaign with $32 and the other three with $0. The total is the same; the allocation is wildly different.

The implication for POD reporting is direct. Under last click, your branded Search ROAS looks artificially strong, your Performance Max ROAS looks artificially weak, and your generic Search and YouTube ROAS look near-zero on conversions even though those campaigns are doing real upstream work. Under linear, those campaigns would have appeared roughly equal in conversion credit. Under DDA, they appear in proportion to their actual contribution — usually somewhere between linear's "everyone equal" and last click's "branded gets everything."

If you're stuck on last click for a specific reason (single-channel branded-only account, external measurement match, etc.), the linear-era takeaway still applies: don't read your branded Search ROAS as the full picture. The ROAS reported there includes credit for earlier campaigns' work, and shutting off PMax or generic Search to "save spend" usually reveals that branded Search ROAS was riding on that earlier campaign's awareness all along.

Reading old POD reports tagged to linear

If your account is more than three years old, you almost certainly have historical reports — internal dashboards, monthly board decks, agency monthly summaries — that show ROAS calculated under linear attribution. Those numbers aren't wrong; they just aren't comparable to today's numbers, which are calculated under DDA or last click.

The practical issue is year-over-year comparison. A monthly ROAS series that runs from January 2022 to today crosses two attribution-model regime changes: linear (or whatever you had before April 2023) → last click → DDA. Each change rewrote the historical conversion math going forward, but old saved reports preserve the old numbers in screenshots and exports. A "ROAS down 15% year over year" reading from a 2024 vs 2022 comparison may be entirely an attribution-model artifact, not a real performance change.

Three things to do with linear-era reports:

  • Annotate the boundary. Whatever dashboard or doc carries your historical ROAS series should have a labeled vertical line in early 2023 marking the model change. Without it, you'll spend quarterly meetings arguing about a "trend" that's a measurement artifact.
  • Don't reverse-engineer the linear math from current DDA data. You can't faithfully back-calculate what your numbers would look like under linear today, because Google Ads no longer exposes the touch-level credit fractions linear used. Anyone offering to "approximate linear ROAS" from current data is making it up.
  • Use the cluster pillar as the frame for cross-period reporting. The fuller treatment of how to compare ROAS across attribution-model regime changes is at the complete guide to Google Ads ROAS and attribution for POD, which covers the boundaries to mark and the questions to stop asking.

Smart Bidding fallout from the linear retirement

The retirement of linear didn't just change reports — it changed what Smart Bidding optimizes toward. Any account running Target ROAS, Maximize Conversion Value, Maximize Conversions, or eCPC has a bidder that reads credit weights from the active attribution model and bids on touches in proportion to the credit they're expected to receive.

Under linear, a four-touch path's bidder would have valued each touch at 25% of expected conversion value, so bids on filler upper-funnel placements ran higher than they should have. Under DDA, those filler placements receive lower credit weights and the bidder cools off on them; under last click, they receive zero credit and the bidder ignores them entirely.

For accounts that migrated from linear to last click in spring 2023, the immediate fallout was a 14–30 day window of Smart Bidding redistribution where budget shifted dramatically toward bottom-funnel placements (mostly branded Search) and ROAS reports looked chaotic during the shift. For accounts that subsequently migrated from last click to DDA in 2023–2024, a second 14–30 day rebalancing happened in the opposite direction. POD operators who watched their daily ROAS through both transitions saw two separate volatility windows that had nothing to do with creative, audience, or product changes — they were Smart Bidding recalibrating to a new credit model.

The takeaway for current operations: the daily ROAS volatility from any future model change isn't something to "trade through." Make the change, leave it alone for at least 30 days, and read the post-stabilization numbers as the new baseline. The bid-strategy implications are covered in more depth in the complete Google Ads playbook for print-on-demand sellers.

The POD blind spot: no model has ever seen Printify cost

The bigger story about linear, last click, DDA, and any future attribution model is that none of them, by themselves, can answer the question a POD operator actually needs answered: am I making money on this campaign.

The reason is structural. The conversion value Google Ads sees, under any attribution model, is whatever you sent through the conversion tag. For the default Shopify Google channel app, that value is order subtotal — sometimes gross of shipping, never net of supplier cost. The Printify or Printful base cost on each line item, the Shopify platform fee, the Stripe payment-processing fee, the return rate on personalized SKUs that printed wrong — none of that is in the value Google sees.

This produces a specific failure mode that compounds across attribution models. Linear could split a $32 conversion across four touches at $8 each with perfect equality. DDA can split the same $32 conversion across four touches with sharper, more accurate credit weights. Both are correct given the value Google was told. Both miss the fact that the actual margin on that $32 sale, after Printify's $19.40 base cost and Shopify's ~$3 platform plus processing fees, is closer to $9.60. The campaign needs not just a positive Google Ads ROAS but a Google Ads ROAS above roughly 3.3x just to break even on a contribution-margin basis.
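The break-even arithmetic is worth writing down once, using the example figures above (the supplier cost and fee estimates are illustrative, not universal POD numbers):

```python
subtotal = 32.00          # what the default conversion tag reports to Google Ads
printify_base = 19.40     # supplier fulfillment cost for this order (example figure)
platform_fees = 3.00      # rough Shopify platform + payment-processing estimate

margin = subtotal - printify_base - platform_fees   # contribution margin on the sale
breakeven_roas = subtotal / margin                  # reported ROAS needed to break even on margin

print(f"margin per sale: ${margin:.2f}")
print(f"break-even Google Ads ROAS: {breakeven_roas:.2f}x")
```

Any reported ROAS below that break-even figure is a margin-negative campaign, however green the Google Ads UI looks.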

An account running tROAS at 2.5x with default Shopify-pixel revenue tracking is, in POD terms, an account systematically losing money on every sale while reporting "profitable" in the Google Ads UI. The attribution model — whichever one you're on — isn't the source of the error. The value layer that the model is reading is.

Layering margin on top of whichever model you ended up with

There are three workable approaches for closing the gap between Google Ads' attribution-model output and a POD store's actual margin. They differ in setup effort and how dynamically they reflect SKU-level cost differences.

1. Send margin in the conversion value field. The cleanest fix. Replace the default Shopify pixel that sends order subtotal with a custom integration that calculates margin per order — subtotal minus the Printify or Printful fulfillment cost for each line item, minus a flat estimate for Shopify and processing fees. The conversion value Google sees becomes margin, and DDA's credit distribution and Smart Bidding's tROAS targets all align to a number that reflects business reality. Setup effort is high; the calculation has to happen at order-fired time and read live supplier costs that vary by SKU.
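A minimal sketch of what that per-order calculation looks like. The order structure and field names here are hypothetical — a real integration would read line-item base costs from the Printify or Printful order data at purchase time:

```python
def margin_conversion_value(order):
    """Conversion value to send to Google Ads: order margin instead of subtotal.
    `order` is a hypothetical dict for illustration -- a real implementation
    would pull per-SKU base costs from the supplier's order data live."""
    FLAT_FEE_ESTIMATE = 3.00  # rough Shopify platform + processing fees per order
    supplier_cost = sum(item["base_cost"] * item["qty"] for item in order["line_items"])
    return round(max(order["subtotal"] - supplier_cost - FLAT_FEE_ESTIMATE, 0.0), 2)

order = {
    "subtotal": 32.00,
    "line_items": [{"sku": "tee-custom", "base_cost": 19.40, "qty": 1}],
}
print(margin_conversion_value(order))  # → 9.6
```

With this in place, the $32 sale reports as a $9.60 conversion, and tROAS targets can be read at face value.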

2. Adjust tROAS targets manually for the gap. The accountant fix. Leave the default Shopify-pixel revenue tracking in place but raise your Target ROAS in Google Ads to compensate for the unseen cost. If your average POD margin is roughly 30% of subtotal, a tROAS of 3.3x in Google Ads is roughly equivalent to a true 1.0x return on margin. Easy to set up, instantly applicable. The downside: it's a flat correction that can't differentiate a $40 sweatshirt campaign (lower margin %) from a $25 mug campaign (higher margin %).
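The correction itself is one division. A sketch, assuming the uniform average margin that is this approach's stated weakness:

```python
def corrected_troas(target_margin_roas, margin_pct_of_subtotal):
    """Flat tROAS correction: with revenue-based tracking, scale the Google Ads
    target up by 1 / margin%. Assumes one average margin across the campaign's
    SKU mix -- the known limitation of this approach."""
    return target_margin_roas / margin_pct_of_subtotal

# To truly break even (1.0x on margin) at a 30% average margin:
print(f"{corrected_troas(1.0, 0.30):.2f}x")  # → 3.33x
```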

3. Audit ROAS in a separate margin layer outside Google Ads. The reporter fix. Keep Google Ads' attribution model and conversion tracking as-is, but maintain a separate view (BigQuery, Looker Studio, a spreadsheet) that pulls Google Ads spend and joins it against actual order-level margin data from Shopify, Printify, and Printful. The Google Ads UI continues to report attributed revenue; your separate view reports attributed margin. Decisions get made off the second number.
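The join itself is simple; the work is in the data plumbing. A toy sketch with made-up figures — a real version would pull spend from Google Ads reports and order-level costs from Shopify and the supplier APIs or exports:

```python
# Minimal "separate margin layer": join campaign-level ad spend against
# order-level margin and report both ROAS numbers side by side.
# All figures below are made-up illustrations.
spend = {"pmax": 20.00, "branded_search": 5.00}

orders = [  # (campaign credited, subtotal, supplier cost + fees)
    ("pmax", 32.00, 22.40),
    ("pmax", 28.00, 19.10),
    ("branded_search", 32.00, 22.40),
]

for campaign, ad_spend in spend.items():
    revenue = sum(s for c, s, _ in orders if c == campaign)
    margin = sum(s - cost for c, s, cost in orders if c == campaign)
    print(campaign,
          f"revenue ROAS {revenue / ad_spend:.2f}x",
          f"margin ROAS {margin / ad_spend:.2f}x")
```

The Google Ads UI keeps reporting the first number; decisions get made off the second.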

Most POD operators we talk to start with option 2 because it's a 30-second change, then graduate to option 3 once monthly ad spend crosses $5K and a 2% accounting error turns into real money. Option 1 is the cleanest but requires the most engineering and is the least common in practice. The full trade-off discussion is in the complete guide to Google Ads ROAS and attribution for POD, and the topic-level frame is at Google Ads for POD.

How Victor reads attribution against live POD margin

Victor is PodVector's AI agent for POD operators. It connects to your Google Ads, Shopify, and Printify or Printful accounts and answers questions about your campaigns from live data.

The attribution-model conversation has a fairly small role in how Victor operates day-to-day. The model your account is on (DDA in nearly all cases, sometimes last click) determines how Google distributes credit across your campaign touchpoints. Victor reads that distribution as Google reports it; it doesn't try to re-attribute under a phantom linear or any other retired model. What Victor does add is the margin layer Google has never seen.

When you ask "what's my true ROAS on Performance Max for the last 30 days, after Printify cost," Victor pulls the DDA-attributed conversion value for PMax from Google Ads, joins it against the actual order-level fulfillment cost from your supplier account, subtracts Shopify and processing fees, and returns the margin-based ROAS. No spreadsheet, no scheduled BigQuery job, no manual cost-of-goods column. The model's credit distribution stays Google's; the value layer it gets compared against becomes the real one.

The same query infrastructure handles harder questions: "which campaigns have the widest gap between reported revenue and actual margin," "for last week's converters, what was the average path length and how much credit did upper-funnel touches receive under DDA," "what is the historical ROAS series for branded Search before and after the 2023 attribution-model change, with the boundary marked." These are the questions an attribution model alone can't answer because they require the model's output joined to data the model never sees.

The longer-term direction is moving from answering attribution-margin questions to acting on them. Today Victor reports the gap between attributed revenue and margin; tomorrow it adjusts tROAS targets per campaign based on the SKU mix flowing through that campaign, raises the target on lower-margin sweatshirt campaigns, lowers it on higher-margin mug campaigns, and tells you what changed and why. The attribution model stays Google's. The operational decisions that depend on it become Victor's to suggest or execute, with you in the loop.

FAQs

Is the linear attribution model still available in Google Ads in 2026?

No. Linear, along with first click, time decay, and position-based, was retired in April 2023. The model selection dropdown in Google Ads today shows only two options: Data-driven and Last click. There is no path to re-enable linear.

What attribution model is my account using if I had linear set before 2023?

Almost certainly data-driven attribution (DDA). Linear conversion actions were auto-migrated first to last click in spring 2023, then nudged to DDA through 2023–2024 via in-product prompts and default-model changes. To verify, go to Tools → Goals → Conversions, click your conversion action, and check the Attribution model field.

Why did Google retire linear attribution?

Officially, because DDA became universally available with modeled DDA filling in for accounts below the historical data threshold. Unofficially, because Smart Bidding works dramatically better when the bidder reads from a model whose credit weights reflect actual contribution rather than a fixed equal-credit formula.

How was linear different from data-driven attribution?

Linear used a fixed rule: every ad interaction in the path got equal credit. DDA uses an account-trained machine-learning model that assigns unequal credit weights based on how much each specific touch changed the conversion outcome. DDA's distribution is closer to what's actually happening in the path; linear's distribution was uniform regardless of contribution.

Can I still see linear-attributed numbers in old Google Ads reports?

Old saved reports and exports preserve the linear-era numbers, but Google Ads no longer recalculates current data under linear. The historical conversion math has been rewritten under successive model migrations, so a current view of 2022 data will show last-click or DDA numbers, not the linear numbers your dashboards captured at the time.

Should I compare 2022 ROAS to 2026 ROAS in my POD account?

Only with a labeled boundary marker for the attribution-model regime changes. The 2022 number was likely calculated under linear or another retired model; the 2026 number is under DDA. The comparison is valid for revenue-trend purposes but not for fine-grained campaign performance reads. The cluster pillar covers how to handle this in detail.

If linear redistributed credit toward upper-funnel touches, did it inflate Performance Max ROAS?

Compared to last click, yes — linear gave PMax a share of credit on multi-touch paths that last click hands entirely to branded Search. Compared to DDA, linear's credit was usually too generous to weak filler touches and not generous enough to strong mid-funnel touches. The "linear flattered PMax" reading is true relative to last click, less true relative to DDA.

Can I approximate linear attribution outside Google Ads using path data?

In principle, yes — if you export the conversion path data and apply equal-credit math externally. In practice, Google Ads no longer exposes touch-level path data the way the old Model Comparison report did, so the approximation is rough. For most POD accounts the effort isn't worth it; the better question is what to do with the model your account is actually on, not what the data would look like under a model that's been retired.

Does the attribution model account for Printify or Printful base cost?

No. No attribution model — linear, last click, DDA, or any retired model — has ever seen supplier cost. The model only operates on the conversion value sent through the conversion tag, which is typically order subtotal. Margin has to be layered in separately, either at the conversion-value field, in your tROAS target, or in a separate reporting layer outside Google Ads.

What should I do with old agency reports that still cite linear ROAS as a benchmark?

Treat them as historical context rather than current targets. Linear-era ROAS isn't comparable to current DDA-era ROAS in either direction, and a benchmark from 2022 isn't a 2026 goal. Reset benchmarks against current-model data; the cluster pillar at the complete guide to Google Ads ROAS and attribution for POD covers how to build current-model targets.


See your true Google Ads ROAS, after Printify and Printful cost

Linear is gone. DDA or last click is the current menu. Both correctly distribute credit across the path Google Ads can see — neither sees the Printify or Printful base cost that determines whether the campaign is actually profitable. Victor connects to your Google Ads, Shopify, and supplier accounts and answers margin-based ROAS questions from live data — no spreadsheets, no scheduled jobs. Try Victor free and ask "what's my true ROAS on Performance Max after fulfillment cost" as the first question.