In 2022 I led the research and strategy behind a sustainability campaign for ABB – a global industrial technology company operating across energy, manufacturing and infrastructure. The brief was significant: a six-figure budget, a 200-person quantitative survey that I designed and analysed myself, a creative concept developed with an external partner, and a campaign that eventually went to market.
I wasn’t there when it did. I had moved on before the campaign launched.
The results existed somewhere. Engagement metrics, reach data, some measure of whether the thinking had worked. But I couldn’t claim them. The thread between my work and the outcome had been cut – not by failure, but by time.
That experience sat with me. I had spent months building the foundation and someone else got to stand on it. But it also made something clear: attribution in B2B marketing is rarely a technical problem. It is a timing problem, an organisational problem, a problem of who is in the room when the number gets reported.
At Green Hat, the same dynamic played out at scale. The agency doubled revenue during the years I worked there. I won clients – Mimecast, Grant Thornton, Indeed, MYOB, among others. I built pitch processes, orchestrated teams, handed relationships over carefully so they would grow. The contribution was real. But the revenue line belonged to everyone and no one. I couldn’t separate my thread from the whole.
That invisibility is precisely what attribution models are supposed to resolve. And it is precisely what they consistently fail to do.
Why measuring what’s easy to measure is actively damaging B2B marketing strategy
This essay sits within the broader perspective outlined in Why Marketing Strategy Fails, which examines the structural conditions under which strategy struggles to function as an organising system.
There is a particular kind of organisational comfort that comes from a well-populated dashboard. Numbers populate fields, graphs trend upward, and the impression of control is maintained. The problem is that in B2B marketing, the metrics most likely to fill that dashboard are also the least likely to reflect what is actually driving – or undermining – growth.
Attribution is not a measurement problem. It is a strategic one. When the evidence base is wrong, decisions built on top of it are wrong too. And in B2B marketing, the evidence base has been structurally compromised for years – by models built for a different kind of buying, by platforms that have a commercial interest in their own data looking like the complete picture, and by an internal culture that rewards legibility over accuracy.
The Measurement Comfort Blanket
Organisations invest in measurement to feel in control of outcomes they don’t fully understand.
The marketing technology market reached $6.65 billion globally in 2024, and is projected to nearly double by 2030. The average B2B organisation now operates between 12 and 20 marketing technology tools, with nearly half of organisations allocating 20–40% of their total marketing budget to technology alone. That is a significant investment in the infrastructure of measurement.
Gartner found that the share of marketers using their martech stacks to full capability dropped to just 33% – and the gap between investment and utilisation has widened year on year. The tools are accumulating. The insight is not.
What is being built, in many organisations, is not a measurement capability but a measurement performance – the appearance of data-driven rigour in place of the thing itself. The dashboard becomes a substitute for understanding, and the metrics reported upward are chosen for their legibility to the C-suite rather than their relevance to strategic decisions.
What gets reported is what is easy to count. What drives growth is often neither.
The Vanity Metric Problem
Context is lost at every layer of translation between data and decision.
The difficulty with B2B marketing metrics is not that they are false. It is that they are partial – and that the partial truth, presented without context, actively misleads.
Impressions, click-through rates, MQL volumes, cost-per-lead: each of these measures something real. But each also strips away the context that would make it meaningful. A high MQL volume generated from a campaign targeting the wrong seniority level tells a story of marketing success that sales will immediately contradict. A low cost-per-lead achieved by narrowing targeting to the most conversion-ready slice of the market obscures the cost of ignoring everyone who is not yet ready to raise their hand.
Research from 6sense’s 2025 B2B Marketing Attribution Benchmark found that of the two or three metrics most organisations report to the board, only one or two reflect the full buyer journey – and metrics that capture the entire buying process make up only around 10% of all metrics available.
The information that would actually inform strategic decisions – pipeline velocity, buying stage progression, share of consideration among in-market accounts – is rarely what gets reported. What gets reported is what travels cleanly through a slide.
In large organisations and in agency relationships, the problem compounds. Data is summarised on the way up, interpreted on the way across, and contextualised on the way out. By the time a metric reaches a decision-maker, its caveats have been stripped and its confidence has been inflated. The number is clean. The reality it represents is not.
Frederick Reichheld designed NPS to be the simplest possible measure of customer loyalty – one question, one number, one clear signal. His research found it predicted revenue growth better than any other metric across 14 industries. Within a decade, it had become one of the most widely tracked and least actioned numbers in business. CustomerThink, discussing Gartner research, reported that fewer than 30% of companies using NPS could demonstrate a direct link between their score and revenue outcomes, and surveyed some alternatives.
In summary, the metric became the goal. The signal got lost.
Reichheld, F. (2003). The One Number You Need to Grow. Harvard Business Review.
The Tool Lock-In Trap
The platforms that measure your marketing have a commercial interest in their own data looking like the complete picture.
One of the least discussed drivers of attribution distortion in B2B marketing is structural: the major platforms – Google, Adobe, Salesforce – are not neutral measurement infrastructure. They are commercial ecosystems with a direct financial incentive to make their own data look definitive, and their own environments look indispensable.
Adobe and Salesforce data platforms are deliberately sparse on connectivity to platforms outside their ecosystems – because the whole point is to keep organisations locked in. Integrating with external solutions requires significant additional spend on middleware tools and custom APIs, and migrations between versions of the same platform can cost enterprises weeks of downtime and fees that rival the software itself.
DATAVERSITY’s 2024 Trends in Data Management survey found that 68% of respondents cite data silos as their top concern – a figure that has grown year on year. Those silos are not accidents. They are, in many cases, the product of platform architectures that prioritise ecosystem retention over data portability.
The result is a measurement environment where the picture is only as complete as the platforms that dominate your stack – and where the activity that falls outside those platforms simply does not exist in the data. The offline conversation, the peer referral, the conference encounter that started a relationship – none of it registers. It is not that those things are not working. It is that they leave no trackable trace, and so they receive no credit, and so they get cut.
Why Attribution Models Don’t Work for B2B
The models were built for a different kind of buying. They have been awkwardly retrofitted onto B2B ever since.
Last-click, first-click, linear, time-decay, position-based: these are the models that populate most B2B attribution conversations. Each assumes something that B2B buying rarely delivers – a traceable, digital, individual journey with a clear beginning, a series of identifiable touchpoints, and a single moment of conversion.
HockeyStack’s 2024 B2B Customer Journey Report, analysing 150 B2B companies, found that the average deal requires 266 touchpoints and 2,879 impressions. Forrester data puts the number of interactions in a typical B2B buying journey at 27 – up from 17 in recent years. Attributing a deal to any one of those interactions, or even to a weighted subset, is not measurement. It is an allocation of credit that tells a story convenient to the model, not necessarily true to the buyer.
The deeper problem is that B2B buying is not an individual activity. RevSure’s 2025 State of B2B Marketing Attribution report found that 91% of marketers focus attribution only on the primary decision-maker – ignoring the other five to nine people who influence the purchase. Attribution models track the champion because the champion is the one who fills in the form. Everyone else in the buying committee is invisible to the model.
A LinkedIn video viewed to completion with no subsequent click registers as nothing in standard attribution. The conversion that follows weeks later gets attributed to the direct session or branded search that finally brought the buyer back. The channel that built the relationship receives no credit. The channel that took the conversion does.
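The dynamic above is easy to make concrete. A minimal sketch, with hypothetical channel names, shows how the standard rule-based models divide credit for the same tracked journey – and how the LinkedIn video view never appears at all, because it left no click to record:

```python
# Illustrative only: hypothetical touchpoints, not real campaign data.
# The completed LinkedIn video view is absent from the tracked journey
# because it produced no click and therefore no trackable trace.
def last_click(touchpoints):
    return {touchpoints[-1]: 1.0}

def first_click(touchpoints):
    return {touchpoints[0]: 1.0}

def linear(touchpoints):
    # Equal credit to every tracked touchpoint.
    share = 1.0 / len(touchpoints)
    credit = {}
    for t in touchpoints:
        credit[t] = credit.get(t, 0.0) + share
    return credit

def position_based(touchpoints, endpoint_weight=0.4):
    # 40% to the first touch, 40% to the last, 20% split across the middle.
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    credit = {touchpoints[0]: endpoint_weight}
    credit[touchpoints[-1]] = credit.get(touchpoints[-1], 0.0) + endpoint_weight
    middle = touchpoints[1:-1]
    if middle:
        share = (1.0 - 2 * endpoint_weight) / len(middle)
        for t in middle:
            credit[t] = credit.get(t, 0.0) + share
    return credit

journey = ["webinar", "email_click", "branded_search"]
for model in (last_click, first_click, linear, position_based):
    print(model.__name__, model(journey))
```

Four models, one journey, four different stories – and in every one of them, branded search collects credit for a relationship it did not build.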
Budget decisions follow the data. The channel that is hard to attribute gets cut. The channel that takes easy credit gets funded. Over time, the marketing mix shifts toward what is measurable and away from what works.
The Persona Intelligence Gap
Better measurement starts with better understanding of who you are actually trying to reach.
The attribution problem runs deeper than the models. It starts with the quality of persona intelligence that underpins strategy in the first place.
Most B2B organisations make attribution and targeting decisions based on persona data that is thin, dated, or built from a sample too small to be reliable. The C-suite executives, procurement professionals, and technical evaluators who shape complex buying decisions are notoriously hard to survey – expensive to reach, unwilling to engage, and poorly represented in standard research panels. What fills the gap is assumption dressed as insight: demographic sketches that flatten behavioural nuance, firmographic data that says nothing about how a buyer actually thinks, and technographic signals that tell you what tools an organisation uses but not how decisions get made within it.
This is beginning to change. Platforms like Evidenza – founded by the former heads of the LinkedIn B2B Institute and built on the principles of Ehrenberg-Bass marketing science – generate synthetic personas from layered data sets that span behavioural, demographic, firmographic, and technographic variables. When ServiceNow needed campaign-ready personas for a global AI product launch, traditional research would have taken twelve months. Evidenza replaced that timeline with a thirty-day sprint – producing buying-centre personas, segmentation logic, and creative direction at a fraction of the time and cost.
The credibility of the methodology is being validated at scale. Dentsu announced a strategic partnership with Evidenza in June 2025, integrating synthetic audiences directly into their media planning workflows. Early results showed a 0.87 correlation with traditional research methods – a result their Chief Data and Technology Officer described as proving the approach can match legacy rigour while dramatically accelerating time to insight. The partnership specifically cited B2B as a primary beneficiary, precisely because the hard-to-reach senior decision-makers that synthetic personas excel at modelling are the same ones that traditional panels consistently underrepresent.
The implication for attribution is significant. If the persona intelligence feeding your strategy is more precise – capturing not just who your buyer is but how they behave, what they read, which entry points they use, and what objections they carry – then the measurement built on top of it becomes more meaningful too. Better inputs, better signal.
Anderson, Narus and van Rossum found that fewer than one in five B2B suppliers could quantify the value they delivered to customers in financial terms. The rest relied on assertion. This is not just a sales problem – it is a measurement problem. You cannot attribute what you cannot define. If marketing cannot articulate, in the customer’s own terms, what changes because of your product or service, no attribution model will make that visible.
Anderson, J., Narus, J. & van Rossum, W. (2006). Customer Value Propositions in Business Markets. Harvard Business Review.
What Useful Measurement Actually Looks Like in B2B
Metrics that point forward, not just backward.
The honest starting point is to accept that perfect attribution is not available in B2B – and that pursuing it through increasingly complex models often creates more distortion, not less. The average B2B buyer consumes over eleven pieces of content and requires more than thirty marketing touchpoints before engaging with sales – a journey that looks far more like a pretzel than a straight line. No model captures that cleanly, and the pretence that it does is part of the problem.
What more useful measurement tends to look like in practice is a combination of leading and lagging indicators that together tell a more honest story than any single attribution model can. Pipeline velocity – how quickly opportunities move through stages – is more strategically informative than volume of MQLs. Share of consideration among in-market accounts tells you something about brand salience that click-through rates never will. Buying group engagement, tracking signals across the full committee rather than the declared individual, is available through account-based platforms and meaningfully closes the gap between tracked behaviour and actual buying activity.
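Pipeline velocity, unlike most attribution outputs, can be computed from data every CRM already holds. A minimal sketch, assuming hypothetical opportunity records and stage names rather than any particular CRM schema:

```python
from datetime import date

# Hypothetical opportunity records: the date each deal entered each stage.
# Stage names and field names are illustrative, not tied to a real CRM.
opportunities = [
    {"id": "opp-1", "stages": {"qualified": date(2024, 1, 5),
                               "proposal": date(2024, 2, 2),
                               "closed_won": date(2024, 3, 1)}},
    {"id": "opp-2", "stages": {"qualified": date(2024, 1, 10),
                               "proposal": date(2024, 3, 15)}},
]

def avg_days_between(opps, from_stage, to_stage):
    """Average days opportunities take to move from one stage to another,
    counting only deals that have reached both stages."""
    durations = [
        (o["stages"][to_stage] - o["stages"][from_stage]).days
        for o in opps
        if from_stage in o["stages"] and to_stage in o["stages"]
    ]
    return sum(durations) / len(durations) if durations else None

print(avg_days_between(opportunities, "qualified", "proposal"))  # → 46.5
```

Tracked over time, a number like this answers a forward-looking question – is the pipeline speeding up or slowing down? – rather than the backward-looking one of which click deserves the credit.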
None of these are perfect. But they are honest about their limitations in a way that last-click attribution is not. And that honesty is strategically more valuable than the false confidence of a clean number.
The question is not whether your marketing is generating data. It is whether the data you are generating is capable of telling you anything true about where growth comes from.
Conclusion
The map is not the territory. In B2B attribution, the gap between the two is costing real money.
Marketing attribution was adopted in B2B because it promised clarity in a complex environment. That promise has not been fully delivered – not because attribution is wrong in principle, but because the models, tools, and organisational behaviours built around it were never designed for the length, complexity, and multi-stakeholder reality of how B2B buying actually works.
The consequence is a strategic environment where investment follows what is easy to measure rather than what is driving growth. Where channels that build long-term preference are defunded because they cannot prove their contribution in a 30-day window. Where the dashboard looks confident and the strategy underneath it is flying partially blind.
Fixing attribution in B2B is not primarily a technology problem. It is a willingness problem – a willingness to accept that some of the most important things marketing does will never show up cleanly in a report, and to build measurement frameworks that are honest about that limitation rather than obscuring it behind model complexity.
This piece is part of a series examining why B2B marketing continues to underperform its potential.
Also in this series: The Buying Committee Nobody Markets To | Why Marketing Strategy Fails | The Always-On Argument