
Why I'm done with broad match in B2B

Alex Langton · Senior B2B paid media manager · ~$650K/mo industrial spend

Google's account reps pushed broad match hard. They still do. They'll tell you the algorithm is smart enough now to find conversions you'd never think of, that exact match is leaving money on the table, that you have to trust the machine.

I don't.

I've been running B2B paid media at scale for 12 years, across $650K a month in managed spend. The overwhelming consensus in that time has been that broad match is necessary, efficient, and the path to growth. Every Google rep I've ever talked to has pushed it. Every conference talk glorifies it. And I've watched it destroy margins in account after account in niche manufacturing, where the data just isn't dense enough for the algorithm to make anything close to intelligent decisions.

So I killed it. Completely. Every account I manage uses exact match and phrase match only. Zero broad match. And I'm watching my close rates stay stable while my cost per qualified lead improves.

The mechanical problem with broad match in low-volume categories

Broad match relies on something that doesn't exist in industrial B2B: signal density.

The algorithm trains on massive data sets. Consumer e-commerce generates millions of conversions a month. SaaS lead gen at scale generates thousands. These signals are loud and clear. The machine learns fast and accurately.

Now imagine the opposite. I'm running a $40K laser marking system for industrial manufacturing. The category has maybe 3,000 qualified buyers in North America. My account converts at maybe 3-5 leads a month if we're doing it right. That's 36-60 conversions a year. The algorithm is supposed to learn from 36 data points about what matters in a 9-month sales cycle.

That's not enough data to learn anything. So the algorithm guesses. And in my experience, it guesses by mapping you to cheap, easy conversions that have nothing to do with your actual buyer.

I've audited search term reports where a laser marking system was being matched to "engrave my wedding ring," "laser cutter for hobby," "mark my dog's name on a tag." These aren't close variants. They're completely different intent. But broad match found them anyway because they convert fast and cheap, and that's what the algorithm knows how to optimize for.

What broad match actually does in a low-intent environment

The fatal flaw of broad match is that it doesn't distinguish between intent and availability. It just finds things that convert.

In a crowded marketplace, that's fine. The algorithm can afford to be sloppy because the sheer volume of attempts means some percentage will hit real buyers. But in a niche vertical where your buyer is a specific person at a specific company solving a specific problem, broad match is just burning cash on noise.

Here's what I saw in client audits. Broad match accounts had CPLs that looked amazing on the surface. $45 a lead. $60 a lead. Looked great in a board deck. But when we traced those leads to Salesforce, something like 15-20% of them were actually qualified. The rest were students, hobbyists, people in the wrong industry, competitors researching pricing.
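The gap between the board-deck CPL and the real acquisition cost is simple arithmetic: divide the surface CPL by the qualified rate. A minimal sketch using the numbers above (the function name is mine, not from any ads platform):

```python
def cost_per_qualified_lead(cpl: float, qualified_rate: float) -> float:
    """True acquisition cost once unqualified leads are discarded.

    cpl: surface cost per lead as reported by the ad platform.
    qualified_rate: fraction of those leads that survive CRM qualification.
    """
    return cpl / qualified_rate

# A "$45 CPL" at a 15-20% qualified rate is really $225-$300 per real lead:
print(round(cost_per_qualified_lead(45, 0.20)))  # 225
print(round(cost_per_qualified_lead(45, 0.15)))  # 300
```

That tripling-or-worse is invisible unless you trace leads through to the CRM, which is exactly why the surface CPL keeps surviving board decks.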

The account was optimizing for cheap junk, not for real pipeline. And because broad match was feeding the algorithm only cheap junk signals, the machine kept optimizing harder and harder toward cheaper, more junk-like traffic.

It's a doom loop. And the only way out is to shut it off.

The Google playbook, and why it doesn't work here

I know the pushback because I've heard it a hundred times. The account rep playbook goes like this:

"Broad match finds intent you wouldn't think of. Exact match limits your reach. You need to trust the algorithm. Smart Bidding works better with more data. Close variants are accurate now."

All of this is true in e-commerce. None of it is true in industrial B2B.

Exact match in a niche vertical isn't limiting your reach. It's filtering out noise. The reach you're losing is worthless reach. It's people who will never buy.

Smart Bidding works better with more data, but that data needs to be good data. Feeding the algorithm signal from random hobbyists doesn't help. It hurts.

Close variants are accurate at scale. In a category with maybe 3,000 qualified buyers total, accuracy breaks down completely.

What I do instead

I run exact match on all proven, high-intent terms. I run phrase match as a query miner. Zero broad match across all accounts.

The structure is simple. Every exact match keyword gets its own ad group with highly specific copy. Phrase match campaigns run separately and feed me search term data for expansion. Negatives apply at the account level and protect everything.

The result is a system that doesn't hallucinate. Google can't send you traffic from the wrong buyers because you've explicitly told it what traffic is allowed. CPL goes up on paper. Pipeline velocity goes up in reality. Sales stops complaining about garbage leads.

Would I like cheaper reach? Of course. Would I like the algorithm to work without me having to manually build out hundreds of exact match keywords? Obviously. But I'm not going to pretend that giving up control is the same thing as improving performance. It's not. It's just a convenient lie that agencies tell themselves.

If your account is broad match heavy and you're wondering why your CPA looks good but your close rate sucks, stop wondering. Turn off broad match for a week. See what actually happens. In every low-volume B2B account I've tested this in, the answer is the same: less volume, better leads, better pipeline.

That's the trade I want to make.

Alex Langton

Senior B2B paid media manager · ~$650K/mo industrial spend

12+ years running B2B Google Ads accounts in industrial, manufacturing, and B2B e-commerce. Builds Langton Tools because generic PPC SaaS was never designed for the multi-MCC, complex-pacing, B2B-vocabulary reality of the accounts that actually drive industrial revenue.