Benchmarks: The Good, The Bad, and The Ugly

My Recommendations Around Creating Benchmarks, And How To Use Them Appropriately

Benchmarks are fantastic. They’re a great relative measure that advertisers can use to provide context around performance. Oftentimes, benchmarks are the cornerstone used to determine whether a campaign is succeeding or failing.

However, just because a benchmark exists doesn’t mean it’s valuable or should be used in every scenario. Below I’ll break down how I like to incorporate benchmarks in the work I do with clients, along with some use cases I’ve seen that I don’t recommend.

The Good

Almost every client I’ve ever worked with has asked me a question similar to “what does our [insert performance metric here] look like against the industry average?”

This is a completely valid question, but one that I would argue is looking in the wrong direction. When discussing performance benchmarks with clients, I always recommend we develop our own based on the data the client already has.

For example, if we’re looking for a benchmark cost per lead, a helpful exercise can be backing out what the client can afford to pay for each lead. If one customer brings in $1,000 of profit, and leads convert at 5%, then our maximum cost per lead benchmark/goal is $50.
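The arithmetic above can be sketched as a quick calculation. The figures are the illustrative ones from the example, not real client data:

```python
# Back out the maximum affordable cost per lead (CPL) from unit economics.
# Figures below are the hypothetical ones from the example above.
profit_per_customer = 1_000   # profit from one new customer, in dollars
lead_to_customer_rate = 0.05  # 5% of leads become customers

# Break-even CPL: the expected profit each lead represents.
max_cpl = profit_per_customer * lead_to_customer_rate
print(f"Maximum cost per lead: ${max_cpl:.2f}")  # → Maximum cost per lead: $50.00
```

Spending any more than this per lead means acquiring customers at a loss, which is why it works well as a hard ceiling rather than a target to hover around.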

Another common benchmark clients ask for is click-through rate (CTR). Again, rather than turning to an industry report, I’ll look back at previous creative tests and use that data to determine what a strong CTR is for the client’s brand against specific audiences.
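As a rough sketch of that approach, you can pool past creative-test results by audience and compute a brand-specific CTR benchmark from them. The data, audience names, and field names here are all hypothetical:

```python
# Derive a brand-specific CTR benchmark from past creative tests,
# rather than an industry report. All data below is hypothetical.
past_tests = [
    {"audience": "retargeting", "impressions": 120_000, "clicks": 1_800},
    {"audience": "retargeting", "impressions": 95_000,  "clicks": 1_330},
    {"audience": "prospecting", "impressions": 200_000, "clicks": 1_400},
]

def ctr_benchmark(tests, audience):
    """Pooled CTR (total clicks / total impressions) across past tests
    for a single audience."""
    impressions = sum(t["impressions"] for t in tests if t["audience"] == audience)
    clicks = sum(t["clicks"] for t in tests if t["audience"] == audience)
    return clicks / impressions

print(f"Retargeting CTR benchmark: {ctr_benchmark(past_tests, 'retargeting'):.2%}")
print(f"Prospecting CTR benchmark: {ctr_benchmark(past_tests, 'prospecting'):.2%}")
```

Segmenting by audience matters here: pooling retargeting and prospecting into one number would hide the fact that a "strong" CTR looks very different for each.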

The Bad

Expanding on the theme above, I’ll share industry benchmarks at times, but I always call out that they’re directional at best. I’ll also never use industry benchmarks to determine whether a campaign or ad has been successful.

I don’t love industry benchmarks because they often lack the context needed to make an apt comparison.

Usually these benchmarks are derived from data pulled across dozens, or even hundreds, of companies. With such a large sample size, it’s much more difficult to account for the individual nuances of each brand used to create the benchmark data.

When thinking about a benchmark CPL, many studies don’t (or can’t) address questions like:

  • What did the creative look like?

  • What did the audiences look like?

  • What did the offers look like?

  • What did the landing pages look like?

  • What are all of these brands considering as a lead?

  • What was the quality of these leads?

Just because an industry benchmark report calls out that the average fitness industry CPL is $100, that doesn’t mean every fitness brand should aim to have a CPL close to $100.

Trying to chase industry benchmarks can be an exercise in aiming at a moving target. Similar to my sentiment from the previous section, my recommendation is almost always to use a brand’s own data to measure continuous improvement.

The Ugly

Even when brands use their own data to develop benchmarks, I’ve still seen some instances where these benchmarks aren’t implemented in a manner that I would recommend.

For example, a brand might have a CTR benchmark based on its average ad performance over the past quarter. At face value, this is great and will help that brand understand creative performance across all of its upcoming tests this year!

However, where I’ve seen some brands wander off the path is when that benchmark is used as a campaign KPI. Just because a CTR benchmark exists, and the campaign’s goal is to drive consideration, that doesn’t mean this benchmark should be used as that campaign’s north star.

The logic in these scenarios is usually that consideration is difficult to measure, and the benchmark exists, so optimizing toward CTR is the easiest option.

The easiest option isn’t always the best option. The mere existence of a proprietary benchmark doesn’t mean it should be used in place of a more revealing measurement strategy. Measuring a campaign’s performance against CTR doesn’t reveal much about its business impact.

Wrapping Up

Benchmarks are incredibly useful data points for anchoring what performance can and should look like.

However, my recommendation is to build out benchmarks using your own data whenever possible, and only optimize toward these benchmarks when there’s a clear tie to business outcomes.

Have questions, considerations, or critiques? I’d love to hear them! If you’re reading this via email, just hit respond. Otherwise, you can find me on LinkedIn.