Building Your Own Sales Email Benchmarks

The trick to establishing benchmarks isn't about someone else's metrics. Try out this email benchmark creation process and let us know how it goes.

“What’s a good baseline for cold emailing? How should I set email benchmarks?”

We get these questions all the time. At every role and career level, you'll reach a point where you want (and need) benchmarks.

A reference point for your performance is helpful and essential regardless of your experience level. In B2B especially, sales email benchmarks often tell you where your efforts are working and where you can improve.

Many folks lean on traditional (potentially less helpful) ways to generate benchmarks. “Ask a peer” or “Find data for your specific industry” are common pieces of advice.

I rarely hear someone say something like, “Ignore that external noise. Put something out there that’s yours.”

Create your own benchmark. Here’s how:

1. Work backward

Consider what revenue number you need to hit. You can then work backward through your sales funnel to understand what conversions need to look like to hit your number.

For example, if you close 33% of your inbound demos, use a similar assumption about your outbound efforts. Then you can start to play with the math.

Let’s play out this scenario. Say 50% of your cold email replies turn into booked meetings. That means you need double the replies to hit your meeting target. Calculating the number of replies you need is even simpler: Replies = emails sent × reply rate.

Next, determine how many emails you can reasonably (and responsibly) send per day. The average sales rep spends 15 to 20 minutes per personalized email, but it's closer to three to five minutes with Lavender.

Splitting the difference at roughly 10 minutes per email, that means you'll be sending out about six well-crafted cold emails per hour.

With a 16% reply rate (below the Lavender average of 20.5%), you'd receive about two replies per hour. Since we determined above that half of your replies book a meeting, you're looking at one meeting booked per hour.

  • 6 emails per hour (10 min per email)
  • 2 replies per hour (16% reply rate)
  • 1 meeting booked per hour (50% booked-meeting rate)
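The back-of-the-envelope math above is easy to script. This is a minimal sketch using the example figures from this section (16% reply rate, 50% booking rate, six emails per hour); swap in your own funnel numbers.

```python
# Work backward from a meeting target to the email volume and time required.
# Rates below are the example figures from this section, not universal benchmarks.

def emails_needed(meetings_target, booking_rate=0.50, reply_rate=0.16):
    """Cold emails required to book `meetings_target` meetings."""
    replies_needed = meetings_target / booking_rate  # meetings -> replies
    return replies_needed / reply_rate               # replies -> emails sent

target_meetings = 8
emails = emails_needed(target_meetings)   # 8 meetings -> 16 replies -> 100 emails
hours = emails / 6                        # at six well-crafted emails per hour

print(round(emails))     # 100
print(round(hours, 1))   # 16.7
```

Once this is parameterized, you can immediately see what a few points of reply rate are worth: at a 20.5% reply rate, the same eight meetings take roughly 78 emails instead of 100.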

2. Do the work

The best data for building a benchmark will come from your team. The old way of systematizing outbound through rigid templates is dead. Personalized, non-automated emails see 1,200% more replies, according to our email data, up from 100% the year before. This means your team needs to be set up correctly so they can do the work and see the results referenced above.

Personalization doesn't need to be time-consuming. Use my personalization process to think through target personas and how you’ll approach them.

1. What do you do?

2. What does that solve?

3. Whose problems are solved by that? (Be specific.)

4. How would they describe those problems?

5. How are they approaching that problem now?

6. What indicates that problem exists?

7. Do different combinations of indicators change the problems at hand?

8. Where can you find those indicators?

9. What's the fastest path to finding and scanning those indicators?

Answers to the above questions are indicators, which become the observations that trigger your hypothesis for your prospect’s problems. This combination is at the heart of any good cold email sequence.

This process also unlocks logical writing. It means sellers are working efficiently and effectively. They know where they're going, why they're going there, what they're looking for, and how they will use their research when it comes together.

Roll that logic into an easy-to-execute email framework. Here's one of my favorites:

  • Make an observation
  • Tie that observation to an insight or challenge you think your buyer might be experiencing (emphasis on might)
  • Provide credibility to speak to that challenge
  • Explain what you provide to solve that problem in one sentence
  • Start a conversation with a question

You can easily follow up with a thoughtful bump: a one- to three-sentence email composed of context plus the bump itself.

  • Ask if your recipient had thoughts on your prior message
  • Explain you're curious because of the original reason you reached out

This approach works because you're seeking to start a conversation. You're being thoughtful by providing context for why you're reaching out. And you're considering your reader’s time and attention by asking a quick question in a "bump" format.

Frameworks are crucial when you’re building benchmarks. Trying to find benchmarks with a subpar strategy means you'll never find the optimal result. This is why creating a scalable solution to personalization that works is key.

Email frameworks are essential because they typically produce superior emails and allow you to iterate effectively to an optimal result. Every aspect of a framework is testable and should constantly be tested.

3. Test and iterate

Many teams operate from old-school, volume-based mindsets and fail to iterate thoughtfully. Here’s how you can do this differently.

You can test:

  • Different email frameworks against one another
  • The order of the building blocks that make up a framework, by swapping elements in and out
  • The phrasing within each building block

With this much opportunity, you can easily get overwhelmed. Take it one step at a time, and have a clear learning goal.

Here’s a good litmus test to let you know you're running a good experiment: If you know what C will be, based on the test results of A vs. B, then you're off to a good start. That isn't because the answer will always be C; it's because you're reducing the chances that another variable will confound your results. In other words, you know what you're testing for.
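When you do run A vs. B, it helps to check whether a reply-rate gap is more than noise before acting on it. This is a generic two-proportion z-test sketch, not a feature of any particular tool; the send counts and reply counts below are made-up example numbers.

```python
import math

def reply_rate_z(sent_a, replies_a, sent_b, replies_b):
    """Two-proportion z-score comparing the reply rates of variants A and B."""
    p_a, p_b = replies_a / sent_a, replies_b / sent_b
    pooled = (replies_a + replies_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical experiment: 200 sends each.
# Variant A: 40 replies (20%). Variant B: 24 replies (12%).
z = reply_rate_z(200, 40, 200, 24)
print(round(z, 2))  # |z| above ~1.96 suggests a real difference at the 95% level
```

The practical takeaway: with cold-email sample sizes in the low hundreds, only fairly large reply-rate gaps are distinguishable from chance, which is another reason to test one variable at a time.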

It’s essential to think critically about how we prioritize our learning.

Learning about what triggers, pains, and solution descriptions work most effectively is more important than testing frameworks against one another.

Start top to bottom with your email messaging.

Your observations will inevitably shift depending on the prospect, but you can change the insight or problem as those triggers appear.

For example, if I'm contacting a company because they're hiring, I would use that to split my testing.

In one test, I'd focus on ramp time. In another, I'd focus on learning from the new hire’s perspective.

I’m looking to learn whether performance is more important than visibility.

From that experiment, I can start to iterate on how I describe the solution we provide, given the persona and their situation.

You might be wondering where subject lines play into this experiment. You can take a couple of approaches to this:

1) Tie the subject line to the observation to stay consistent across the tests.

Example: SDR Hiring

2) Tie the subject line to the solution to stay consistent across the tests.

Example: In-Inbox Coach

3) Adjust the subject along with the insight/problem statement

Example: "Ramp Issue" vs. "Onboarding Issue"

If you adjust subject lines, try to keep the preview text the same (except in the first option). More variability means you won't know why the reader opened the email, or why they replied. Since the subject line in option one is tied to the first line, it's less likely to confound your learnings.

While option three may seem like it adds too much variability, it can be your most potent option if you want to understand if the problem resonates.

Since the problem or insight moves into the subject, its visibility is no longer dependent on an open, and the open can serve as a proof point in the experiment. This approach can accelerate your time to learning.

The trick to establishing benchmarks isn't about someone else's metrics. It's about doing the right things, having a clear learning goal, and iterating. Try out this benchmark creation process and let us know how it goes.