Bruce Martinus | 2/8/2021 | 8 min read
It can be extremely difficult to perfect your cold email campaigns on the first attempt.
In fact, even when you find a sequence that works, you’ll soon find that it’s far from perfect.
Sending successful cold email campaigns requires a lot of learning by trial and error and heaps of experimenting.
Like many other forms of copywriting, your email copy needs constant tweaking if you want to fully optimize your campaigns.
In reality, there is no universal hack to building great cold email campaigns.
The trick is to never settle. There is always something to improve.
So, in this blog we’re going to look at how to run split tests on your cold emails, from building the test groups to analyzing the results.
Running A/B tests on your cold emails is arguably the quickest way to identify what works well, and what doesn’t, in your outreach campaigns.
Normally, we split the list of prospects into two similar groups. It’s important to keep these groups matched in terms of size and ideal customer demographics.
Depending on the size of the list and how many campaign versions you want to test, you can also split the contact base into more than two groups. Again, keep these groups as close as possible in terms of size and quality.
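If your prospect list lives in a simple export, a shuffle-and-deal split is an easy way to build evenly sized test groups. This is only an illustrative sketch — the function and email addresses below are made up, not part of any particular outreach tool:

```python
import random

def split_prospects(prospects, n_groups=2, seed=42):
    """Shuffle a prospect list and deal it into roughly equal groups.

    Shuffling first means each group gets a similar mix of prospects;
    the fixed seed makes the split repeatable.
    """
    shuffled = prospects[:]  # copy, so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

prospects = [f"prospect{i}@example.com" for i in range(100)]
group_a, group_b = split_prospects(prospects)
print(len(group_a), len(group_b))  # 50 50
```

If demographics matter more than pure chance, sort or segment the list first so each group inherits a similar mix, rather than relying on the shuffle alone.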
Ultimately, the reason for split testing cold email campaigns is to find the sequence that generates the most of whatever action you’re trying to drive.
The first thing to figure out is which of these three metrics you’re aiming to increase: open rates, response rates, or click rates.
Once you know what you’re looking to achieve, you can easily work out which elements need to be tested to get the desired results.
For example, if you’re looking to increase:
Open Rates: you need to focus on the subject line and introduction.
Response Rates: you need to focus on the body copy.
Click Rates: you need to focus on the offer and call-to-action.
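For reference, these metrics are usually expressed as percentages of emails delivered (some platforms compute click rate against opens instead, so check what your tool reports). A minimal sketch with made-up numbers:

```python
def campaign_rates(delivered, opens, replies, clicks):
    """Express each count as a percentage of delivered emails."""
    return {
        "open_rate": 100 * opens / delivered,
        "response_rate": 100 * replies / delivered,
        "click_rate": 100 * clicks / delivered,
    }

# Hypothetical campaign: 500 delivered, 130 opens, 25 replies, 40 clicks
rates = campaign_rates(delivered=500, opens=130, replies=25, clicks=40)
print(rates)  # {'open_rate': 26.0, 'response_rate': 5.0, 'click_rate': 8.0}
```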
Open rates are probably the fastest metric to test, but don’t let that sway you from the importance of finding a subject line that works.
To quickly test open rates, split the list of prospects into two groups and run a different subject line for each. For example, one half might get a question, while the other gets a personalized subject designed to arouse curiosity.
Either way, the main goal of your subject line is to grab the reader’s attention straight away.
One thing many cold emailers tend to do is send emails with multiple links. This is a bad idea because 1) it can cause your reader to distrust the content you’re providing, and 2) too many links cause confusion. And a confused mind will always say no.
So, when testing your body copy, audit the number of links you’re sending, where those links lead, and the value readers actually get from clicking them.
Another question to answer is whether to use long copy or short copy, and a formal or conversational tone.
And that’s the exact reason why you should be testing because as mentioned earlier, there is no universal hack to creating winning cold email campaigns. Ultimately, it all comes down to your prospects, their personality, and what you’re offering.
This element of your cold email involves split testing your offer and your call-to-action. Without a specific CTA, you’ll have sent a meaningless email to your prospects.
There’s plenty to test here, from the wording of the offer to the level of commitment your CTA asks for.
Normally, the CTAs with the least friction tend to generate higher click rates, but with effective copy it’s entirely possible to drive strong click rates even for a bigger ask like a booked meeting.
At the end of your experiment, analyze the results to see which campaign performed best against the metric you were looking to improve.
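When comparing two variants, the raw difference can be noise if the groups are small. A standard two-proportion z-test — textbook statistics, not a feature of any email tool — gives a rough sense of whether the gap is real. The numbers below are hypothetical:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test on, e.g., opens per emails sent."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: variant A got 62 opens from 250 sends, B got 41 from 250
z, p = two_proportion_z(62, 250, 41, 250)
```

Here p comes out around 0.02, below the usual 0.05 threshold, so variant A’s lead probably isn’t luck. If p lands above 0.05, keep the test running longer before declaring a winner.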
Once you find the campaign that outperformed the other, you should keep that same campaign rolling during the next test.
But don’t rest on your laurels.
Ideally, you should create another campaign that will compete against the winner of the last test with the same elements that you’re focused on optimizing.
What’s more, it’s a good idea not to completely write off the loser of the last test – store it in a file to possibly use again in the future. What works now might not work forever, and the same can be said for what doesn’t work now.