BIDVANCE ACADEMY

CREATIVE TESTING

As Facebook and Google shift toward automation, they have removed most of the advantages that other adtech companies used to deliver.

However, creative is still an opportunity: even though the algorithms can test different elements of ads, they are unable to create those creative elements. Human beings are still the best crafters when it comes to creative.

There’s still a problem: the vast majority of new creative fails. If an ad can’t beat the control, the only result you’re going to get is an empty wallet.

That’s why the real competitive advantage lies in creative testing – identifying creative winners as fast as possible, for the lowest amount of spend per variation.

Creative Testing


Creative is the differentiator between winning and losing; it’s where the majority of the big wins are.

Here are the elements that tend to produce the most wins:

  • Video: 60%
  • Text: 30%
  • Headlines and calls to action: 10%

Now you know where to start your testing.

However, it isn’t quite that easy. Taking Facebook user acquisition advertising as an example, there are several hidden challenges, including:

Multiple strategies for testing ads – Choices can complicate things. You can test creative with Facebook’s split-test feature, set up one ad per ad set, or place many ads within a single ad set. Whichever option you choose will affect the test results.

Data integrity – Some ads will get more impressions than others, and the CPM for different ads and ad sets will vary. This complicates things because it adds noise to the data, which makes it harder to determine the winning ad.

Cost – Testing has a very high ROI, but it can also require a very high upfront investment. That’s why you need to set up your creative testing right; if you don’t, it can get extremely expensive.

Bias – Facebook’s algorithm favors ads that start winning early. This skews the data even more and makes it harder to establish which ad actually won.

Running tests in Google Ads has similar challenges.

 

Perfect Creative Testing vs. Cost-Effective Creative Testing

In classic testing, you need a 95% confidence level to declare a winner, but reaching 95% confidence on in-app purchases can cost you $20,000 per variation.

To reach a 95% confidence level, you’ll need around 100 purchases. With a 1% purchase rate (typical for gaming apps) and a $200 cost per purchase, you’ll end up spending $20,000 per variation just to accrue enough data for that 95% confidence level.

And that’s the best-case scenario: it assumes you find a variation that beats the control by 25% or more, so the test “only” costs you $20,000.

A variation that beats the control by 5% or 10% would have to run even longer to achieve a 95% confidence level.
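As a quick sanity check, here is that arithmetic as a minimal Python sketch. The purchase rate and cost per purchase are the illustrative figures above, not universal benchmarks:

# Rough per-variation budget for a purchase-level creative test,
# using the illustrative assumptions from this article.

purchases_needed = 100        # purchases per variation for ~95% confidence
purchase_rate = 0.01          # 1% of installs go on to purchase (gaming-app example)
cost_per_purchase = 200.0     # assumed cost per purchase, in dollars

installs_needed = purchases_needed / purchase_rate           # 10,000 installs
budget_per_variation = purchases_needed * cost_per_purchase  # $20,000

print(f"Installs needed per variation: {installs_needed:,.0f}")
print(f"Budget per variation: ${budget_per_variation:,.0f}")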

Very few advertisers can afford to spend that amount of money per variation, especially when 95% of new creative fails to beat the control.

What do you do then?

You move the conversion event you’re targeting a little higher up the sales funnel. For mobile apps, instead of optimizing for purchases you optimize for the impression-to-install rate (IPM); for websites, you optimize for the impression-to-top-of-funnel conversion rate.

 

Impression-to-Action Rate

Ads with high CTRs and high conversion rates for top-funnel events may not be true winners for down-funnel conversions and ROI / ROAS. While there’s a risk of identifying false positives with this method, it’s better to take this risk than the risk and expense of optimizing for bottom-funnel metrics.

 

But if you decided to test for bottom-funnel events instead:

  1. You would increase the spend per variation and introduce substantial risk into your portfolio’s metrics.
  2. You’d need to rely on fewer conversions to make decisions, which runs the risk of identifying false positives.

 

There’s another benefit: when you’re optimizing for IPM (installs per thousand impressions), you’re effectively optimizing for relevance score.

A higher relevance score (Engagement Rank, Quality Rank or Conversion Rank) comes with lower CPMs and access to higher-quality impressions. Ads with higher relevance scores and lower revenue per conversion will often outperform ads with lower relevance scores and higher revenue per conversion because Facebook’s algorithm is biased towards ads with higher relevance scores. 
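To make that concrete, here is an illustrative Python sketch with made-up CPMs, install rates, and revenue figures. It only shows how a lower CPM can more than offset lower revenue per conversion; it does not model Facebook’s actual auction:

def roas(cpm, install_rate, revenue_per_install):
    # ROAS = revenue per install / cost per install,
    # where cost per install = (CPM / 1,000 impressions) / installs per impression.
    cost_per_install = (cpm / 1000) / install_rate
    return revenue_per_install / cost_per_install

# Ad A: higher relevance score -> lower CPM, but less revenue per install.
# Ad B: lower relevance score -> higher CPM, but more revenue per install.
roas_a = roas(cpm=4.0, install_rate=0.015, revenue_per_install=3.0)
roas_b = roas(cpm=8.0, install_rate=0.010, revenue_per_install=4.0)

print(f"Ad A ROAS: {roas_a:.2f}")  # ~11.25
print(f"Ad B ROAS: {roas_b:.2f}")  # ~5.00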

That’s why optimizing for installs works better than optimizing for purchases. It also means you can run tests for $200 per variation, because it only costs you around $2 to get an install. For many advertisers, that alone can make more testing possible.
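With the same assumed figures ($200 per purchase, $2 per install, roughly 100 conversion events per variation), the difference in test budget looks like this:

conversions_needed = 100     # events per variation for a usable read

cost_per_purchase = 200.0    # assumed cost per purchase, in dollars
cost_per_install = 2.0       # assumed cost per install, in dollars

purchase_level_budget = conversions_needed * cost_per_purchase  # $20,000
install_level_budget = conversions_needed * cost_per_install    # $200

print(f"Purchase-level test: ${purchase_level_budget:,.0f} per variation")
print(f"Install-level test:  ${install_level_budget:,.0f} per variation")
print(f"Savings factor: {purchase_level_budget / install_level_budget:.0f}x")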


Here are other good practices to make your creative testing system work:

Facebook’s split-testing feature – There are several ways to test ads, even within Facebook. Skip the other options and use their split-testing feature. 

 

Test against a top-performing control – If you don’t test every variation against your control, you’ll never know whether it can beat the control. You’ll only know how the new ad performed compared to the other new ads you tested.

 

Test only on Facebook’s news feed – There are 14 different placements available in Facebook’s ad inventory. Testing all of them at once creates a lot of noise in the test data. Keep the data clean and just test for the news feed. 

 

Optimize for app installs – This is very important if you want to get your costs down. It’s not the perfect solution, but it works.

 

Aim for at least 100 installs – You need at least 100 installs per variation to reach statistical significance (a quick sketch of how to check this follows after these practices).

 

Always use the right audience – Use a high-quality audience so the test is representative of performance at scale, but one that isn’t being used elsewhere in your account. This minimizes the chance that the audiences in your test cells are exposed to other ads running in other ad sets.

 

Consistent data = higher confidence – It’s not the first day of results that dictates the final outcome. If the test data is consistent, your results will be far more reliable than if cumulative variation performance changes day by day.
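This guide doesn’t prescribe a specific statistical test, but one simple way to check whether a variation’s install rate credibly beats the control at roughly this sample size is a two-proportion z-test. Here is a minimal Python sketch with hypothetical impression and install counts:

import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    # One-sided z-test: is variation B's install rate higher than control A's?
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # one-sided p-value
    return z, p_value

# Hypothetical counts: the variation lifts IPM from 10 to 13 on 10,000 impressions per cell.
z, p = z_test_two_proportions(conv_a=100, n_a=10_000,   # control: 100 installs
                              conv_b=130, n_b=10_000)   # variation: 130 installs
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
print("Significant at 95%" if p < 0.05 else "Not significant yet")

With far fewer installs per cell, the same relative lift usually fails to reach significance, which is why around 100 installs per variation is a reasonable floor.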

 

Predicted Winners

If you follow the best practices, you might have a few new ads that performed well against the control. That’s what we call “predicted winners.”

These newly tested predicted winners performed well against the control in a limited test. However, we don’t know what will happen when we increase their exposure and measure them on purchases and ROAS.

To find out how they will perform, launch each winning variation into existing ad sets and let it compete for impressions against your other top ads. This will allow you to verify whether these new predicted winners hold up at scale.

If you run through this process long enough, you’ll find the ad that beats your control.

 

Mobile User Acquisition Testing

Here are some valuable tips:

 

1 – Get back to testing – Your winners will get tired and their performance will deteriorate. The best way to offset creative fatigue is to replace old creative with new.

 

2 – Competitive creative analysis – If you’re a new media buyer, spend your spare time doing competitive creative analysis. It will help you generate better ads.

 

3 – Don’t trash the “almost” winners – We don’t kill the near winners; we send them back to the creative team so they can improve those ads and get better performance.

 

CONCLUSION

Quantitative creative testing is the single best way to improve the ROAS of your accounts, so the more you do it, the better your results will be.

START TODAY

Start working with a smart ad network that can deliver better performance.