
Top testing methods for digital marketers


Today, marketers can tap into a variety of testing methods, from the age-old process of A/B testing to the latest-and-greatest methods that tap into machine learning functionality. However, to maximize the value of testing, marketers must be disciplined in their methodology to continuously learn, apply, and improve.

Although valuable points of interest and knowledge can be gleaned from any individual test, marketers should remember that the combination of results over time often provides the most insights from their customer data. And, in the end, it is the application of those results to future actions and tests that ultimately yields compounding dividends for the marketer and the business.

But, there are countless factors that can influence digital marketing testing, including:

  • Seasonality
  • List composition
  • Time of day
  • Various other ongoing marketing initiatives
  • Outside influences from competitors

For example, if an individual is testing a discount promotion for umbrellas, the results may vary dramatically between testing during the rainy season versus testing during the drier summer season. Or, results may vary between an audience located predominantly in the desert Southwest where it rains infrequently, versus an audience located in the Pacific Northwest where it rains quite often. The same analogy could be used for seasonal clothing, grooming or cosmetic items, or household items.

If the test results demonstrated that a 20% discount on umbrellas outperformed an expedited shipping offer, then the conclusion may be to always use the 20% offer. However, if the test was run immediately after it started to rain, that conclusion may not hold. In that situation, time may be more important than cost, and the expedited shipping offer may resonate better than the discount. The point is that a single test run at one moment in time may not represent that same test run at all points in time.

To accommodate for unknown influences, Cordial encourages implementing a testing strategy that includes a combination of:

  • Time-based experiments for validating both broad and detailed ideas and hypotheses
  • An ongoing optimization strategy for maximizing business results across changing conditions

Testing is both art and science, and is inherently evolutionary. There are various testing methods out there depending on the specific type of analysis or hypothesis being evaluated. Let’s explore some of the more popular options available to marketers.

Popular marketing testing methods

1. A/B or A/B/N testing

A/B or A/B/N testing is by far the most common and easily applied testing method for marketers. The structure of the test is to break the audience into evenly sized groups, one for each variation to be tested.

For example, assume a marketer is deciding which discount offer to include for first-time customers in a welcome message. The options are:

  • A) Buy one, get one free
  • B) 35% off any one item
  • C) 20% off a first purchase

In this A/B/C test example, there are three segments to consider, so the marketer will need to create three equally sized groups randomly selected from the total audience. If the total audience were 90,000, then groups A, B, and C would each contain 30,000 contacts. Depending on the technology being used, the marketer would then create three separate messages or segments using the appropriate creative and send the test to each segment simultaneously. After the message is sent, the marketer can evaluate the key performance indicators (KPIs) of each variant to determine which one performed the best.
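
To make the mechanics concrete, here is a minimal Python sketch of the random split, assuming the audience is simply a list of contact IDs. The helper name, seed, and labels are illustrative rather than part of any particular marketing platform.

```python
import random

def assign_even_groups(contacts, num_variants, seed=42):
    """Shuffle contacts and split them into equally sized groups, one per
    variant (any remainder after division is simply left out of the test)."""
    shuffled = contacts[:]                 # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    group_size = len(shuffled) // num_variants
    return [shuffled[i * group_size:(i + 1) * group_size] for i in range(num_variants)]

# Example: 90,000 contacts split across variants A, B, and C -> 30,000 each
audience = [f"contact_{i}" for i in range(90_000)]
groups = dict(zip(["A", "B", "C"], assign_even_groups(audience, 3)))
print({variant: len(members) for variant, members in groups.items()})
# {'A': 30000, 'B': 30000, 'C': 30000}
```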

For email, the KPIs could include open rate, click rate, conversion rate, or revenue driven directly from the email. Based on the combination of KPIs, the marketer can then determine which email variation was most successful. In some cases, the marketer may place higher value on how many prospective customers were driven to the website versus how many actually purchased, and in other cases may value a different combination of KPIs such as total sales revenue, average order value, revenue per email, or percentage of first-time purchasers.
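
Continuing the illustration, here is a rough sketch of how per-variant KPIs might be tallied and compared, using entirely made-up counts; the deciding metric is the marketer's choice.

```python
def kpi_summary(sends, opens, clicks, orders, revenue):
    """Compute common email KPIs for a single variant from raw counts."""
    return {
        "open_rate": opens / sends,
        "click_rate": clicks / sends,
        "conversion_rate": orders / sends,
        "revenue_per_email": revenue / sends,
        "average_order_value": revenue / orders if orders else 0.0,
    }

# Hypothetical results for the A/B/C welcome-offer test
results = {
    "A": kpi_summary(sends=30_000, opens=7_500, clicks=1_200, orders=240, revenue=14_400),
    "B": kpi_summary(sends=30_000, opens=8_100, clicks=1_500, orders=270, revenue=12_150),
    "C": kpi_summary(sends=30_000, opens=7_900, clicks=1_350, orders=300, revenue=10_500),
}

# Pick a winner on whichever KPI the marketer values most, e.g. revenue per email
winner = max(results, key=lambda v: results[v]["revenue_per_email"])
print(winner)  # 'A' in this made-up data
```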

2. A/B or A/B/N longitudinal testing

As another variation of A/B/N type testing, longitudinal A/B/N testing involves adding a time-based element into the mix.

For example, assume the marketer now wants to extend this welcome program to include a series of three messages, but also wants to maintain the previous groupings and offer treatment strategy. The goal with a longitudinal test is often similar to that of the single-message test, except that it now combines the net results of all messages in the series to determine success.

For example, if the goal is simply to drive a first purchase with an average order value (AOV, an e-commerce metric that measures the average total of every order placed with a merchant over a defined period of time) of greater than $50, then the accumulated orders driven by any of the three messages are factored into the success criteria.
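
As a quick illustration with made-up order totals, the series-level AOV is computed across every order driven by any message in the series, not message by message.

```python
# Hypothetical orders (in dollars) attributed to each message in the welcome series
orders_by_message = {
    "message_1": [62.00, 45.50, 38.00],
    "message_2": [71.25, 55.00],
    "message_3": [49.75, 80.00, 52.50],
}

all_orders = [total for totals in orders_by_message.values() for total in totals]
series_aov = sum(all_orders) / len(all_orders)
print(f"Series AOV: ${series_aov:.2f}")  # $56.75 here, which clears the $50 goal
```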

3. A/B or A/B/N champion/challenger testing

Another variation of A/B/N testing is one where the size of each group is not the same. This approach is sometimes referred to as a champion/challenger test or a 10/10/80 split test.

In this example, let’s say two groups each contain 10% of the overall audience and one group contains the remaining 80%. There are two use cases where a marketer may decide to take this approach instead of breaking the audience into even groups. The first is where the marketer opts to conservatively test one or more new ideas against the current, known-performing incumbent, or champion.

In the 10/10/80 example, two new variations (or challengers) are sent to 10% each, while the incumbent champion is sent to the majority remainder of 80%. The logic in this scenario is that the marketer does not want to jeopardize the known results beyond the 20% allotted.

The second case is similar but more aggressive, with the ratios reversed. The incumbent is labeled the control, and the challengers are tested against each other as well as against the control. An example of this is a 40/40/20 test, where the two challengers are sent to 40% each and the incumbent control is sent to only 20%. Given the various outside influences mentioned previously, it is always advisable to have a control, even if there is a pattern of repeatable results from the incumbent.
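
Here is a minimal sketch of how such weighted splits might be produced, again assuming the audience is a list of contact IDs; the helper is hypothetical, and the same function handles a 10/10/80 split or a reversed 40/40/20 split simply by changing the weights.

```python
import random

def assign_weighted_groups(contacts, weights, seed=7):
    """Split contacts into groups sized by fractional weights,
    e.g. [0.10, 0.10, 0.80] for a champion/challenger test."""
    shuffled = contacts[:]
    random.Random(seed).shuffle(shuffled)
    groups, start = [], 0
    for weight in weights:
        size = int(len(shuffled) * weight)
        groups.append(shuffled[start:start + size])
        start += size
    groups[-1].extend(shuffled[start:])  # any rounding remainder goes to the last group
    return groups

audience = [f"contact_{i}" for i in range(90_000)]
challenger_1, challenger_2, champion = assign_weighted_groups(audience, [0.10, 0.10, 0.80])
print(len(challenger_1), len(challenger_2), len(champion))  # 9000 9000 72000
```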

4. A/B/N with delayed champion testing

The A/B/N with delayed champion test is similar in some ways to other A/B or A/B/N testing, but it serves a slightly different purpose. The objective of A/B/N with delayed champion testing is to maximize the results of the overall campaign by first testing two or more smaller samples to see which performs best.

Take the previous example with the A/B/C offers. This test may now be structured as 10/10/10/70, where the three offers are first tested on groups of 10% each, for a total of 30% of the audience. The test is then conducted and allowed some period of time to determine the winner based on the selected KPI (or KPIs).

For example, if the success criterion is click-through rate, the marketer may elect to delay sending to the remaining 70% until the test has run for two hours. Once the two-hour mark is reached, the variant with the highest click-through rate is selected to go to the remaining 70%. The theory is that the winner at that point will most likely perform similarly at scale with the larger audience.
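
Here is a rough sketch of the decision step at the two-hour mark, using hypothetical click counts; in practice, the sending platform would report these metrics.

```python
# Hypothetical results after the two-hour test window
# (each test group received 10% of a 90,000-contact audience, i.e. 9,000 sends)
test_results = {
    "A_bogo":   {"sends": 9_000, "clicks": 540},
    "B_35_off": {"sends": 9_000, "clicks": 612},
    "C_20_off": {"sends": 9_000, "clicks": 495},
}

def click_through_rate(stats):
    return stats["clicks"] / stats["sends"]

winner = max(test_results, key=lambda variant: click_through_rate(test_results[variant]))
print(f"Sending '{winner}' to the remaining 70% of the audience")
# Sending 'B_35_off' to the remaining 70% of the audience
```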

One subtle drawback of this type of testing is that conditions could change between the time the initial test starts and the time the winner is determined, thus altering the control slightly. Marketers must determine the proper delay based on how much time is needed to get an accurate assessment and weigh that against factors that may change the conditions over time.

5. Multivariate testing

A more advanced testing technique is multivariate testing, where more than one element of a campaign is tested simultaneously with the goal of identifying which permutation, or combination, of variants of those elements performs the best.

Multivariate testing is becoming more common in website optimization, but has not yet had significant traction in email marketing.

An example of a multivariate email test might include five different hero images, four versions of introduction copy, three offers, and three different call-to-action buttons. This example alone creates a total of 5 x 4 x 3 x 3, or 180, individual permutations to test using a full factorial design. As a result, there are some fundamental drawbacks to multivariate testing that limit most email marketers from using this approach, even with the simple example just mentioned.
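
The permutation count is easy to verify with a quick sketch; the variant labels below are placeholders, not real campaign assets.

```python
from itertools import product

hero_images = [f"hero_{i}" for i in range(1, 6)]        # 5 options
intro_copy  = [f"intro_{i}" for i in range(1, 5)]       # 4 options
offers      = ["bogo", "35_off", "20_off"]              # 3 options
cta_buttons = ["shop_now", "claim_offer", "learn_more"] # 3 options

permutations = list(product(hero_images, intro_copy, offers, cta_buttons))
print(len(permutations))  # 180 combinations in a full factorial design
```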

Drawbacks of multivariate testing include:

  • Difficulty in achieving statistical significance given the many variations to evaluate
  • Limitations in how many elements can be tested due to the number of permutations it creates
  • Repeatability of the test under changing conditions or audience variations

Multivariate testing can be effective where massive amounts of data exist and variations in the population are minimal and consistent over time. There are a number of algorithms for conducting multivariate testing, and many of these apply a blend of mathematics and/or heuristics to augment and overcome the need for massive data.

6. Taguchi testing

Taguchi testing is a derivative of multivariate testing that employs a form of heuristics to limit the actual number of tests needed. The goal is effectively the same as full-factorial multivariate testing: to find the best combination of variants across the key elements or components of the campaign.

However, the key difference with Taguchi testing is that it does not test every possible permutation, only the ones expected to most influence the decision. Tests of this type often require large amounts of time and considerable subject matter expertise to orchestrate and manage.
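
As a simplified illustration of the idea (deliberately smaller than the email example above), consider three campaign elements with two variants each. A full factorial design requires 2 x 2 x 2 = 8 test cells, while the classic Taguchi L4 orthogonal array keeps each factor level balanced using only 4.

```python
from itertools import product

# Three elements, two variants each (illustrative labels)
factors = {
    "subject_line": ["A", "B"],
    "hero_image":   ["A", "B"],
    "cta_button":   ["A", "B"],
}

full_factorial = list(product(*factors.values()))
print(len(full_factorial))  # 8 combinations

# Standard Taguchi L4 orthogonal array: every factor level appears equally often
# and every pair of factors stays balanced, so main effects can still be estimated
l4_runs = [("A", "A", "A"), ("A", "B", "B"), ("B", "A", "B"), ("B", "B", "A")]
print(len(l4_runs))  # 4 test cells instead of 8
```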

Although Taguchi does, theoretically, reduce the number of tests, it is still extremely resource-intensive. If a marketer is working with a top digital marketing agency, they may elect to perform this type of testing to scientifically dissect how design and form factors impact results.

Recapping the testing methods

The most common tests marketers run are simple A/B tests, where one variant is compared to another to determine the better-performing variant at that time. This approach usually takes the form of a 10/10/80 split test, a 50/50 champion/challenger evaluation, or something similar.

A slightly more advanced testing technique is multivariate testing, where more than one element is tested simultaneously with the goal of identifying which permutation, or combination, of variants performs the best. No matter the testing option, the test is run, the results are analyzed, and a winner is determined at that point in time. Or, marketers can opt for the much slower and more involved Taguchi method.

Whatever you do, however, don’t just test for the sake of testing and do little to evaluate the methodology behind the tests you run. If you test with the expectation to evaluate your learnings and to apply them to future marketing efforts, then you can maximize the benefits from the get-go.

Related resources on Cordial:

Amp up engagement and revenue with personalized marketing

No doubt about it: personalization is here to stay. People of all ages appreciate it when companies they know and trust provide customized information and offers. The only question is whether your company will step up and embrace this reality. By harnessing the power of an advanced customer data platform (CDP), your company can execute best-in-class personalization tactics.

A good CDP needs to:

  • Unite robust data management with email, SMS, and mobile app marketing — all in one platform
  • Consolidate all data from anywhere in your tech stack and activate it to power your outreach
  • Provide easy-to-use workflows to simplify and accelerate campaign development
  • Leverage predictive analytics to let you delight customers by anticipating their needs
  • Enable you to deliver the consistent, cross-channel experiences customers expect

Companies are on the cusp of an unprecedented opportunity to transform how they engage with customers. Those that move swiftly and embrace personalization can dramatically improve their customer experience, increase retention, and remain powerful players in the market.

Cordial automates billions of data-driven emails, SMS, and mobile app messages to create lifetime customer connections for leading companies. Get in touch to schedule a demo and find out how we can put our insights to work for you.