Why Your A/B Tests Aren’t Working for CRO (& How to Fix Them)
Many D2C eCommerce brands rinse and repeat an A/B testing strategy that misses the bigger picture. Growth hacks may deliver small gains today, but they sacrifice the opportunity for long-term growth.
Brands often ask our team: “How do I get our numbers up? CPCs and CPMs are getting out of reach, my team’s optimization tactics are not working, and I promised my investors a minimum of 50% YoY growth.”
Here’s what’s happening on a macro level:
- CPMs: As of 2021, the average cost to reach 1,000 people with digital advertising ranges from $3 to $10. Compared to the previous year, CPMs are up 41% for social campaigns and 62% for brand awareness, traffic, and reach campaigns.
- YoY Growth Expectations: Generally, eCommerce DTC brands set their expectations to be between 20% – 30% YoY revenue growth. VC-backed brands tend to expect 40% or higher.
- iOS 14: When Apple released iOS 14, it upended the way advertisers identify users. An estimated 900 million users can no longer be accurately tracked, making it much harder for eCommerce brands to target campaigns based on user interests and browsing history.
Optimization is the only way to fight all of these challenges. But optimization only sometimes works. It usually doesn’t.
In eCommerce, if you run ten tests, often only 1 or 2 will win. Aaron Orendorff, the former Editor in Chief of Shopify Plus, has said: “I’d estimate that 80 to 90% of the A/B tests I’ve overseen would count as ‘failures’ to executives.”
And when we dig further into why optimization tactics aren’t working, our team almost always uncovers that the tactics are just traditional growth hacks: small, incremental changes via A/B tests that show gains in the initial test reports but don’t move the needle when you look at the quarterly revenue reports.
So, if you’re a product manager or head of eCommerce who often wonders, “Why aren’t my A/B tests working?”, or if you’d like more insight into an effective A/B testing methodology for CRO, keep reading.
3 Reasons Your A/B Testing Isn’t Working for CRO
First, let’s determine why testing often doesn’t work.
1. You’re Testing Too Far Down the Funnel
Too many tests are conducted too far down the funnel. When that happens, fewer users are exposed to the test; therefore, the apparent gains don’t impact the larger visitor segment.
When you single out just one small portion of your customer funnel or only segment for “average users,” you miss a more significant opportunity to collect data on user behavior. In addition, optimizing your conversion rate becomes much more difficult when you’re only looking at one piece of the puzzle.
You’ll see a far better return in the future by collecting test data on essential components of your eCommerce site, like performance.
All this to say, try to take a holistic approach when testing your funnel instead.
2. You’re Not Running Enough Tests
A single test will not increase conversions or your confidence in the data that the test yields. So if you’re running a traditional optimization program, you’re probably looking for a needle in a haystack. You’ll only find the needle after sifting through hundreds, if not thousands, of tests.
That’s because traffic is the currency for testing, and your brand needs to have the right level of traffic or development power to run enough tests.
For example, according to Optimizely’s sample size calculator, proving a 50% uplift requires roughly 538 visitors per variation. If you have abundant traffic, you may be able to run enough tests.
However, if you can’t allocate that much traffic to every test, you may be wasting your resources. You’d see a better return from lead generation via paid advertising or SEO.
In an A/B test, you have to be sure that the difference you observe between the two variations (your control and your test version) is real, not the product of errors or random coincidence. In other words, you’re trying to reach statistical significance. And the only way to attain statistical significance is through traffic.
Therefore, low-traffic sites may need several months of testing before the data is reliable enough to act on.
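If you’d rather estimate this yourself than rely on a third-party calculator, here’s a minimal sketch using the standard two-proportion sample-size formula. The 3% baseline conversion rate is an assumption for illustration, and Optimizely’s calculator uses its own statistics, so expect its numbers to differ.

```python
from math import ceil, sqrt

from scipy.stats import norm

def visitors_per_variation(baseline_rate, relative_uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a relative uplift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided 95% significance threshold
    z_beta = norm.ppf(power)           # 80% statistical power
    pooled = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * pooled * (1 - pooled))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return ceil(n)

# Assumed example: 3% baseline conversion rate, 50% relative uplift to detect
print(visitors_per_variation(0.03, 0.50))  # ~2,500 per variation under these assumptions
```

Run the numbers against your own traffic before committing to a test: they’ll tell you immediately whether you’re looking at a two-week test or a six-month one.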
3. You’re Not Thinking About A/B Testing with the Right Mindset
Believe it or not, the traditional leadership philosophy of “let’s just test it” is more than likely a waste of your budget and time. The issue is that by trying to test everything, you lose the time and resources needed for the high-value tests, which are usually less expensive and more efficient to begin with.
Conducting a split test and analyzing the test results can often be a bigger endeavor than it sounds. And testing random ideas that aren’t based on a strong hypothesis is an incredibly inefficient conversion rate optimization method. In fact, 90% of your ideas shouldn’t be tested.
When you weigh an idea’s potential business value against the cost of implementing and testing it, most ideas won’t promise a gain worth the expense of running the test.
Tal Raviv, product manager at Patreon, has said that: “A/B testing is not an insurance policy for critical thinking or knowing your users. Inappropriately suggesting to A/B test is a good way to sound smart in a meeting at best, and cargo cult science at worst.”
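Raviv’s point isn’t that you should never test; it’s that you should filter before you test. One common way to apply the value-versus-cost lens is a simple scoring model such as the PIE framework (potential, importance, ease). The backlog and scores below are hypothetical.

```python
# Hypothetical backlog of test ideas, each scored 1-10 per PIE dimension
ideas = [
    {"idea": "Rewrite PDP shipping copy",     "potential": 8, "importance": 7, "ease": 9},
    {"idea": "Reorder footer links",          "potential": 2, "importance": 3, "ease": 8},
    {"idea": "Redesign mobile checkout form", "potential": 9, "importance": 9, "ease": 4},
]

for idea in ideas:
    idea["pie_score"] = (idea["potential"] + idea["importance"] + idea["ease"]) / 3

# Test the highest-scoring ideas first; cut the long tail from the backlog
for idea in sorted(ideas, key=lambda i: i["pie_score"], reverse=True):
    print(f'{idea["pie_score"]:.1f}  {idea["idea"]}')
```

The scores are subjective, but forcing every idea through the same rubric is what keeps low-value “let’s just test it” candidates out of your queue.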
4 Parts of Effective A/B Testing in D2C eCommerce
If you want to see real growth—sustainable growth—you have to adopt a thoughtful optimization strategy that makes logical sense and is based on addressing real user problems.
Here’s the testing approach I use with every client: give each part of the conversion funnel you test the appropriate segmentation, and prioritize testing more than just the very bottom of the funnel so you can learn more about your target audience and their behavior on your site.
Generally speaking, effective CRO A/B testing has four essential parts.
1. A Strong Hypothesis
As I mentioned earlier, you don’t want to waste time and resources testing everything. Your A/B test ideas should be grounded in a proper hypothesis that directs your tests and clarifies your goals.
The core function of a hypothesis is to state why a problem is occurring and what solution could help solve it.
If you need help formulating a hypothesis (or uncovering where problems are happening), try using Google Analytics to see where your site visitors come from, which landing pages convert, and what your users’ demographics look like.
Information about where your website conversion rate is low or how quickly a mobile user bounces from a particular page will provide valuable insights that lead to a much stronger hypothesis.
Compare the information you find from GA with heat maps, direct customer feedback, and any other existing user behavior data, and you can begin your A/B conversion optimization efforts with a precise hypothesis. Not just instinct, or the mindset of “testing just to test.”
- Here’s one example of a hypothesis: our cart abandonment rate increases after the total price (product cost plus shipping) is shown during checkout. If we display the full price on our product detail pages (PDPs) instead, users will abandon less and convert more.
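If you’d rather script this digging than click through reports, here’s a minimal sketch of the analysis described above. It assumes a hypothetical CSV exported from Google Analytics; the file name and column names are assumptions, not GA’s actual export format.

```python
import pandas as pd

# Hypothetical GA export: landing_page, device, sessions, bounces, conversions
df = pd.read_csv("ga_landing_pages.csv")

df["conversion_rate"] = df["conversions"] / df["sessions"]
df["bounce_rate"] = df["bounces"] / df["sessions"]

# High-traffic pages that convert poorly are prime hypothesis material
candidates = df[df["sessions"] > 1000].sort_values("conversion_rate").head(10)
print(candidates[["landing_page", "device", "sessions", "bounce_rate", "conversion_rate"]])
```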
2. Clear Test Goals
A higher conversion rate is the end goal of any conversion rate optimization effort. But more likely than not, you’re testing a change to improve your user experience, which in turn will hopefully lead to more conversions. So get clear on the actual test goals first, but keep your goal to increase conversions (or other meaningful metrics) as the guiding point.
Here’s a good example of a clear A/B testing goal:
We want to provide a better experience on our mobile checkout page by reducing the number of form fields a user must fill out before completing their purchase.
Now the question for your split test becomes:
Does reducing the number of form fields actually result in a higher conversion rate?
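To answer that question with statistics rather than a gut read of the dashboards, the standard check is a two-proportion z-test. The visitor and purchase counts below are hypothetical; the test itself comes from statsmodels.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: control (all form fields) vs. variant (fewer fields)
purchases = [400, 480]       # completed checkouts in each arm
visitors = [10_000, 10_000]  # visitors who reached checkout in each arm

z_stat, p_value = proportions_ztest(purchases, visitors, alternative="two-sided")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# At a 95% confidence level, p < 0.05 suggests the lift is real rather than noise
if p_value < 0.05:
    print("The variant's conversion rate differs significantly from control.")
else:
    print("No significant difference yet; keep collecting traffic.")
```

Note that the counts matter as much as the rates: the same percentages on a tenth of the traffic would not clear the significance bar, which is exactly what the next section is about.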
3. Statistical Significance
Remember, you need high traffic to reach an adequate significance level. Anything below a 95% (ideally 99%) confidence level won’t give you enough certainty to act, as it leaves too much room for error and false positives.
And, to reiterate, a low-traffic site will take much longer to acquire the right significance level than a high-traffic site.
Here are a few tips for effective A/B Testing on low-traffic sites:
- Avoid multivariate testing until your website brings in more traffic. With A/B testing, your visitors are split in half to test two versions of a page. Multivariate testing runs many variations at once, meaning you’d need to split your traffic even further into quarters, sixths, and so on (the sketch after this list shows how quickly the math gets worse).
- Test higher-impact items instead. For example, a site-wide banner is seen by every visitor, so an A/B test of it will reach the appropriate statistical significance in far less time.
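To see how quickly extra variations stretch test duration, here’s some back-of-the-envelope arithmetic. The daily traffic and required sample size are assumptions; plug in your own.

```python
import math

daily_visitors = 800         # assumed traffic on the tested page
needed_per_variation = 2500  # e.g., from a sample-size calculator

for variations in (2, 4, 6):  # an A/B test, then multivariate with 4 and 6 combinations
    per_variation_daily = daily_visitors / variations
    days = math.ceil(needed_per_variation / per_variation_daily)
    print(f"{variations} variations: ~{days} days to reach sample size")
```

On this assumed traffic, moving from two variations to six takes the same test from about one week to nearly three weeks.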
4. Evaluate Data & Implement
If your test was a success, you should have a clear winning variation. If the winner isn’t your control variation, implementation comes next.
However, there are likely additional opportunities to learn about user behavior on your site through the data you’ve collected. Try to dig deeper into your test results beyond just “winners” and “losers.” Take time to understand why the better version was successful.
Does the winning variation confirm your hypothesis that moving your CTA button higher on your landing pages increases conversions? Or does it uncover something about the copy itself?
Guidelines on A/B Test Segmentation
Following the steps above will help lend greater accuracy to your tests and save you from the pitfall of the “just test it” philosophy. Now I want to talk about proper segmentation.
Turn to Existing Persona Data for Better Real-Time Personalization Testing
Real-time personalization features can deliver long-term benefits to your eCommerce brand, namely more conversions and stronger customer loyalty. In fact, 71% of today’s consumers expect companies to have personalized interactions and features.
But to ensure your real-time personalization works, you need effective A/B testing, which begins with the correct segmentation of users.
When you conduct tests on your real-time personalization features, segment your test results by your already established customer personas.
Start with a good hypothesis about a specific group of users and the site features they would most likely find helpful. Then, segment your split testing based on those personas to help validate the data you already have.
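As a sketch of what that looks like in practice, here’s one way to break a personalization test’s results down by persona with pandas. The log file, its columns, and the persona labels are assumptions.

```python
import pandas as pd

# Hypothetical per-visitor test log: visitor_id, persona, variant, converted (0/1),
# where persona comes from your existing customer segmentation
df = pd.read_csv("personalization_test.csv")

results = (
    df.groupby(["persona", "variant"])["converted"]
      .agg(visitors="count", conversions="sum", conversion_rate="mean")
      .reset_index()
)
print(results)
```

A variant that wins overall can still lose badly for one persona, and surfacing exactly that is the point of persona-level segmentation.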
Usability Improvement Tests
For usability improvements, segment by something other than user behavior. Instead, measure results based on how the user experience differs across devices.
For example, if you’re improving site load speed for mobile users, consider measuring KPIs like bounce rate or session duration for visitors on their smartphones. Or vice-versa for desktop users.
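A device-level cut follows the same pattern. As before, the session log and its columns are assumptions for illustration.

```python
import pandas as pd

# Hypothetical session log: one row per session
# Columns: session_id, device, variant, bounced (0/1), duration_sec
sessions = pd.read_csv("sessions.csv")

by_device = sessions.groupby(["device", "variant"]).agg(
    sessions=("session_id", "count"),
    bounce_rate=("bounced", "mean"),
    median_duration=("duration_sec", "median"),
)
print(by_device)
```

If the load-speed variant cuts mobile bounce rate but leaves desktop flat, you’ve validated the fix exactly where your hypothesis said it would matter.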
Building Your Brand’s CRO Testing Program? Get in touch with Anatta.
At Anatta, we approach CRO differently than most agencies. Instead of running hundreds of tests just to find tiny, incremental gains, we prioritize uncovering “big swings” that actually increase your bottom line.
We believe that CRO programs only drive impact when they can help your brand unlock $200,000… $500,000… and higher opportunities.
Get in touch with our team to learn about our CRO testing process.
About the Author
Nirav is the CEO and founder of Anatta. Nirav received his engineering degree in 2006 from George Washington University. Prior to Anatta, he served as founder of Dharmaboost, a software company working with Cisco Systems, Hewlett Packard, and New Leaf Paper. He is also cofounder of Upscribe, next-level subscription software for fast-growing eCommerce brands.