Quantitative and Qualitative Research for eCommerce Sites
At certain points in the life cycle of any brand, paid traffic performance will drop measurably. Throw in major external factors (hello, COVID-19), and the return on every CPM and CPC dollar will dip even further.
So when your Customer Acquisition Cost (CAC) is sky high, what's an eCommerce brand to do?
The answer is to rethink your Conversion Rate Optimization (CRO) strategy.
Why CRO is failing for most eCommerce brands:
Most eCommerce CRO delivers a disappointing return (<10%) or, in many cases, a negative one. Why? Because the industry generally treats CRO like a game of Pin the Tail on the Donkey – pure trial and error.
Do you recognize a scenario like this?
Idea → Quick Approval → Design → Develop → Run The Test → Fail
Despite our industry’s love of metrics, most eCommerce brands are investing major time and money into CRO tactics that ignore vetted quantitative and qualitative data.
Without a fundamental understanding of WHY something works, this approach makes your CRO strategy no different than gambling. You’ve got a 1 in 7 chance of achieving a positive outcome.
So what can you do to increase your odds?
One option will move the needle a bit. The other (highly recommended) option will create significant, long-term returns. Let’s start with the micro improvements first.
8 Steps To A Slightly Better CRO Strategy:
- Translate ideas into a hypothesis format
- Rate each hypothesis for impact vs. effort
- Prioritize tests from low-effort/high-impact to high-effort/low-impact (see the scoring sketch after this list)
- Draft test requirements so UX designers can create a prototype
- Conduct a small user test to gauge interest/satisfaction
- Pass to your engineering team to develop the test
- Set goals for the test and monitor closely
- Reach statistical significance and report back
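To make steps 2 and 3 concrete, here's a minimal sketch of one way a team could score and rank hypotheses by impact vs. effort before anything gets designed or built. It's in Python, and the hypothesis names and 1-5 ratings are hypothetical placeholders; substitute your own backlog and scoring scale.

```python
# Minimal sketch: score and prioritize CRO hypotheses by impact vs. effort.
# The hypotheses and their 1-5 ratings below are hypothetical examples.

hypotheses = [
    {"name": "Show free-shipping threshold in the cart", "impact": 4, "effort": 2},
    {"name": "Rebuild the mega-menu navigation",         "impact": 3, "effort": 5},
    {"name": "Add trust badges near Add to Cart",        "impact": 2, "effort": 1},
]

def priority(h):
    # Higher impact and lower effort mean a higher priority score.
    return h["impact"] / h["effort"]

# Sort from low-effort/high-impact down to high-effort/low-impact.
for h in sorted(hypotheses, key=priority, reverse=True):
    print(f'{h["name"]}: impact={h["impact"]}, effort={h["effort"]}, score={priority(h):.2f}')
```

However you choose to score them, the point is that the ranking is explicit and debatable, rather than whichever idea was pitched loudest getting built first.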
Implementing these steps will greatly improve your CRO outcomes, so if you have no such structure in place, go for it.
However, we believe that this process alone is not the most sustainable, comprehensive solution.
A testing process can only be at its most effective AFTER you've rigorously vetted the reason for the test in the first place. That vetting boils down to two questions: Is there trustworthy evidence behind the idea? And can you explain why it works?
Let's unpack those two questions, starting with clarifying what eCommerce "evidence" is, and the best places to find it.
How to find trustworthy eCommerce research:
As of 2020, there are 20+ million online stores worldwide, and global eCommerce revenue exceeds $3.7 trillion, with $600+ billion in the U.S. alone.
The eCommerce industry has been around for 25+ years, so there’s ample research not only about the industry as a whole, but also about the sub-industry you belong to.
And sure, you might be tempted to say, “I’m unique. What works for others won’t work for me”, but with all the respect in the world, that would be a mistake.
There are a few established sources of eCommerce user experience research that we rely on heavily at Anatta.
Nielsen Norman Group (NN/g)
Founded by two of the most distinguished UX experts in the world, Don Norman and Jakob Nielsen, NN/g has one central principle: study real user behavior, because what people actually do on your site is way more important than what they say they do.
NN/g studies user behavior year after year, with large sample sizes to ensure the highest degree of accuracy. While their focus is general UX principles, they’ve placed a lot of emphasis on eCommerce over the last several years.
For over 20 years, they've evaluated and consulted for some of the world's leading brands, and they publish a significant amount of their research findings in a free online library of over 100 articles on UX best practices. They also run an established UX certification program that trains and tests a designer's expertise on topics like:
- Analytics and User Experience
- Assessing UX Designs Using Proven Principles
- How to Interpret UX Numbers: Statistics for UX
- Journey Mapping to Understand Customer Needs
- Measuring UX and ROI
- Usability Testing
- User Interviews
Baymard
Baymard is a web usability research institute based in Copenhagen, Denmark. Their research methodology is built on academic principles, and they translate their findings into actionable articles, reports, and benchmark databases.
With a new article published every other week, they offer 49,000+ hours of large-scale eCommerce UX research, boiled down into nine core reports (as of 2020). The reports are broken into 78 subtopics that move through the entire eCommerce user journey: Homepage & Category, Onsite Search, Product Lists & Filtering, Product Page, Checkout, Account & Self Service, Mobile, Industry Specific, and Special Interest.
Baymard also offers a certification program that allows only 3 attempts to pass. Theirs is one of the most difficult tests we’ve seen in the market.
Every member of the Anatta UX team has to obtain both NN/g and Baymard certifications before they can be deployed to a project. That’s how much confidence we have in the validity of their research practices.
And there’s one more. While they’re not a research firm per se, we’ve found their best practices summaries and tips to be in a class of their own:
CXL
CXL focuses on conversion optimization through data-driven marketing training, and their blog has some of the most in-depth marketing, analytics, and optimization content on the web.
Of course there are exceptions and outliers, but by and large, the data and analysis from these sources hold up no matter the industry or product category. Their findings are based on real users, real behavior, and verifiable metrics.
If you’re overlooking data, you’re ignoring facts. Integrating substantiated research findings is the only way to build an effective CRO practice with evidence-based testing.
But there’s one more form of research we should also mention…
The Pros and Cons of Community Sharing
Community data sharing is quasi-research – it can be valuable, but it should always be questioned and verified.
In the health and wellness industry, many informal social groups have developed among various CPG brands. Basically, a group gathers on a Slack channel or over a monthly coffee date to share the acquisition and retention tactics that are working (and not).
Typically, these conversations go like this:
“We did X. And [insert quantitative metric – AOV, conversion rate, churn, bounce rate, etc] did Y.”
If you're part of one of these groups and you collect enough data points, this 'research' could start to carry real statistical weight. However, when it comes to community data sharing, anecdotal evidence alone is not enough.
Before jumping into replicating a feature that you heard about via community sharing, we suggest you ask the source a few meaningful questions, like:
- Exactly how much of an increase or decrease did you see, month over month?
- Did you take another action around that time that could’ve influenced the outcome (press features, marketing campaigns, etc)?
- Can you explain why you think this worked for your customers?
- Have you received any feedback from customers about this experience?
Whether you're relying on established research firms or community sharing (we usually recommend a combination of both), you should now be clear that step one of any CRO testing initiative has to be research, research, research.
So let's move on to Step Two, which answers this key question: Can you explain why it works?
Why qualitative metrics are just as important:
As we’re writing this, daily news reports are flooded with claims that this medication or that one is the cure for the pandemic. However…
A medication's efficacy can't be conclusively judged based only on a decline in the death rate or the number of people reporting that their symptoms have gone away. Those quantitative metrics are helpful, but they have to be validated with double- and triple-blind studies.
So let’s apply this logic to eCommerce…
Quantitative metrics like conversion rate, AOV, click-through rate, bounce rate, or time on site provide one part of the picture. But when someone says that XYZ feature will increase or decrease one of these metrics by 15%, is that a reliable claim?
Has the same test been conducted more than once?
More than 5 times? How about more than 20?
And even if that feature was tested on 10 different eCommerce sites, does it automatically follow that it will generate the same results on your site? It doesn’t.
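Replication is something you can check on your own numbers. As an illustration only (a sketch, not a prescribed methodology, with made-up visitor and conversion counts), here's a simple two-proportion z-test in Python for judging whether a lift you observe on your own site is statistically significant, before you even get to the qualitative question below.

```python
# Minimal sketch: two-proportion z-test on A/B test results.
# The visitor and conversion counts are made-up example numbers.
from math import sqrt, erf

control_visitors, control_conversions = 10_000, 300   # 3.00% conversion rate
variant_visitors, variant_conversions = 10_000, 345   # 3.45% conversion rate

p1 = control_conversions / control_visitors
p2 = variant_conversions / variant_visitors
p_pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

# Standard error of the difference under the null hypothesis (no real difference).
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"Observed lift: {(p2 - p1) / p1:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
print("Significant at the 95% level." if p_value < 0.05 else "Not significant yet; keep collecting data.")
```

In this made-up example, a 15% relative lift on a few hundred conversions still fails the significance check, which is exactly why headline figures from other sites deserve skepticism.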
Without qualitative data, you can’t be sure that a new feature will work for you.
Qualitative metrics reveal the deeper impact of a feature – from the customer's perspective. (Notice the emphasis there: the customer.)
Going back to our medical analogy above, if a test conclusively shows that XYZ drug successfully suppresses a particular virus, that’s useful quantitative data.
But it’s not yet safe to roll that drug out to the entire population until we understand exactly why this drug is generating that result. Are other factors at play? Are the test results dependent on a variable that only applies to a small section of the population? We need qualitative data to answer these questions.
Qualitative metrics are gathered through things like:
- Customer interviews
- Post-purchase polls/surveys
- User testing
These metrics start piecing together what customers had difficulty with, and how your feature helped resolve that for them. This information only comes from talking directly to customers and asking detailed questions.
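As one small illustration of how that raw qualitative input can be turned into something you can track over time, here's a sketch that tallies open-ended post-purchase survey answers by theme. The responses and theme keywords are hypothetical, and keyword matching is a deliberately crude stand-in for the careful reading a real team would do.

```python
# Minimal sketch: turn open-ended post-purchase survey answers into theme counts.
# The responses and theme keywords are hypothetical examples.
from collections import Counter

responses = [
    "Couldn't tell which size to order, the size chart was hard to find",
    "Shipping costs surprised me at checkout",
    "Loved the product photos, but sizing info was confusing",
    "Checkout kept asking me to create an account",
]

themes = {
    "sizing confusion": ["size", "sizing", "fit"],
    "shipping cost surprise": ["shipping"],
    "forced account creation": ["account", "sign up", "register"],
}

counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

# These tallies hint at the WHY behind quantitative symptoms like checkout abandonment.
for theme, count in counts.most_common():
    print(f"{theme}: {count} of {len(responses)} responses")
```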
Knowing the WHY gives you certainty that the proposed test would even be relevant to your customers, and every action we take should always be about the customer.
In Conclusion…
eCommerce without thorough research and data analysis is just smart people throwing darts with a blindfold on. Gather both quantitative and qualitative metrics, and your CRO tests will be far more time- and cost-effective, and will lead to the conversion rate increases you're looking for.
And lastly, if it helps, take these 5 ideas back to your team:
What We Teach The Anatta CRO Team
- Not all ideas should be tested.
- If an idea doesn’t include a solid research foundation, don’t test it.
- Every test idea must include a clear “why”.
- Tests must have one primary KPI, for both quantitative and qualitative metrics.
- Don’t assume big brands have tested and verified every feature. Most people are just following the herd. Trust the research and the “why”. Always.
Authors
Nirav is the CEO and founder of Anatta. Nirav received his engineering degree in 2006 from George Washington University. Prior to Anatta, he served as founder of Dharmaboost, a software company working with Cisco Systems, Hewlett Packard, and New Leaf Paper. He is also cofounder of Upscribe, next-level subscription software for fast-growing eCommerce brands.