Supporting Digital Experimentation with Evidence
Today’s digital customers vote with their clicks, and websites whose user experience lags behind their competitors’ will not survive.
As Jeff Bezos expressed in his 2018 letter to shareholders:
“Customers are divinely discontent. Their expectations are never static – they go up. It’s human nature…People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’”.
User behaviors and preferences are a constantly moving target and never cease to confound businesses. To keep up with, and stay ahead of, customer expectations, it is critical to understand what your customers want through digital experimentation – and to identify and remove customer struggle along the way.
To solve a problem, you must first understand it. To understand a customer’s online behaviors, you need to ask, “What is driving the behavior or lack thereof in our visitors? Where is this behavior occurring?”
Over the past few years, Conversion Rate Optimization (CRO) has remained a high priority within the digital marketing strategies of many brands. Particularly for sites with any sort of eCommerce function, minimizing drop-off rates at each step of the customer journey is a key focus for marketers.
Within CRO strategies, digital experimentation is used to contribute to website improvement and increased shopper satisfaction, resulting in higher conversion rates. The vast majority of organizations that combine structured experimentation with the right measurement solutions see increased conversion rates as a result.
Spend Your Time on the Right Tests
Digital experimentation allows you to make the most out of your existing traffic. Even small changes to your site or app can result in significant increases in conversion and can represent a better ROI than, for instance, the cost of acquiring paid traffic.
It’s great to test and learn – but make sure you can quickly identify why things don’t work, correct them, and apply those lessons to how you build your next tests. Use all available data to validate or disprove assumptions up front and avoid building flawed test cases.
A common path to failure and wasted time in experimentation is not basing your tests on a hypothesis and sound data. Proper conversion research is needed to discover where problems occur, followed by analysis to understand what those problems are, ultimately producing a hypothesis for overcoming the site’s issues.
Test data-driven hypotheses: Understand where your customers are struggling so you can form the soundest hypothesis – grounded in real data – before spending the time to build and deploy a test. Know where you are today before trying to change course – you never know where you’ll end up!
For example: in a recent engagement, a Quantum Metric client was planning to change the location of the ‘Add to Cart’ button on mobile devices because their analytics indicated that no one was clicking on it. After looking into Quantum Metric to understand why, the client discovered that the button’s location was not the reason customers weren’t clicking. It turned out that the button had an image background that took too long to load on slower networks. This discovery saved the time and energy that would have gone into building and running an A/B test of different button locations.
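The kind of check described above can be approximated with a quick segmentation of click data by page-load time. A minimal sketch – all session data, thresholds, and numbers below are hypothetical, not the client’s actual figures:

```python
# Hypothetical illustration: segment 'Add to Cart' click-through rate (CTR)
# by page-load time to test whether slow loads, rather than button
# placement, explain the missing clicks. All data below is made up.
sessions = [
    # (load_time_seconds, clicked_add_to_cart)
    (0.8, True), (1.1, True), (1.4, False), (0.9, True),
    (6.2, False), (7.5, False), (5.9, False), (8.1, False),
    (1.2, True), (6.8, False),
]

def ctr(rows):
    """Click-through rate for a list of (load_time, clicked) rows."""
    return sum(clicked for _, clicked in rows) / len(rows) if rows else 0.0

# Assumed 3-second cutoff between "image loaded quickly" and "still loading".
fast = [r for r in sessions if r[0] < 3.0]
slow = [r for r in sessions if r[0] >= 3.0]

print(f"fast-load CTR: {ctr(fast):.0%}")  # healthy click rate
print(f"slow-load CTR: {ctr(slow):.0%}")  # near zero: placement isn't the issue
```

If the click rate collapses only in the slow-load segment, the evidence points at load time, not placement – and the planned A/B test can be skipped.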
Know What Your Tests Are Telling You
Your hypothesis is sound, your research thorough, and your development team efficient – yet how can you be sure that every executed test is error-free and performs consistently across every device, operating system, and browser type? And when your new, modern design fails miserably in conversion tests, do you know why? The test that gives the best conversion rate is the clear winner, right? Not always.
It’s not always as simple as looking at that final KPI – many other factors influence the outcome of any A/B/n or multivariate test. Any time you change something, you can unintentionally influence factors you did not mean to – and therefore the result of the test. Even the smallest issues or frustrations can lead to a significant drop in conversions and in the long-term loyalty of customers. Look beyond the high-level KPIs to truly understand why each test performed the way it did:
- Avoid false conclusions: understand the real reasons a variant is under-performing.
- Don’t throw away your experiment based on initial results – understand the real “why” so you have the necessary evidence. Simple fixes can flip the results, or you can learn what needs to be adjusted in the original hypothesis.
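One way to guard against declaring a winner (or a loser) on noise alone is a standard two-proportion z-test on the conversion counts. A minimal sketch – the traffic and conversion numbers are illustrative, not from the source:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates
    (pooled standard error, under H0: the rates are equal)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: control converts 500/10,000 (5.0%),
# variant converts 560/10,000 (5.6%) -- a 12% relative lift.
z = two_proportion_z(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}")  # z ~ 1.89: below the 1.96 cutoff for 5% significance
```

Even a 12% observed lift can fall short of significance at this sample size – which is exactly why reading only the top-line KPI misleads.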
Increase Your Testing Velocity and Accuracy
As part of any site optimization strategy, it is key to:
- Understand your site and your customers – bring together survey, voice of customer, struggle and experience data into one place to understand the end-to-end customer journey and highlight the key areas of improvement.
- Identify insights from this full dataset – where do you need to focus the most, what will bring the most value, what are the worst areas of struggle to tackle first, what is working well that you can better leverage in other areas?
- Prioritize these insights and plan your approach, hypotheses, and test plan.
- Run your A/B tests and fully review the results.
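The prioritization step above is often formalized with a simple scoring model such as ICE (Impact, Confidence, Ease). A sketch under that assumption – the test ideas and scores below are hypothetical:

```python
# Illustrative ICE-style prioritization of test ideas. Names and scores
# are made up for the example. Each idea is scored 1-10 on expected
# Impact, Confidence in the hypothesis, and Ease of building the test.
ideas = [
    {"name": "Simplify checkout form", "impact": 9, "confidence": 7, "ease": 4},
    {"name": "New hero banner copy",   "impact": 4, "confidence": 5, "ease": 9},
    {"name": "Fix slow CTA image",     "impact": 8, "confidence": 9, "ease": 8},
]

for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# Highest ICE score first: run the tests with the best evidence-to-effort ratio.
for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f'{idea["ice"]:4d}  {idea["name"]}')
```

A scoring model like this keeps the test backlog anchored to evidence rather than to whichever idea is loudest in the room.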
You need to ensure that the right tests are being run and that results can be evaluated in real time, so that no newly introduced errors are skewing your results. As code releases become faster and more frequent, it becomes harder to test every permutation and be certain you are not inadvertently introducing errors for important segments of users.
Why did your test fail? Was the design bad, or were people just confused? Take the guesswork out of failure and use those insights to continue to refine your designs and conversion funnels.
Your customers are exposed to so many options. When they have a great experience on one site or app, they raise their expectations and demand the same experience everywhere. Your competitors are continually innovating – you need to as well.
Read more about which tests and metrics you should be using in our latest eBook!