
Ready to Replatform? How to Measure the Success of Your Replatforming Initiative

June 4, 2020 By: Guest Author

This guest post is part of our Continuous Product Design (CPD) Evangelist series.

Tom Arundel is Director of Digital Product Performance at Marriott International. In this post, Tom shares best practices for measuring the effectiveness of a digital platform migration: how to test for, identify, and size issues, along with advice for working through common obstacles to tracking and analytics.

You’ve decided to take the first major leap of your digital transformation: migrating to a new digital platform. Your brand wants to deliver more customer-centric, personalized and omnichannel experiences. But you also appreciate that it takes new cloud infrastructure, API connections, and business processes to do so. This is not just a redesign. It’s a fresh start – an entirely new framework for content management, digital asset management and functionality, affecting how experiences are delivered to users across your website and app.

As exciting as it sounds, it can also seem like a daunting task to digital operations leaders. Taking on a replatforming initiative can be a rewarding career milestone, but is fraught with risk.

Adding to the anxiety of digital analysts and product managers is the potential lack of operational and business visibility during the migration – the inability to adequately gauge performance when so many factors are changing all at once. New modular frameworks function differently than legacy monolithic ones, and content is delivered in ways that make apples-to-apples comparisons more complex. Replatforming is an expensive investment, so visibility into key performance indicators (KPIs) and customer behavior is paramount.

How can you iteratively roll out and replace the legacy platform and monitor impacts to user behavior, KPIs and page performance in real time? How will you test the new systems and identify and prioritize errors that translate to a poor customer experience? And how will you continue to iterate and improve KPIs over time?

Challenges monitoring the rollout

Split traffic (or “Canary”) rollouts are increasingly favored over the riskier “Big Bang” approach when launching digital platforms. The idea is to roll out new functionality incrementally to a small subset of servers side-by-side with the original version. Once you test the waters, you can slowly phase in changes to the rest of the infrastructure. This limits the impact of any defects or stability issues, allowing time for product teams to work out the bugs as a greater percentage of traffic is exposed.
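
To make the mechanics concrete, here is a minimal sketch of cookie-based canary assignment, written as an Express middleware in TypeScript. The cookie name, routes and 10% starting weight are illustrative assumptions, not details from an actual rollout.

```typescript
// A minimal sketch of cookie-based canary assignment as Express middleware.
// The cookie name, rollout weight and routes are illustrative assumptions.
import express, { NextFunction, Request, Response } from "express";

const BUCKET_COOKIE = "platform_bucket"; // hypothetical cookie name
const CANARY_WEIGHT = 0.1; // start by exposing 10% of traffic to the new stack

function assignBucket(req: Request, res: Response, next: NextFunction): void {
  // Honor an existing assignment so returning users get a consistent experience.
  const existing = req.headers.cookie
    ?.split("; ")
    .find((c) => c.startsWith(`${BUCKET_COOKIE}=`))
    ?.split("=")[1];

  const bucket = existing ?? (Math.random() < CANARY_WEIGHT ? "new" : "legacy");
  if (!existing) {
    res.setHeader("Set-Cookie", `${BUCKET_COOKIE}=${bucket}; Path=/; HttpOnly`);
  }
  res.locals.bucket = bucket; // downstream handlers route on this value
  next();
}

const app = express();
app.use(assignBucket);
app.get("/", (_req: Request, res: Response) => {
  res.send(res.locals.bucket === "new" ? "served by new platform" : "served by legacy platform");
});
app.listen(3000);
```

Ramping up is then just a matter of raising the weight as confidence grows, while existing cookies keep each user pinned to a consistent experience.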

However, monitoring and analyzing performance in such dynamic environments can be challenging. It’s almost certain something will go wrong during the rollout. One issue is the level of noise and functional disparity across environments, which can lead to analytical discrepancies. Different pages, URLs and content in different systems make comparisons difficult, if not impossible. To make matters worse, analytics tags and implementations are often inconsistent across platforms and pages, leading to further dissonance. And why bother to fix anything in legacy when it’s near the end of its life? All of it is enough to frustrate even the most seasoned analyst.

Here are five tips for benchmarking product performance during large, complex platform migrations:

1. Establish consistent tracking across platforms.

In split traffic migrations, users are often randomly assigned via cookie to either new or legacy experiences. However, it’s not uncommon to find inconsistent tagging and analytics implementations, with different versions across environments and pages. First, in order to establish a single view of the customer through traffic measurement, ensure a unique global analytics account is tagged across all pages in the digital ecosystem (new and legacy). Second, wherever possible, set up functional tracking consistently by leveraging a common data layer and definition across applications and environments. And third, leverage performance analytics tools to compare page speeds across environments, and identify potential latency issues in the new one.
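
As one illustration of what a common data layer might look like, here is a minimal TypeScript sketch; the event shape, field names and account ID are hypothetical, not an actual implementation.

```typescript
// A sketch of one possible shared data-layer contract; field names and the
// account ID are hypothetical placeholders.
type Platform = "new" | "legacy";

interface AnalyticsEvent {
  event: string;      // e.g. "page_view", "checkout_step"
  platform: Platform; // which stack served the experience
  pageType: string;   // normalized page name, identical across both systems
  accountId: string;  // the single global analytics account
}

// Both platforms push events of the same shape into the same queue, so
// reports can segment by `platform` while totals remain comparable.
const dataLayer: AnalyticsEvent[] = [];

function track(e: AnalyticsEvent): void {
  dataLayer.push(e);
}

track({ event: "page_view", platform: "legacy", pageType: "checkout", accountId: "ACCT-001" });
track({ event: "page_view", platform: "new", pageType: "checkout", accountId: "ACCT-001" });
console.log(`events tracked: ${dataLayer.length}`);
```

Because both environments emit the same shape into the same account, a single dashboard can show the whole customer base and still split any report by platform.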

2. Distinguish KPIs from metrics.

When defining metrics for success, it’s common and forgivable to confuse “Core KPIs” with behavioral metrics. To limit the noise of complex digital migrations, leaders should first focus on defining critical business objectives. While behavioral metrics tend to focus on things like cart checkout and abandonment, core KPIs track whether you hit key business or operational targets. Business KPIs can include revenue, lifetime customer value, acquisition costs and retention rate. Operational KPIs can include error rates, application availability, latency and the number of issues reported and resolved. Once KPIs are defined, the more engagement-focused metrics, such as cart abandonment, checkout duration and conversion rates, can follow. However, comparing these behavioral metrics across platforms can be fraught with error, so they should be taken with a grain of salt (or ignored altogether), especially given inherent discrepancies between experiences.
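
A small sketch can make the distinction concrete: KPIs carry explicit targets you report against, while behavioral metrics are simply observed for context. All names, targets and values below are hypothetical.

```typescript
// A sketch separating core KPIs (explicit targets) from behavioral metrics
// (observed for context). All names, targets and values are hypothetical.
interface Kpi {
  name: string;
  target: number;
  current: number;
  direction: "atMost" | "atLeast"; // is lower or higher better?
}

const operationalKpis: Kpi[] = [
  { name: "error rate (%)", target: 0.5, current: 0.8, direction: "atMost" },
  { name: "availability (%)", target: 99.9, current: 99.95, direction: "atLeast" },
  { name: "p95 latency (ms)", target: 800, current: 950, direction: "atMost" },
];

// Behavioral metrics are tracked alongside, but read only directionally
// when compared across platforms.
const behavioralMetrics = ["cart abandonment", "checkout duration", "conversion rate"];

for (const kpi of operationalKpis) {
  const onTarget =
    kpi.direction === "atMost" ? kpi.current <= kpi.target : kpi.current >= kpi.target;
  console.log(`${kpi.name}: ${onTarget ? "on target" : "off target"}`);
}
console.log(`context metrics: ${behavioralMetrics.join(", ")}`);
```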

3. Focus only on high-value, relevant comparisons.

During platform migrations, it can be challenging to determine areas of content and functionality that can be safely compared, as the new digital ecosystem is laden with analytical landmines. Often, legacy pages and functionality will be retired or migrated off platform, or some pages may take on new meaning and purpose. Legacy, monolithic structures might be replaced by microservices, component-based pages or, in some cases, single-page applications. In any case, it’s important to focus on relevant, high-value comparisons around areas of functionality where customers can accomplish similar tasks. Develop comparable analytics segments based on experiences that exist in both new and legacy platforms, such as the purchase funnel, subscribe or sign-up pages.
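
As an illustration, one way to build a comparable segment is to compute rates only over funnel steps that exist on both platforms; the step names in this TypeScript sketch are assumptions.

```typescript
// A sketch of restricting funnel comparisons to steps that exist on BOTH
// platforms; the step names are illustrative assumptions.
type Platform = "new" | "legacy";

interface FunnelEvent {
  platform: Platform;
  step: string; // normalized step name, shared across systems
}

// Only these steps have a true equivalent in both experiences.
const SHARED_STEPS = new Set(["search", "select", "checkout", "confirm"]);

function comparableConversion(events: FunnelEvent[], platform: Platform): number {
  const shared = events.filter(
    (e) => e.platform === platform && SHARED_STEPS.has(e.step)
  );
  const entries = shared.filter((e) => e.step === "search").length;
  const confirms = shared.filter((e) => e.step === "confirm").length;
  return entries === 0 ? 0 : confirms / entries;
}
```

Rates for “new” and “legacy” are then computed over the same steps, so the comparison stays apples to apples even when each platform has pages the other lacks.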

4. Measure twice, cut once.

Identifying errors and defects before they go live is easier than ever. Automated tools that surface issues and friction points in pre-production environments can supplement manual Quality Assurance (QA) efforts and prevent more costly fixes in production. In addition to QA’ing functionality, make sure to QA the analytics code to ensure tags are firing and capturing KPIs as expected. Releasing new functionality with no ability to track how it’s performing diminishes the benefit of replatforming.
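
One way to automate that analytics QA step is to assert in a pre-production test that the tag request actually fires with the expected fields. Here is a minimal sketch using Playwright; the staging URL, tag endpoint and query parameter are placeholders, not real endpoints.

```typescript
// A minimal sketch of automated analytics QA with Playwright; the staging
// URL, tag endpoint and query parameter are placeholder assumptions.
import { expect, test } from "@playwright/test";

test("checkout page fires the analytics tag", async ({ page }) => {
  // Start listening before navigating so the tag request is not missed.
  const tagRequest = page.waitForRequest((req) =>
    req.url().includes("analytics.example.com/collect")
  );
  await page.goto("https://staging.example.com/checkout");
  const request = await tagRequest;
  // Verify the tag carries the fields the KPIs depend on.
  expect(request.url()).toContain("pageType=checkout");
});
```

A check like this runs in the same pipeline as functional tests, so a broken tag blocks a release just as a broken feature would.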

5. Focus optimization efforts on the new platform.

As more traffic moves from legacy to the new platform, fixing inadequate experiences on legacy becomes a lower priority. At the same time, the need to rapidly identify and fix issues on the new platform becomes ever more critical. This means setting up robust optimization and customer experience (CX) analytics tools to rapidly test and optimize funnels and paths, watch customer session replays and improve page load times. If you notice a dip in sales or conversion, you need to be able to quickly troubleshoot, identify the root causes, and prioritize and fix issues in real time. As traffic scales up, new, previously undetected issues will surface, so teams should align around an agile, iterative and rapid-response approach.
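
Even a crude automated check can shorten the time to detection. Here is a naive TypeScript sketch that flags when the latest hour’s conversion drops well below a trailing baseline; the 20% threshold and hourly window are arbitrary illustrations.

```typescript
// A sketch of a naive conversion-dip check against a trailing baseline; the
// 20% threshold and the hourly window are arbitrary illustrations.
function detectDip(hourlyConversion: number[], threshold = 0.2): boolean {
  if (hourlyConversion.length < 2) return false;
  const latest = hourlyConversion[hourlyConversion.length - 1];
  const baseline =
    hourlyConversion.slice(0, -1).reduce((sum, r) => sum + r, 0) /
    (hourlyConversion.length - 1);
  return latest < baseline * (1 - threshold);
}

// Conversion slid from roughly 4% to 2.5% in the latest hour, so this flags
// a dip worth troubleshooting in real time.
console.log(detectDip([0.041, 0.039, 0.042, 0.025])); // true
```

In practice a CX analytics tool would do this continuously and segment by platform, but the principle is the same: alert on the dip, then use session replay and funnel analysis to find the root cause.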

Summary

Replatforming is a monumental task that takes resources, time and effort. It will change how customers experience your brand, from design and navigation to functionality and page performance. It’s essential to filter out the noise of a shifting digital ecosystem in favor of a customer-centric approach toward measuring what matters. Focus on continuous testing, real-time troubleshooting and iterative product improvements to ensure you’re moving the needle in the right direction.

Want to hear more? www.linkedin.com/in/tomarundel
