The Quality Framework You Need for Customer Journey Monitoring

Cerberus Testing

Customer journeys are at the heart of the user experience that leads to conversion, a critical area of action for organizations pursuing digital performance. Companies’ transformations to deliver a successful UX do not happen overnight. Step-by-step continuous improvement and test-and-learn are the path to success, with few shortcuts.

Improvement starts with measurement, and customer journeys are no exception; hence the importance of monitoring them. Our quality initiative must incorporate this end-user experience perspective and leverage its capabilities, but how?

This article shares a repeatable, step-by-step approach to iterate fast on your customer journey monitoring implementation. We go through a real e-commerce use case with actionable guidelines and recommended best practices.

As an introduction, you can read our previous articles to understand where quality meets customer journey monitoring and its 5 hidden powerful benefits.

Follow Cerberus Testing for more open source test automation.

The Customer Journey Monitoring Context

Identifying the “right customer journeys” in your business context is the first step. You can do this by following the “Question Asker” model of QA described in this article: 6 Quality Questions for Testing Customer Journeys. Now we can focus on building the “customer journeys right”.

In our case, we work on Damart, an e-commerce retail and fashion website, so you can refer to this case for traditional e-commerce journeys. We identify several journeys based on use cases ordinarily present in a functional non-regression campaign.

Figure 1: Test cases of the Conversion stage of customer journeys

Our customer journey monitoring will then focus on the Conversion stage of the e-commerce journey. We also include the main non-functional requirement: verifying that the login capability is always available. It sounds trivial, but forgetting this monitoring can lead to a drastic decrease in business performance in case of disruption.

Using a framework is an excellent way to structure an efficient, repeatable process.

The Process to Monitor Customer Journeys

We will leverage the fast feedback loop supported by Cerberus Testing: Define, Execute, Analyze. The framework areas supporting this cycle are, respectively, the Test Repository, Test Execution, and Test Reporting. Specific features in each area support customer journey monitoring.

First things first, we start by defining and organizing the test case repository. The structure of the test cases is up to you; we recommend using the folder structure, the step library, and the application definition for expandable foundations. The key element to pivot between the test repository and execution is the use of labels, which we cover in the next section.
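As an illustration, the folder structure can mirror the website areas we will report on later. The layout below is a hypothetical sketch, not a mandated convention:

```
Damart/
├── Homepage/
├── Connection/
├── Account/
├── Products/
└── Cart/
```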

Figure 2: The steps and Cerberus Testing framework areas

The Execution area is the next one we use. The process is similar to defining a standard test campaign: we can use the labels and the test case selector, and choose the relevant environments. The main difference resides in the scheduling, which is usually much more frequent.

The last areas are Reporting and Analytics, supporting insights on our customer journey executions. We can switch our perspective from a single campaign to a broader range of executions using the various dashboards.

Let’s start by tagging our tests for decoupled test and campaign management.

Organizing the Tests with Labels

Our test cases are ready for customer journey monitoring. A good practice is to reuse the functional non-regression test cases, maintaining alignment and early detection across the software delivery chain.

Customer journey campaigns can be scoped by various criteria: test case, application, system, environment, and labels. We prefer labels, to decouple our test case repository from its execution. If we hard-code a specific test case, application, or environment, we can end up executing unexpected test cases. Explicit labels provide better visibility and explicit inclusion in the monitoring campaign.

Figure 3: The use of labels for identifying the customer journeys cases

In our case, we tag our test cases with “Campaign_Scheduled” to identify them as part of the customer journey campaign. They keep their other labels and requirements, which are useful for the reporting purposes we will see later on.
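To make the decoupling concrete, here is a minimal sketch of such a labeling scheme. The test case names are hypothetical; only “Campaign_Scheduled” and the website area labels come from our example:

```python
# Illustrative label assignments; the test case names are hypothetical.
# The campaign selects tests by label, so a new test joins the monitoring
# scope simply by receiving the "Campaign_Scheduled" tag.
test_labels = {
    "Homepage_Loads":       ["Campaign_Scheduled", "Homepage"],
    "Login_Nominal":        ["Campaign_Scheduled", "Connection"],
    "Add_Product_To_Cart":  ["Campaign_Scheduled", "Products", "Cart"],
    "Checkout_As_Customer": ["Campaign_Scheduled", "Cart", "Account"],
}
```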

The next step is to configure our test suite for monitoring, known as a Campaign.

Schedule a Monitoring Campaign

In our case, the pivot between the test case repository and execution is the label. We recommend that the campaign description be clear about its goal, and that the campaign stay highly cohesive for the sake of reporting and execution speed.

The second tab, “TestCase List”, enables us to configure our test campaign content. Here we use only the label previously defined. As you can see below, we can also scope the suite with other criteria, such as “Test Case Criterias” or various labels.

Figure 4: The definition of our campaign using the labels

The next tab, “Environments, Countries & Robots List”, must be tailored to the touchpoints identified in your customer journeys. Traditionally, the environment will be production, or even staging in some cases. The countries and devices will depend on your context and application deployments.

The execution settings are also flexible. For customer journeys, we recommend configuring a screenshot on error while limiting detailed tracing. Our goal is to react fast while measuring the actual performance of our tests: a screenshot is usually very useful, and limiting data collection avoids biasing the measured test performance. Retry mechanisms should also be limited so they do not hide stability issues.

Our objective is to ensure the quality of our customer experience, not to have a green dashboard.

The scheduling priority is a critical point for customer journey monitoring, since parallel executions are a limiting factor. The supervision campaign will usually be small and regularly executed, while non-regression campaigns tend to be larger and run on demand. We can guarantee that monitoring passes first by setting the “Priority” field to 0, or at least lower than the other defined campaigns; the queue management system then takes care of the priorities.
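To summarize the intent of these settings, here is a rough sketch; the keys are illustrative and do not match the exact Cerberus Testing field names:

```python
# Hypothetical summary of the recommended execution settings.
execution_settings = {
    "screenshot": "on_error",  # capture evidence only when a step fails
    "verbosity": "minimal",    # limit tracing to avoid biasing test timings
    "retries": 0,              # retries would hide stability issues
    "priority": 0,             # runs first, ahead of non-regression campaigns
}
```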

The scheduler area lets you define one or more executions using a crontab format. The execution frequency is usually a balance between the journeys’ value and criticality; a minimum of an hourly execution is recommended. The last part is to define notifications by email and Slack as required. It depends on your process; some teams integrate directly with the Cerberus Testing APIs from their operational monitoring solutions.
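For example, standard crontab expressions cover the recommended hourly minimum as well as more aggressive frequencies. The sketch below pairs hypothetical campaign names with schedules and adds an assumed API call for teams triggering executions from their monitoring stack; verify the endpoint and parameter names against the Cerberus Testing API documentation:

```python
import requests

# Crontab format: minute, hour, day-of-month, month, day-of-week.
# The campaign names below are hypothetical.
schedules = {
    "conversion_journey_monitoring": "*/15 * * * *",  # every 15 minutes
    "login_availability_check": "0 * * * *",          # hourly, on the hour
}

# Assumed integration with the Cerberus Testing APIs from an external
# operational monitoring tool; the endpoint and parameter names are
# assumptions for illustration only.
def trigger_campaign(base_url: str, campaign: str) -> str:
    response = requests.post(
        f"{base_url}/AddToExecutionQueueV003",  # assumed endpoint
        data={"campaign": campaign},
    )
    response.raise_for_status()
    return response.text
```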

Our customer journey monitoring campaigns are now set to run regularly, collecting valuable data for the future. We can switch to the analytical part.

Report, Analyze, and Take Action

Various dashboards are available to support our visibility requirements. We differentiate Reporting from Analytics by the different perspectives and insights they provide. Let’s start with the campaign reporting.

The first answer we are looking for is the execution status of a single campaign. We can access the native reporting in the Campaign Reporting area. Various dashboards support our analysis: the global campaign status, with splits by status, test folder, and labels, followed by a detailed table of all executions. Each test case execution report is then available with traceability information.

Here’s an example of the status by labels: the “Campaign_Scheduled” tag gives us visibility of our customer journey monitoring, while the other tags provide a stability perspective on each website area, namely Account, Cart, Connection, Homepage, and Products.

Figure 5: The reporting by tag natively available in Cerberus Testing

Our next question is about customer journey performance over time. We need to combine various campaign execution reports and statuses to get that visibility. The “Campaign Reporting Over Time” area provides the analytical dashboards we need, with the execution time split per device and environment and the ratio of executions, ending with the stability dashboards.

Here’s an example, providing a visual representation of our flakiness ratio over time. We can then loop back to the other dashboards to understand whether the instabilities happen in a specific context. We can also zoom in on a particular campaign report and its test case traces to narrow down the problems.

Figure 6: The campaign stability dashboards giving a flakiness ratio
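As a minimal sketch of the metric behind this dashboard, the flakiness ratio can be read as the share of executions per period that did not end in a clean OK status. The records and status values below are illustrative, assuming the execution history has already been extracted:

```python
from collections import defaultdict

# Illustrative execution records; in practice they come from the
# campaign execution history.
executions = [
    {"day": "2021-06-20", "status": "OK"},
    {"day": "2021-06-20", "status": "KO"},
    {"day": "2021-06-21", "status": "OK"},
    {"day": "2021-06-21", "status": "OK"},
]

def flakiness_by_day(records):
    """Return the share of non-OK executions per day."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for record in records:
        totals[record["day"]] += 1
        if record["status"] != "OK":
            failures[record["day"]] += 1
    return {day: failures[day] / totals[day] for day in sorted(totals)}

print(flakiness_by_day(executions))  # {'2021-06-20': 0.5, '2021-06-21': 0.0}
```

A consistently non-zero ratio on an otherwise stable application is a signal to inspect the tests or the execution environment rather than the product itself.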

From here, we have a repeatable structure that can support our customer experience monitoring.

From Reactive to Proactive Customer Journey Monitoring

The framework and process described here provide scalable foundations, starting from good test case design, decoupled requirements, and proper configuration. The expansion is then a balance to maintain.

Our focus must remain on customer experience monitoring, supporting our quality initiative, and delivering value to our customers. The various elements we defined support the evolution from a reactive to a proactive approach, and from a siloed to a transversal one.

Sharing this customer perspective in operations is an excellent way to align the various teams, from product management to operations, on a common objective. Have a nice customer journey monitoring experience, starting and ending with a successful user experience.

Start now with Cerberus Testing.

Follow Cerberus Testing for more open source test automation.

You can support open-source test automation

  • Clap this article 👏
  • Star the project on GitHub
  • Offer a coffee

Author: Antoine Craske

Originally published at https://cerberus-testing.com on June 26, 2021.
