Best Practices Overview

Overview

The intent of this document is to navigate you through common user experience decisions when setting up Discover for a retail site. In addition to providing specific guidance on the baseline best practice sort configuration, the objective is to help you understand the logic behind our decision-making, so that you are prepared to make informed adjustments specific to a retail site.

After reading this guide, you will be able to answer the following questions:

  • How do we measure the effectiveness of Discover?

  • What default weights should we be using on a vertical-specific basis and why?

  • In what ways can we augment Discover’s decision logic to better model how people shop on the retailer site?

Note: All sites are different, and what works best for one is not guaranteed to translate into gains for another. As such, treat this guidance as general best practices, a foundation for optimization from which site-specific tuning should occur. What’s most important is that you understand the logic behind our best practices approach and what questions to ask, so that you can adapt and apply as appropriate. 

Following the guidelines set forth in the pages that follow will help safeguard performance of Discover on a retailer's site.

Measuring Category Sorting

Value Measurement

Assessing the value of category sorting requires you to run an AB test in which you compare the attributable-order versions of Revenue Per Visitor (RPV), Conversion Rate (CVR), and Average Order Value (AOV) for customers exposed to Discover versus those exposed to the default experience or an alternative technology. Typically, RPV is the primary success metric, closely followed by CVR, though some retailers may prioritize CVR because the high lifetime value (LTV) of their customers matters more to them than an individual session’s revenue.

The attributable-order versions of RPV, CVR, and AOV use nuanced definitions of “orders” and “revenue” in their calculations. Specifically, an “order” refers to a transaction that contains at least one Discover-attributable item, and “revenue” refers to the total sales from these orders, attributable and non-attributable items included. The reason for this nuance is twofold:

  1. Our hypothesis is that by presenting a more personally relevant assortment, Discover will get customers to buy more items from the Category page. Therefore, our KPIs must home in on relevant transactions rather than indiscriminately factoring in session revenue unrelated to the Category page experience.

  2. In an AB test, we are trying to measure a statistically significant difference between two or more experiences. The smaller the observed difference, the harder it is to achieve high statistical confidence and leave the experiment with a credible result. Discover has an exposure challenge: because it exists on a single page type that may be viewed only once or twice in a broad shopping session, its impact on session revenue is often hard to detect, and a conventional AB analysis ends up looking like an AA result. By using nuanced versions of our success metrics, ones that focus more closely on Discover’s impact, we’re able to detect a bigger difference between the competing experiences and thereby achieve higher confidence levels.
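To make the attributable-order definitions above concrete, here is a minimal Python sketch of how the metrics could be computed. The `Order` structure, field names, and figures are illustrative assumptions, not Algonomy’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Order:
    # Each line item is (price, is_discover_attributable); both fields are hypothetical.
    items: list

def attributable_metrics(orders, visitors):
    """Compute attributable-order RPV, CVR, and AOV.

    An "order" counts only if it contains at least one Discover-attributable
    item; "revenue" is the full value of those orders, attributable and
    non-attributable items included.
    """
    qualifying = [o for o in orders if any(attr for _, attr in o.items)]
    revenue = sum(price for o in qualifying for price, _ in o.items)
    n_orders = len(qualifying)
    rpv = revenue / visitors if visitors else 0.0
    cvr = n_orders / visitors if visitors else 0.0
    aov = revenue / n_orders if n_orders else 0.0
    return rpv, cvr, aov

# Example: two orders, only one containing a Discover-attributable item.
orders = [
    Order(items=[(40.0, True), (10.0, False)]),  # counts: full $50 of revenue
    Order(items=[(25.0, False)]),                # excluded: no attributable item
]
rpv, cvr, aov = attributable_metrics(orders, visitors=100)
# rpv = 0.5, cvr = 0.01, aov = 50.0
```

Note how the second order’s $25 is excluded entirely, while the non-attributable $10 item in the first order still counts toward revenue, per the definition above.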

With Discover, we expect to deliver 1-2% (Attributable Order) RPV lift against an alternative technology, whether that be a competing technology (third party or homegrown) or basic sorting.

Note: Algonomy has no knowledge of engagement on non-Discover content. As such, we must partner with the retailer’s Analytics team to generate the AB results. 

Utilization Measurement 

The Clickthrough Rate (CTR) metric informs us of the usefulness and relevance of Discover, indicating how and to what extent customers are actually utilizing the solution. If the sort order does not resonate and the most relevant items aren’t pushed to the top, we expect to see lower engagement. That said, CTR is not an infallible proxy for economic value since clicks do not always result in bigger spends, and we should laud the experience that actually produces revenue. 

Utilization analysis can help build an experiential narrative around Discover. It can provide insight into how the solution alters customer behavior en route to producing the observed RPV lift. For example, as a supplement to the standard AB analysis, we should also examine slot-level CTR. There, we typically see that Discover substantially increases engagement on the first 2-3 product slots, sometimes in excess of 25% CTR lift on the initial position.
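A slot-level CTR comparison of the kind described above can be sketched in a few lines of Python. The click and impression counts below are hypothetical, chosen only to illustrate the calculation.

```python
def slot_ctr(clicks_by_slot, impressions):
    """Per-slot clickthrough rate: clicks on a slot / page impressions."""
    return [c / impressions for c in clicks_by_slot]

def slot_ctr_lift(test_ctr, control_ctr):
    """Relative CTR lift per slot (test vs. control)."""
    return [(t - c) / c for t, c in zip(test_ctr, control_ctr)]

# Hypothetical AB counts for the first three product slots.
control = slot_ctr([120, 90, 70], impressions=10_000)  # default sort
test = slot_ctr([156, 104, 77], impressions=10_000)    # Discover sort
lift = slot_ctr_lift(test, control)
# First slot: (0.0156 - 0.0120) / 0.0120 = 0.30, i.e. 30% CTR lift
```

In line with the pattern described above, the lift in this toy example is concentrated in the earliest slots and tapers off further down the page.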

Guiding Principles

The following precepts guide the formulation and execution of our optimization best practices: 

  • Optimizations must be vertical-sensitive. People consume goods differently across verticals/categories; within Grocery and CPG, they buy replenishable products in fairly predictable cycles whereas in Electronics and Appliances, their behavior is much different. In Apparel, customers often exhibit brand loyalty, while in Home Furnishings, there is a degree of agnosticism. When setting Discover configurations, it is important to consider the nuances of the vertical as well as any unique characteristics of the retailer.

  • Optimization is iterative. The term “best practices” is a bit of a misnomer because it is impossible to be highly prescriptive about a personalization solution with the configuration granularity of Discover. As such, consider the configuration guidance provided in this document as “better practices”—a baseline from which iterative testing and optimization must occur in order to realize Discover’s lift potential for a specific retailer.

  • Rules can ruin. Discover has the biggest impact on the first 2-3 product slots on the Category page, since items are re-sorted by descending relevance, with the strongest matches at the top. Displacing Discover’s relevant selections with products promoted via merchandising rules can substantially impact its efficacy, particularly on mobile, where the viewport is smaller. Even more than with Recommend, we discourage the broad use of merchandising rules.
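The displacement effect described in the last principle can be illustrated with a simple sketch: sort by descending relevance, then overlay pinned merchandising slots. The data shapes (`(sku, relevance_score)` pairs, a slot-to-sku pin map) are assumptions for illustration, not Discover’s actual rule engine.

```python
def apply_sort(products, pinned_positions):
    """Sort products by descending relevance, then overlay merchandising pins.

    products: list of (sku, relevance_score) pairs.
    pinned_positions: maps a 0-based slot index to a sku forced into that slot.
    """
    ranked = [sku for sku, _ in sorted(products, key=lambda p: -p[1])]
    # Remove pinned skus from the organic ranking, then re-insert them at their slots.
    organic = [sku for sku in ranked if sku not in pinned_positions.values()]
    result = list(organic)
    for slot, sku in sorted(pinned_positions.items()):
        result.insert(slot, sku)
    return result

products = [("A", 0.9), ("B", 0.8), ("C", 0.7), ("D", 0.2)]
apply_sort(products, {})        # ['A', 'B', 'C', 'D'] — pure relevance order
apply_sort(products, {0: "D"})  # ['D', 'A', 'B', 'C'] — rule displaces the top match
```

Pinning the low-relevance item "D" into slot 0 pushes every strong match down a position, which is exactly the kind of displacement that hurts most in a small mobile viewport.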