In a previous blog post, I touched on how major e-commerce companies employ different approaches to A/B testing. Today, I want to share an inside look into experimentation at a popular financial services company that offers payment processing services and APIs for e-commerce applications¹.
Conversion is a massive driver for the e-commerce businesses that this company supports. This naturally focuses a lot of the company’s efforts on improving conversion in multiple ways, starting with their core product.
While their core product processes billions of dollars in payments for millions of users, teams building new products for a much smaller number of users use experiments for essential product validation.
For example, the team working on the flow to add bank accounts sees a few thousand flows per day. They use feature gates to control new feature releases, validate new functionality, and measure conversion with the new product. They ask questions like: Are people adopting this new flow? What percentage of users engage with it with high intent? What parts of the product are they engaging with?
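The feature gates described above can be sketched as deterministic hash-based bucketing, so the same user always gets the same gate decision. This is a generic illustration of the pattern, not the company's (or Statsig's) actual implementation; the gate name and percentages are made up:

```python
import hashlib

def check_gate(user_id: str, gate_name: str, pass_percentage: float) -> bool:
    """Deterministically bucket a user into a gate using a stable hash.

    pass_percentage is in [0, 100]; e.g. 10.0 exposes roughly 10% of users.
    """
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000  # uniform bucket in 0..9999
    return bucket < pass_percentage * 100  # 10% -> buckets 0..999

# The same user always gets the same answer for the same gate,
# so their experience is stable across sessions:
assert check_gate("user-42", "new_bank_flow", 50.0) == check_gate("user-42", "new_bank_flow", 50.0)
```

Because bucketing is keyed on both the gate name and the user ID, rolling the gate percentage up or down only ever adds or removes users at the boundary, and different gates bucket users independently.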
Each team chooses their target metrics based on the user experience that they directly drive and optimize. The checkout teams track completed payments whereas the teams working on the flow to add bank accounts track when the user receives the first payment.
Yet moving top-line business metrics, even those set at the team level, may not be easy. This is especially true when teams are making improvements upstream in the flow. For this reason, each team tracks metrics at each step in the flow. The team working on the bank account flow tracks how many users enter the flow, how many input their credentials, how many find their bank account, and so on. Their goal is to understand how users engage with the product, whether they are high-intent users, and whether they can complete the happy path.
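Step-by-step funnel tracking of this kind can be sketched as counting distinct users at each stage and computing step-to-step conversion. The step names below are illustrative, not the company's actual event schema:

```python
FUNNEL_STEPS = ["entered_flow", "input_credentials", "found_bank", "linked_account"]

def funnel_conversion(events: list[tuple[str, str]]) -> dict[str, float]:
    """Given (user_id, step) events, compute conversion between adjacent steps."""
    users_per_step = {step: set() for step in FUNNEL_STEPS}
    for user_id, step in events:
        if step in users_per_step:
            users_per_step[step].add(user_id)  # distinct users per step
    counts = [len(users_per_step[step]) for step in FUNNEL_STEPS]
    return {
        f"{a} -> {b}": (nb / na if na else 0.0)
        for a, b, na, nb in zip(FUNNEL_STEPS, FUNNEL_STEPS[1:], counts, counts[1:])
    }
```

A per-step view like this localizes a drop: if "entered_flow -> input_credentials" falls while downstream steps hold steady, the regression is upstream, even when the top-line metric barely moves.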
In spite of wide usage at the company, experimentation isn’t free of challenges. Two of the most common challenges are (a) instrumenting the product to capture the right events, and (b) software engineers being wary of handling data.
For example, software engineers may take an initial pass at instrumenting their application. But when the data scientist sees the data and finds gaps in stringing together the signals, the engineers must often redo a lot of the instrumentation.
Software engineers are also hesitant about wading into the data to analyze the results. One leader at the company said, “The space is painful to play with. Seeing data coming in live, ensuring it is loaded reliably, and ensuring all data fields are treated correctly requires handling 50 different exceptions. We need data scientists to focus on these problems because engineers don’t.”
In the absence of data scientists, experiment analysis may fall apart upon deeper inspection because of noisy data. On one occasion, when the team was rolling out a product to new geographies, the conversion rate plummeted from the high 90s to less than 10%. The team initially rationalized that this was plausible; many new factors could be depressing conversion. On closer inspection, however, they found that the metric's denominator included a large volume of ineligible transactions, in particular from countries where the product wasn't yet available.
Regardless of the challenges, the company believes that experimental data is key to meaningfully moving their numbers and uncovering insights about user behavior. In a recent instance, the team found that a new frictionless way to add bank accounts using OAuth (vs. manual entry) resulted in a 60% uptick in adoption even when users were offered no incentive. This automatically cut short long discussions and reviews around what incentives should be implemented to change user behavior. The data naturally focused the team on what’s already proven to work!
As the company embarks on the next phase of growth, they’re all in on experimentation. For them, data trumps intuition every day.
Join the Statsig Slack channel to learn about how the most innovative companies use experiments to accelerate their growth.
¹ The company remains unnamed here at their request.
Explore Statsig’s smart feature gates with built-in A/B tests, or create an account instantly and start optimizing your web and mobile applications. You can also schedule a live demo or chat with us to design a custom package for your business.