Free Beer!

Vineeth Madhusudanan
Mon Feb 07 2022
A-B-TESTING EXPERIMENTATION FEATURE-FLAGS

Every feature is well intentioned but…

written with Bella Muno (PM @ Tavour)

Every feature is well intentioned… that’s why we build them. In our experience, however, less than a third create a positive impact. Another third require iteration before they land the desired impact: users might not discover them, or are confused by them. The final third are bad for users: they have a negative effect on product metrics, and the best bet is to unship them.

This split varies with both product maturity (it’s harder to find wins on a well-optimized product) and the product team’s insight and creativity. Yet these three buckets almost always exist. Without critical analysis, teams can’t know which bucket a feature falls into.

How can product teams do this?

Automatic A/B Tests

Statsig turns every feature rollout into an A/B test with no additional work. In a partial rollout, the people who aren’t yet getting the new feature are the Control group of the A/B test, and the people who are getting it are the Test group. By comparing metrics you’re already logging across these groups, Statsig can tell whether your feature rollout is impacting KPIs, and by how much. Statistical tests identify differences between the groups that are unlikely to be due to randomness and noise.

A/B testing new feature rollouts

This type of testing enables product teams to understand the impact of features and determine which of the three buckets above a feature is likely to fall into!
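To make “unlikely to be due to randomness and noise” concrete, here is a rough sketch of the kind of comparison involved: a two-proportion z-test on a conversion metric between the Control and Test groups. This is an illustrative, hand-rolled TypeScript example, not Statsig’s actual Pulse methodology, and the numbers in it are made up.

```typescript
// Illustrative two-proportion z-test: is the difference in a conversion
// rate between Control and Test likely to be more than noise?
// (Not Statsig's actual methodology; just the textbook test.)

interface GroupStats {
  users: number;       // users in the group
  conversions: number; // users who completed the metric event (e.g. activation)
}

function twoProportionZTest(control: GroupStats, test: GroupStats): number {
  const p1 = control.conversions / control.users;
  const p2 = test.conversions / test.users;
  // Pooled conversion rate under the null hypothesis of "no difference"
  const pooled =
    (control.conversions + test.conversions) / (control.users + test.users);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.users + 1 / test.users)
  );
  return (p2 - p1) / se; // z-score; |z| > 1.96 is significant at the usual 95% level
}

// Hypothetical numbers, purely for illustration:
const z = twoProportionZTest(
  { users: 10_000, conversions: 4_100 }, // Control
  { users: 10_000, conversions: 3_800 }  // Test
);
console.log(`z = ${z.toFixed(2)}`); // a large negative z suggests the feature hurt the metric
```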

Simple feature flagging systems let you turn features on and off and control gradual rollout, but don’t offer these automatic A/B tests and this analysis. Simple experimentation systems let you do something similar, but introduce too much overhead to make them practical to use on every feature rollout. Hence, Statsig :)

But you mentioned beer…

Tavour is an app that helps fill your fridge with unique craft beers you can’t find locally. They use Statsig to manage features and run experiments.

Examples of products Tavour helps you find!

Address Auto-Complete

The product team at Tavour wanted to reduce friction in the user onboarding process. That friction prompted them to build an “Address Auto-Complete” feature, which they expected to make the sign-up flow faster and more accurate, resulting in more users signing up.

Autocomplete makes typing an address fast!

They put this behind a Statsig feature gate and rolled it out to a small % of users to make sure nothing was broken. What they saw next surprised them!
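Mechanically, a gated rollout like this is just a conditional at the call site. Here is a minimal sketch, assuming Statsig’s statsig-js client SDK; the SDK key, gate name, and helper functions are hypothetical placeholders rather than Tavour’s actual setup.

```typescript
// Minimal sketch of gating a feature client-side.
// Assumes the statsig-js client SDK; the key, user, and gate name are placeholders.
import statsig from 'statsig-js';

async function setupSignupForm(userID: string) {
  await statsig.initialize('client-YOUR_SDK_KEY', { userID });

  // checkGate returns true for users in the rollout (Test) and false for
  // everyone else (Control). The exposure is logged automatically, which is
  // what powers the rollout-as-A/B-test comparison.
  if (statsig.checkGate('address_autocomplete')) {
    enableAddressAutocomplete();
  } else {
    useManualAddressEntry();
  }
}

// Placeholder implementations for the two code paths.
function enableAddressAutocomplete() { /* wire up the autocomplete UI */ }
function useManualAddressEntry() { /* keep the existing address form */ }
```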

Statsig’s “Pulse” view shows impact on metrics

The Problem

Tavour was expecting “Address Auto-Complete” to increase the proportion of successfully activated users. Instead, they found that users exposed to the feature churned at a higher rate than users in the control group who didn’t see it.

Initial suspicion fell on the quality of the data: could logging bugs or bad data pipelines be causing this? After vetting this and finding no issues, the team looked at other metrics impacted by the rollout. A system event, “Application Backgrounded”, had also shot up. A new feature causing users to abandon the app suggested something strange was going on.

The Insight

The Tavour team started investigating the usability of the new feature. Looking at other apps, they noticed that those apps displayed more address results without scrolling than the Tavour app did. They formed a hypothesis that the partial list of autocomplete suggestions in the Tavour app did not convey to users that there were additional suggestions. When they sliced the data by phone size, they saw a marked difference between small and large phones: on a small phone, fewer address suggestions were visible without scrolling; on a large phone, more were visible. This provided evidence for the hypothesis that seeing fewer addresses confused users and prompted them to abandon registration.
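Slicing the data like this amounts to recomputing the same comparison per segment. Below is a toy sketch of that idea over raw exposure records; the record shape, the screenHeight field, and the 700-pixel cutoff are all hypothetical, not how Tavour or Statsig actually segment by device.

```typescript
// Toy sketch: churn rate among exposed (Test) users, sliced by screen size.
// The record shape, screenHeight field, and 700px cutoff are hypothetical.
interface ExposureRecord {
  userID: string;
  screenHeight: number; // device screen height in logical pixels
  churned: boolean;     // abandoned registration after seeing the feature
}

function churnBySegment(records: ExposureRecord[]): Record<'small' | 'large', number> {
  const totals = { small: { users: 0, churned: 0 }, large: { users: 0, churned: 0 } };
  for (const r of records) {
    const segment = r.screenHeight < 700 ? totals.small : totals.large;
    segment.users += 1;
    if (r.churned) segment.churned += 1;
  }
  return {
    small: totals.small.users ? totals.small.churned / totals.small.users : 0,
    large: totals.large.users ? totals.large.churned / totals.large.users : 0,
  };
}
```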

The Result

Tavour decided to tweak the feature to let users see more auto-complete suggestions without having to scroll.

“new user activation rate increased by double-digit percentage points”

The revised feature increased new user activation rate, giving them the confidence to finish rolling out this feature!

Free Beer?

Use this link to get $10 off your first order!


