Validate before launching

Wed Mar 24 2021

Building great products is not easy. Achieving product-market fit is even harder. It usually takes multiple iterations based on product feedback to get it right. And given the choice, it’s always better to gather this feedback from a small set of users rather than doing a 100% “YOLO” launch. At a minimum, you want to make sure that new features don’t harm product metrics.

Traditionally, companies collected early feedback from volunteer users with design mocks in a process known as qualitative research. This works for the most part but has a couple of drawbacks:

  1. The process is expensive and time-consuming.
  2. Asking users for feedback doesn’t always surface the right insights.

Show new features only if the user passes a feature gate

So, what do modern product teams do to validate new product features? They regularly launch and test new features in production. They do this safely with development tools called “feature gates” or “feature flags”. Using feature gates, teams can open up access to a brand-new feature for just a small portion of the user base — this could be 5% of all users, 10% of just English-speaking users, or 15% of users that have previously used a similar feature. This lets teams monitor relevant product metrics for just the set of people who have access to these features. And if those product metrics look good, then they know the feature is ready for a broader launch.
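Under the hood, a percentage-based gate like the ones described above can be as simple as deterministically bucketing each user and checking their bucket against the rollout percentage. Here is a minimal sketch of that idea — the function name, gate name, and hashing scheme are illustrative assumptions, not Statsig’s actual implementation:

```python
import hashlib

def passes_gate(user_id: str, gate_name: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into 0-99 and pass the gate
    if their bucket falls below the rollout percentage.

    Hashing on (gate_name, user_id) means the same user always gets
    the same answer for a given gate, so their experience is stable
    across sessions, while different gates bucket users independently.
    """
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Example: expose a hypothetical "new_checkout" feature to ~10% of users.
if passes_gate("user-42", "new_checkout", rollout_pct=10):
    pass  # render the new feature
else:
    pass  # render the existing experience
```

Because bucketing is deterministic, you can later raise the percentage and every user who already had the feature keeps it — only new users are added.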

This model of feature development has the additional benefit of decoupling teams from each other, so partially developed features can still go through a release cycle safely hidden behind a feature gate, without blocking other teams.

Ready to experiment with (pardon the pun) feature gates? Check out Statsig:

Try Statsig Today

Explore Statsig’s smart feature gates with built-in A/B tests, or create an account instantly and start optimizing your web and mobile applications. You can also schedule a live demo or chat with us to design a custom package for your business.


