Your team is busy shipping features, and they are keeping a close eye on product metrics. While it's easy (with Statsig) to understand the impact of individual feature launches, one question that often comes up is: What's the collective impact of multiple features?
At Facebook, every team calculated the cumulative impact of all features shipped over the last 6 months. To do that accurately, we used "Holdouts" created at the beginning of each half. A holdout is usually small (1–5%) in size and, as the name implies, holds that set of people out of any new features launched during that half.
This provides us with a baseline to measure the cumulative impact of multiple launches over 6 months. At the end of the half, we release that holdout and create a new one for the next half. Holdouts are powerful and have many other uses, including measuring long-term effects, quantifying subtle ecosystem changes, and debugging metric movements.
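The holdout mechanic described above can be sketched as a deterministic bucketing function. This is a hypothetical illustration, not Statsig's actual implementation; the function names, holdout name, and 2% size are made up for the example:

```python
import hashlib

HOLDOUT_PERCENT = 2  # e.g., a 2% global holdout (illustrative value)


def in_holdout(user_id: str, holdout_name: str = "h1_holdout") -> bool:
    """Deterministically decide whether a user is in the holdout.

    Hashing (holdout_name, user_id) yields a stable bucket in [0, 100),
    so the same user stays held out for the entire half.
    """
    digest = hashlib.sha256(f"{holdout_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < HOLDOUT_PERCENT


def should_expose(user_id: str, feature_gate_passes: bool) -> bool:
    """Holdout users never see new features, regardless of the gate result."""
    if in_holdout(user_id):
        return False
    return feature_gate_passes
```

Because assignment is a pure function of the user ID and holdout name, no state needs to be stored: every feature check for a held-out user quietly falls back to the default experience, and the remaining ~98% serve as the treatment population for measuring cumulative impact.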
Today, we're making all this available via Holdouts on Statsig. Setting up a holdout is a cinch: you pick the size and the features you want withheld from users. You can also mark a holdout as "global," which means all new features will automatically respect it. And occasionally (hopefully, sparingly) you might want to run a "back-test," which you can do by applying a holdout to an existing set of features.
Once set up, our Pulse engine will automatically compute the impact of all those features compared against the baseline. No additional configuration or code necessary.
Go ahead, try it out today! We have a free plan that allows you to get going right away without needing to talk to any sales teams.