This provides a complete view of each feature’s performance against your company’s suite of metrics. The best part: it’s all done automatically. Zero overhead, no extra steps, and no change to how long your rollout takes.
But what if you need something even more powerful? What if you want to run A/B/n experiments, avoid experimental collisions, or get a detailed report focused on validating your hypotheses? Introducing Experiments+.
Experiments+ is Statsig’s formal A/B/n testing and experimentation tool. It lets experimenters:
Set up hypotheses, define key metrics, and set a target completion date
Set audience targeting rules
Deploy multiple test groups at custom percentages
Set up layers (aka Universes), which avoid collisions by keeping experiments mutually exclusive (see the sketch after this list)
Receive a detailed report to evaluate your hypotheses against the key metrics.
Holistically understand the experiment’s impact on your company’s entire suite of metrics. This is done with our popular “Pulse” view, which surfaces primary, secondary, and ecosystem effects so you can generalize learnings into new hypotheses and ideas.
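To make the layer idea concrete, here’s a minimal sketch of how mutually exclusive allocation within a layer can work. This is an illustration of the general technique, not Statsig’s actual implementation; the layer name, experiment names, and allocations are hypothetical.

```typescript
import { createHash } from "crypto";

// Within one layer, each user hashes to a single point in [0, 1), and each
// experiment owns a disjoint slice of that range. Two experiments sharing a
// layer can therefore never both claim the same user: no collisions.

interface LayerExperiment {
  name: string;
  allocation: number; // fraction of the layer this experiment owns, e.g. 0.4 = 40%
}

// Deterministically map a (layerName, userID) pair to a number in [0, 1).
function hashToUnit(layerName: string, userID: string): number {
  const digest = createHash("sha256").update(`${layerName}:${userID}`).digest();
  return digest.readUInt32BE(0) / 0x100000000;
}

// Return the single experiment (if any) that owns this user's slice of the layer.
function assignWithinLayer(
  layerName: string,
  userID: string,
  experiments: LayerExperiment[]
): string | null {
  const point = hashToUnit(layerName, userID);
  let cursor = 0;
  for (const exp of experiments) {
    cursor += exp.allocation;
    if (point < cursor) return exp.name;
  }
  return null; // user falls in the unallocated remainder of the layer
}

// Example: two mutually exclusive checkout experiments sharing one layer.
const group = assignWithinLayer("checkout_layer", "user-123", [
  { name: "new_checkout_flow", allocation: 0.4 },
  { name: "one_click_pay", allocation: 0.4 },
]);
```

Because the hash is keyed on the layer rather than on either experiment, the same user always lands in the same slice, and the two experiments’ traffic never overlaps.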
At Statsig, we firmly believe A/B Testing doesn’t need to be difficult. In line with this philosophy, we’ve built a 3-step creation flow for Experiments+.
This is just the first version of Experiments+. We have many more exciting features in the works and would love to hear from you on what else we should be building and whether this works for you.
Want to check out Statsig? You can sign up for a free account at https://www.statsig.com. You’ll be able to use our SDK and console and start building immediately. You can also play with our demo account at https://console.statsig.com/demo, which includes Experiments+.
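If you want a feel for the SDK side first, here’s a hedged sketch of what consuming an experiment looks like from the JavaScript client SDK. The experiment name, parameter, and user are placeholders, and the exact API surface may vary by SDK and version; see the docs for the current reference.

```typescript
import statsig from "statsig-js";

async function renderCheckout(): Promise<void> {
  // Initialize with your client SDK key and the current user.
  await statsig.initialize("client-sdk-key", { userID: "user-123" });

  // Fetch the parameters for this user's assigned group in a
  // hypothetical "new_checkout_flow" experiment.
  const experiment = statsig.getExperiment("new_checkout_flow");

  // Read a group parameter, falling back to "blue" if the user
  // isn't in the experiment.
  const buttonColor = experiment.get("button_color", "blue");

  console.log(`Rendering checkout with a ${buttonColor} button`);
}
```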