As a product manager, it's important to continually evaluate the impact of the features you ship to users. By measuring the impact of every single feature, you can gain valuable insights into how your product is being used, what's working well, and what areas may need improvement.
Here are just a few of the benefits of measuring the impact of every feature you ship:
By gathering data on how your features are performing, you can make more informed decisions about which features to prioritize and how to improve existing features. This can help you avoid making costly mistakes and ensure that your product is consistently meeting the needs of your users.
By gathering feedback from users and measuring the impact of specific features, you can identify areas where the user experience can be improved. This can help you create a more intuitive and enjoyable product for your users.
By measuring the impact of your features, you can gain insight into which features are most popular and which ones are being underutilized. This information can help you create more engaging content and drive higher levels of user engagement.
By gathering data on how your features are being used, you can identify unique value propositions and differentiators for your product. This can help you stand out in a crowded market and attract more users.
Statsig offers multiple ways to measure the impact of experiments by default.
The scorecard panel displays Primary and Secondary experiment metrics for each variant. These metrics are shown in the context of the experiment's hypothesis, which is highlighted at the top of the scorecard.
By default, CUPED is applied to all scorecard metrics to reduce variance, which shrinks confidence intervals and corrects for pre-exposure bias.
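To make the adjustment concrete, here is a minimal sketch of the CUPED idea (an illustration of the technique, not Statsig's internal implementation): each user's post-experiment value is adjusted using a pre-experiment covariate, removing the variance that pre-period behavior already explains while leaving the mean lift untouched.

```python
import statistics

def cuped_adjust(post, pre):
    """Return CUPED-adjusted values: post_i - theta * (pre_i - mean(pre)).

    theta = Cov(post, pre) / Var(pre) is the coefficient that minimizes
    the variance of the adjusted metric. The mean is unchanged because
    the adjustment term has mean zero.
    """
    pre_mean = statistics.mean(pre)
    post_mean = statistics.mean(post)
    cov = sum(
        (y - post_mean) * (x - pre_mean) for y, x in zip(post, pre)
    ) / (len(pre) - 1)
    theta = cov / statistics.variance(pre)
    return [y - theta * (x - pre_mean) for y, x in zip(post, pre)]
```

The stronger the correlation between a user's pre-experiment and in-experiment values, the more variance the adjustment removes, and the tighter the resulting confidence intervals.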
The All Metrics tab shows the metric lifts across every metric in the metrics catalog.
To further adjust the results, significance levels can be tweaked in the following ways:
Apply Bonferroni Correction: This reduces the probability of false positives by dividing the significance level (alpha) by the number of test variants in the experiment.
Confidence Interval: Choose a lower confidence interval (e.g., 80%) when there's a higher tolerance for false positives and fast iteration on directional results is preferred over longer, larger experiments with greater certainty.
CUPED: Toggle CUPED on/off via the inline settings above the metric lifts. Note: this setting can only be toggled for Scorecard metrics, as CUPED is not applied to non-Scorecard metrics.
Sequential Testing: This mitigates the inflated false positive rate associated with the "peeking problem" (repeatedly checking results before an experiment reaches its target duration). Toggle Sequential Testing on/off via the inline settings above the metric lifts. Note: this setting is available only for experiments with a set target duration.
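As a sketch of the Bonferroni adjustment above (a hypothetical helper, not Statsig's API): with multiple test variants, each per-variant p-value is compared against alpha divided by the number of variants.

```python
def significant_after_bonferroni(p_values, alpha=0.05):
    """Flag each test variant's p-value against the Bonferroni-adjusted alpha.

    With k test variants, each comparison is judged at alpha / k, which
    keeps the overall chance of at least one false positive near alpha.
    """
    adjusted_alpha = alpha / len(p_values)
    return [p < adjusted_alpha for p in p_values]
```

For example, with three test variants and alpha = 0.05, each p-value is compared against roughly 0.0167 instead of 0.05, so a p-value of 0.04 no longer clears the bar.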
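The peeking problem itself is easy to demonstrate with a small A/A simulation (illustrative parameters only). When there is no real effect, testing at a fixed 95% threshold after every batch of users flags far more "significant" experiments than a single test at the end does:

```python
import random
import statistics

def aa_simulation(n_sims=200, n_looks=10, n_per_look=20, z_crit=1.96, seed=7):
    """Simulate A/A experiments; return (false positive rate with peeking,
    false positive rate with a single final test).

    At each look, a new batch of users is added to both groups and a
    two-sample z-test is run. "Peeking" counts an experiment as a hit if
    ANY look crossed the threshold; the honest rate uses only the last look.
    """
    rng = random.Random(seed)
    peek_hits = final_hits = 0
    for _ in range(n_sims):
        a, b = [], []
        peeked_sig = final_sig = False
        for _ in range(n_looks):
            a += [rng.gauss(0, 1) for _ in range(n_per_look)]
            b += [rng.gauss(0, 1) for _ in range(n_per_look)]
            se = (statistics.variance(a) / len(a)
                  + statistics.variance(b) / len(b)) ** 0.5
            final_sig = abs(statistics.mean(a) - statistics.mean(b)) / se > z_crit
            peeked_sig = peeked_sig or final_sig
        peek_hits += peeked_sig
        final_hits += final_sig
    return peek_hits / n_sims, final_hits / n_sims
```

The single-final-test rate lands near the nominal 5%, while the peeked rate is several times higher. Sequential testing widens the interim thresholds so that repeated looks stay honest.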
Measuring the impact of every single feature you ship to users is a crucial part of the product development process. By gathering data and feedback, you can make more informed decisions, improve the user experience, increase user engagement, and better differentiate your product.
If you're a product manager, make sure to incorporate impact measurement into your workflow and start reaping the benefits.