Today, we’re starting to roll out a set of improvements to the Power Analysis Calculator. Here's what's changing:
Improved UX:
The new Power Analysis Calculator is a full-blown hub for creating new power analysis calculations and for storing and looking up previous ones.
Qualifying event audience generation:
Now you can use an event as a qualifying threshold to define the audience you’d like to run a power analysis on. For example, if you’re an ecommerce company planning a checkout experiment, you could use a “tap_checkout” event to define the audience you want to calculate power for.
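Any event you already log can serve as the qualifying threshold. As a minimal sketch using the Node server SDK (the secret key, user fields, and “tap_checkout” event name are illustrative):

```typescript
import statsig from 'statsig-node';

// Sketch only: the key and user fields below are placeholders.
await statsig.initialize('server-secret-key');

const user = { userID: 'user-123' };

// Events logged like this can be used as the qualifying
// threshold when defining a power-analysis audience.
statsig.logEvent(user, 'tap_checkout');
```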
Past analyses:
We’ve introduced a new “Past Analyses” tab in the Power Analysis Calculator, where all previous calculations will live. You can rename these analyses for easy lookup and collaboration, view the results inline, and adjust parameters like MDE and target allocation without submitting a new calculation. Each past analysis has a Share Link for easy sharing with your team!
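For intuition into what each calculation is doing under the hood, here’s the textbook two-sample formula that power analyses of this kind are built on (a standard sketch, not necessarily our exact implementation). With MDE $\delta$, metric variance $\sigma^2$, significance level $\alpha$, and target power $1-\beta$, the required sample size per group is:

$$ n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2\,\sigma^2}{\delta^2} $$

Halving the MDE roughly quadruples the required sample size, which is why adjusting MDE inline on a past analysis can change the answer so dramatically.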
Today we’re rolling out changes that will make it easier to discover and consume product insights from Dashboards. Now you can take advantage of all the power of our main analytics feature, Metrics Explorer, in Dashboards as well.
Sometimes, after drilling into your metrics in Metrics Explorer, you may want to save and share the results of those enlightening moments and consolidate them in one view. Now you can save charts from Metrics Explorer onto an existing or new dashboard. From any chart in Metrics Explorer, click the “…” menu and select Export to Dashboard.
Insight, curiosity, and inspiration don't stop once a Dashboard has been created. Starting today, you can continue analyzing the data from any of your newly saved charts, straight from a Dashboard. Charts saved to a dashboard from Metrics Explorer offer the same power and flexibility as the ones in Metrics Explorer. You can modify queries to examine things from a different perspective and, if desired, update the existing chart or create a new one.
Today, we’re excited to start rolling out an easy way to export a shareable summary of your experiment via PDF.
To export a PDF of your experiment summary, go to the Pulse tab in your finished experiment, tap Export, and select Experiment Summary PDF. Your PDF summary will contain:
Key Setup Information, such as hypothesis, actual vs. target duration, primary/secondary metrics, experiment variants (with group descriptions and images), etc.
Results Overview, such as a snapshot of your experiment’s Pulse results, experiment settings (CUPED enabled, etc.), and granular metric-by-metric raw stats
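For readers unfamiliar with CUPED (one of the settings captured above): it’s a variance-reduction technique that adjusts each unit’s metric value using pre-experiment data. In its standard textbook form, with $Y$ the in-experiment value and $X$ the pre-experiment value:

$$ Y_{\text{cuped}} = Y - \theta\,(X - \bar{X}), \qquad \theta = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)} $$

The adjusted metric has the same mean but lower variance, tightening the confidence intervals you’ll see in the raw stats.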
In the future, we’ll also add a surface where experiment decision-makers can write free-form recap text, giving future viewers of the experiment additional helpful context.
Stay tuned for continued updates on this surface! And in the meantime, let us know if you have any feedback or feature requests.
We’re starting to roll out a new way to visualize your Pulse metric lifts inline within your Scorecard.
You can now choose to visualize your Pulse results in “Cumulative” view (the default), “Daily” view, or “Days Since Exposure” view, and switch between them via a new toggle inline within your Pulse view controls.
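Roughly, the views differ in which data feed the lift estimate at each point: “Cumulative” uses all data from experiment start through day $t$, “Daily” uses day $t$’s data alone, and “Days Since Exposure” aligns each user on the $n$-th day after their own first exposure. In each case, the plotted relative lift is the familiar

$$ \text{lift} = \frac{\bar{Y}_{\text{test}} - \bar{Y}_{\text{control}}}{\bar{Y}_{\text{control}}} $$

computed over the window the view selects.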
Check it out and let us know what you think, or read more about Pulse in our docs.
To help keep your Metrics Catalog streamlined and current, we’re launching automated metric archival. Any metric that has been inactive for the last 60 days will be automatically scheduled for archival, with the option for metric owners to extend the schedule or mark the metric as permanent.
Experimentation best practice dictates that an experiment should have a highly targeted set of metrics that you’re actively trying to move, along with a broader swath of metrics you’re monitoring to ensure you don’t regress.
Today, we’re adapting our Scorecard to reflect this best practice and putting in place some smart limits: a maximum of 10 Primary Metrics and 40 Secondary Metrics. Coming soon, Enterprise customers will be able to specify an even tighter limit on Scorecard metrics via their Org Settings if desired.
One bonus implication of these limits is that we’re auto-expanding tagged metric groups, making it even easier to see (and manage) all the individual metrics being added to your Scorecard when you add a metric tag.
Let us know if you have any feedback or questions on this change!
We've just started rolling out Experiment Policy controls to customers with Enterprise contracts. Configure good defaults for experiment settings like Bayesian vs. Frequentist statistics and Confidence Intervals (or optionally even enforce them). Find it under Organization Settings → Settings → Experiment Settings.
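As a refresher on the frequentist setting: a two-sided confidence interval on an estimated lift $\hat{\Delta}$ takes the standard form

$$ \hat{\Delta} \pm z_{1-\alpha/2}\,\widehat{\mathrm{SE}}(\hat{\Delta}) $$

so an org-level default effectively pins down $\alpha$ (e.g., $\alpha = 0.05$ for 95% intervals) across every team’s experiments.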
Quickly see where in your source code a feature gate or experiment is referenced to get context for how it’s being used. Simply enable GitHub Code References to see this light up!
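Code References surfaces SDK call sites. As a minimal sketch using the Node server SDK (the key and the “new_checkout_flow” gate name are illustrative), a check like this is the kind of reference that will now show up:

```typescript
import statsig from 'statsig-node';

// Sketch only: the key and gate name are placeholders.
await statsig.initialize('server-secret-key');

const user = { userID: 'user-123' };

// Call sites like this checkGate reference are what
// GitHub Code References surfaces in the console.
if (await statsig.checkGate(user, 'new_checkout_flow')) {
  // New code path behind the gate.
} else {
  // Existing behavior.
}
```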
Last week, we launched a refreshed version of the Experiment Setup checklist to make it easy for anyone on your team to configure experiments quickly and correctly in Statsig. In the new checklist, you’ll see:
Two top-line guides, “Set up your Experiment” & “Test your Experiment” - skip straight to testing if you’re a pro, or get more help with setup if you’re newer to running experiments on Statsig.
Ability to test experiment setup in a specific environment - turn your experiment on in lower environments to verify it’s working as expected before going live in Production (see the sketch after this list).
The same Overrides controls - leverage ID or Gate Overrides to test your experiment setup for a specific user or segment of users in any configured environment.
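As referenced above, here’s a minimal sketch of pointing the Node server SDK at a lower environment via its initialization options (the tier name must match an environment configured in your project; “staging” is illustrative):

```typescript
import statsig from 'statsig-node';

// Sketch: initialize against a lower environment so gate and
// experiment checks resolve with that environment's rules.
await statsig.initialize('server-secret-key', {
  environment: { tier: 'staging' },
});
```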
We’d love to hear feedback as you and your teams get up and running with the new checklist!
You've told us you want more trustworthy experiments, not just more experiments. We’re making Hypothesis and Primary Metrics required fields on experiments. Enterprise customers will soon be able to define experiment settings as policy.