(Coming soon to Statsig Analytics)
Retention Analysis helps you drive product adoption by showing you how often users return to your product after taking a specific action. People using Metrics Explorer this week will be opted into the beta early!
We're excited to extend our ability to serve experiments at the edge with our new Fastly integration. Developers can now render web pages without added latency or flicker by putting flag evaluation and experiment assignment as close to their users as possible. We're using Fastly Config Stores to light up this feature. See docs for Fastly (or Cloudflare and Vercel).
We now support Bayesian Analysis for Experiments. You can turn this on by selecting the option under Experiment Setup / Advanced Settings and view your results through the Bayesian lens, including statistics like Expected Loss and Chance to Beat.
You'll be able to access this through the Experiment Setup / Advanced Settings tab. This is a philosophically different framework from standard A/B testing based on frequentist analysis, and there are many nuances to using it. For more information, please see the documentation here.
Related: Try the Statsig Bayesian A/B testing calculator.
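To build intuition for what these statistics mean, here is a minimal Monte Carlo sketch of Chance to Beat and Expected Loss for a conversion-rate experiment, assuming Beta(1, 1) priors on Bernoulli conversion rates. The function name and prior choice are illustrative assumptions, not Statsig's actual implementation:

```python
import random

def bayesian_ab(conv_a, n_a, conv_b, n_b, samples=100_000, seed=42):
    """Monte Carlo estimate of Chance to Beat and Expected Loss for
    treatment B vs. control A, with Beta(1, 1) priors (illustrative only)."""
    rng = random.Random(seed)
    beats = 0
    loss = 0.0
    for _ in range(samples):
        # Posterior conversion rates: Beta(successes + 1, failures + 1)
        p_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        p_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if p_b > p_a:
            beats += 1
        # Loss you'd incur by shipping B in the worlds where A is better
        loss += max(p_a - p_b, 0.0)
    return beats / samples, loss / samples

# 12% control vs. 15% treatment conversion, 1000 users per arm
chance, exp_loss = bayesian_ab(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
```

Chance to Beat is the posterior probability that the treatment outperforms control; Expected Loss is how much conversion you expect to give up if you ship the treatment and it turns out to be worse.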
We just shipped bar charts in our analytics product! This lets you slice and dice metrics into easy-to-understand visuals that highlight trends, comparisons, and distributions. You can group by or filter using properties like device type, operating system, country, or even custom properties.
We just launched a Warehouse Native version of Statsig - it runs directly on data in your warehouse. This is optimized for data teams who value quick iteration, governance, and the ability to avoid duplicating core business metrics in multiple systems. Learn more...
We just shipped an Enterprise feature that lets you customize the roles you use to assign permissions in Statsig. You can now create new roles beyond Admin, Member, and Read-Only and choose which permissions each role has. Common use cases include creating a Metrics Admin role or a Data Warehouse Admin role (for Statsig Warehouse Native).
Enterprise customers can find this under Project Settings -> Basic Settings -> Roles & Access Control.
We're excited to share a limited beta of Metrics Explorer: an analytics surface with powerful slicing for metrics. Break out a metric by device type, country, or user tier. Explode a ratio metric and see how the numerator and denominator have moved.
Get data you can trust and the insights you need to take action and drive growth. Find this under Metrics -> Explore.
The Users tab enables you to diagnose issues for specific users by helping answer questions like "Which experiment group was this user in?" or "When did the user first see this feature?" We've just upgraded the backend for this - lookups now take ~5 seconds instead of ~10 minutes.
We've just started rolling out the ability to apply targeting to Holdouts. Holdouts work by "holding back" one set of users from testing and comparing their metrics with those of normal users. Statsig now lets you apply a Feature Gate to your Holdout. For example, if you wanted an iOS-only Holdout, you could apply a Feature Gate that passes only iOS users.
Holdouts are the gold standard for measuring the cumulative impact of experiments you ship. (Learn more)
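Conceptually, a targeted holdout checks the targeting gate first and only then applies the deterministic holdback bucket, so untargeted users are never held back. A minimal sketch of that logic, with all names and the bucketing scheme hypothetical rather than Statsig's implementation:

```python
import hashlib

def in_holdout(user, holdout_name, targeting_gate, holdback_pct=0.05):
    """Sketch of targeted-holdout logic (hypothetical, not Statsig's code):
    a user is held back only if they pass the targeting gate AND their
    deterministic hash bucket falls below the holdback percentage."""
    if not targeting_gate(user):
        return False  # user isn't targeted, so they are never held back
    # Deterministic bucketing: hash holdout name + userID into [0, 1)
    digest = hashlib.sha256(f"{holdout_name}:{user['userID']}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < holdback_pct

# An iOS-only Holdout: the gate passes only iOS users
ios_gate = lambda user: user.get("os") == "iOS"
in_holdout({"userID": "u1", "os": "Android"}, "ios_holdout", ios_gate)  # False: fails the gate
```

Hashing the holdout name together with the user ID keeps assignment stable across sessions while staying independent of other experiments' buckets.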
As teams have grown their Statsig usage, so has old experiment clutter. A few months back we launched a suite of tooling to manage the lifecycle of your feature flags, and today we're rolling out automated clean-up logic for old experiments as well.
Starting this week, Statsig will set a default Pulse Results compute window of 90 days for all new experiments, after which Pulse Results will stop being computed. Please note this only applies to experiments, not feature gates, holdouts, or any other config types.
You will be able to extend this window at the individual experiment level as you approach the 90-day cap, and user assignment will not be impacted even if results stop being computed. Read more in our docs.
In the coming days, owners of impacted experiments will receive an email notification and have 14 days to extend the Results compute window if they wish. As always, don't hesitate to reach out if you have any questions. Our hope is that this both cleans up your Console and saves teams money long-term!