We're excited to announce a powerful new addition to Statsig's feature management capabilities: the Cross-Environment Feature Gate View. This new view provides DevOps teams, SREs, and Release Managers with unprecedented visibility into feature gate states across all environments from a single, unified interface.
Comprehensive grid view showing all feature gates and their states across Dev, Staging, and Production environments
At-a-glance status indicators and gate checks for quick state verification
Simplified Operations: Eliminate the need to navigate between different environments to check gate states
Enhanced Release Management: Quickly verify feature gate configurations across your deployment pipeline
Improved Collaboration: Give platform teams and operations staff the high-level view they need for effective feature management
Risk Reduction: Easily spot inconsistencies in gate states across environments before they become significant issues
You can turn this view on by clicking the toggle in the top right of the feature gates list page. Ready to get started? Let us know if you have any feedback on this feature.
We’re reaching out to give you a heads-up about an important change we are making to the auto-generated event_dau metric for Cloud customers in the Statsig Console.
Note: Customers on Statsig Warehouse Native will not be impacted.
In two weeks, from Wednesday, October 16, 2024 onwards, we plan to stop auto-generating new event_dau metrics for incoming events in Statsig. We will continue to auto-generate an event_count metric for each logged event, as we do today.
Any existing event_dau metrics that have been used in a gate, experiment, dashboard, or other Custom Metrics will NOT be affected by this change.
Existing event_dau metrics that have been archived or have not been used in another config will NO LONGER exist in the project. See ‘Next steps’ below if you want to retain the unused metrics.
Going forward, new event_dau metrics will need to be created manually as a Custom Metric. See this guide to learn how to create a DAU metric.
We will be making this change on October 16, 2024. If you have any questions or concerns, please don’t hesitate to reach out!
Historically, we have automatically generated an event_count and an event_dau metric for every incoming event into Statsig. After working closely with hundreds of customers, we have seen that auto-generating two metrics for every event leads to confusion and clutter inside Statsig projects. The proposed change will lead to a cleaner Metrics Catalog and faster Console performance, while still retaining your ability to create event_dau metrics for the events you care about most.
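To make the distinction concrete, here is a minimal sketch (not Statsig's internal pipeline) of what the two metric types represent, using a hypothetical in-memory event log: event_count tallies every logged event, while event_dau counts distinct active users per day.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw event log: (user_id, event_name, day)
events = [
    ("u1", "add_to_cart", date(2024, 10, 1)),
    ("u1", "add_to_cart", date(2024, 10, 1)),  # same user, second event
    ("u2", "add_to_cart", date(2024, 10, 1)),
    ("u1", "add_to_cart", date(2024, 10, 2)),
]

event_count = defaultdict(int)  # total events per (event, day)
dau_users = defaultdict(set)    # distinct users per (event, day)

for user_id, name, day in events:
    event_count[(name, day)] += 1
    dau_users[(name, day)].add(user_id)

# event_dau is the number of distinct users, not the number of events
event_dau = {key: len(users) for key, users in dau_users.items()}

print(event_count[("add_to_cart", date(2024, 10, 1))])  # 3 events
print(event_dau[("add_to_cart", date(2024, 10, 1))])    # 2 distinct users
```

This is why the two metrics can diverge: repeat events from the same user inflate event_count but not event_dau.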
If you wish to keep any unused event_dau metrics going forward, you can earmark them by performing any of the actions below:
Adding a Tag (RECOMMENDED)
Adding a description
Referencing in a gate/experiment/dashboard
These actions will mark your unused metrics as active, signaling to us that you don’t want them deprecated.
Funnels are a first-class citizen on Statsig WHN Experimentation. You can specify order, conversion windows, sessions, and more to build a clear picture of user journeys in your product. Now, you can also use funnels to measure “time to complete” a funnel in experiments.
These metrics, alongside the funnel completion rate, give a much richer view into what’s going on with users. For example: the success rate didn’t change, but did your changes make your signup flow faster or slower?
This is a valuable view for anyone who cares about end users completing a series of actions in a timely fashion - and it’s available to all Statsig Warehouse Native users today!
Funnel metrics are one of the most popular metric types in product analytics. They are especially helpful for measuring user journeys through a series of steps. For example, you can measure user conversion through a subscription flow (e.g. Start → Description Page → Payment Info → Confirm), or identify pain points in a registration flow (e.g. Welcome Page → Ask for Phone Number/Email → Authentication → Logged-in Page).
Statsig has had powerful funnels you can analyze for experiment impact in Warehouse Native for a while now, including session-level funnel metrics. Now, we’re rolling out even more enhancements. These include:
Configurable completion windows per step, i.e. how long each step can take to occur after the previous step
Treating the “exposure” of the experiment as the implicit start event of the funnel, meaning your reported funnel conversion rate maps to the actual rate at which people finished the funnel in your experiment
Built-in allowance for timestamp noise, which is useful to mitigate deviations in logging; this includes control over comparison type, and a configurable grace period for clock jitter
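The mechanics above can be sketched in plain Python. This is an illustrative toy, not the Warehouse Native implementation: it walks one user's sorted event stream, treats the first step as the exposure, enforces a per-step completion window, tolerates small negative delays as clock jitter, and reports how far the user got plus their time to complete. The step names and windows are hypothetical.

```python
from datetime import datetime, timedelta

def walk_funnel(events, steps, windows, grace=timedelta(seconds=5)):
    """Return (steps_reached, time_to_complete or None).

    events:  sorted list of (event_name, timestamp) for one user
    steps:   ordered funnel step names; steps[0] is the exposure
    windows: windows[i] = max delay allowed between step i-1 and step i
    grace:   allowance for clock jitter (slightly out-of-order timestamps)
    """
    idx = 0        # index of the next step to match
    start = None   # timestamp of the exposure
    prev = None    # timestamp of the last matched step
    for name, ts in events:
        if idx == len(steps):
            break
        if name != steps[idx]:
            continue
        if prev is not None:
            delay = ts - prev
            # outside this step's completion window (or too far in the past)
            if delay < -grace or delay > windows[idx]:
                continue
        if start is None:
            start = ts
        prev = ts
        idx += 1
    completed = idx == len(steps)
    return idx, (prev - start) if completed else None

# Hypothetical user journey through a signup funnel
steps = ["exposure", "signup_start", "signup_done"]
windows = [None, timedelta(hours=1), timedelta(hours=24)]
events = [
    ("exposure", datetime(2024, 10, 1, 9, 0)),
    ("signup_start", datetime(2024, 10, 1, 9, 10)),
    ("signup_done", datetime(2024, 10, 1, 10, 0)),
]
reached, ttc = walk_funnel(events, steps, windows)
print(reached, ttc)  # 3 steps reached, completed in 1:00:00
```

Aggregating `reached` across users yields the per-step conversion rates, and the distribution of `ttc` values is what the new time-to-complete metrics summarize.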
For more information about this feature, you can check the documentation. These features are available to all Statsig Warehouse Native users today.
We’re excited to announce a new feature that makes it easier to understand metrics in context. You can now view metrics broken down by (grouped-by) an event property, expressed as a percentage of the total metric value, available in both bar charts and time series line charts.
This update allows you to quickly gauge the proportionate impact of different segments or categories within your overall metrics. For instance, you can now see what percentage of total sales each product category represents over time, or what portion of total user sessions specific events constitute.
By presenting data in percentages, this feature simplifies comparative analysis and helps you focus on the relative significance of different data segments.
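A percent-of-total breakdown reduces to a simple group-by and normalization. The sketch below uses hypothetical sales events tagged with a product category, matching the example above:

```python
# Hypothetical sales events: (product_category, sale_amount)
sales = [
    ("electronics", 120.0),
    ("clothing", 80.0),
    ("electronics", 200.0),
]

# Group the metric by the chosen event property
totals = {}
for category, amount in sales:
    totals[category] = totals.get(category, 0.0) + amount

# Express each group as a percentage of the overall metric value
grand_total = sum(totals.values())
pct = {c: 100.0 * v / grand_total for c, v in totals.items()}

print(pct)  # {'electronics': 80.0, 'clothing': 20.0}
```

Computing this per time bucket instead of once over all events gives the time series variant of the same view.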
We’ve made some quality of life improvements to the Time to Convert view in Funnel charts.
We now indicate where the median time to convert falls on the distribution chart.
We also support custom configuration of the conversion time window to examine. You can now adjust our automatically configured distribution chart by defining a time window to examine, bounding it with a minimum and maximum conversion time. You can also set the granularity of your analysis by selecting an interval size.
Together, these quality-of-life improvements make it easier to understand the distribution of times it takes to convert through funnels, and to zoom in on specific areas of that distribution for more granular understanding.
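Conceptually, the bounded, interval-sized view is a histogram over conversion times. Here is a small sketch (not the chart's actual implementation) showing how a minimum/maximum bound and an interval size shape the buckets; the example times are hypothetical:

```python
def conversion_histogram(times_s, min_s, max_s, interval_s):
    """Bucket conversion times (in seconds) into fixed-size intervals
    within [min_s, max_s); times outside the bounds are dropped."""
    n_buckets = (max_s - min_s) // interval_s
    counts = [0] * n_buckets
    for t in times_s:
        if min_s <= t < max_s:
            counts[(t - min_s) // interval_s] += 1
    return counts

# Six users' conversion times in seconds; two fall outside the window
times = [5, 12, 14, 31, 95, 300]
print(conversion_histogram(times, min_s=0, max_s=60, interval_s=15))
# [3, 0, 1, 0] -> three conversions in 0-15s, one in 30-45s
```

Tightening `min_s`/`max_s` zooms in on a region of the distribution, and shrinking `interval_s` raises the granularity, which is exactly what the new controls let you do in the chart.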
You can now compare conversion funnels across different time periods. Select a specific comparison range (1, 7, or 28 days ago) and view a side-by-side comparison of each funnel step against the selected time period.
This feature allows you to observe how product changes impact user behavior over time. By comparing different periods, you can easily identify trends, assess the effectiveness of recent changes, and make data-driven decisions to improve your funnel strategy.
Time period comparisons are available in all funnel views including Conversion Rates, Time to Convert, and Conversion Rate over Time.
You can now analyze distributions for any numerical property on an event. This removes the limitation of only allowing distribution analysis on the default “Value” property. This enhancement gives you the flexibility to explore and visualize distributions across diverse numerical properties, such as session length, purchase amounts, or any numerical property associated with specific events.
This refinement allows for a comprehensive view of the distribution’s shape, going beyond specific percentiles like p90. This broader perspective is useful for identifying significant points within the distribution, helping you detect trends, pinpoint anomalies, and address potential issues more effectively.
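As a rough illustration of "shape beyond a single percentile", the standard-library sketch below computes deciles over a hypothetical session-length property; a lone p90 would hide the long tail that the full set of cut points reveals.

```python
import statistics

# Hypothetical numeric event property: session length in seconds
session_lengths = [12, 45, 60, 75, 90, 120, 180, 240, 300, 600]

# A single high percentile summarizes only one point of the distribution...
p50 = statistics.median(session_lengths)

# ...while deciles (9 cut points) sketch the whole shape, tail included
deciles = statistics.quantiles(session_lengths, n=10)

print(p50)      # 105.0
print(deciles)  # 9 cut points from the low end to the long tail
```

The same computation applies to purchase amounts or any other numeric property now supported by the distribution view.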
We're excited to launch Pause Assignment, a new decision type for experiments in Statsig. This feature allows you to halt new user enrollments in an experiment while continuing to analyze the results for previously exposed users.
Example Use-case
Pause Assignment offers flexibility in various scenarios. For instance, consider an e-commerce company running an experiment to measure the impact of discounts on repeat purchases. With a limited budget, you may need to cap the number of discount offers. However, measuring long-term effects on repeat purchases requires ongoing analysis for an extended period. Pause Assignment addresses this challenge by allowing you to stop user enrollment once you've reached your budget limit while maintaining result analysis to assess the impact on repeat purchases.
Implementation in Statsig
To implement Pause Assignment, simply select it from the Make Decision dropdown in your experiment interface. Note: this feature requires Persistent Assignment to be configured.
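The interaction between pausing and persistent assignment can be sketched with a toy class (this is illustrative only, not the Statsig SDK): previously assigned users keep their persisted variant so analysis continues, while new users are simply not enrolled once assignment is paused.

```python
import random

class PausedExperiment:
    """Toy sketch of Pause Assignment semantics (not the Statsig SDK).

    Once paused, previously assigned users keep their persisted
    variant; users never seen before are not enrolled at all."""

    def __init__(self, variants):
        self.variants = variants
        self.persisted = {}   # user_id -> variant (persistent assignment)
        self.paused = False

    def get_variant(self, user_id):
        if user_id in self.persisted:
            return self.persisted[user_id]  # sticky: analysis continues
        if self.paused:
            return None                     # no new enrollments
        variant = random.choice(self.variants)
        self.persisted[user_id] = variant
        return variant

exp = PausedExperiment(["control", "test"])
first = exp.get_variant("u1")        # enrolled before the pause
exp.paused = True                    # decision: Pause Assignment
again = exp.get_variant("u1")        # same variant as before
newcomer = exp.get_variant("u2")     # None: not enrolled
```

This is also why the feature requires Persistent Assignment: without a persisted record of who saw which variant, returning users could not be kept in their original groups after enrollment stops.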
For detailed information on Pause Assignment, please consult our Statsig Docs. We value your input and encourage you to share your feedback and suggestions for future enhancements.
Statsig users can now turn their email notifications on or off in the Statsig console settings. Simply go to the My Account page in the Project settings and update your preferences under the Notifications tab.
This is especially useful for teams who use Statsig very frequently and might want to turn off specific categories of emails to manage their inbox.
We hope that this helps you reduce clutter in your inbox while still allowing you to stay on top of important aspects of your projects in Statsig. As always, we welcome your feedback and suggestions for further improvements.