We’re excited to announce a new feature that makes it easier to understand metrics in context. You can now view metrics broken down (grouped) by an event property and expressed as a percentage of the total metric value, available in both bar charts and time series line charts.
This update allows you to quickly gauge the proportionate impact of different segments or categories within your overall metrics. For instance, you can now see what percentage of total sales each product category represents over time, or what portion of total user sessions specific events constitute.
By presenting data in percentages, this feature simplifies comparative analysis and helps you focus on the relative significance of different data segments.
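To make the arithmetic concrete, here’s a minimal sketch of the percent-of-total breakdown, using a made-up list of daily sales events with a hypothetical `category` property (illustrative only, not Statsig code):

```python
from collections import defaultdict

# Hypothetical daily sales events; `category` stands in for any event property.
events = [
    {"day": "2024-09-01", "category": "shoes", "sales": 120},
    {"day": "2024-09-01", "category": "hats", "sales": 80},
    {"day": "2024-09-02", "category": "shoes", "sales": 90},
    {"day": "2024-09-02", "category": "hats", "sales": 210},
]

# Sum the metric per (day, category) and per day overall.
per_group = defaultdict(float)
per_day = defaultdict(float)
for e in events:
    per_group[(e["day"], e["category"])] += e["sales"]
    per_day[e["day"]] += e["sales"]

# Each group's share of the day's total, as a percentage.
for (day, category), value in sorted(per_group.items()):
    share = 100 * value / per_day[day]
    print(f"{day} {category}: {share:.1f}% of total sales")
```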
We’ve made some quality of life improvements to the Time to Convert view in Funnel charts.
We now indicate where the median time to convert falls on the distribution chart.
We now support custom configuration of the conversion time window to examine. You can adjust the automatically configured distribution chart by bounding it with a minimum and maximum conversion time, and set the granularity of your analysis by selecting an interval size.
Together, these quality of life improvements make it easier to understand the distribution of times it takes to convert through funnels, and to zoom in on specific areas of that distribution for a more granular view.
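If it helps to picture what the window and interval settings control, here’s a rough sketch of the underlying bucketing, with hypothetical conversion times and made-up window values (not Statsig’s implementation):

```python
# Hypothetical conversion times (hours from funnel start to conversion).
conversion_times = [0.5, 1.2, 3.0, 7.5, 9.9, 14.0, 26.0, 30.5, 47.0, 80.0]

# User-configured window and interval size (assumed values for illustration).
min_hours, max_hours, interval_hours = 1, 48, 12

# Keep only conversions inside the window, then bucket by interval size.
buckets = {}
for t in conversion_times:
    if min_hours <= t <= max_hours:
        start = min_hours + interval_hours * ((t - min_hours) // interval_hours)
        buckets[start] = buckets.get(start, 0) + 1

for start in sorted(buckets):
    print(f"{start:.0f}-{start + interval_hours:.0f}h: {buckets[start]} conversions")
```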
We're excited to launch Stop New Assignment, a new decision type for experiments in Statsig. This feature allows you to halt new user enrollments in an experiment while continuing to analyze the results for previously exposed users.
Example Use-case
Stop New Assignment offers flexibility in various scenarios. For instance, consider an e-commerce company running an experiment to measure the impact of discounts on repeat purchases. With a limited budget, you may need to cap the number of discount offers. However, measuring long-term effects on repeat purchases requires ongoing analysis for an extended period. Stop New Assignment addresses this challenge by allowing you to stop user enrollment once you've reached your budget limit while maintaining result analysis to assess the impact on repeat purchases.
Implementation in Statsig
To implement Stop New Assignment, simply select it from the Make Decision dropdown in your experiment interface. Note: this feature requires Persistent Assignment to be configured.
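Conceptually, the behavior looks something like the sketch below. The function and storage names are hypothetical and not the Statsig SDK API; it only illustrates why Persistent Assignment is required:

```python
# Illustrative only: names and storage are hypothetical, not the Statsig SDK.
persistent_store = {}  # user_id -> previously assigned variant

def get_variant(user_id: str, stop_new_assignment: bool) -> str | None:
    # Previously exposed users keep their persisted assignment,
    # so their results continue to accrue to the analysis.
    if user_id in persistent_store:
        return persistent_store[user_id]
    # Once Stop New Assignment is in effect, new users are not enrolled.
    if stop_new_assignment:
        return None
    # Otherwise assign normally (illustrative split) and persist it.
    variant = "test" if hash(user_id) % 2 else "control"
    persistent_store[user_id] = variant
    return variant
```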
For detailed information on Stop New Assignment, please consult our Statsig Docs. We value your input and encourage you to share your feedback and suggestions for future enhancements.
Statsig users can now turn their email notifications on or off in the Statsig console settings. Simply go to the My Account page in the Project settings and update your preferences under the Notifications tab.
This is especially useful for teams who use Statsig very frequently and might want to turn off specific categories of emails to manage their inbox.
We hope this helps you reduce clutter in your inbox while still allowing you to stay on top of important aspects of your projects in Statsig. As always, we welcome your feedback and suggestions for further improvements.
A common problem in experimentation is trying to connect different user identifiers before or after some event boundary - most frequently signups. This often involves running an experiment where the unit of analysis is a logged out identifier, but the evaluation criteria for the experiment is a logged-in metric (e.g. subscription rate, or estimated lifetime value).
Statsig has upgraded our ID resolution solution to handle multiple IDs attached to one user. In addition to the strict 1-1 mapping we already support, we now offer first-touch mapping to handle 1-many, many-1, and many-many mappings between different ID types. This is extremely flexible and enables use cases like the following (a simplified sketch follows the list):
handling logins across multiple devices
mapping users to metrics from different profiles or owned entities
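Here’s a small illustration of the first-touch idea over hypothetical identity events; the field names and data are invented, and the real resolution happens in your warehouse:

```python
# Hypothetical identity events: (timestamp, anonymous_id, logged_in_user_id)
identity_events = [
    (1, "anon_A", "user_1"),
    (2, "anon_A", "user_2"),  # anon_A later maps to a second user (1-many)
    (3, "anon_B", "user_1"),  # user_1 also appears from another device (many-1)
]

# First-touch mapping: each anonymous ID resolves to the first
# logged-in ID it was ever seen with.
first_touch = {}
for _, anon_id, user_id in sorted(identity_events):
    first_touch.setdefault(anon_id, user_id)

print(first_touch)  # {'anon_A': 'user_1', 'anon_B': 'user_1'}
```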
For more information about this feature, check out the documentation. This option is available to all Statsig Warehouse Native experimenters!
We’ve introduced Global Dashboard Filters, a new feature that allows you to apply property filters across all charts on your dashboard at once. This makes it easier to scope your analysis to specific properties and other criteria, with the results reflected across every chart.
With Global Dashboard Filters, you can efficiently narrow down your analysis to explore insights from different angles, ensuring that each chart on your dashboard provides a consistent and relevant view of the data. This feature simplifies your workflow and helps you focus on the most important aspects of your analysis.
The KB acts as a searchable repository of experiment learnings across teams. It helps you find shipped, healthy experiments, gain context on past efforts, and generate ideas for new things to try.
It makes it easy for new teammates to explore and find the experiments a team ran, or to see where a topic was mentioned. Our meta-analysis tools offer more structured ways to discover and look across your experiment corpus, but when you want free-text search, the Knowledge Base has you covered.
(We’ve also made updates to the Experiment Timeline view.)
The "batting average" view lets you look at how easy or hard a metric is to move. You can filter to a set of shipped experiments and see how many experiments moved a metric by 1% vs 10%. Like with other meta-analysis views, you can filter down to a team, a tag or even if results were statistically significant.
Common ways to use this include
Sniff-testing the claim that your next experiment will move this metric by 15%.
Establishing reasonable goals based on your past ability to move this metric.
This view now features summary stats (e.g., how many experiments shipped control), so you don't have to manually tally them yourself.
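As a toy illustration of the tally this view automates, consider a hypothetical set of observed lifts on one metric across shipped experiments (made-up numbers, not Statsig output):

```python
# Hypothetical observed lifts (%) on one metric across shipped experiments.
lifts = [0.4, 1.5, -0.2, 2.8, 0.9, 12.0, 3.1, 0.1, 11.2, 0.6]

def batting_average(lifts, threshold_pct):
    """Share of experiments that moved the metric by at least the threshold."""
    hits = sum(1 for lift in lifts if lift >= threshold_pct)
    return hits / len(lifts)

print(f"moved >= 1%:  {batting_average(lifts, 1):.0%}")   # 50%
print(f"moved >= 10%: {batting_average(lifts, 10):.0%}")  # 20%
```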
We’ve expanded Metric Drilldown to provide you with more detailed options for analyzing user engagement. Now, you can easily explore how user behavior changes over daily, weekly, and monthly time frames, helping you uncover deeper insights.
Expanded Chart Granularities
To give you a more comprehensive view of your metrics, we’ve added support for Monthly Active Users (MAU) alongside the existing Daily Active Users (DAU) and Weekly Active Users (WAU). This expansion allows you to track user engagement over various time frames, making it easier to identify long-term trends or seasonal patterns.
Improved UX for Granularity Selection
We’ve also refined the UX for selecting DAU, WAU, and MAU, making it more intuitive and efficient to switch between these views. Whether you’re analyzing short-term spikes or long-term user behavior, the updated interface simplifies the process of drilling down into the details that matter most.
Weekly and Monthly Granularities in Drilldown Charts
To offer even more flexibility, we’ve introduced support for weekly and monthly granularities in metric drilldown charts. This allows you to view and compare metrics over extended periods, helping you to detect broader trends and patterns that might not be visible with daily data alone.
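For intuition on what the different granularities count, here’s a compact sketch over a hypothetical events table; the data and helper are invented for illustration:

```python
from datetime import date

# Hypothetical (user_id, event_date) pairs.
events = [
    ("u1", date(2024, 9, 2)), ("u2", date(2024, 9, 2)),
    ("u1", date(2024, 9, 3)), ("u3", date(2024, 9, 17)),
]

def active_users(events, key):
    """Count distinct users per period, where `key` maps a date to a period."""
    periods = {}
    for user_id, day in events:
        periods.setdefault(key(day), set()).add(user_id)
    return {period: len(users) for period, users in sorted(periods.items())}

dau = active_users(events, lambda d: d.isoformat())        # per day
wau = active_users(events, lambda d: d.isocalendar()[:2])  # per ISO week
mau = active_users(events, lambda d: (d.year, d.month))    # per month
print(dau, wau, mau, sep="\n")
```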
We’ve completely revamped the User Journeys experience, delivering a modern and intuitive interface that makes it easier than ever to explore and understand user behavior.
Modernized UX
The User Journeys interface has undergone a significant glow up. We’ve introduced a cleaner, more contemporary look and feel, making it easier to navigate and interact with the data. The new design not only improves usability but also enhances your overall analysis experience, allowing you to focus more on insights and less on the interface itself.
Where'd they go AND how'd they get there?
Understanding how users interact with your product is critical. Now, with support for journeys ending with an event, you can choose to analyze the paths users take from a specific starting point or the routes they follow to reach a particular destination. This flexibility allows you to pinpoint critical moments in the user experience, whether you’re interested in the lead-up to a key conversion event or the aftermath of an initial user action.
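To show the “paths into an event” idea in miniature, here’s a sketch over hypothetical per-user event sequences; the event names and depth are assumptions, not how User Journeys is implemented:

```python
# Hypothetical ordered event sequences per user.
sessions = {
    "u1": ["open_app", "search", "view_item", "purchase"],
    "u2": ["open_app", "view_item", "purchase"],
    "u3": ["open_app", "search", "exit"],
}

def paths_ending_with(sessions, target, depth=3):
    """Collect the last `depth` steps leading into the target event."""
    paths = {}
    for steps in sessions.values():
        if target in steps:
            idx = steps.index(target)
            path = tuple(steps[max(0, idx - depth):idx + 1])
            paths[path] = paths.get(path, 0) + 1
    return paths

for path, count in paths_ending_with(sessions, "purchase").items():
    print(" -> ".join(path), f"({count} users)")
```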
Sticky Hidden Events
To streamline your analysis, we’ve introduced sticky hidden events. When you hide an event to reduce noise in your journey visualizations, this preference is now preserved across your explorations. No more repeatedly decluttering your data—focus on the paths that matter without unnecessary distractions.