We really really like funnels. We think they’re great. Then we thought, what if we made them even better? So we did! Our funnels now offer richer insights, more configuration options, and are easier to understand at a glance.
Table Overhaul
We’ve overhauled the table in the main Conversion view for funnels. It now presents a detailed tabular view of the key metrics at each step of your funnel. This table is also configurable, allowing you to choose between a high-level summary or more detailed insights. You can now see exactly what’s happening at each step, with metrics like conversion rates, drop-offs, total conversions, and more, all in one place.
Conversion Rate vs. Overall Conversions
You now have flexibility in how you view your funnels. By default, the y-axis displays the conversion rate for each step. We’ve added a toggle to switch the y-axis to show the total number of conversions. The conversion rate view is ideal for comparing how different groups perform in terms of conversion rates, while the total conversions view helps you understand the overall number of users progressing through your funnel.
Information Richness
Funnels now include even more insights at a glance. Under each step, you’ll find summaries of the conversion rate, drop-off rate, total number of conversions, number of drop-offs, and median time to convert.
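To make these per-step metrics concrete, here is a small sketch of how they can be derived from raw step-completion data. The data shapes and step names are hypothetical, purely for illustration; Statsig computes these for you in the funnel table.

```python
from statistics import median

# Hypothetical per-step data: user ID -> timestamp (seconds) at which the
# user completed that step. Names and shapes are illustrative only.
steps = [
    ("Add to Cart", {"u1": 0, "u2": 5, "u3": 9, "u4": 2}),
    ("Checkout",    {"u1": 60, "u2": 95, "u3": 30}),
    ("Purchase",    {"u1": 300, "u3": 1500}),
]

def funnel_metrics(steps):
    rows = []
    for i, (name, users) in enumerate(steps):
        converted = len(users)
        prev_users = steps[i - 1][1] if i else users
        # Step conversion rate is measured against the previous step.
        rate = converted / len(prev_users)
        dropped = len(prev_users) - converted if i else 0
        # Median time to convert from the previous step.
        deltas = [t - prev_users[u] for u, t in users.items() if u in prev_users]
        rows.append({
            "step": name,
            "conversions": converted,
            "conversion_rate": round(rate, 2),
            "drop_offs": dropped,
            "median_time_s": median(deltas) if i else 0,
        })
    return rows

for row in funnel_metrics(steps):
    print(row)
```

Here each step's conversion rate is relative to the previous step; an overall rate would instead divide by the count entering the first step.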
Rename Funnel Steps
You can now rename steps in your funnel, making them more legible or descriptive when the event names aren’t clear.
We’ve completely revamped the User Journeys experience, delivering a modern and intuitive interface that makes it easier than ever to explore and understand user behavior.
Modernized UX
The User Journeys interface has undergone a significant glow-up. We’ve introduced a cleaner, more contemporary look and feel, making it easier to navigate and interact with your data. The new design not only improves usability but also enhances your overall analysis experience, allowing you to focus more on insights and less on the interface itself.
Where'd they go AND how'd they get there?
Understanding how users interact with your product is critical. Now, with support for journeys ending with an event, you can choose to analyze the paths users take from a specific starting point or the routes they follow to reach a particular destination. This flexibility allows you to pinpoint critical moments in the user experience, whether you’re interested in the lead-up to a key conversion event or the aftermath of an initial user action.
Sticky Hidden Events
To streamline your analysis, we’ve introduced sticky hidden events. When you hide an event to reduce noise in your journey visualizations, this preference is now preserved across your explorations. No more repeatedly decluttering your data—focus on the paths that matter without unnecessary distractions.
We’ve added a new feature that lets you download your entire dashboard as a PDF, making it easier to share your insights with others. Whether you’re preparing for a meeting, sharing results with your team, or simply saving a snapshot of your data, this feature provides a convenient way to package and distribute your dashboard.
With just a few clicks, you can export your dashboard, preserving all charts, metrics, and visualizations in a format that’s easy to share and view across different devices. This feature ensures that everyone stays aligned and informed, even when working offline or outside of the platform.
To download your dashboard, click the "..." button in the top right corner of your dashboard and select "Export as PDF".
We’ve introduced key enhancements to Retention charts, designed to give you more actionable insights into user behavior and long-term engagement.
Flexible Retention Definitions
Accurately tracking user retention is crucial for understanding how your product is performing over time. With our new “Return On” and “Return On or After” retention definitions, you can now better align your analysis with specific business goals. For instance, use “Return On” to measure how many users come back on a precise day, or “Return On or After” to understand longer-term retention trends.
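The difference between the two definitions can be sketched in a few lines. This is our reading of the semantics, with made-up activity data; the exact computation inside Statsig may differ in details.

```python
# Hypothetical activity log: user -> set of days (0 = first seen) with activity.
activity = {
    "u1": {0, 1, 7},
    "u2": {0, 3, 10},
    "u3": {0, 7, 20},
    "u4": {0},
}

def retention(activity, day, on_or_after=False):
    """Fraction of users active exactly on `day` ("Return On"),
    or on `day` or any later day ("Return On or After")."""
    def returned(days):
        return (max(days) >= day) if on_or_after else (day in days)
    kept = sum(1 for days in activity.values() if returned(days))
    return kept / len(activity)

print(retention(activity, 7))                    # "Return On" day 7
print(retention(activity, 7, on_or_after=True))  # "Return On or After" day 7
```

Note how u2 (active on day 10 but not day 7) counts toward "Return On or After" retention but not "Return On" retention.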
Daily, Weekly, Monthly Retention
User engagement varies depending on the nature of your product and user behavior. That’s why we now offer the ability to analyze retention on a daily, weekly, or monthly basis. Whether you need to track daily active users for a fast-paced app or monitor longer-term engagement for a subscription service, these options let you choose the most relevant time frame for your analysis.
Retention Over Time
Retention isn’t static; it evolves as your product and user base grow. With our new Retention Over Time feature, you can visualize how retention rates shift over weeks or months, helping you spot trends, seasonality, or the impact of product changes. This enables you to make data-driven decisions, whether you’re aiming to boost user retention, identify periods of churn, or validate the success of a new feature.
We’re excited to introduce the beta version of Session Analytics, available to a select group of customers, including you. This feature allows you to leverage a special “statsig::session_end” event within Metric Drilldown charts to analyze user sessions in your product.
A session is defined as a period of user activity followed by at least 30 minutes of inactivity. Each session_end event includes a property that records the session duration in seconds. With this data, you can answer key questions such as:
How many daily sessions are occurring?
What is the median (p50) session duration?
How does session duration vary across different browsers?
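The session definition above (activity bounded by 30 minutes of inactivity) can be sketched as a simple sessionization pass over one user's event timestamps. This is an illustration of the definition, not how Statsig computes the "statsig::session_end" event internally.

```python
from statistics import median

INACTIVITY_GAP_S = 30 * 60  # a session ends after 30 minutes of inactivity

def sessionize(timestamps, gap=INACTIVITY_GAP_S):
    """Split one user's event timestamps (seconds) into sessions,
    returning each session's duration in seconds."""
    if not timestamps:
        return []
    ts = sorted(timestamps)
    durations, start, last = [], ts[0], ts[0]
    for t in ts[1:]:
        if t - last > gap:
            durations.append(last - start)  # gap exceeded: close the session
            start = t
        last = t
    durations.append(last - start)  # close the final session
    return durations

# Events for 10 minutes, then a long gap, then a short second burst.
events = [0, 120, 600, 5000, 5060]
sessions = sessionize(events)
print(len(sessions), median(sessions))  # session count and p50 duration
```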
As this is a private beta release, some functionality is still under development, but we’re eager to hear your early feedback. Your insights will help us refine and improve the feature. If you would like to be added, reach out on our Slack.
We're thrilled to announce the launch of Custom Experiment Checklist, a new feature that empowers admins to tailor experimentation guidelines to their company's specific needs. This feature allows you to replace the default Statsig experiment checklist with a custom checklist that follows your internal experimentation best practices. With a custom checklist, you can:
Ensure adherence to company-specific best practices
Foster a unified experimentation culture across your organization
Increase the quality and consistency of experiments
Over the past few months, our customers expressed a desire for more flexibility in configuring experiment guidelines within Statsig. We listened, and Custom Experiment Checklist is our response to this valuable feedback. Custom Experiment Checklist is now available for all users. To get started, navigate to your Organizational settings and look for the new "Experiment Checklist" option.
We're excited to see how this feature will enhance your experimentation process. As always, we welcome your feedback and suggestions for further improvements.
We're excited to start rolling out our Product Analytics suite to Statsig Warehouse Native.
You can see the exact step in a 5-step checkout workflow where half of your users are dropping off. You can filter and slice metrics down by any property, instantly. Your Growth teams can spelunk in data and generate hypotheses to try.
All of this comes with centralized data governance in your warehouse - a single source of truth, with no data duplication or drift.
This plays well with experimentation - letting you group events or a metric by experiment groups, or even look at a few sample rows of data when investigating an issue.
Metrics Explorer is in broad beta on Statsig WHN and is free for the rest of 2024. It has been Generally Available on Statsig Cloud since March this year.
Parameter Stores allow you to think in terms of parameters - things in your app that you want to be configurable remotely. Parameters decouple your code from the configuration in the Statsig Console. This level of indirection allows you to run any set of experiments, or change gating or values on the fly, all without hardcoding an experiment name in your app.

Each parameter that you define and check can be remapped at will between Statsig entities. Create a boolean parameter to enable a feature, turn it into a Feature Gate for internal testing, then run an Experiment when your app is released, and then turn it back into a Feature Gate that ships the winning variant to the right audience - all without a code change, or a new mobile app release.
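The indirection described above can be sketched with a minimal toy store. This is illustrative only - the real Statsig SDKs expose their own parameter-store APIs - but it shows the key property: the call site asks for a named parameter, and what backs it (a static value, a gate, an experiment group) is remapped in configuration rather than in code.

```python
# A minimal sketch of the parameter-store idea (illustrative, not the
# Statsig SDK). Each parameter name maps to a resolver; swapping the
# resolver remaps the parameter without touching the call site.

class ParameterStore:
    def __init__(self, mapping):
        # mapping: parameter name -> zero-arg resolver (the indirection)
        self._mapping = mapping

    def get(self, name, default=None):
        resolver = self._mapping.get(name)
        return resolver() if resolver else default

# Today the parameter is backed by a static value ...
store = ParameterStore({"enable_checkout_v2": lambda: False})

# ... later it is remapped to a gate/experiment evaluation,
# with no change to the call site.
def in_experiment_group():
    return True  # stand-in for a Feature Gate or Experiment check

store = ParameterStore({"enable_checkout_v2": in_experiment_group})
print(store.get("enable_checkout_v2", False))  # call site unchanged
```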
Statsig will begin filtering out known bot traffic from all exposures data in the Statsig console. These web-crawling bots can sometimes inflate exposure counts but don’t represent the users you’re most often trying to measure. This should improve the accuracy of your experiments, feature gate analytics, and user tracking. Several Statsig customers have requested this feature and we’re excited it’s finally coming.
For more information about this feature, you can check out our updated exposures docs page. Bot filtering will be turned on by default for all Statsig projects, but we’re also building an opt-out setting, which will be available in the console.
This is a statistical method that reduces the probability of false positives by adjusting the significance level for multiple comparisons. It is not as extreme as a Bonferroni Correction, because instead of controlling the chance of at least one false positive (Family Wise Error Rate), this controls the expected value of false positives when the null hypothesis has been rejected (False Discovery Rate).
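The description matches the Benjamini-Hochberg procedure: sort the p-values, compare the i-th smallest against i/m times the significance level, and reject everything up to the largest rank that passes. Assuming that is the method in question, a short sketch:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a reject/keep flag per hypothesis, controlling the
    False Discovery Rate at `alpha` (Benjamini-Hochberg)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank whose p-value clears its adjusted threshold.
    cutoff = -1
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            cutoff = rank
    # Reject every hypothesis at or below that rank.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff:
            rejected[idx] = True
    return rejected

p = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6]
print(benjamini_hochberg(p, alpha=0.05))
```

With Bonferroni, each p-value would be compared to alpha/m (here 0.0083); the FDR thresholds grow with rank, so the procedure is less conservative for all but the smallest p-value.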
This is now rolling out on both Statsig Cloud and Statsig Warehouse Native.