Delta Comparison condenses two overlaid series into a single line that plots the percent change between your current and comparison periods at each timestamp, letting you see differences at a glance.
In Metric Drilldown, toggle Percent Difference to turn any chart in Comparison mode into a single line showing the percent change at each timestamp.
Scan spikes or dips instantly instead of comparing two stacked series.
Export or share the delta series just like any other chart.
In Metric Drilldown, while viewing a time series, choose Compare and select a comparison range.
Select the "%" option at the top of the chart to switch to Percent Difference mode.
The chart redraws as a single series where `delta_percent = (current - comparison) / comparison * 100` (sketched in code after these steps).
Switch back anytime to the traditional overlaid view.
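To make the math concrete, here is a minimal sketch of that transform in Python. The function name and list-based series representation are illustrative, not Statsig's implementation:

```python
# Illustrative sketch of the Percent Difference transform; names and
# data shapes here are hypothetical, not Statsig's internal code.
def delta_percent(current: list[float], comparison: list[float]) -> list[float | None]:
    """Collapse two aligned series into one percent-change series."""
    deltas = []
    for cur, base in zip(current, comparison):
        if base == 0:
            deltas.append(None)  # percent change is undefined against a zero baseline
        else:
            deltas.append((cur - base) / base * 100)
    return deltas

# Example: today's hourly signups vs. the same hours last week.
print(delta_percent([120, 80, 95], [100, 100, 100]))  # [20.0, -20.0, -5.0]
```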
Faster insights: One clear line highlights variance without visual clutter.
Sharpened focus: Positive vs. negative swings stand out immediately, making root-cause checks quicker.
Tighter reports: A single series is easier to share in dashboards and slide decks.
Try the Delta Comparison toggle on your next period-over-period chart and see the difference.
Count distinct values for any property across events or users, no longer limited to IDs.
Select Unique Values as the aggregation type in Metric Drilldown.
Answer questions like "How many different referrers drove traffic last week?" or "How many SKUs were added to carts today?"
Combine with filters and group-bys to surface granular uniqueness counts in one step.
Pick your metric or event.
In the aggregation dropdown, choose Unique Values.
Select the property whose distinct values you want counted (e.g., referrer, sku, country).
The chart returns the count of unique values for that property over the chosen time range and granularity.
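Under the hood, this amounts to a distinct count over a property. As a rough sketch of the semantics, assuming a toy list-of-dicts event log rather than Statsig's actual pipeline:

```python
# Rough sketch of the Unique Values semantics over a toy event log;
# the event structure is assumed for illustration.
events = [
    {"event": "add_to_cart", "sku": "A1", "country": "US"},
    {"event": "add_to_cart", "sku": "B2", "country": "US"},
    {"event": "add_to_cart", "sku": "A1", "country": "CA"},
]

def unique_values(events, prop):
    """Count distinct values of `prop`, skipping events that lack it."""
    return len({e[prop] for e in events if prop in e})

print(unique_values(events, "sku"))      # 2
print(unique_values(events, "country"))  # 2
```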
Broader coverage: Distinct-value analysis now works on any property, not just user_id, stable_id, or company_id.
Faster answers: Skip custom SQL or exports when you need unique counts on the fly.
Try the Unique Values option to see diversity in your data at a glance.
With Statsig's Braze integration, running experiments across your multi-channel campaigns or orchestrating your user journey just got easier!
Customers can now send exposure events from Statsig to Braze, where they can be used to assign users to Segments. This enables you to trigger custom content through feature flags or run different campaigns based on whether users are in treatment or control groups.
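Conceptually, each exposure event carries the user, the experiment, and the assigned group, which Braze can then filter on when building a Segment. The field names below are assumptions for illustration, not the integration's exact schema (the integration handles delivery for you):

```python
# Illustrative shape of a Statsig exposure event as Braze might receive it;
# all field names here are assumed, not the actual payload schema.
exposure_event = {
    "external_id": "user_123",       # the user being bucketed
    "name": "statsig_exposure",      # hypothetical event name
    "properties": {
        "experiment": "new_onboarding_flow",
        "group": "treatment",        # "treatment" or "control"
    },
}

# In Braze, a Segment filter on properties.group == "treatment" could then
# target only treatment-group users for a given campaign.
```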
Learn more here to get started!
By setting up a Decision Framework for an experiment template, teams can standardize how experiment results are interpreted and launch decisions are made.
Decision Frameworks can be added to any experiment template. Based on different scenarios of Primary and Guardrail metric outcomes, you can configure recommended actions: Roll Out Winning Group, Discuss, or Do Not Roll Out.
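As a mental model, a Decision Framework is a mapping from metric-outcome scenarios to a recommended action. Here is a hypothetical sketch of that mapping; the schema shown is illustrative, not the actual template format:

```python
# Hypothetical sketch of a Decision Framework as scenario -> action rules;
# the schema is illustrative, not Statsig's template format.
DECISION_FRAMEWORK = [
    {"primary": "positive", "guardrails": "neutral",  "action": "Roll Out Winning Group"},
    {"primary": "positive", "guardrails": "negative", "action": "Discuss"},
    {"primary": "neutral",  "guardrails": "neutral",  "action": "Discuss"},
    {"primary": "negative", "guardrails": "negative", "action": "Do Not Roll Out"},
]

def recommend(primary: str, guardrails: str) -> str:
    """Return the configured recommendation for an outcome scenario."""
    for rule in DECISION_FRAMEWORK:
        if rule["primary"] == primary and rule["guardrails"] == guardrails:
            return rule["action"]
    return "Discuss"  # assumed fallback when no rule matches

print(recommend("positive", "neutral"))  # Roll Out Winning Group
```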
Once configured, any experiment created from the corresponding template will display a recommendation message in the Make Decision button when the experiment concludes. A reviewer can be set up for cases where a shipping decision doesn't align with the recommendations configured in the Decision Framework.
Learn how to set up your Decision Framework here.
We're streamlining the Experiment Setup page layout! It now includes a TEST button containing helpful resources to validate your experiment setup.
The Advanced Settings section has also been reorganized into Analysis Configuration and Experiment Population categories, with enhanced documentation links for users wanting to learn more about each feature.
You can now break down funnel performance by two properties at once, giving you a clearer view of how different user segments progress through each step.
Apply up to two group-by properties in a funnel (e.g. country and platform)
View combined breakdowns like US / iOS, Canada / Android, etc.
Analyze performance across more specific cohorts in a single chart
Once you apply a group-by in your funnel chart, you'll now have the option to add a second property. The chart will display the top combinations of those properties, ranked by event volume in the first step of the funnel.
For example, if you group by platform and experiment variant, the chart will show funnels for the most common combinations like Android / Treatment or iOS / Control.
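The "top combinations" selection amounts to ranking (property A, property B) pairs by volume at the funnel's first step. A minimal sketch of that ranking, assuming a toy list of first-step events:

```python
from collections import Counter

# Minimal sketch of ranking group-by combinations by first-step volume;
# the event records are a toy illustration.
first_step_events = [
    {"platform": "iOS",     "variant": "Control"},
    {"platform": "Android", "variant": "Treatment"},
    {"platform": "iOS",     "variant": "Control"},
    {"platform": "Android", "variant": "Treatment"},
    {"platform": "iOS",     "variant": "Treatment"},
]

combos = Counter((e["platform"], e["variant"]) for e in first_step_events)
for (platform, variant), volume in combos.most_common(3):
    print(f"{platform} / {variant}: {volume}")
# iOS / Control: 2
# Android / Treatment: 2
# iOS / Treatment: 1
```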
With support for two group-bys, you can run more detailed comparisons without duplicating charts or manually applying filters. This is especially useful for spotting performance differences across dimensions like geography, device type, or experiment conditions, all in one view.
You can now format the y-axis in time series charts within Metric Drilldown, giving you better control over how values are displayed.
Choose from multiple y-axis formats:
Number (default)
Percentage
Time (e.g. seconds, minutes)
Decimal Bytes (e.g. kB, MB)
Bytes
Bits
Apply formatting to better match the metric you're analyzing
When editing a time series chart in Metric Drilldown, you'll see a new y-axis formatting option. Select the format that best fits your metric: for example, use percentage for conversion rates or time for session duration.
This gives you more clarity when interpreting trends, especially for metrics like load time, bandwidth, or success rates. Instead of manually translating values, you can now visualize them in context directly on the chart.
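For the two byte formats, the practical difference is the base: decimal units step by 1000 (kB, MB), while binary byte units step by 1024. A quick sketch contrasting the two conventions, assuming Bytes is the 1024-based option (this is illustrative, not the chart's actual renderer):

```python
# Quick sketch contrasting decimal (1000-based) and binary (1024-based)
# byte formatting; not the chart's actual rendering code.
def format_decimal_bytes(n: float) -> str:
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if abs(n) < 1000:
            return f"{n:.1f} {unit}"
        n /= 1000
    return f"{n:.1f} PB"

def format_binary_bytes(n: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if abs(n) < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PiB"

print(format_decimal_bytes(1_500_000))  # 1.5 MB
print(format_binary_bytes(1_500_000))   # 1.4 MiB
```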
We're adding the ability to quickly copy metrics used in one experiment to a new experiment.
It's easy to select metrics for an experiment in Statsig, and tags or templates are powerful tools to manage collections. Sometimes, though, you already set up the perfect measurement suite on another experiment and just want to copy that - and now you can!
This is especially powerful for customers using local metrics - experiment-scoped custom metric definitions - since you can copy those between experiments without needing to add them to your permanent metric catalog.
Generally, experimentalists make decisions by comparing means and using standard deviation to assess spread. There are exceptions, like percentile metrics, but the vast majority of comparisons are done this way.
It's effective, but it's also well known that means mask a lot of information. To help experimentalists on Statsig understand what's going on behind the scenes, we're adding an easy interface to dig into the distributions behind results.
Here, we can see a Pulse result showing a statistically significant lift in revenue for both of our experimental variants.
By opening the histogram view (found in the statistics details), we can easily see that this lift is mostly driven by more users moving from the lowest-spend bucket into higher buckets.
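To see how a distribution view surfaces what a mean hides, here is a toy sketch that buckets per-user revenue for two variants; the bucket edges and data are made up for illustration:

```python
from collections import Counter

# Toy sketch: bucket per-user revenue to compare variant distributions.
# Bucket edges and revenue values are made up for illustration.
BUCKETS = [0, 1, 10, 50, 100]  # bucket lower bounds, in dollars

def bucketize(revenues: list[float]) -> Counter:
    counts = Counter()
    for r in revenues:
        # Assign r to the highest lower bound it meets or exceeds.
        label = max(b for b in BUCKETS if r >= b)
        counts[label] += 1
    return counts

control   = [0, 0, 0, 5, 5, 20]
treatment = [0, 5, 5, 5, 20, 20]  # fewer users stuck at $0

print(bucketize(control))    # Counter({0: 3, 1: 2, 10: 1})
print(bucketize(treatment))  # Counter({1: 3, 10: 2, 0: 1})
```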
This is available today on Warehouse Native - and we're scoping out Statsig Cloud.
We're providing more control over when experiment results load in Warehouse Native: in addition to schedules and API-based triggers, customers can now specify which days of the week to load results, either for a given experiment or as an organizational default.
In addition, org-level presets for turbo mode and other load settings will help people keep their warehouse bill and load times slim! Read more at https://docs.statsig.com/statsig-warehouse-native/guides/costs
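As a mental model, the new controls amount to org-level defaults with per-experiment overrides, like the following hypothetical config; the keys shown are illustrative, not the actual settings schema:

```python
# Hypothetical sketch of the new load-scheduling knobs; the keys are
# illustrative, not Statsig's actual configuration schema.
org_defaults = {
    "load_days": ["Mon", "Wed", "Fri"],  # only reload results on these days
    "turbo_mode": False,                 # org-level preset affecting load cost and time
}

experiment_overrides = {
    "checkout_redesign": {"load_days": ["Mon"]},  # weekly loads are enough here
}

def effective_settings(experiment: str) -> dict:
    """Org defaults, overridden per experiment where specified."""
    return {**org_defaults, **experiment_overrides.get(experiment, {})}

print(effective_settings("checkout_redesign"))
# {'load_days': ['Mon'], 'turbo_mode': False}
```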