We've made user journeys simpler to build and easier to read. You now have more options for focusing your analysis, with clearer controls and easier-to-read charts.
What You Can Do Now
Choose between including specific events or excluding unwanted ones in your analysis
Control visualization density by setting the number of paths to show at each step
Read property names and values more easily when expanding journey sections
How It Works
Instead of just percentage thresholds, you can now specify exactly how many paths you want to see at each step. For example, show just the top 5 most common paths to keep your analysis focused. We've also improved the layout when you dig into user properties, making the information clearer at a glance.
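As a toy illustration of the count-based limit (a conceptual sketch, not Statsig's implementation), a per-step limit is just a top-N cut on path counts:

```typescript
// Toy sketch of a per-step top-N cut on journey paths
// (conceptual only - not Statsig's implementation).
type PathCount = { path: string; users: number };

function topPathsAtStep(paths: PathCount[], n: number): PathCount[] {
  // Keep the n most common paths; the rest would typically be
  // collapsed into an "other" bucket.
  return [...paths].sort((a, b) => b.users - a.users).slice(0, n);
}

// With n = 2, only the two most common paths at this step survive:
const step = [
  { path: 'home > search', users: 480 },
  { path: 'home > profile', users: 120 },
  { path: 'home > settings', users: 40 },
];
console.log(topPathsAtStep(step, 2));
```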
Impact on Your Analysis
These changes make it easier to find what matters:
Build your analysis the way that works best for you - by including or excluding events - so your charts stay free of noise and clutter
Keep your charts focused by showing only the most important paths
Spot patterns quickly with clearer property labels
Ready to try it out? Head to the User Journey section to see these changes.
We've improved dashboard building by letting you add widgets exactly where you want them. Instead of always adding to the bottom of your dashboard, you can now insert new visualizations between existing widgets.
What You Can Do Now
Add widgets directly between existing charts
Fill empty spaces on the dashboard with new widgets
Insert new charts right where you're working
How It Works
Click the "+" button between any widgets to add a new visualization right at that spot. You can add widgets between widgets on the same row, or create a new row between sets of widgets. You can also add widgets directly in empty spaces on a dashboard .
Impact on Your Analysis
This improvement makes dashboard building faster and more natural:
Build dashboards in fewer steps
Organize visualizations logically as you work
Spend less time arranging, more time analyzing
Build your dashboard more efficiently by adding widgets exactly where you need them.
Now you can limit your metric analysis to users who saw specific versions of features (through feature gates) or were part of specific experiment variants. This lets you measure exactly the user experience you want to understand.
What You Can Do Now
Filter metrics by feature gate rules to analyze specific user experiences
Focus on experiment variants to understand test results
Track metrics for specific feature versions and variants
How It Works
Select your metric, then use the filters to choose any feature gate rule or experiment variant. Your analysis will show data only for users who match those criteria.
Impact on Your Analysis
This improvement helps you analyze feature performance more precisely:
Compare metrics for different feature versions
Validate experiment results for specific features
Debug issues by isolating exact user experiences
Deeper Understanding Through Sessions
Combine these filters with Session Replays and Session Streams to see exactly how users experience your features. Watch full session recordings or follow event streams to understand the context around specific feature interactions.
Good experimenters who bring others along are often good storytellers. We've heard from many of our customers that they want to give experiments a narrative arc - setting up context, diving into conflict, and offering resolution - as a narrative layer around our out-of-the-box Scorecards.
Today we are adding the ability to include live, interactive widgets in Experiment Summary that allow experimenters to craft this narrative for their audience.
A few examples of how customers are using this:
Embedding the results of a Custom Explore Query to add context from relevant deep dives (e.g. analyzing the experiment time period while removing outliers like Black Friday, or specific days that had data blips)
Adding rich charts such as the conversion funnel being experimented on to contextualize experiment Scorecards
Breaking out metrics to match the mental model of good AI experimentation, for example:
Direct model measurement: latency, tokens produced, cost
Direct user feedback: explicit (thumbs up/thumbs down) and implicit (dwell time, regeneration requests)
Longer-term effects: user activity level over the next week, retention, subscriptions
The original experiment Scorecard is still available as part of Experiment Summary.
Our eventual goal is to have all the context around an experiment - including experiment design, critique, Q&A, and readouts - centralized in one place. This is the first step toward that.
This feature is rolling out gradually. To embed rich charts in your Experiment Summary, go to the "Summary" tab in your experiment, tap into "Experiment Notes" and select the "+ Add Charts" CTA on the right. Happy storytelling!
We have enhanced CUPED for ratio metrics by jointly adjusting both group means and variances. Previously, only variances were adjusted. With this update, group means are also adjusted using the covariate-deducted values, ensuring more accurate results. This improvement reduces the false discovery rate, making CUPED even more reliable.
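For intuition, here is the standard CUPED adjustment as a generic sketch - this is the textbook (non-ratio) form, not Statsig's exact ratio-metric formulation, which is covered in the Docs:

```latex
% Generic CUPED sketch (textbook form, not Statsig's exact
% ratio-metric implementation). Y_i is the in-experiment metric,
% X_i a pre-experiment covariate correlated with it.
Y_i^{\mathrm{cuped}} = Y_i - \theta \, (X_i - \bar{X}),
\qquad
\theta = \frac{\operatorname{Cov}(Y, X)}{\operatorname{Var}(X)}

% The adjusted mean is unbiased, with variance shrunk by the
% squared correlation between Y and X:
\operatorname{Var}(\bar{Y}^{\mathrm{cuped}})
  = (1 - \rho_{XY}^{2}) \, \operatorname{Var}(\bar{Y})
```

For a ratio metric, both the numerator and denominator enter the estimate, which is why jointly adjusting the group means alongside the variances, as this update does, yields more accurate results.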
For more details, head to our Docs!
Starting today (Dec 9th, 2024), Statsig will begin supporting auto-resolution of country metadata from IPv6 addresses in our client SDKs. Statsig has historically provided and used our own package (IP3Country) to resolve IP addresses to country codes; as IPv6 traffic continues to grow, we've decided to stop relying on it. Going forward, we'll leverage our load balancer's country resolution, which will provide more accurate IPv4 support and full IPv6 support.
Visit our docs on the transition for more info, or reach out to us in Slack!
Athena is now a supported data warehouse for Warehouse Native Experimentation! We've unlocked the same capabilities available on Snowflake, BigQuery, Redshift, and Databricks for Athena users too.
You can reuse existing events and metrics from Athena in experimental analysis. You can also use typical Statsig features, including Incremental Reloads (to manage costs), Power Analysis using historic data, and Entity Properties to join categorical information about users into your analysis across experiments!
Your data tells clearer stories when you can see how different groups stack up. We've added horizontal bar charts in Metrics Explorer to make these comparisons easy and intuitive.
What You Can Do Now
Compare metrics across any business dimension (time periods, segments, categories)
Track usage patterns by user type, location, or platform
Spot trends in any grouped data, from engagement to transactions
How It Works
Apply a Group By to your data, and select the horizontal bar chart option. The chart automatically adjusts to show your groups clearly.
Impact on Your Analysis
This visualization makes it simple to:
Identify your top and bottom performers instantly
Handle longer label names easily
Share clear comparisons in your reports
Start turning your grouped data into visual insights today.
Statsig's JavaScript SDK now has out-of-the-box support for Angular with the release of our Angular bindings. While we've long helped customers set up Angular in our Slack Community, this release includes bindings and suggested patterns for both App Config and App Module integrations. Along with some Angular-specific features like directives, it supports all of the bells and whistles you expect from Statsig's SDKs: feature flags, experiments, event logging, and more. Try it out, and let us know if you find any wrinkles as we roll out support. Get started with our Angular docs or simply run:
npm install @statsig/angular-bindings
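For a flavor of the App Module pattern, here's a minimal sketch. Note that `StatsigModule`, its `forRoot` options, and the placeholder key and user below are hypothetical illustrations rather than the bindings' confirmed API - the Angular docs linked above have the real setup:

```typescript
// Hypothetical App Module integration sketch; the actual module and
// provider names in @statsig/angular-bindings may differ - see the
// Angular docs for the real API.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
// Hypothetical import, for illustration only:
import { StatsigModule } from '@statsig/angular-bindings';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    // Initialize the SDK once at startup; both values are placeholders.
    StatsigModule.forRoot({
      sdkKey: 'client-YOUR_SDK_KEY',
      user: { userID: 'a-user-id' },
    }),
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

Once initialized, components can check feature flags, read experiments, and log events just as with Statsig's other JavaScript SDKs.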
You can now connect multiple Snowflake warehouses to your account, enabling better query performance by automatically distributing query jobs across all available warehouses. To set it up, head over to Settings > Project > Data Connection and select Set up additional Warehouses.
When you schedule multiple experiments to be loaded at the same time, Statsig will distribute these queries across the provided warehouses to reduce contention. Spreading queries across compute clusters can often be faster and cheaper(!) when contention would otherwise cause queries to back up.
We have a beta of intelligent Autoscaling in the works. Reach out in Slack if you'd like to try it!