People running many experiments use Statsig's Meta-analysis tools. When they want to explore this dataset more directly, they've had access to it via the Console API. We're now also adding the ability for Statsig to push the final results visualized in the Console into your warehouse.
This feature is gradually rolling out across Statsig Warehouse Native customers.
This feature automatically flags when sub-populations respond very differently to an experiment. This is sometimes referred to as Heterogeneous Effect Detection or Segments of Interest.
Overall results for an experiment can look "normal" even when there's a bug that causes crashes only on Firefox, or when a feature performs very poorly only for new users. You can now configure these "Segments of Interest", and Statsig will automatically analyze and flag experiments where we detect differential impact. You will also be able to see the analysis that resulted in the flag.
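To build intuition for what this check does, here's a minimal sketch (illustrative only, not Statsig's actual algorithm): given each segment's estimated treatment effect and its standard error, compare every segment against the pooled effect and flag segments whose deviation is statistically large.

```python
from math import sqrt

def flag_segments(segments, z_threshold=3.0):
    """Flag segments whose treatment effect diverges from the pooled effect.

    `segments` maps a segment name to (effect, standard_error) for that
    segment's treatment-vs-control delta. Illustrative sketch only --
    not Statsig's actual implementation.
    """
    # Pooled (inverse-variance weighted) overall effect.
    weights = {name: 1.0 / se**2 for name, (_, se) in segments.items()}
    total_w = sum(weights.values())
    overall = sum(w * segments[name][0] for name, w in weights.items()) / total_w
    overall_se = sqrt(1.0 / total_w)

    flagged = []
    for name, (eff, se) in segments.items():
        z = (eff - overall) / sqrt(se**2 + overall_se**2)
        if abs(z) > z_threshold:
            flagged.append((name, round(z, 2)))
    return flagged

segments = {
    "chrome":  (0.020, 0.004),
    "safari":  (0.018, 0.005),
    "firefox": (-0.080, 0.010),  # a Firefox-only bug drags this segment down
}
print(flag_segments(segments))  # only "firefox" stands out
```

Here the Firefox segment's large negative effect is flagged even though the pooled result still looks mildly positive, which is exactly the failure mode described above.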
Learn how this works, or how to turn it on, in the docs. This feature shipped on Statsig Warehouse Native last summer and is now available on Statsig Cloud too!
Server Core is a full rewrite of our Server SDKs with a shared, performance-focused Rust library at the core - and bindings for each language you'd want to use it in. Today, we're launching Java Server Core.
Server Core leverages Rust's natural speed, but also benefits from being a single place where we can optimize our server SDKs' performance. Our initial benchmarking suggests that Server Core can evaluate configs 5-10x faster than our native SDKs.
You can install Java Core today by adding the necessary packages to your build.gradle - see our docs to get started. In the coming months, we expect to ship Server Core across Node, Python, PHP, and more!
We shipped Interaction Detection on Statsig Warehouse Native last year. We've now brought it to Statsig Cloud customers too.
When you run overlapping experiments, it's possible for them to interfere with each other. Interaction Detection lets you pick two experiments and evaluate them for interaction. This helps you understand whether people exposed to both experiments behave very differently from people exposed to only one of them.
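One common way to frame this check (a sketch under assumed inputs, not Statsig's actual computation) is a difference-in-differences on the four exposure cells: does experiment A's lift change for users who are also in experiment B's test group?

```python
from math import sqrt

def interaction_z(cells):
    """Two-experiment interaction check on per-cell summary stats.

    `cells` maps (exp_a_group, exp_b_group) -> (mean, standard_error).
    The interaction term is a difference-in-differences: how much
    experiment A's lift changes when users are also in B's test group.
    Illustrative sketch only.
    """
    def a_lift(b_group):
        m_t, se_t = cells[("test", b_group)]
        m_c, se_c = cells[("control", b_group)]
        return m_t - m_c, sqrt(se_t**2 + se_c**2)

    lift_in_b_test, se1 = a_lift("test")
    lift_in_b_control, se2 = a_lift("control")
    interaction = lift_in_b_test - lift_in_b_control
    return interaction / sqrt(se1**2 + se2**2)

cells = {
    ("test", "test"):       (10.9, 0.05),  # A's lift shrinks when B is also on
    ("test", "control"):    (11.0, 0.05),
    ("control", "test"):    (10.2, 0.05),
    ("control", "control"): (10.0, 0.05),
}
z = interaction_z(cells)
print(f"interaction z = {z:.2f}")  # a large |z| suggests the experiments interact
```

In this toy data, A lifts the metric by 1.0 among B's control users but only 0.7 among B's test users, and the resulting z-score of about -3 would warrant a closer look.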
Our general guidance is to run overlapping experiments: people seeing your landing page should experience multiple experiments at the same time. Our experience is echoed by other avid experimenters (link). Teams expecting to run conflicting experiments are typically aware of this and can avoid conflicts by making experiments mutually exclusive via Layers (also referred to as Universes).
Read more in docs or the blog post.
We've completely redesigned our Console Settings to streamline how you manage your Statsig projects. The new architecture brings three major improvements:
Intuitive Navigation: Navigate effortlessly with our new left sidebar, putting every setting at your fingertips. No more hunting through nested menus.
Product-Centric Organization: Each Statsig product—Experimentation, Feature Gates, and Product Analytics—now has its dedicated configuration hub. Tailor each product's settings to your exact needs, all from one central location.
Hierarchical Control: Configure settings at Team, Project, or Organization level, ensuring consistency while maintaining flexibility. Perfect for enterprises managing multiple teams and projects.
This redesign is live now. Log in to explore the new experience.
Statsig lets you slice results by user properties. Common examples include breaking down results by a user's home country, subscription status, or engagement level.
This typically requires running a custom query (from the Explore tab). You can now configure these properties to be pre-computed on the experiment setup page, under the advanced settings. It's also possible to configure team-level defaults for this - or pre-configure it on an experiment template.
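Conceptually, a pre-computed property breakdown is just a group-by over exposure data. A toy sketch of the idea (illustrative data and names only; in practice Statsig computes these slices for you in the warehouse):

```python
from collections import defaultdict
from statistics import mean

# Toy exposure rows: (user_country, experiment_group, metric_value).
rows = [
    ("US", "test", 12.0), ("US", "control", 10.0),
    ("US", "test", 13.0), ("US", "control", 11.0),
    ("CA", "test", 8.0),  ("CA", "control", 9.0),
    ("CA", "test", 7.5),  ("CA", "control", 9.5),
]

# Group metric values by property value, then by experiment group.
slices = defaultdict(lambda: defaultdict(list))
for country, group, value in rows:
    slices[country][group].append(value)

for country, groups in sorted(slices.items()):
    lift = mean(groups["test"]) - mean(groups["control"])
    print(f"{country}: lift = {lift:+.2f}")
```

The payoff of pre-computing this is that the per-property lifts are ready in scorecard view, instead of requiring a custom Explore query each time.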
This is now rolling out on Statsig Warehouse Native. See docs.
Our data science team noticed rising support tickets around warehouse data not appearing correctly in Statsig. Investigation revealed most issues stemmed from unclear error feedback and limited self-service capabilities, leading to unnecessary delays and support escalations.
Today, we're launching two key improvements:
Error Visibility: Consolidated error table across all data sources with clear, actionable messages and troubleshooting steps. A single view replaces the previous table-by-table navigation.
Self-Service Resolution: Step-by-step diagnosis flow lets users verify their connection setup and SQL queries, with immediate data re-ingestion once fixed.
These updates aim to help you discover any integration issues with your data warehouse connection and fix those issues without needing to depend on our internal support. We'll continue expanding these capabilities based on your feedback.
Today, we're introducing the ability to filter by User dimensions in Custom Metrics on Statsig Cloud. Previously, you could filter by the Value of a metric, as well as any custom Metadata. Now, you can filter by both Statsig-populated User Object attributes ("User" → "Property") as well as any Custom user attributes you've set in your User Object ("User" → "Custom Property").
We've made user journeys simpler to build and easier to read. You now have more options for focusing your analysis, with clearer controls and easier-to-read charts.
What You Can Do Now
Choose between including specific events or excluding unwanted ones in your analysis
Control visualization density by setting the number of paths to show at each step
When expanding journey sections, property names and values are easier to read and understand
How It Works
Instead of just percentage thresholds, you can now specify exactly how many paths you want to see at each step. For example, show just the top 5 most common paths to keep your analysis focused. We've also improved the layout when you dig into user properties, making the information clearer at a glance.
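The "top N paths per step" idea boils down to counting path prefixes and keeping the most common ones. A small sketch with made-up session data (names are illustrative, not a Statsig API):

```python
from collections import Counter

# Toy event sequences, one list per user session.
sessions = [
    ["home", "search", "product", "cart"],
    ["home", "search", "product"],
    ["home", "product", "cart"],
    ["home", "search", "checkout"],
    ["home", "promo"],
]

def top_paths_at_step(sessions, step, n=5):
    """Most common path prefixes of length `step`, capped at `n`."""
    counts = Counter(tuple(s[:step]) for s in sessions if len(s) >= step)
    return counts.most_common(n)

# Keep only the top 2 two-event paths; rarer paths drop out of the chart.
print(top_paths_at_step(sessions, step=2, n=2))
```

Capping by count rather than by percentage threshold gives you direct control over chart density: the top 5 paths stay readable regardless of how long the tail of rare paths is.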
Impact on Your Analysis
These changes make it easier to find what matters:
Build your analysis the way that works best for you - by including or excluding events. This makes it easier to build charts that aren't noisy or cluttered.
Keep your charts focused by showing only the most important paths
Spot patterns quickly with clearer property labels
Ready to try it out? Head to the User Journey section to see these changes.
We've improved dashboard building by letting you add widgets exactly where you want them. Instead of always adding to the bottom of your dashboard, you can now insert new visualizations between existing widgets.
What You Can Do Now
Add widgets directly between existing charts
Add widgets to smartly fill empty spaces on the dashboard
Insert new charts right where you're working
How It Works
Click the "+" button between any widgets to add a new visualization right at that spot. You can add widgets between widgets on the same row, or create a new row between sets of widgets. You can also add widgets directly in empty spaces on a dashboard.
Impact on Your Analysis
This improvement makes dashboard building faster and more natural:
Build dashboards in fewer steps
Organize visualizations logically as you work
Spend less time arranging, more time analyzing
Build your dashboard more efficiently by adding widgets exactly where you need them.