We’ve expanded Global Dashboard Filters so you can filter a dashboard using ID List based Segments. This is an additional filter option. Existing global filtering (property filters and other criteria) continues to work the same way.
Apply a Segment filter at the dashboard level where the segment is defined by an ID list
Combine an ID List Segment filter with your existing global property filters
Keep every chart on the dashboard scoped to the same audience without reapplying filters chart-by-chart
Create or select an ID List based Segment (a segment defined by a fixed list of IDs, like user IDs, account IDs, or device IDs)
In your dashboard’s Global Dashboard Filters, choose that segment as a filter
The segment filter applies to all charts on the dashboard, alongside any other global filters you’ve set
Example: set the global filter to the segment “Enterprise accounts (ID list)” to ensure every chart reflects only those accounts.
Use dashboards to answer questions about a specific, known set of users or accounts (for example, a customer list, beta cohort, or internal test group)
Reduce chart-to-chart inconsistencies caused by manually recreating the same ID-based audience filter
Iterate faster when you need to swap the audience across the entire dashboard (for example, compare two different customer lists)
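Conceptually, an ID List based Segment acts as a set-membership filter applied to every chart's underlying data. A minimal sketch of that idea (the segment contents and event rows below are hypothetical, not a Statsig API):

```python
# Sketch: an ID List segment is a fixed set of unit IDs; a dashboard-level
# segment filter keeps only rows whose unit ID is in that set.
enterprise_accounts = {"acct_102", "acct_311", "acct_847"}  # "Enterprise accounts (ID list)"

events = [
    {"account_id": "acct_102", "event": "checkout"},
    {"account_id": "acct_555", "event": "checkout"},
    {"account_id": "acct_311", "event": "page_view"},
]

# Every chart on the dashboard sees only the filtered rows.
scoped = [e for e in events if e["account_id"] in enterprise_accounts]
print(len(scoped))  # 2 rows pass the segment filter
```

Because the filter is applied once at the dashboard level, every chart is guaranteed to be scoped to the same audience.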
We’ve upgraded authentication for the Statsig MCP Server to support OAuth — supplementing the previous key-based authentication flow. This brings a more secure, scalable, and standards-aligned approach to connecting your MCP tooling with Statsig.
OAuth makes it easier and safer for teams to integrate the Statsig MCP Server across their development workflows and tools. It enables clearer permission boundaries, smoother onboarding, persistent sessions, and better alignment with modern enterprise security practices.
Follow the updated setup instructions in our docs to enable OAuth for your MCP Server connection. No changes are required to your existing Statsig feature flags or experimentation setup — just update your authentication method to take advantage of the new flow.
Learn more in the docs here: https://docs.statsig.com/integrations/mcp
Traces show how a single request moves through your system, one step at a time. Statsig now lets you explore those spans alongside experiments and feature rollouts.
Traces Explorer = faster root cause. You no longer need to jump between tools to understand slow or failing requests. Statsig brings traces, logs, metrics, and alerting into one place for critical launches.
Bring observability into your product decision loop.
Traces Explorer (Beta) is available for Cloud customers. View trace setup instructions here.

Reverse Power Analysis shows the smallest effect size your test had the power to detect, based on the actual sample size and the standard error of the control group (it does not depend on the observed effect size of the experiment). It’s a great tool to apply retrospectively near the end of an experiment to make more informed decisions.

Helps you reflect on what your experiment actually could detect
Helps you with iteration decisions like extending the experiment, rerunning with a larger sample, or re-evaluating the experiment design
Toggle it on/off anytime in Settings → Product Configuration → Experimentation. To learn more about Reverse Power see our docs here.
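As a sketch of the statistics behind this (the exact formula Statsig uses may differ), the minimum detectable effect for a two-sided test follows from the standard error of the difference and the chosen significance level and power, with no dependence on the observed effect:

```python
from statistics import NormalDist

def minimum_detectable_effect(se_diff: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest true effect a test could detect with the given power.

    se_diff: standard error of the difference between test and control means.
    Note the result depends on sample size only through se_diff, never on
    the observed effect size.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = z.inv_cdf(power)           # desired power
    return (z_alpha + z_power) * se_diff

# Example: with a standard error of 0.01, a standard 5% alpha / 80% power
# test can only detect effects of roughly 2.8 standard errors.
print(round(minimum_detectable_effect(0.01), 4))  # → 0.028
```

This is why Reverse Power is most useful retrospectively: once the experiment has run, se_diff is known, and the MDE tells you what the test realistically could have caught.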
For our Cloud users, we’ve added support for viewing cumulative exposure and metric results for experiments enabled in lower environments, helping you catch issues early and ship experiments with confidence.

This makes it easier to verify that:
Users are being bucketed correctly
Your metrics are logging as expected
To enable this, go to the experiment setup page, select Enable for Environments, run tests in your lower environment, and view real exposure and metric data before launching to production.
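To illustrate the kind of check this enables, here is a simplified sketch of deterministic hash-based bucketing (a stand-in for Statsig's actual assignment logic, which differs in detail): the same unit ID always resolves to the same group, which is exactly what the lower-environment exposures let you confirm before launch.

```python
import hashlib

def assign_group(experiment_salt: str, unit_id: str, groups=("Control", "Test")) -> str:
    # Simplified deterministic bucketing: hash the salt plus unit ID into
    # [0, 10000) and split the range evenly across groups.
    digest = hashlib.sha256(f"{experiment_salt}.{unit_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000
    return groups[bucket * len(groups) // 10000]

# Determinism is what makes exposure checks meaningful: the same user
# always lands in the same group, in staging and in production alike.
assert assign_group("exp_checkout_v2", "user_42") == assign_group("exp_checkout_v2", "user_42")
```

If a user shows up in exposures for the wrong group (or in both groups) in a lower environment, that points to an integration bug worth fixing before the production launch.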
For more information on the new lower environment testing features see the docs here.
Statsig WHN users can now quickly check the freshness of their Experiment Metrics Source data directly from the experiment results page. It displays the most recent timestamp found in a Metric Source during results computation, helping users identify potential issues in their data pipeline that could delay results calculation.

Debugging usually means retracing your steps. You try something, flip tabs, run a new search, and then can’t remember the exact query that actually pointed you in the right direction. It breaks your flow and slows everything down.
Recent Queries fixes that. Logs Explorer now shows your last five searches right in the search menu. No more digging through tabs or guessing what you ran before. Jump back into earlier investigation paths, compare results in seconds, and pick up your workflow without losing momentum.
A small change with a big payoff for staying in the zone.
Topline Alerts work best when they track the exact metric your team relies on. Until now, Topline Alerts could only be created using events, which meant rebuilding the metric logic each time. This increases the risk of duplicated definitions, drift over time, and confusion about which definition is the “real” one.
This update fixes that. You can now create Topline Alerts using existing metrics in your Metrics Catalog. The alert uses the metric’s definition exactly as it is, so everything stays consistent and aligned. No rewriting. No re-creating logic. No guessing which version is correct.
It keeps your metrics clean, keeps ownership clear, and removes the risk of definitions drifting as teams grow.
Today we're excited to announce Table View for experiment results. It's perfect for users who want to examine their experiment results in greater detail while keeping everything consolidated in a single view.
With Table View, users can now see Control Mean, Test Mean, and P-value—in addition to the data points available in the default cumulative view.
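As a reminder of how those columns relate (a textbook two-sample z-test sketch, not necessarily Statsig's exact computation), the p-value follows from the control mean, the test mean, and their standard errors:

```python
from statistics import NormalDist

def two_sample_p_value(control_mean: float, test_mean: float,
                       se_control: float, se_test: float) -> float:
    """Two-sided p-value for the difference between test and control means,
    using a normal approximation (appropriate for large samples)."""
    se_diff = (se_control ** 2 + se_test ** 2) ** 0.5
    z = abs(test_mean - control_mean) / se_diff
    return 2 * (1 - NormalDist().cdf(z))

# Hypothetical values: a 2-point lift with standard errors of 0.8 each
# yields a p-value a bit below 0.08 — suggestive but not significant at 5%.
print(round(two_sample_p_value(10.0, 12.0, 0.8, 0.8), 4))
```

Seeing the means and p-value side by side in Table View makes this relationship easy to sanity-check at a glance.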

Managing a large metric catalog can be challenging—especially when many metrics are slight variants of a single source metric. Ensuring that updates to the main metric cascade correctly to all of its variants can be difficult and error-prone.
For our WHN customers, we’re excited to introduce Metric Families. Users can now create Child Metrics (variants) from a Parent Metric, ensuring that any changes to the Parent automatically flow down to its Children. This makes it easier to manage large catalogs while giving teams the flexibility to create and maintain metric variants without losing consistency.
The feature is available for Sum and Count metric types. Follow this link to learn more.
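Conceptually, a Child Metric reuses its Parent's definition and layers a variant on top, so edits to the Parent propagate automatically. A minimal sketch of that relationship (the class and field names here are illustrative, not Statsig's data model):

```python
from dataclasses import dataclass

@dataclass
class ParentMetric:
    name: str
    metric_type: str   # "sum" or "count" — the supported types
    event: str

@dataclass
class ChildMetric:
    parent: ParentMetric
    variant_filter: dict   # e.g. restrict the parent to one platform

    @property
    def definition(self) -> dict:
        # The child reads through to the parent, so a parent edit is
        # immediately reflected in every child variant.
        return {"metric_type": self.parent.metric_type,
                "event": self.parent.event,
                "filter": self.variant_filter}

revenue = ParentMetric("revenue", "sum", "purchase")
ios_revenue = ChildMetric(revenue, {"platform": "ios"})

revenue.event = "purchase_v2"           # update the parent...
print(ios_revenue.definition["event"])  # ...and the child follows: purchase_v2
```

The key design property is that the child stores only its delta (the variant filter), never a copy of the parent's definition, so there is nothing to drift out of sync.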
