You can now control exactly who gets recorded in Session Replay using new global and conditional targeting options. This gives you fine-grained control over session capture so you can focus on users who’ve opted in, track behavior behind feature gates, or limit recordings to specific actions or test groups.
Set a Global Targeting Gate
Define a global gate that determines which users are eligible for session recording. Only users who pass this gate can be recorded. This is useful for:
Recording only users who’ve opted in
Limiting capture to internal users
Scoping recordings to users who meet complex targeting conditions
Set a Global Sampling Rate
Define a global sample rate that determines what percent of sessions will be recorded by default.
This is useful if you want to record a baseline percentage of all user sessions.
Conditional triggers are not affected by the global sample rate; they are only subject to the global targeting gate.
Add Conditional Triggers with Custom Sampling Rates
You can define multiple recording triggers, each with its own sampling rate:
Event-based triggers: Start recording when a user triggers a specific event. Filtering on the event’s "Value" property is supported today, with more flexible event property filtering coming soon. This is great for focusing recordings on specific product scenarios.
Experiment-based triggers: Record users exposed to an experiment. You can narrow this to a specific variant to compare behavior across groups.
Feature gate–based triggers: Record users who pass a gate. Helpful for understanding how people interact with newly released features.
You can configure a Global Targeting Gate in your Session Replay settings. If set, only users who pass this gate will be considered for any recording.
Conditional triggers sit on top of this and define when recording should begin. For example, you might record 100% of users who trigger a critical event, 10% of users in a specific experiment variant, and 0% of users who don’t pass the global gate.
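To make that interplay concrete, here is a minimal TypeScript sketch of the decision logic. Every name in it (SessionContext, RecordingConfig, shouldRecordSession, and so on) is hypothetical; the real evaluation happens inside Session Replay based on your console settings.

```typescript
// Hypothetical illustration of the recording decision described above. None of
// these names come from the Statsig SDK; they only model the configured logic.

interface SessionContext {
  userID: string;
  // ...user and custom properties evaluated by the gate and triggers
}

interface Trigger {
  kind: 'event' | 'experiment' | 'gate';
  matches: (session: SessionContext) => boolean; // e.g. "user fired checkout_error"
  sampleRate: number;                            // per-trigger sampling, 0.0 - 1.0
}

interface RecordingConfig {
  globalGate?: (session: SessionContext) => boolean; // global targeting gate
  globalSampleRate: number;                          // default share of sessions recorded
  triggers: Trigger[];
}

function shouldRecordSession(session: SessionContext, config: RecordingConfig): boolean {
  // 1. The global targeting gate filters everyone, including trigger-based recording.
  if (config.globalGate && !config.globalGate(session)) return false;

  // 2. Conditional triggers apply their own sampling rates, independent of the
  //    global sample rate.
  for (const trigger of config.triggers) {
    if (trigger.matches(session) && Math.random() < trigger.sampleRate) return true;
  }

  // 3. Otherwise, fall back to the global sample rate.
  return Math.random() < config.globalSampleRate;
}
```

In the example above, the critical-event trigger would carry a sampleRate of 1.0, the experiment-variant trigger 0.1, and anyone failing the global gate is never recorded.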
These controls let you capture the sessions that matter most while reducing noise. You can zero in on specific behaviors, test results, or user groups, stay compliant with data collection policies, and get more value out of your allotted replay quota by avoiding unnecessary recordings.
Focus your recordings where they count.
Dashboards can now be automatically refreshed on a schedule with results cached for faster loading and a snappier experience.
Set a refresh frequency for each dashboard (e.g. hourly, daily)
Automatically cache results in the background
Open dashboards with results already loaded, no wait time
You can configure a refresh interval in the dashboard settings. To do this:
Navigate to your dashboard and click the settings cog ⚙️.
Scroll to "Schedule Dashboard Refresh" and set the interval.
Click Save.
Once set, queries for that dashboard will run on the specified schedule and store the results. When someone opens the dashboard, they’ll see the most recent data instantly, instead of triggering fresh queries.
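Conceptually, the behavior resembles the following sketch (illustrative TypeScript only, not Statsig's implementation):

```typescript
// Illustrative only: models "refresh on a schedule, serve viewers from cache".
const cache = new Map<string, { results: unknown; refreshedAt: Date }>();

// Runs in the background on the configured interval (e.g. hourly or daily).
async function scheduledRefresh(dashboardId: string, runQueries: () => Promise<unknown>) {
  cache.set(dashboardId, { results: await runQueries(), refreshedAt: new Date() });
}

// Viewers get the most recently cached results instantly instead of
// triggering fresh warehouse queries on load.
function openDashboard(dashboardId: string) {
  return cache.get(dashboardId);
}
```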
This feature is only available for customers using Warehouse Native, where queries run directly against your warehouse.
Dashboards load faster and stay up to date without manual effort. This is especially helpful for shared dashboards or recurring check-ins, where you want fresh data ready without delay.
You can now treat experiment exposure events like any other event in Drilldown (time-series) and Funnel analyses. Exposure events include properties such as the assigned group, as well as user properties logged with the exposure. Currently, only a user's first exposure to an experiment is shown.
Pick exposure events in Drilldown charts to track how many users saw each variant over time.
Add exposure as the first step of a funnel to measure post-exposure conversion paths.
Group or filter by exposure properties, for example, break down results by variant, region, or device.
Overlay exposure counts with key metrics in Drilldown to check whether metric changes align with rollout timing.
Exposure logging
The first time a user is bucketed into an experiment, an exposure event is recorded with contextual properties.
Event selection
In both Drilldown and Funnel charts, exposure events appear in the same event picker you already use.
Property handling
Any custom fields travel with exposures, enabling the same group-by and filter controls available for other events.
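As a concrete illustration of the exposure-logging step, here is a minimal sketch using the @statsig/js-client SDK. Exact method and property names can vary by SDK and version, so treat it as an approximation and check your SDK's reference.

```typescript
import { StatsigClient } from '@statsig/js-client';

const client = new StatsigClient('client-sdk-key', {
  userID: 'user-123',
  custom: { region: 'EMEA' }, // user properties that travel with the exposure
});
await client.initializeAsync();

// Checking the experiment logs an exposure event the first time this user is
// bucketed, including the assigned group and the user properties above.
const experiment = client.getExperiment('new_checkout_flow');
if (experiment.get('use_new_flow', false)) {
  // render the new checkout flow
}

// That exposure event is what now appears in the Drilldown and Funnel event
// pickers, so you can group or filter by group, region, device, and so on.
```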
Drilldown
Validate rollout health by confirming traffic splits and ramp curves over calendar time.
Catch logging issues early—spikes, gaps, or duplicates stand out immediately.
Align timing with metrics by viewing exposure and conversion lines on one chart.
Funnels
Measure post-exposure journeys starting the moment users see a variant.
Pinpoint variant-specific drop-offs by breaking down each step.
Ensure clean attribution because exposure proves the user entered the test.
Segment by exposure fields (e.g., region or device) to uncover cohort-level insights.
This feature is now available on Statsig Cloud and coming soon to Warehouse Native. Give it a try the next time you validate an experiment. Seeing exposure data side-by-side with core metrics speeds up debugging and sharpens your reads on variant performance.
Chart Annotations put experiment, gate, and config updates directly on your metric timeline. You see exactly when each change landed in your chart. No more hunting through logs or history.
To get started, open Metrics Explorer or Dashboards and toggle on "Show Annotations". Use the filter bar to pick which event markers you want to display. Your charts update with markers at the precise points of change.
Chart Annotations give instant context for every trend. Try it out today!
Log Explorer lets you diagnose issues quickly alongside your Statsig data. No more juggling tools or context switching.
Metrics point you to a change. Logs reveal the root cause.
Open any log entry to get started. Our point-and-click UI makes it easy for anyone to zero in on things like timestamp, service name, or metadata value. When you need more control, write queries from scratch using our flexible search.
Built-in OpenTelemetry support gets you up and running with minimal effort. No extra instrumentation required.
Try Log Explorer today!
You can now opt in to using Fieller Intervals when calculating % lift confidence intervals. They are a more accurate alternative to the Delta Method for this calculation.
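For context, the textbook form of Fieller's interval for the ratio of two independent group means is sketched below; the % lift interval follows from it. This is the standard formulation, not necessarily the exact implementation Statsig uses.

With treatment and control means $\bar{x}_T$ and $\bar{x}_C$, variances of those means $s_T^2$ and $s_C^2$, and normal critical value $z$, the confidence set for the ratio $\rho = \mu_T / \mu_C$ is the set of $\rho$ satisfying

$(\bar{x}_T - \rho\,\bar{x}_C)^2 \le z^2\,(s_T^2 + \rho^2 s_C^2)$

with endpoints

$\rho_{\pm} = \dfrac{\bar{x}_T\,\bar{x}_C \pm \sqrt{\bar{x}_T^2\,\bar{x}_C^2 - \left(\bar{x}_C^2 - z^2 s_C^2\right)\left(\bar{x}_T^2 - z^2 s_T^2\right)}}{\bar{x}_C^2 - z^2 s_C^2}$

The % lift interval is then $[\rho_- - 1,\ \rho_+ - 1]$, whose endpoints are generally not symmetric around the point estimate.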
Because Fieller Intervals are asymmetric, the scorecard display will look slightly different when this option is enabled:
You can set this up in your Experimentation Settings at the Organization Level.
If you opt in, historical and ongoing experiments will not have their methodology changed midway through; only experiments created after you opt in are affected.
Learn more about Fieller Intervals here!
The new Table View makes it easier to compare how different groups perform across multiple metrics and time periods, all in a single table. Each metric becomes a column, and each group (based on your group-by selection) becomes a row. No need to flip between charts or tabs.
What You Can Do Now:
Compare multiple metrics side by side across user or event groups
View how the same group performs across different time periods
Add group-bys to see per-group metric values in one view
How It Works:
Select metrics to display as columns
Add a group-by to generate one row per group value
Toggle time comparisons to populate the table with values from both current and past periods
Impact on Your Analysis:
Quickly spot which segments are over- or under-performing across several metrics
Easily assess how group performance changes over time
Simplifies complex comparisons that previously required multiple charts
Use Data Table View when you want a clear, compact summary of group-level performance across metrics and time.
Manage your Statsig configurations in the same programs that provision your cloud infrastructure.
With the Pulumi Statsig Provider, everything ships through a single version-controlled, reviewable workflow. This unifies progressive delivery with infrastructure as code. You get safer rollouts, automated drift detection, and built-in observability across infrastructure and product logic.
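For a sense of what this looks like, here is a hedged TypeScript sketch of a Pulumi program. The package import and the resource and property names (statsig.Gate, isEnabled, rules, and so on) are assumptions modeled on the Terraform provider's statsig_gate resource; check the Pulumi Statsig provider docs for the exact names.

```typescript
// Assumed package and resource names, modeled on the Terraform provider's
// statsig_gate resource; check the Pulumi Statsig provider docs for the real ones.
import * as statsig from '@statsig/pulumi-statsig';

// Hypothetical: a feature gate provisioned alongside your infrastructure, so gate
// changes ship through the same version-controlled, reviewed workflow.
const checkoutGate = new statsig.Gate('new-checkout-gate', {
  name: 'new_checkout_flow',
  description: 'Gates the rewritten checkout flow',
  isEnabled: true,
  rules: [
    {
      name: 'internal users',
      passPercentage: 100,
      conditions: [
        { type: 'email', operator: 'str_contains_any', targetValue: ['@yourcompany.com'] },
      ],
    },
  ],
});

// Useful for wiring the gate name into other resources or stack outputs.
export const gateName = checkoutGate.name;
```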
Visit our docs to get started. Or check us out in the Pulumi docs.
Statsig now hosts an MCP (Model Context Protocol) Server. This acts as a bridge between AI applications (clients) and Statsig: it is essentially a smart adapter that translates AI requests into commands that Statsig can understand.
For example, you can connect it in Cursor and ask it in plain English to:
Make changes to your app to put features behind gates in Statsig
Instrument your app to log user interaction events - which can then be analyzed in Statsig
Perform operations like removing unused gates from your codebase - with Cursor directly pulling context from your Statsig project
You can connect it with Claude, and then ask questions based on data from Statsig:
Which experiments have been abandoned?
What are some suggestions for new growth experiments I can run?
Learn more here.
Geotest Experiments are now available to all our Warehouse Native customers, unlocking experimentation where traditional A/B testing doesn’t work, such as marketing campaigns in which users cannot be reliably split into control and treatment groups.
With Statsig’s Geotesting, you can measure marketing incrementality against the core business metrics already in your warehouse. Using best-in-industry Synthetic Control methodology, Statsig makes it easy for every team to design and run statistically rigorous tests using simple geographical controls like postal codes and DMAs.
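At a high level, Synthetic Control works as follows (a standard formulation of the method, not a description of Statsig's exact implementation). The treated region's counterfactual is modeled as a weighted combination of untreated regions,

$\hat{Y}_{\text{treated}}(t) = \sum_j w_j\, Y_j(t)$,

with weights $w_j \ge 0$, $\sum_j w_j = 1$, chosen to minimize the pre-campaign prediction error $\sum_{t < t_0} \big( Y_{\text{treated}}(t) - \sum_j w_j Y_j(t) \big)^2$. The estimated incremental effect of the campaign is the gap between observed and synthetic outcomes over the test window, $\sum_{t \ge t_0} \big( Y_{\text{treated}}(t) - \hat{Y}_{\text{treated}}(t) \big)$.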
Visit our docs to learn more and get started!