You can now treat Feature Gate exposure events like any other event in Metric Drilldown and Funnel analyses. This capability is available on both Cloud and Warehouse Native and currently includes first exposures for each user.
Use Feature Gate exposure events in Drilldown charts to track exposure trends over time
Add a gate exposure as the first step of a funnel to measure post-exposure conversion paths
Group or filter by gate-related properties, such as gate name, pass/fail result, or environment
Compare user behavior across gate conditions to understand the impact of gated rollouts
Exposure logging: When a user is evaluated against a gate, a first-exposure event is recorded with relevant gate properties and user context.
Event selection: In both Drilldown and Funnel charts, gate exposures appear in the same event picker you already use for other events.
Property handling: Exposure events include gate metadata and user properties, enabling the same group-by and filter controls available for other event types.
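For context on where these events come from: an exposure is logged whenever your app evaluates a gate. Here's a minimal sketch using the Statsig JS client (the SDK key, user, and gate name are hypothetical):

```typescript
import { StatsigClient } from "@statsig/js-client";

// Hypothetical SDK key and user, for illustration only.
const client = new StatsigClient("client-YOUR_SDK_KEY", { userID: "user-123" });
await client.initializeAsync();

// Evaluating the gate records an exposure event with gate metadata attached;
// the first exposure per user is what appears in Drilldown and Funnel analyses.
const passed = client.checkGate("new_checkout_flow");
```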
Drilldown
Validate rollout health by visualizing exposure volume and distribution over time
Debug exposure logging by spotting spikes, drops, or unexpected gaps
Align exposure activity with key metrics to confirm rollout timing and behavior
Funnels
Measure user journeys starting from a user’s first gate evaluation
Identify conversion differences between users who passed or failed a gate
Attribute downstream changes to specific gated experiences
This brings Feature Gate exposure analysis into Metrics Explorer, helping you debug, validate, and measure the real-world effects of gated rollouts across both Cloud and Warehouse-Native environments.
After 4.5 years on our previous codebase and infrastructure, we're excited to announce the second iteration of the Statsig Documentation!
Docs v2 comes with a vastly updated UI, including revamped code blocks, tabs, and dropdowns, plus tidied-up navigation and page structure.
Docs v2 packs new features that make it easy to pull Statsig's docs into your LLMs' context, get answers with AI, and more. We also have a brand-new API playground for our Console API and HTTP API, making it easy to grab code snippets in your language of choice or get a sample cURL.
As always, our docs remain open source and free for the community to contribute to. We make our docs better every day; if you find anything that needs some love, let us know in Slack!

Statsig Autocapture allows you to track events on your website, such as page views, clicks, and scroll depth, with just one line of code. Today, we are excited to announce four major updates that help you measure user behavior and site performance with more context.
🔗 To start using Autocapture, refer to the setup guide in our docs.
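If you haven't set it up yet, the bootstrap looks roughly like this. This is a sketch assuming the @statsig/js-client and @statsig/web-analytics packages; the setup guide has the exact, current snippet:

```typescript
import { StatsigClient } from "@statsig/js-client";
import { StatsigAutoCapturePlugin } from "@statsig/web-analytics";

// Hypothetical SDK key and user. The plugin wires up page view, click, form,
// and performance capture automatically once the client initializes.
const client = new StatsigClient(
  "client-YOUR_SDK_KEY",
  { userID: "user-123" },
  { plugins: [new StatsigAutoCapturePlugin()] }
);
await client.initializeAsync();
```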
Autocapture now tracks additional user actions, including:
Form changes
Clicks across more element types
Text input
Rage clicks (repeated clicks in frustration)
Dead clicks (clicks that lead to no action or broken links)
Copy/paste actions
UTM tags are now automatically captured, allowing teams to understand where traffic is coming from. Captured parameters include source, campaign, medium, content, and term.
You can now track key performance metrics directly through Autocapture:
Cumulative Layout Shift (CLS)
First Contentful Paint (FCP)
Largest Contentful Paint (LCP)
Time to First Byte (TTFB)
A refreshed Web Analytics Dashboard is now generated when you set up Autocapture. It provides visibility into traffic trends, channel breakdowns, visitor demographics, and performance metrics. Each chart supports custom filtering and segmentation.

Following two previous iterations under the "Sidecar" title, Statsig's third-generation visual editor is here, simply called the Statsig Visual Editor. Like Sidecar before it, the Visual Editor lets you start experiments without writing code while still using Statsig's powerful Stats Engine to get results faster. The Visual Editor experience is centered in the Statsig console (rather than a Chrome extension), meaning your Visual Editor experiments sit alongside your product experiments in the console. Beyond the in-console experience, the Statsig Visual Editor is designed to be vastly simpler to use than previous iterations, with fewer hiccups on the way from idea to experiment.
The Editor is in open beta; get started by reading the docs or by choosing the "Visual Editor" experiment type when creating an experiment in the console!
You can now add rich text widgets to any dashboard. This is a new option in addition to existing header text widgets, which remain for simple dividers.
Write context directly on dashboards with formatted text
Use headings, bold, italics, lists, and links
Format with Markdown or the widget’s built-in controls
On a dashboard, add a widget and choose Rich Text.
Enter your content.
Format using Markdown or the toolbar, then save.
Make dashboards self-explanatory with metric definitions, scope, and caveats next to the charts
Reduce back-and-forth by capturing conclusions, decisions, and next steps inline after reviews
Speed up onboarding by explaining how to read the dashboard and why certain cuts or filters are used
Link out to specs, tickets, or experiments so readers can get more context without leaving the page
Available now on all dashboards. Try adding a Rich Text widget to provide context where it’s most useful.
You can now set Change Alerts to track relative shifts in your metrics. Instead of relying on fixed thresholds, these alerts notify you when a metric moves up or down by the percentage or amount you choose.
What You Can Do Now
Create alerts that trigger on % increases or decreases
Catch major swings like a 20% drop in signups or a 50% jump in errors
Use Change Alerts with Threshold Alerts to cover both relative and topline changes
Getting Started
In the left product menu, open Topline Alerts.
Create a new alert and choose your desired Condition Type.
When to Use Each Alert Type
Threshold - use to monitor against a fixed limit. ("Alert me when total daily signups drop below 1,000.")
Change - use to monitor absolute shifts. ("Alert me when daily signups drop by 200 compared to yesterday.")
Change (%) - use to monitor percentage shifts. ("Alert me when daily signups drop 20% compared to yesterday.")
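To make the distinction concrete, here's an illustrative sketch of the arithmetic behind each condition type (not Statsig's internal implementation):

```typescript
// Illustrative comparison of the three condition types against a daily metric.
const yesterday = 1_000;
const today = 790;

// Threshold: compare against a fixed limit.
const thresholdFired = today < 1_000;

// Change: compare the absolute shift versus yesterday.
const changeFired = yesterday - today >= 200;

// Change (%): compare the relative shift versus yesterday.
const changePctFired = (yesterday - today) / yesterday >= 0.20;

console.log({ thresholdFired, changeFired, changePctFired }); // all true for this drop
```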

Sample Ratio Mismatch (SRM) happens when the share of users in experiment groups is different from what you expected. For example, if you set up a 50/50 split between control and treatment but the actual traffic is 60/40, that’s an SRM.
The SRM p-value is a statistical measure that tells you whether the observed imbalance could have happened by chance.
A p-value above 0.01 generally means the imbalance is within expected random variation.
A p-value below 0.01 suggests the imbalance is unlikely due to chance and may warrant investigation.
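For intuition, the p-value comes from a standard chi-squared goodness-of-fit test over group assignment counts. A minimal sketch (illustrative; for two groups the test has one degree of freedom, and a statistic above 6.635 corresponds to p < 0.01):

```typescript
// Chi-squared goodness-of-fit: observed group counts vs. expected split ratios.
function srmChiSquared(observed: number[], expectedRatios: number[]): number {
  const total = observed.reduce((a, b) => a + b, 0);
  return observed.reduce((chi2, count, i) => {
    const expected = total * expectedRatios[i];
    return chi2 + (count - expected) ** 2 / expected;
  }, 0);
}

// Hypothetical counts: a 60/40 observed split against an expected 50/50.
const chi2 = srmChiSquared([60_000, 40_000], [0.5, 0.5]);
const CRITICAL_P01_DF1 = 6.635; // chi-squared critical value at p = 0.01, df = 1
console.log(chi2 > CRITICAL_P01_DF1 ? "Likely SRM: investigate" : "Within expected variation");
```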
View SRM results and p-values across experiment groups in Metrics Explorer
Group results by different properties to identify potential causes of imbalance
Start from experiment exposure diagnostics and click on suggested properties to pre-apply them as group-bys in Metrics Explorer
Metrics Explorer applies the SRM formula across experiment groups and shows the resulting p-value. From there, you can add group-bys (such as country, platform, or custom properties) to spot where imbalance is happening.
Experiment diagnostics also highlight properties that may be driving the imbalance. Clicking the icon next to one of these properties takes you into Metrics Explorer with that property already grouped, so you can continue the investigation seamlessly.
This workflow makes it faster to detect and understand exposure imbalances. By moving directly from diagnostics to group-by analysis, you save time and get clearer visibility into which properties are linked to the imbalance.
Sample Ratio Mismatch debugging is available now across Cloud and Warehouse Native.
You can now add manual annotations directly to Drilldown charts in Metrics Explorer. This lets you document notable moments in your data and see them again whenever the same metrics are viewed.
Click any data point on a Drilldown chart to add a custom annotation
Apply an annotation to the metric you clicked, or extend it to additional metrics
See annotation icons whenever a chart’s date range and metrics overlap with saved annotations
Edit existing annotations, including description, date, time, and associated metrics
Each annotation is tied to a point in time and one or more metrics. When you load a Drilldown chart that includes both, an annotation icon appears. Click the icon to view or expand the note. You can adjust the description, date, time, and metrics at any point.
Annotations help you connect changes in the data to events in the real world. For example, you can tag the day a feature shipped or note an outage that caused a traffic dip. These markers appear on charts whenever the same metrics are analyzed, so you never lose the context.
Conversion Drivers are now available in Warehouse Native and Cloud. They surface the most significant factors influencing funnel outcomes, helping you quickly understand why users convert or drop off.
Identify high-impact drivers of conversion or drop-off
Analyze event properties, user properties, and intermediary events
View summaries with conversion rate, share of participants, and impact
Drill into a driver for conversion matrices and correlation coefficients
Group funnels by any surfaced driver with one click
Conversion Drivers analyze columns from the metric source used in the first step of the funnel. For best results, configure your metric source as a multi-event metric source on the setup page and ensure all funnel steps come from that source.
From a funnel, click a step and select “View Drop-Off & Conversion Drivers.” You’ll see a ranked list of factors with conversion likelihood, conversion rates, and share of participants. Clicking into a factor opens detailed comparisons and lets you regroup the funnel by that property.
Funnels show what your conversion rate is. Conversion Drivers explain why, so you can investigate drop-offs, explore new funnels, and validate which user groups or behaviors matter most.
Conversion Drivers are available now for all Warehouse Native customers. For Cloud customers, read more about how Conversion Drivers work on Cloud.
We’ve expanded the Decision Framework feature beyond templates.
Now, you can directly configure and manage decision frameworks for each experiment. This gives teams a place to codify decision-making so that users can quickly move to action at the conclusion of an experiment.
To add a decision framework to your experiment, select “Add Decision Framework” from the experiment menu.
