Product Updates

We ship fast to help you ship faster
Akin Olugbade
Product Manager, Statsig
6/25/2025

🎯 Conditional Recording Triggers for Session Replay

You can now control exactly who gets recorded in Session Replay using new global and conditional targeting options. This gives you fine-grained control over session capture so you can focus on users who’ve opted in, track behavior behind feature gates, or limit recordings to specific actions or test groups.

What You Can Do Now

  • Set a Global Targeting Gate

    Define a global gate that determines which users are eligible for session recording. Only users who pass this gate can be recorded. This is useful for:

    • Recording only users who’ve opted in

    • Limiting capture to internal users

    • Scoping recordings to users who meet complex targeting conditions

  • Set a Global Sampling Rate

    Define a global sample rate that determines what percentage of sessions will be recorded by default. This is useful if you want to record some percentage of all user sessions.

    • Conditional triggers are not affected by the global sampling rate, only by the global targeting gate

  • Add Conditional Triggers with Custom Sampling Rates

    You can define multiple recording triggers, each with its own sampling rate:

    • Event-based triggers: Start recording when a user triggers a specific event. Filtering on the event’s "Value" property is supported today, with more flexible event property filtering coming soon. This is great for focusing recordings on specific product scenarios.

    • Experiment-based triggers: Record users exposed to an experiment. You can narrow this to a specific variant to compare behavior across groups.

    • Feature gate–based triggers: Record users who pass a gate. Helpful for understanding how people interact with newly released features.

How It Works

You can configure a Global Targeting Gate in your Session Replay settings. If set, only users who pass this gate will be considered for any recording.

Conditional triggers sit on top of this and define when recording should begin. For example, you might record 100% of users who trigger a critical event, 10% of users in a specific experiment variant, and 0% of users who don’t pass the global gate.
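To make the precedence concrete, here is a minimal sketch of the decision logic described above. The types and helpers (`Trigger`, `passesGlobalGate`, and so on) are made up for illustration; they are not Statsig SDK internals.

```typescript
// Illustrative only: hypothetical types, not Statsig's actual implementation.
interface Trigger {
  matches: (sessionEvent: string) => boolean; // event, experiment, or gate condition
  sampleRate: number; // per-trigger sampling rate, 0.0 to 1.0
}

interface ReplayConfig {
  passesGlobalGate: boolean; // result of the Global Targeting Gate check
  globalSampleRate: number;  // default fraction of sessions recorded
  triggers: Trigger[];
}

function shouldRecord(config: ReplayConfig, sessionEvent: string): boolean {
  // The global gate is a hard precondition: users who fail it are never recorded.
  if (!config.passesGlobalGate) return false;

  // Conditional triggers apply their own sampling rates, ignoring the global rate.
  for (const trigger of config.triggers) {
    if (trigger.matches(sessionEvent)) {
      return Math.random() < trigger.sampleRate;
    }
  }

  // Otherwise fall back to the global sampling rate.
  return Math.random() < config.globalSampleRate;
}
```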

Impact on Your Analysis

These controls let you capture the sessions that matter most while reducing noise. You can zero in on specific behaviors, test results, or user groups, stay compliant with data collection policies, and get more value out of your allotted replay quota by avoiding unnecessary recordings.

Focus your recordings where they count.

Akin Olugbade
Product Manager, Statsig
6/25/2025

🔄 Automatic Dashboard Refreshes

Dashboards can now be automatically refreshed on a schedule with results cached for faster loading and a snappier experience.

What You Can Do Now

  • Set a refresh frequency for each dashboard (e.g. hourly, daily)

  • Automatically cache results in the background

  • Open dashboards with results already loaded, no wait time

How It Works

You can configure a refresh interval in the dashboard settings. To do this:

  • Navigate to your dashboard and click the settings cog ⚙️.

  • Scroll to "Schedule Dashboard Refresh" and set the interval.

  • Click Save.

Once set, queries for that dashboard will run on the specified schedule and store the results. When someone opens the dashboard, they’ll see the most recent data instantly, instead of triggering fresh queries.

This feature is only available for customers using Warehouse Native, where queries run directly against your warehouse.
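Conceptually, the mechanism is "run on a schedule, write to a cache, serve reads from the cache." A rough sketch of that pattern, with hypothetical names rather than Statsig's implementation:

```typescript
// Illustrative schedule-then-cache pattern, not Statsig's implementation.
type QueryResult = { rows: unknown[]; refreshedAt: Date };

const cache = new Map<string, QueryResult>();

async function runDashboardQueries(dashboardId: string): Promise<QueryResult> {
  // With Warehouse Native, this is where queries would run against your warehouse.
  return { rows: [], refreshedAt: new Date() }; // placeholder result
}

function scheduleRefresh(dashboardId: string, intervalMs: number): void {
  const refresh = async () => {
    cache.set(dashboardId, await runDashboardQueries(dashboardId));
  };
  void refresh();                   // warm the cache immediately
  setInterval(refresh, intervalMs); // then refresh on the configured interval
}

// Opening a dashboard reads the cached result instantly instead of re-querying.
function openDashboard(dashboardId: string): QueryResult | undefined {
  return cache.get(dashboardId);
}
```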

Impact on Your Analysis

Dashboards load faster and stay up to date without manual effort. This is especially helpful for shared dashboards or recurring check-ins, where you want fresh data ready without delay.

Akin Olugbade
Product Manager, Statsig
6/25/2025

🧪 Experiment Exposure Events Metrics Explorer (Cloud)

Overview

You can now treat experiment exposure events like any other event in Drilldown (time-series) and Funnel analyses. Exposure events include properties such as the user's assigned group, as well as user properties logged with the exposure. Currently, only a user's first exposure to an experiment is shown.

What You Can Do Now

  • Pick exposure events in Drilldown charts to track how many users saw each variant over time.

  • Add exposure as the first step of a funnel to measure post-exposure conversion paths.

  • Group or filter by exposure properties, for example, break down results by variant, region, or device.

  • Overlay exposure counts with key metrics in Drilldown to check whether metric changes align with rollout timing.

How It Works

  1. Exposure logging

    The first time a user is bucketed into an experiment, an exposure event is recorded with contextual properties (see the sketch after this list).

  2. Event selection

    In both Drilldown and Funnel charts, exposure events appear in the same event picker you already use.

  3. Property handling

    Any custom fields travel with exposures, enabling the same group-by and filter controls available for other events.
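To make step 1 concrete, here is a minimal sketch using the `@statsig/js-client` web SDK. The experiment name and parameter below are made up for illustration:

```typescript
import { StatsigClient } from '@statsig/js-client';

const client = new StatsigClient('client-YOUR_SDK_KEY', { userID: 'user-123' });
await client.initializeAsync();

// The first time this runs for a user, the SDK logs an exposure event carrying
// the user's assigned group. That exposure event is what now appears in the
// Drilldown and Funnel event pickers alongside your other events.
const experiment = client.getExperiment('new_checkout_flow');
const buttonColor = experiment.get('button_color', 'blue');
```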

Impact on Your Analysis

Drilldown

  • Validate rollout health by confirming traffic splits and ramp curves over calendar time.

  • Catch logging issues early—spikes, gaps, or duplicates stand out immediately.

  • Align timing with metrics by viewing exposure and conversion lines on one chart.

Funnels

  • Measure post-exposure journeys starting the moment users see a variant.

  • Pinpoint variant-specific drop-offs by breaking down each step.

  • Ensure clean attribution because exposure proves the user entered the test.

  • Segment by exposure fields (e.g., region or device) to uncover cohort-level insights.

This feature is now available on Statsig Cloud and coming soon to Warehouse Native. Give it a try the next time you validate an experiment. Seeing exposure data side-by-side with core metrics speeds up debugging and sharpens your reads on variant performance.

Laurel Chan
Product Manager, Statsig
6/16/2025

Looking for precise change markers in your metrics?

Chart Annotations put experiment, gate, and config updates directly on your metric timeline. You see exactly when each change landed in your chart. No more hunting through logs or history.

To get started, open Metrics Explorer or Dashboards and toggle on "Show Annotations". Use the filter bar to pick which event markers you want to display. Your charts update with markers at the precise points of change.

Chart Annotations give instant context for every trend. Try it out today!

[Screenshot: Chart Annotations]
Laurel Chan
Product Manager, Statsig
6/11/2025

Want to see your metrics, releases, and logs in a single platform?

Log Explorer lets you diagnose issues quickly alongside your Statsig data. No more juggling tools or context switching.

Metrics point you to a change. Logs reveal the root cause.

Open any log entry to get started. Our point-and-click UI makes it easy for anyone to zero in on things like timestamp, service name, or metadata value. When you need more control, write queries from scratch using our flexible search.

Built-in OpenTelemetry support gets you up and running with minimal effort. No extra instrumentation required.

Try Log Explorer today!

Liz Obermaier
Data Scientist, Statsig
6/11/2025

📊 Fieller Interval

You can now opt in to using Fieller Intervals when calculating % lift confidence intervals. They are a more accurate alternative to the Delta Method for this calculation.
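For reference, here is the textbook Fieller construction for the ratio of treatment to control means, in generic notation (Statsig's exact implementation may differ):

```latex
% Generic notation: \hat\mu_T, \hat\mu_C are treatment/control mean estimates,
% v_T, v_C their variances, v_{TC} their covariance, and z the normal quantile.
% The 1 - alpha Fieller confidence set for the ratio rho = mu_T / mu_C is
\[
  \left\{ \rho :\; (\hat\mu_T - \rho\,\hat\mu_C)^2
    \le z_{1-\alpha/2}^2 \left( v_T - 2\rho\, v_{TC} + \rho^2\, v_C \right) \right\}
\]
% Expanding yields a quadratic in rho whose roots are the interval endpoints;
% the % lift interval is (rho_lower - 1, rho_upper - 1). Because the quadratic
% is not centered on the point estimate, the resulting interval is asymmetric.
```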

Because Fieller Intervals are asymmetric, the scorecard display will look slightly different when this option is enabled:

[Screenshot: scorecard display with Fieller Intervals enabled]

You can set this up in your Experimentation Settings at the Organization Level.

[Screenshot: Fieller Interval option in Experimentation Settings]

If you opt in, historical and ongoing experiments will not have their methodology changed midway through; only experiments created after you opt in are affected.

Learn more about Fieller Intervals here!

Akin Olugbade
Product Manager, Statsig

📊 Data Table Views in Metric Drilldown

The new Table View makes it easier to compare how different groups perform across multiple metrics and time periods, all in a single table. Each metric becomes a column, and each group (based on your group-by selection) becomes a row. No need to flip between charts or tabs.

What You Can Do Now:

  • Compare multiple metrics side by side across user or event groups

  • View how the same group performs across different time periods

  • Add group-bys to see per-group metric values in one view

How It Works:

  • Select metrics to display as columns

  • Add a group-by to generate one row per group value

  • Toggle time comparisons to populate the table with values from both current and past periods

Impact on Your Analysis:

  • Quickly spot which segments are over- or under-performing across several metrics

  • Easily assess how group performance changes over time

  • Simplify complex comparisons that previously required multiple charts

Use Data Table View when you want a clear, compact summary of group-level performance across metrics and time.

[Screenshot: Data Table View in Metric Drilldown]
Laurel Chan
Product Manager, Statsig

Introducing the Pulumi Statsig Provider

Manage your Statsig configurations in the same programs that provision your cloud infrastructure.

With the Pulumi Statsig Provider, everything ships through a single version-controlled, reviewable workflow. This unifies progressive delivery with infrastructure as code. You get safer rollouts, automated drift detection, and built-in observability across infrastructure and product logic.
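As a sketch of what that workflow looks like, here is a hypothetical Pulumi program defining a feature gate. Both the package name and the resource shape below are assumptions for illustration; confirm the actual names in the provider docs.

```typescript
// Hypothetical: package name and resource fields are assumed, not confirmed.
import * as statsig from "@statsig/pulumi-statsig";

// Declaring a gate alongside the infrastructure it guards means the gate
// ships through the same version-controlled, reviewed Pulumi workflow.
const checkoutGate = new statsig.Gate("new-checkout-gate", {
  description: "Gates the new checkout flow during rollout",
  isEnabled: true,
});
```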

Visit our docs to get started. Or check us out in the Pulumi docs.

Vineeth Madhusudanan
Product Manager, Statsig

Statsig MCP Server

Statsig now hosts an MCP (Model Context Protocol) Server. This acts as a bridge between AI applications (clients) and Statsig: it is essentially a smart adapter that translates AI requests into commands that Statsig can understand.

For example, you can connect it in Cursor and ask in plain English to:

  • Make changes to your app to put features behind gates in Statsig

  • Instrument your app to log user interaction events - which can then be analyzed in Statsig

  • Perform operations like removing unused gates from your codebase - with Cursor directly pulling context from your Statsig project

You can connect it with Claude, and then ask questions based on data from Statsig:

  • Which experiments have been abandoned?

  • What are some suggestions for new growth experiments I can run?

Learn more here.

Michael Makris
Senior Data Scientist, Statsig

🪨 Geotest Experiments

Geotest Experiments are now available to all our Warehouse Native customers, unlocking experimentation where traditional A/B testing doesn't work. In marketing campaigns especially, users often cannot be reliably split into control and treatment groups.

With Statsig’s Geotesting, you can measure marketing incrementality in the core business metrics already in your warehouse. Using best-in-industry Synthetic Control methodology, Statsig makes it easy for every team to design and run statistically rigorous tests using simple geographic units like postal codes and DMAs.
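In broad strokes, Synthetic Control fits a weighted combination of untreated geos that tracks the treated geo before launch, then reads incrementality as the post-launch gap. A generic formulation (not Statsig-specific notation):

```latex
% Choose nonnegative weights w_j over control geos to match the treated geo
% during the pre-period t < T_0:
\[
  \hat{w} = \arg\min_{w \ge 0} \sum_{t < T_0}
    \Bigl( y_{\mathrm{treated}}(t) - \sum_{j} w_j\, y_j(t) \Bigr)^2
\]
% The estimated incremental effect at post-launch time t >= T_0 is the gap
% between the treated geo and its synthetic counterpart:
\[
  \hat{\tau}(t) = y_{\mathrm{treated}}(t) - \sum_{j} \hat{w}_j\, y_j(t)
\]
```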

geotest

Visit our docs to learn more and get started!
