Product Updates

We ship fast to help you ship faster
Akin Olugbade
Product Manager, Statsig
8/21/2025

🧪 Analyze Exposures in Metrics Explorer (Warehouse Native)

Experiment exposure events are now supported in Metrics Explorer on Warehouse Native. You can select them like any other event, filter or group by properties (variant, metadata), and tie rollout data directly to product metrics.
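
For intuition, an exposure is just another event row you can slice. Below is a minimal TypeScript sketch assuming a hypothetical exposure shape; the field names are illustrative, not Statsig’s exact warehouse schema.

```typescript
// Hypothetical exposure-event shape; real column names depend on your
// Warehouse Native setup and Statsig's exposure tables.
interface ExposureEvent {
  userID: string;
  timestamp: number;       // when the user was exposed
  experimentName: string;  // e.g. "new_checkout_flow"
  variant: string;         // e.g. "control" or "treatment"
  metadata?: Record<string, string>;
}

// Grouping exposures by variant, conceptually what Metrics Explorer's
// group-by does when you chart exposures against product metrics.
function countByVariant(exposures: ExposureEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of exposures) {
    counts.set(e.variant, (counts.get(e.variant) ?? 0) + 1);
  }
  return counts;
}
```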

More details here: Exposures in Metrics Explorer

Akin Olugbade
Product Manager, Statsig
8/20/2025

✅ Verified Cohorts and Dashboards

Admins can now mark specific cohorts and dashboards as verified. This signals that they are the trusted, official versions while also protecting them from accidental edits.

What You Can Do Now

  • Mark cohorts and dashboards as verified to indicate they are the approved versions

  • Prevent edits to verified entities unless you are an admin

  • Clone verified cohorts and dashboards to create your own editable versions

How It Works

  • Cohorts: Mark as verified when creating a new cohort or by editing an existing one

  • Dashboards: Mark as verified from the settings cog in the top right of the dashboard page

Impact on Your Analysis

Teams can align on a single source of truth for key cohorts and dashboards while still allowing individuals to explore their own versions without risking changes to the verified originals.

This keeps shared analysis reliable and consistent.

Shubham Singhal
Product Manager, Statsig
8/20/2025

Pre-Post Results on Feature Gates

Sometimes you don’t have the luxury of launching a feature to only part of your user population (e.g., X% of users). Maybe you had to ship something immediately, rolled out a backend improvement to all users, or made a change you can’t ethically hold back from part of your audience. That’s where Pre-Post Results comes in.

With Pre-Post Results, you can:

  • Compare metrics before and after a feature reaches 100% rollout

  • See the directional impact on key outcomes, even without a control group

✨ How it works

Statsig automatically detects when a feature gate has been rolled out to all users (0 → 100%, or started at 100% within the last 30 days). It then compares the same users’ behavior before and after rollout, showing you whether your feature moved the needle. To learn more about our computational methodology, see the Statsig Docs.
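
For intuition, here is a minimal sketch of the before/after idea in TypeScript. It is a simplified paired comparison; it does not reproduce Statsig’s actual methodology (see the docs above).

```typescript
// Compare the same users' metric in a window before the 100% rollout
// and an equal window after it. This naive version ignores seasonality
// and novelty effects, which a real pre-post analysis must account for.
interface UserWindowMetric {
  userID: string;
  pre: number;  // e.g. sessions per day before rollout
  post: number; // the same metric after rollout
}

function prePostDelta(users: UserWindowMetric[]) {
  const n = users.length;
  const preMean = users.reduce((sum, u) => sum + u.pre, 0) / n;
  const postMean = users.reduce((sum, u) => sum + u.post, 0) / n;
  return { preMean, postMean, relativeChange: (postMean - preMean) / preMean };
}
```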

📍 Available now

Pre-Post Results is live for all Cloud customers in Statsig. You’ll see it automatically when your rollout qualifies - no setup required.

Lin Jia
Data Scientist, Statsig
8/11/2025

📈 Velocity Dashboard

Introducing the Velocity Dashboard, your new source of truth for understanding how fast your team is shipping. Built to bring visibility and alignment, this dashboard helps you track experiments and gates shipped over time, making it easier to highlight progress, spot trends, and share impact across your organization.

Highlights

  • Unified view of experiment and gate velocity across teams

  • Filter & group by team, tag, or status to focus on what matters

  • Long-term trends at a glance — perfect for quarterly reviews

  • Export to PDF for reporting, decks, or sharing with stakeholders

Available Now

Click Dashboards → Velocity Dashboard to get started, or open it from the homepage widget.

Akin Olugbade
Product Manager, Statsig

📌 Default Dashboard Filters

You can now pin filters directly to dashboards, making it easier to analyze different views of your data without rebuilding filters from scratch.

What You Can Do Now

  • Pin commonly used filters (e.g. “Company Name”) to any dashboard

  • Quickly swap values to see how different users, cohorts, or properties impact the same set of charts

  • Use pinned filters as a starting point for scoped analysis

How It Works

From any dashboard, open dashboard settings and configure "Default Filters". The pinned filter will appear at the top of the dashboard and apply across all charts. When you change the value (e.g. from Company A to Company B), the charts update automatically, with no need to reconfigure each one.

Impact on Your Analysis

This makes it faster to compare trends across dimensions like company, region, or platform. Instead of duplicating dashboards or editing individual filters, you can reuse the same dashboard with dynamic, scoped filtering.

Great for teams who want to keep a consistent dashboard layout while comparing key segments.

Akin Olugbade
Product Manager, Statsig

🔍 Conversion Drivers in Funnels

Pinpoint what’s helping or hurting your conversions. Conversion Drivers automatically highlight the most significant factors influencing whether users drop off or complete a funnel, so you don’t have to guess where to dig deeper.

What You Can Do Now

  • Identify high-impact drivers of conversion or drop-off without needing a hypothesis in advance

  • Analyze correlations across event properties, user properties, and intermediary events

  • View performance summaries for each driver, including conversion rate and share of funnel participants

  • Drill into a driver to access a conversion matrix and correlation coefficient

  • Group the funnel by any surfaced driver with one click to explore broader trends

How It Works

When viewing a funnel, click on any step and select “View Drop-Off & Conversion Drivers.” A modal will appear showing a ranked list of the most statistically significant factors associated with conversion or drop-off at that step.

You can configure which types of data we analyze:

  • Event properties like plan type, referral code, or platform

  • User properties like country, account age, or signup method

  • Intermediary events that occur between the two funnel steps

Each surfaced driver includes:

  • How much more or less likely users with the factor were to convert, expressed as a multiple (for example, users with platform = Android were 1.2x as likely to convert)

  • Conversion rate for users with the factor

  • Share of funnel participants who had the factor

Clicking into a driver opens the drilldown view, where you can explore:

  • A conversion matrix that compares outcomes for users who had the factor versus those who did not

  • A correlation coefficient measuring how strongly the factor is associated with completing the funnel
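
For intuition, both numbers can be read off the 2x2 matrix. Here is a sketch using the standard relative-risk and phi-coefficient formulas; Statsig’s exact computation may differ.

```typescript
// 2x2 conversion matrix for one candidate driver:
//                  converted   dropped off
//  has factor          a            b
//  lacks factor        c            d
interface ConversionMatrix { a: number; b: number; c: number; d: number; }

// "1.2x as likely to convert" = conversion rate with the factor
// divided by conversion rate without it (relative risk).
function liftMultiple({ a, b, c, d }: ConversionMatrix): number {
  return (a / (a + b)) / (c / (c + d));
}

// Phi coefficient: correlation between the two binary variables
// ("has factor", "converted"); ranges from -1 to 1.
function phiCoefficient({ a, b, c, d }: ConversionMatrix): number {
  return (a * d - b * c) / Math.sqrt((a + b) * (c + d) * (a + c) * (b + d));
}
```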

If a pattern looks meaningful, you can group your funnel by that property with a single click. This reconfigures the chart to show step-by-step conversion performance broken down by the selected value.

Impact on Your Analysis

Funnels tell you what your conversion rate is. Conversion Drivers help explain why.

This feature is especially useful when:

  • You are exploring a new funnel without a clear hypothesis

  • You notice a drop-off and want to identify potential causes

  • You want to validate whether specific user groups or behaviors are influencing conversion

  • You are monitoring changes in funnel performance and need to explain what shifted

By surfacing statistically significant behaviors and attributes, Conversion Drivers give you a starting point for deeper investigation and help you move faster from observation to insight.

Available Now

Conversion Drivers are available on any funnel for customers on the Pro plan or Enterprise customers with the Advanced Analytics package. Click on a step and select “View Drop-Off & Conversion Drivers” to get started.

Shubham Singhal
Product Manager, Statsig
7/30/2025

Multi-Variant Support for Dynamic Configs

Dynamic Configs just got more powerful. You’re no longer limited to a single config value and a fallback. With multi-variant support, you can define multiple named JSON variants and control which one is served - all from the Statsig console, no deploys required.

🧠 Real-world Example

Let’s say you’re using a third-party messaging service. You use Twilio, but want a fallback (say, SendGrid) in case it goes down, and you’re testing AWS SES for cost savings.

With Multivariate Dynamic Configs, you can:

  • Define config variants for each provider

  • Roll traffic to AWS SES for 5% of users to validate integration

  • Fail over to SendGrid if Twilio has an outage

  • Tune timeouts and retry logic on the fly - no redeploys
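
Here is how that example might look in application code: a sketch assuming the statsig-node server SDK’s getConfig call, with a hypothetical config name, keys, and defaults.

```typescript
import * as statsig from 'statsig-node';

// Hypothetical payload served by the 'messaging_provider' config;
// each named variant (Twilio, SendGrid, AWS SES) fills in these keys.
interface MessagingConfig {
  provider: string; // 'twilio' | 'sendgrid' | 'aws_ses'
  timeoutMs: number;
  maxRetries: number;
}

// Initialize once at startup, then resolve the served variant per user.
await statsig.initialize(process.env.STATSIG_SERVER_KEY!);

async function getMessagingConfig(userID: string): Promise<MessagingConfig> {
  // Statsig decides which named JSON variant this user receives;
  // shifting traffic (e.g. 5% to AWS SES) requires no redeploy.
  const config = await statsig.getConfig({ userID }, 'messaging_provider');
  return {
    provider: config.get('provider', 'twilio'),
    timeoutMs: config.get('timeout_ms', 3000),
    maxRetries: config.get('max_retries', 2),
  };
}
```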

⚡️ Available Now

We will be rolling out Multivariate Dynamic Configs to all Statsig customers over the next few weeks. If you'd like early access, please reach out.

Akin Olugbade
Product Manager, Statsig
7/23/2025

📼 Playlists in Session Replay

You can now group replays into Playlists to curate and share the sessions that matter most.

What You Can Do Now

  • Add a replay to an existing playlist or create a new one directly from the replay viewer

  • Access all your saved playlists from the new “Playlists” tab next to “Sessions”

  • Cycle through playlist sessions without returning to the full list view

How It Works

From any replay, click the “Add to Playlist” button to save it. You can either select an existing playlist or create a new one on the spot. The Playlists tab lets you view and play through a sequence of sessions in one place.

Impact on Your Analysis

Playlists make it easy to gather replays related to:

  • Bugs or usability issues

  • Experiment test groups

  • Feature launches or onboarding flows

They’re ideal for sharing context across teams without needing to repeat yourself.

Try building a playlist for your next launch review or bug triage session.

Akin Olugbade
Product Manager, Statsig
7/22/2025

🧑‍💻 Revamped Users Tab

We’ve redesigned the Users tab in Metrics Explorer to make it more useful out of the box and more powerful when you need to dig deeper.

What You Can Do Now

  • View recent users instantly

    Landing on the Users tab now shows a live sample of recent users with basic info, so you can immediately start exploring.

  • Filter to specific user sets

    Narrow the list by applying filters based on:

    • User properties (e.g. country, plan type)

    • Events performed (e.g. signed_up, clicked_button)

    • Experiment or feature gate group (e.g. users in treatment vs control)

How It Works

You’ll see a sample of recent users as soon as the tab loads, with no query required. Use the filter panel to drill down into any segment you care about. For example:

  • Find users who dropped out of a funnel after the second step

  • See only users in the “new-nav-rollout” feature gate

  • Inspect event history for users on a specific pricing tier

Impact on Your Analysis

Filtering to specific sets of users helps you move from high-level trends to concrete user behavior. You can:

  • Debug feature exposure by confirming which users were actually in a gate or experiment

  • Investigate drop-offs by checking what users did before and after a key event

  • Validate hypotheses about certain user groups, like whether trial users behave differently than paid ones

Akin Olugbade
Product Manager, Statsig
7/19/2025

🎯 Segments in Funnels and Metrics Explorer

You can now create and analyze Segments directly from Funnels and use them in Metrics Explorer for deeper, follow-up analysis.

What You Can Do Now

  • From Funnels: Create a new Segment from users who did or didn’t convert at a given step.

  • In Metrics Explorer:

    • Filter your chart to only include users in a Segment.

    • Break down results by whether users are in a Segment.

  • Segment Type: Currently limited to ID List-based Segments (max size: 1000 users).

How It Works

When viewing a funnel chart, you’ll see a new option to “Create Segment” next to conversion results. Clicking it will generate an ID List-based Segment you can name and save.

In Metrics Explorer, you’ll find Segment filtering under the filter panel, and breakdown by Segment in the group-by dropdown. These options appear only when the Segment is based on an ID list.

Impact on Your Analysis

This gives you a fast, integrated way to dig deeper into interesting groups of users, like understanding what users who dropped off at step 2 did before or after, or comparing behavior between converters and non-converters over time.

Looking Ahead

We’re planning to:

  • Support additional segment types beyond ID Lists.

  • Remove the 1000-user limit for ID List-based Segments.
