Product Updates

We ship fast to help you ship faster
Akin Olugbade
Product Manager, Statsig

🔍 Conversion Drivers in Funnels

Pinpoint what’s helping or hurting your conversions. Conversion Drivers automatically highlight the most significant factors influencing whether users drop off or complete a funnel, so you don’t have to guess where to dig deeper.

What You Can Do Now

  • Identify high-impact drivers of conversion or drop-off without needing a hypothesis in advance

  • Analyze correlations across event properties, user properties, and intermediary events

  • View performance summaries for each driver, including conversion rate and share of funnel participants

  • Drill into a driver to access a conversion matrix and correlation coefficient

  • Group the funnel by any surfaced driver with one click to explore broader trends

How It Works

When viewing a funnel, click on any step and select “View Drop-Off & Conversion Drivers.” A modal will appear showing a ranked list of the most statistically significant factors associated with conversion or drop-off at that step.

You can configure which types of data we analyze:

  • Event properties like plan type, referral code, or platform

  • User properties like country, account age, or signup method

  • Intermediary events that occurred between the two funnel steps

Each surfaced driver includes:

  • How much more or less likely users with the factor were to convert, expressed as a multiple (for example, users with platform = Android were 1.2x as likely to convert)

  • Conversion rate for users with the factor

  • Share of funnel participants who had the factor

Clicking into a driver opens the drilldown view, where you can explore:

  • A conversion matrix that compares outcomes for users who had the factor versus those who did not

  • A correlation coefficient measuring how strongly the factor is associated with completing the funnel
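As a rough illustration (not Statsig's implementation), the per-driver numbers described above can all be derived from a 2x2 conversion matrix. This sketch assumes the correlation coefficient is the phi coefficient for two binary variables; the function name and input shape are hypothetical:

```python
import math

def driver_stats(users):
    """users: list of (has_factor: bool, converted: bool) pairs."""
    # Build the 2x2 conversion matrix: counts by (factor, converted)
    counts = {(f, c): 0 for f in (True, False) for c in (True, False)}
    for has_factor, converted in users:
        counts[(has_factor, converted)] += 1

    a, b = counts[(True, True)], counts[(True, False)]    # with the factor
    c, d = counts[(False, True)], counts[(False, False)]  # without the factor

    rate_with = a / (a + b)                # conversion rate for users with the factor
    rate_without = c / (c + d)
    lift = rate_with / rate_without        # e.g. "1.2x as likely to convert"

    # Phi coefficient: correlation between two binary variables
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    share = (a + b) / len(users)           # share of funnel participants with the factor
    return rate_with, lift, phi, share
```

For example, if 60 of 100 users with the factor convert versus 50 of 100 without, the lift is 1.2x with a weakly positive phi.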

If a pattern looks meaningful, you can group your funnel by that property with a single click. This reconfigures the chart to show step-by-step conversion performance broken down by the selected value.

Impact on Your Analysis

Funnels tell you what your conversion rate is. Conversion Drivers help explain why.

This feature is especially useful when:

  • You are exploring a new funnel without a clear hypothesis

  • You notice a drop-off and want to identify potential causes

  • You want to validate whether specific user groups or behaviors are influencing conversion

  • You are monitoring changes in funnel performance and need to explain what shifted

By surfacing statistically significant behaviors and attributes, Conversion Drivers gives you a starting point for deeper investigation and helps you move faster from observation to insight.

Available Now

Conversion Drivers are available on any funnel for customers on the Pro plan or Enterprise customers with the Advanced Analytics package. Click on a step and select “View Drop-Off & Conversion Drivers” to get started.

Shubham Singhal
Product Manager, Statsig
7/30/2025

Multi-Variant Support for Dynamic Configs

Dynamic Configs just got more powerful. You’re no longer limited to a single config value and a fallback. With multi-variant support, you can define multiple named JSON variants and control which one is served - all from the Statsig console, no deploys required.


🧠 Real-world Example

Let’s say you’re using a third-party messaging service. Twilio is your primary provider, you want SendGrid as a fallback in case Twilio goes down, and you’re testing AWS SES for cost savings.

With Multivariate Dynamic Configs, you can:

  • Define config variants for each provider

  • Roll traffic to AWS SES for 5% of users to validate integration

  • Fail over to SendGrid if Twilio has an outage

  • Tune timeouts and retry logic on the fly - no redeploys
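To make the pattern concrete, here's a minimal sketch of what named variants and a percentage rollout might look like. The variant names, payload fields, and bucketing function are all hypothetical; in practice the Statsig SDK serves the variant you configure in the console, with no client-side bucketing code like this required:

```python
import hashlib

# Hypothetical variant payloads, mirroring named JSON variants in the console
VARIANTS = {
    "twilio_primary": {"provider": "twilio", "timeout_ms": 3000, "retries": 2},
    "sendgrid_failover": {"provider": "sendgrid", "timeout_ms": 5000, "retries": 3},
    "ses_test": {"provider": "aws_ses", "timeout_ms": 4000, "retries": 1},
}

def pick_variant(user_id: str, rollout_pct: float = 5.0) -> dict:
    """Deterministically bucket a user: rollout_pct% of users get the SES test
    variant, everyone else gets the primary provider."""
    # Hash the user ID into 10,000 buckets so the split is stable per user
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10000
    if bucket < rollout_pct * 100:
        return VARIANTS["ses_test"]
    return VARIANTS["twilio_primary"]
```

Because bucketing hashes the user ID, the same user always sees the same variant, and ramping from 5% to 100% only changes the threshold.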

⚡️ Available Now

We will be rolling out Multivariate Dynamic Configs to all Statsig customers over the next few weeks. If you'd like early access, please reach out.

Akin Olugbade
Product Manager, Statsig
7/23/2025

📼 Playlists in Session Replay

You can now group replays into Playlists to curate and share the sessions that matter most.

What You Can Do Now

  • Add a replay to an existing playlist or create a new one directly from the replay viewer

  • Access all your saved playlists from the new “Playlists” tab next to “Sessions”

  • Cycle through playlist sessions without returning to the full list view

How It Works

From any replay, click the “Add to Playlist” button to save it. You can either select an existing playlist or create a new one on the spot. The Playlists tab lets you view and play through a sequence of sessions in one place.

Impact on Your Analysis

Playlists make it easy to gather replays related to:

  • Bugs or usability issues

  • Experiment test groups

  • Feature launches or onboarding flows

They’re ideal for sharing context across teams without needing to repeat yourself.

Try building a playlist for your next launch review or bug triage session.

Akin Olugbade
Product Manager, Statsig
7/22/2025

🧑‍💻 Revamped Users Tab

We’ve redesigned the Users tab in Metrics Explorer to make it more useful out of the box and more powerful when you need to dig deeper.

What You Can Do Now

  • View recent users instantly

    Landing on the Users tab now shows a live sample of recent users with basic info, so you can immediately start exploring.

  • Filter to specific user sets

    Narrow the list by applying filters based on:

    • User properties (e.g. country, plan type)

    • Events performed (e.g. signed_up, clicked_button)

    • Experiment or feature gate group (e.g. users in treatment vs control)

How It Works

You’ll see a sample of recent users as soon as the tab loads, no query required. Use the filter panel to drill down into any segment you care about. For example:

  • Find users who dropped out of a funnel after the second step

  • See only users in the “new-nav-rollout” feature gate

  • Inspect event history for users on a specific pricing tier

Impact on Your Analysis

Filtering to specific sets of users helps you move from high-level trends to concrete user behavior. You can:

  • Debug feature exposure by confirming which users were actually in a gate or experiment

  • Investigate drop-offs by checking what users did before and after a key event

  • Validate hypotheses about certain user groups, like whether trial users behave differently than paid ones

Akin Olugbade
Product Manager, Statsig
7/19/2025

🎯 Segments in Funnels and Metrics Explorer

You can now create and analyze Segments directly from Funnels and use them in Metrics Explorer for deeper, follow-up analysis.

What You Can Do Now

  • From Funnels: Create a new Segment from users who did or didn’t convert at a given step.

  • In Metrics Explorer:

    • Filter your chart to only include users in a Segment.

    • Break down results by whether users are in a Segment.

  • Segment Type: Currently limited to ID List-based Segments (max size: 1000 users).

How It Works

When viewing a funnel chart, you’ll see a new option to “Create Segment” next to conversion results. Clicking it will generate an ID List-based Segment you can name and save.

In Metrics Explorer, you’ll find Segment filtering under the filter panel, and breakdown by Segment in the group-by dropdown. These options appear only when the Segment is based on an ID list.
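Conceptually, an ID List-based Segment is just a capped set of user IDs that charts are filtered against. This sketch is illustrative only; the function name and row shape are hypothetical:

```python
def filter_by_segment(rows, segment_ids, max_size=1000):
    """rows: (user_id, value) pairs from a chart query.
    segment_ids: the ID list backing the Segment (currently capped at 1000)."""
    if len(segment_ids) > max_size:
        raise ValueError("ID List-based Segments are capped at 1000 users")
    ids = set(segment_ids)  # set membership makes the filter O(1) per row
    return [(u, v) for u, v in rows if u in ids]
```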

Impact on Your Analysis

This gives you a fast, integrated way to dig deeper into interesting groups of users, like understanding what users who dropped off at step 2 did before or after, or comparing behavior between converters and non-converters over time.

Looking Ahead

We’re planning to:

  • Support additional segment types beyond ID Lists.

  • Remove the 1000-user limit for ID List-based Segments.

Vineeth Madhusudanan
Product Manager, Statsig
7/15/2025

Hypothesis Advisor

Writing good experiment hypotheses is key to a strong experimentation culture. Statsig now gives instant feedback on experiment hypotheses—flagging what’s missing. Admins can set custom requirements, which Statsig uses to guide experimenters toward stronger, more complete hypotheses.

This is gradually rolling out; reach out in Slack or to vm at statsig dot com if you'd like early access. When available, enable it from Settings → Experimentation → Statsig AI. See docs here.

Akin Olugbade
Product Manager, Statsig

🧩 ID Resolution in Funnels

Track users across the point where they go from anonymous to identified. Funnels can now connect actions taken before login with those that happen after, giving you a more complete view of conversion paths that span identity states.

What You Can Do Now

  • Cross the anonymous-to-identified boundary in funnels

    Connect pre-login behavior (e.g. browsing, adding to cart) with post-login events (e.g. checkout, onboarding completion)

  • Toggle ID resolution per funnel

    Click the gear icon when editing a funnel to enable or disable ID resolution for that analysis

  • Configure identifiers

    In Settings → Analytics & Session Replay, choose which identifiers represent anonymous vs. identified users. Defaults are Stable ID and User ID

How It Works

When enabled, ID resolution stitches together events across anonymous and identified IDs if they’re seen on the same device. This turns fragmented journeys into a single user flow, even when a user logs in midway.

Example:

  1. User views a product (Stable ID)

  2. Signs up (User ID)

  3. Completes checkout

With ID resolution on, these events are treated as a single funnel path.
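The stitching described above can be sketched as follows. This is a simplified illustration under an assumed event shape (`id`, `id_type`, `device`, `name`), not Statsig's actual resolution logic:

```python
def stitch_events(events):
    """Re-key anonymous events to an identified user when both an anonymous
    (stable_id) and an identified (user_id) ID are seen on the same device."""
    # Map each device to the first identified user seen on it
    device_user = {}
    for e in events:
        if e["id_type"] == "user_id":
            device_user.setdefault(e["device"], e["id"])

    resolved = []
    for e in events:
        rid = e["id"]
        if e["id_type"] == "stable_id" and e["device"] in device_user:
            rid = device_user[e["device"]]  # stitch anonymous event to the user
        resolved.append({**e, "resolved_id": rid})
    return resolved
```

With this in place, the product view (Stable ID), sign-up, and checkout (User ID) in the example above all resolve to one identity, so the funnel sees a single path.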

Impact on Your Analysis

Funnels that previously showed drop-off at login steps may now show full completion. You’ll see higher true conversion rates, more accurate attribution, and better insight into how anonymous traffic behaves before converting.

Andre Terron
Software Engineer

🎬 Create a Statsig Sample App

You can now create a simple sample app to try out Statsig with - we've partnered with SampleApp.ai to let you easily create one from a single prompt. This can help you explore how Statsig works if you're a marketer or other non-technical user who'd like to see what Statsig looks like once integrated into an app.

On statsig.sampleapp.ai - just enter a prompt or pick one of the samples, and we'll create a simple website for you to play with feature gates, analytics, and more.

Kaz Haruna
Product Manager, Statsig

Experiment Summary Customization

We're excited to announce the ability to add new custom sections and reorder sections in the experiment summary tab for greater customization of your experiment reporting.


These capabilities will also be available for experiment templates, giving you the ability to preconfigure summary sections to standardize formatting across your organization. These changes make the summary section a great place to store your experiment metadata like product research docs, links to design, or details on rollout plans.

Akin Olugbade
Product Manager, Statsig
6/25/2025

🧪 Experiment Exposure Events in Metrics Explorer (Cloud)

Overview

You can now treat experiment exposure events like any other event in Drilldown (time-series) and Funnel analyses. Exposure events include properties such as group, as well as user properties logged with the exposure. We currently only show first exposures to the experiment.

What You Can Do Now

  • Pick exposure events in Drilldown charts to track how many users saw each variant over time.

  • Add exposure as the first step of a funnel to measure post-exposure conversion paths.

  • Group or filter by exposure properties, for example, break down results by variant, region, or device.

  • Overlay exposure counts with key metrics in Drilldown to check whether metric changes align with rollout timing.

How It Works

  1. Exposure logging

    The first time a user is bucketed into an experiment, an exposure event is recorded with contextual properties.

  2. Event selection

    In both Drilldown and Funnel charts, exposure events appear in the same event picker you already use.

  3. Property handling

    Any custom fields travel with exposures, enabling the same group-by and filter controls available for other events.
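The steps above can be sketched as a small post-exposure funnel computation. The event tuple shape and function name are assumptions for illustration, not Statsig's pipeline:

```python
from collections import defaultdict

def conversion_by_variant(events, goal="checkout"):
    """events: (user, name, variant_or_None, timestamp) tuples.
    Treats exposure as funnel step 1: a goal event counts only if it occurs
    at or after the user's first exposure."""
    # Record each user's first exposure (earliest timestamp wins)
    first_exposure = {}  # user -> (ts, variant)
    for user, name, variant, ts in sorted(events, key=lambda e: e[3]):
        if name == "exposure" and user not in first_exposure:
            first_exposure[user] = (ts, variant)

    exposed = defaultdict(set)
    converted = defaultdict(set)
    for user, (ts, variant) in first_exposure.items():
        exposed[variant].add(user)
    for user, name, _, ts in events:
        if name == goal and user in first_exposure:
            exp_ts, variant = first_exposure[user]
            if ts >= exp_ts:  # only post-exposure conversions attribute cleanly
                converted[variant].add(user)

    return {v: len(converted[v]) / len(exposed[v]) for v in exposed}
```

Note the attribution rule: a user who hits the goal before ever being exposed is excluded, which mirrors why starting a funnel at the exposure event keeps reads clean.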

Impact on Your Analysis

Drilldown

  • Validate rollout health by confirming traffic splits and ramp curves over calendar time.

  • Catch logging issues early—spikes, gaps, or duplicates stand out immediately.

  • Align timing with metrics by viewing exposure and conversion lines on one chart.

Funnels

  • Measure post-exposure journeys starting the moment users see a variant.

  • Pinpoint variant-specific drop-offs by breaking down each step.

  • Ensure clean attribution because exposure proves the user entered the test.

  • Segment by exposure fields (e.g., region or device) to uncover cohort-level insights.

This feature is now available on Statsig Cloud and coming soon to Warehouse Native. Give it a try the next time you validate an experiment. Seeing exposure data side-by-side with core metrics speeds up debugging and sharpens your reads on variant performance.
