Statsig Product Updates

7/20/2023

Bayesian Analysis for Experiments

We now support Bayesian Analysis for Experiments. You can turn this on by selecting the option under Experiment Setup / Advanced Settings and see your results through the Bayesian lens, including statistics like Expectation of Loss and Chance to Beat.

This is a philosophically different framework from standard A/B testing, which is based on frequentist analysis, and there are many nuances to using it. For more information, please see the documentation here.
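
For intuition, here is a minimal sketch of how these two statistics can be computed for a conversion metric, assuming Beta posteriors and Monte Carlo sampling. This is an illustration of the idea, not Statsig's implementation, and all of the counts below are made up:

    import numpy as np

    # Hypothetical conversion counts for control and treatment groups.
    control_conv, control_n = 420, 10_000
    treatment_conv, treatment_n = 465, 10_000

    rng = np.random.default_rng(seed=42)
    samples = 100_000

    # Beta(1, 1) prior updated with observed conversions / non-conversions.
    control_post = rng.beta(1 + control_conv, 1 + control_n - control_conv, samples)
    treatment_post = rng.beta(1 + treatment_conv, 1 + treatment_n - treatment_conv, samples)

    # Chance to Beat: probability the treatment rate exceeds the control rate.
    chance_to_beat = np.mean(treatment_post > control_post)

    # Expectation of Loss: expected drop in conversion rate if we ship the
    # treatment in the worlds where control is actually better.
    expectation_of_loss = np.mean(np.maximum(control_post - treatment_post, 0))

    print(f"Chance to Beat: {chance_to_beat:.1%}")
    print(f"Expectation of Loss: {expectation_of_loss:.5f}")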

[Image: Bayesian Analysis]

Related: Try the Statsig Bayesian A/B testing calculator.


7/12/2023

📊 Bar Charts in Metrics Explorer

We just shipped bar charts in our analytics product! This lets you slice and dice metrics into easy-to-understand visuals that highlight trends, comparisons, and distributions. You can group by or filter using properties like device type, operating system, country or even custom properties.

[Image: Bar Charts in Metrics Explorer]


7/11/2023

Statsig Warehouse Native

We just launched a Warehouse Native version of Statsig: it runs directly on data in your warehouse. This is optimized for data teams who value quick iteration, governance, and the ability to avoid duplicating core business metrics in multiple systems. Learn more...

[Image: Statsig Cloud vs. Warehouse Native]

6/20/2023

🔒 Custom roles for Role-Based Access Control

We just shipped an Enterprise feature that lets you customize the roles you use to assign permissions in Statsig. You can now create new roles beyond Admin, Member, and Read-Only and choose which permissions each role has. Common use cases include creating a Metrics Admin role or a Data Warehouse Admin role (for Statsig Warehouse Native).

Enterprise customers can find this under Project Settings -> Basic Settings -> Roles & Access Control.

[Image: Role-Based Access Control]

6/6/2023

📈 Metrics Explorer (beta)

We’re excited to share a limited beta of Metrics Explorer: an analytics surface with powerful slicing for metrics. Break out a metric by device type, country, or user tier. Explode a ratio metric and see how the numerator and denominator have moved.

Get data you can trust and the insights you need to take action and drive growth. Find this under Metrics -> Explore.

[Image: Metrics Explorer]

6/2/2023

⚡️ Faster Users Tab for troubleshooting

The Users tab enables you to diagnose issues for specific users by helping answer questions like "Which experiment group was this user in?" or "When did the user first see this feature?" We've just upgraded the backend for this: lookups now take ~5 seconds instead of ~10 minutes.

[Image: Users Tab]

5/30/2023

🎯 Targeting on Holdouts

We've just started rolling out the ability to apply targeting on Holdouts. Holdouts work by "holding back" one set of users from testing and comparing their metrics with those of normal users. Statsig now lets you apply a Feature Gate to your Holdout. For example, if you wanted an iOS-only Holdout, you could apply a Feature Gate that passes only iOS users.

[Images: Create Holdout → Select Population → Select Gate → Gate Applied]

Holdouts are the gold standard for measuring the cumulative impact of experiments you ship. (Learn more)


5/18/2023

⌛ 90-day Pulse expiration

As teams' Statsig usage has grown, so has the clutter from old experiments. A few months back we launched a suite of tooling to manage the lifecycle of your feature flags, and today we’re rolling out automated clean-up logic for old experiments as well.

Starting this week, Statsig is setting a default Pulse Results compute window of 90 days for all new experiments, after which Pulse Results will stop being computed. Please note this only applies to experiments, not feature gates, holdouts, or any other config types.


You will be able to extend this window at the individual experiment level as you approach the 90-day cap, and your user assignment will not be impacted even if results stop being computed. Read more in our docs.

[Image: Extended Pulse calculation window]

In the coming days, owners of impacted experiments will receive an email notification and have 14 days to extend the Results compute window, if they wish. As always, don’t hesitate to reach out if you have any questions. Our hope is that this both cleans up your Console and saves teams money long-term!


5/4/2023

🧑‍🤝‍🧑 Cloning Metrics

Have you ever set up a relatively complex Custom Metric and then realized you want another similar metric but with a slight tweak? Yep, we have too! To make that process easy, today we’re introducing the ability to clone Custom Metrics.

To clone a Custom Metric, go to the "…" menu on a metric's page, then select “Clone.” You will have the opportunity to name your new metric and add a description and tags, and we will auto-fill all the inputs of the metric definition from the source metric. Customize to your liking and you're good to go!

[Image: Cloning metrics]

4/28/2023

Happy Friday, Statsig Community! To cap off a beautiful week here in Seattle ☀️, we have a number of exciting launch updates to share:

🕒 Fast(er) Pulse

To date, when you launch a new feature roll-out or experiment, you have had to wait 24 hours to start seeing your Pulse results. Today, we’re very excited to shorten that time significantly with the launch of near-real-time Pulse. Now, you will see Pulse results start to flow through within 10-15 minutes of starting your roll-out or experiment.

[Image: Faster Pulse]

A few things to consider:

  • For the first 24 hours, results do not include confidence intervals; early metric lifts are meant to help you ensure that things look roughly as expected and to verify the configuration of your gate/experiment, NOT to make any launch decisions.

  • The Pulse hovercard view will look a bit different; time-series and top-line impact estimates will not be available until the first 24-hour daily lift calculation.

☁️ Environments in Overrides

At some companies, a user may have a different ID in each environment, so you may want to specify which environment an override applies to. To enable this, we’ve added the ability to specify a target environment for Overrides in Experiments. For Gates, you can achieve the same thing by creating an environment-specific rule.
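
Note that environment-specific targeting only applies if your SDK reports which environment it is running in. As a rough sketch, a server SDK can be initialized with an environment tier like this (shown with the Python server SDK; the option and enum names here are from memory, so double-check the SDK docs):

    from statsig import statsig, StatsigUser
    from statsig.statsig_options import StatsigOptions
    from statsig.statsig_environment_tier import StatsigEnvironmentTier

    # Tag every check from this process as "staging" so staging-specific
    # rules and overrides apply (option/enum names assumed; verify in docs).
    options = StatsigOptions(tier=StatsigEnvironmentTier.staging)
    statsig.initialize("server-secret-key", options)

    # This user is evaluated against staging rules and overrides.
    user = StatsigUser(user_id="user-123")
    passes = statsig.check_gate(user, "my_gate")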

[Image: Environments in Overrides]

⌛ Experiment Duration by Target # of Exposures (vs. Strictly Time Duration)

We’re introducing more flexibility into how you can measure & track experiment target duration. Now, you can choose between setting a target # of days or a target # of exposures an experiment needs to hit before a decision can be made.
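
If you're unsure what target number of exposures to pick, a standard two-proportion power calculation is a reasonable starting point. Here is a minimal sketch (the textbook formula, not a Statsig feature) that estimates the exposures needed per group from a baseline conversion rate and a minimum detectable effect:

    from math import ceil
    from statistics import NormalDist

    def exposures_per_group(baseline_rate: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
        """Approximate per-group sample size for a two-sided two-proportion z-test."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_mde)  # treatment rate at the MDE
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_power = NormalDist().inv_cdf(power)
        pooled = (p1 + p2) / 2
        term = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
        return ceil(term ** 2 / (p2 - p1) ** 2)

    # e.g., 5% baseline conversion, detecting a 5% relative lift:
    print(exposures_per_group(0.05, 0.05))  # ≈122,000 exposures per group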

[Image: Experiment duration settings]

To configure a target # of exposures, tap “Advanced Settings” in the Experiment Setup tab, then under “Experiment Measured In” select “Exposures” (vs. “Days”). The progress tracker at the top of your experiment will now show progress toward hitting the target number of exposures.

[Image: Experiment duration progress tracker]

See our docs for more details.

