Statsig Product Updates
8/29/2023

📝 Smart Scorecard Limits

Experimentation best practice dictates that an experiment should have a highly targeted set of metrics that you’re actively trying to move, along with a broader swath of metrics you’re monitoring to ensure you don’t regress.

Today, we’re adapting our Scorecard to reflect this best practice by putting in place some smart limits: a maximum of 10 Primary Metrics and 40 Secondary Metrics. Coming soon, Enterprise customers will be able to specify an even tighter limit on Scorecard metrics via their Org Settings if desired.

One bonus implication of these limits is that we’re auto-expanding tagged metric groups, making it even easier to see (and manage) all the individual metrics being added to your Scorecard when you add a metric tag.

Let us know if you have any feedback or questions on this change!

[Screenshot: experiment Scorecard limits]

8/21/2023

👩‍💻 GitHub Code References

Quickly see where in your source code a feature gate or experiment is referenced to get context for how it is being used. Simply enable GitHub Code References to see this light up!

[Screenshot: GitHub Code References]

8/17/2023

✅ New & Improved Experiment Setup Checklist

Last week, we launched a refreshed version of the Experiment Setup checklist to make it easy for anyone on your team to configure experiments quickly and correctly in Statsig. In the new checklist, you’ll see:

  • Two top-line guides, “Set up your Experiment” & “Test your Experiment” - Skip straight to testing if you’re a pro or get more help with setup if you are newer to running experiments on Statsig.

  • Ability to test experiment setup in a specific environment - Turn your experiment on in lower environments to verify it’s working as expected before going live in Production.

  • Same Overrides controls - Leverage ID or Gate Overrides to test your experiment setup for a specific user or segment of users in any configured environment.

[Screenshot: new Experiment Setup checklist]

We’d love to hear feedback as you and your teams get up and running with the new checklist!


8/4/2023

Better Experiment Defaults

You've told us you want more trustworthy experiments, not just more experiments. We're now making a Hypothesis and Primary Metrics required on all experiments, and Enterprise customers will soon be able to define experiment settings as policy.


7/31/2023

Custom retention reports

(Coming soon to Statsig Analytics)

Retention Analysis helps you drive product adoption by showing you how often users return to your product after taking a specific action. People using Metrics Explorer this week will be opted into the beta early!
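At its core, a retention report answers: of the users who took a start action, what fraction came back N days later? A minimal sketch of that computation (illustrative only; event names and the tuple shape are hypothetical, not Statsig's internal query):

```python
from datetime import date

def day_n_retention(events, start_action, n):
    # events: (user_id, event_name, date) tuples.
    # First date each user performed the start action.
    start = {}
    for user, name, day in events:
        if name == start_action and (user not in start or day < start[user]):
            start[user] = day
    # Users with any event exactly n days after their start date.
    returned = {user for user, _, day in events
                if user in start and (day - start[user]).days == n}
    return len(returned) / len(start) if start else 0.0

events = [
    ("a", "signup", date(2023, 7, 1)), ("a", "open", date(2023, 7, 8)),
    ("b", "signup", date(2023, 7, 1)), ("b", "open", date(2023, 7, 2)),
]
print(day_n_retention(events, "signup", 7))  # 0.5
```

A real report sweeps N from 1 upward to produce the familiar retention curve.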

[Screenshot: Retention Analysis]

7/21/2023

Experiment on the edge with Fastly

We're excited to extend our ability to serve experiments at the edge with our new Fastly integration. Developers can now render web pages without added latency or flicker by putting flag evaluation and experiment assignment as close to their users as possible. We're taking advantage of Fastly Config Stores to light up this feature. See docs for Fastly (or Cloudflare and Vercel).
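Edge assignment only works if every node, without coordination, puts a given user in the same variant. SDKs typically achieve this with deterministic hashing; here is a minimal sketch of the idea (not Statsig's actual assignment algorithm, and the names are illustrative):

```python
import hashlib

def assign_variant(experiment_name, user_id, variants):
    # Hash experiment + user so assignment is stable across calls and
    # uniformly distributed, with no shared state between edge nodes.
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big")
    return variants[bucket % len(variants)]

# Any edge node (Fastly, Cloudflare, Vercel) computes the same answer
# for the same user, so there is no cross-region flicker:
assign_variant("checkout_flow", "user_42", ["control", "test"])
```

Because the hash is pure computation over locally cached config, no origin round trip is needed at request time.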

[Screenshot: Fastly integration]

7/20/2023

Bayesian Analysis for Experiments

We now support Bayesian Analysis for experiments. You can turn this on by selecting the option under Experiment Setup / Advanced Settings and see your results through the Bayesian lens, including statistics like Expectation of Loss and Chance to Beat.

This is a philosophically different framework from standard A/B testing, which is based on frequentist analysis, and there are many nuances to using it. For more information, please see the documentation here.
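For intuition on what these statistics mean, here is a small Monte Carlo sketch for a two-arm conversion experiment, using Beta posteriors under a uniform prior (an illustrative simplification, not Statsig's engine):

```python
import random

def bayes_summary(conv_a, n_a, conv_b, n_b, draws=20000, seed=7):
    # Beta(1 + conversions, 1 + failures) posterior per arm,
    # assuming a uniform prior on the conversion rate.
    rng = random.Random(seed)
    beat = loss = 0.0
    for _ in range(draws):
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if p_b > p_a:
            beat += 1
        loss += max(p_a - p_b, 0)  # regret of shipping B when A was better
    # Chance to Beat, Expectation of Loss (for arm B vs arm A).
    return beat / draws, loss / draws

chance, exp_loss = bayes_summary(100, 1000, 120, 1000)
```

With 10% vs 12% conversion at n=1000 per arm, B's Chance to Beat is high but not certain, and the Expectation of Loss quantifies how much you'd give up if shipping B turned out to be the wrong call.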

[Screenshot: Bayesian Analysis]

Related: Try the Statsig Bayesian A/B testing calculator.


7/12/2023

📊 Bar Charts in Metrics Explorer

We just shipped bar charts in our analytics product! This lets you slice and dice metrics into easy-to-understand visuals that highlight trends, comparisons, and distributions. You can group by or filter using properties like device type, operating system, or country, or even custom properties.
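The query behind a bar chart is essentially "filter to an event, then group-by a property and count." A tiny self-contained sketch (the event and property names are made up for illustration):

```python
from collections import Counter

# Hypothetical raw events with user properties attached.
events = [
    {"name": "purchase", "device": "ios", "country": "US"},
    {"name": "purchase", "device": "android", "country": "US"},
    {"name": "purchase", "device": "ios", "country": "DE"},
    {"name": "page_view", "device": "ios", "country": "US"},
]

def metric_by(events, event_name, prop):
    # Filter to one event, then group by a property; each
    # resulting (group, count) pair becomes one bar in the chart.
    return Counter(e[prop] for e in events if e["name"] == event_name)

print(metric_by(events, "purchase", "device"))  # Counter({'ios': 2, 'android': 1})
```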

[Screenshot: Bar Charts in Metrics Explorer]


7/11/2023

Statsig Warehouse Native

We just launched a Warehouse Native version of Statsig - it runs directly on data in your warehouse. This is optimized for data teams who value quick iteration, governance, and the ability to avoid duplicating core business metrics in multiple systems. Learn more...

[Screenshot: Statsig Cloud vs Warehouse Native]

6/20/2023

🔒 Custom roles for Role Based Access Control

We just shipped an Enterprise feature to customize the roles you use to assign permissions in Statsig. You can now create new roles beyond Admin, Member, and Read-Only, and choose what permissions these roles have. Common use cases include creating a Metrics Admin role or a Data Warehouse Admin role (for Statsig Warehouse Native).

Enterprise customers can find this under Project Settings -> Basic Settings -> Roles & Access Control.
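Conceptually, a custom role is just a named set of permissions checked before each action. A minimal sketch of that model (the role and permission names here are hypothetical, not Statsig's actual permission set):

```python
# Each role maps to the set of permissions it grants.
ROLES = {
    "admin": {"edit_gates", "edit_metrics", "manage_warehouse", "manage_members"},
    "metrics_admin": {"edit_metrics"},
    "warehouse_admin": {"manage_warehouse"},
    "read_only": set(),
}

def can(role, permission):
    # Unknown roles grant nothing.
    return permission in ROLES.get(role, set())

can("metrics_admin", "edit_metrics")  # True
can("metrics_admin", "edit_gates")    # False
```

Scoping a role like `warehouse_admin` narrowly keeps warehouse credentials and settings out of reach of everyone who merely runs experiments.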

[Screenshot: Roles & Access Control settings]
