Product Updates

We help you ship faster. And we walk the walk.
9/26/2022

Broader ID Support in Autotune and Downloadable Events Explorer Results

Happy Friday, Statsig Community! We're ending the week on a high note with a few new product launches for y'all.

Broader ID Support in Autotune

This past week we added support for Stable and Custom IDs in Autotune, broadening the range of use cases you can run an Autotune experiment on. To learn more about leveraging Autotune, check out our docs here.

New ID Types in Autotune
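For intuition, here is a minimal sketch of what a user object carrying stable and custom IDs might look like, and how an Autotune experiment keyed on a custom ID type would pick its unit. The field names and the `companyID` ID type are illustrative assumptions, not Statsig's exact SDK shapes:

```python
# Hypothetical user payload; field names assumed for illustration.
user = {
    "userID": "user-123",
    "customIDs": {
        "stableID": "device-abc",   # device-scoped stable ID
        "companyID": "org-42",      # example custom ID type
    },
}

# An Autotune experiment configured on a given ID type would bucket
# on that ID, falling back to userID if it's absent:
id_type = "companyID"
unit_id = user["customIDs"].get(id_type, user["userID"])
print(unit_id)  # org-42
```

Bucketing on a company-level ID like this means all users in the same organization land in the same Autotune variant.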

Download Events Explorer Results

We’ve made double-clicking on data generated in Statsig even easier by enabling you to download your Events Explorer results in CSV format. Please note that this is only available for the Table and Sample views.

Downloading Events Explorer Results

P.S.- Keep your eyes peeled for something special in the Statsig Console…   

Good morning, Statsig Community! A fun launch update to start off your Thursday: announcing your new Home Tab!

Home Tab

Today, we’re starting to roll out a brand new tab in your Statsig Console, the Home Tab. The Home Tab serves as a launchpad into all the most important things happening in your team’s Statsig project.

Key features include-

  • Velocity Charts- helps teams easily track their experimentation and launch velocity

  • Core Metrics- A preview of the metrics tagged "Core" will show up on your Home Tab. If you haven’t tagged any metrics with "Core" yet (or you want to change which metrics are marked "Core"), you can manage this tag via the Metrics tab.

  • Quick Links- Shortcut links to Statsig resources, ability to invite new team members to your Project, and one-tap creation of a new Experiment, Feature Gate, etc.

  • Feed- Surfaces recent activity, making it easy to keep tabs on what your team is testing and launching

Snowflake, Redshift, BigQuery Data Warehouse Support

We’re thrilled to announce that today we're rolling out native integrations with popular data warehouses (DWs) including Snowflake, Redshift, and BigQuery. Now you can import metrics directly from your existing DW tables and automatically include them in the results of all of your experiments, feature gates, and holdouts.

Ingesting precomputed metrics from DWs has been a common request from customers who have a well-established modern data stack. Our new native DW integrations enable you to immediately start measuring your team’s core metrics and KPIs for every product launch as you scale out product experimentation with Statsig.

You can now simply enter your DW connection string in the Statsig console, map data in your tables to Statsig fields, and start to automatically ingest precomputed metrics into your project to observe metric shifts for every product update.

To get started, navigate to Metrics from the left-hand navigation panel and click on the Ingestion tab to add new metrics. In addition to ingesting metrics daily, you can run an initial backfill to bootstrap your Metrics Catalog and validate that the integration is working as expected. To learn more, visit our documentation for Data Warehouse Ingestion.
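The mapping step described above amounts to telling Statsig which warehouse column holds each required field. A minimal sketch, with assumed column and field names (not Statsig's actual ingestion schema):

```python
# Hypothetical mapping from DW columns to Statsig-side fields;
# both sets of names are illustrative assumptions.
column_mapping = {
    "unit_id": "user_id",        # DW column holding the experiment unit
    "timestamp": "event_date",   # daily grain for ingestion
    "metric_name": "metric",     # which metric the row belongs to
    "metric_value": "value",     # precomputed numeric value
}

warehouse_row = {
    "user_id": "u1",
    "event_date": "2022-09-01",
    "metric": "revenue",
    "value": 12.5,
}

# Translate one DW row into the Statsig-side field names:
statsig_row = {field: warehouse_row[col] for field, col in column_mapping.items()}
print(statsig_row["metric_name"])  # revenue
```

Once the mapping is in place, every row in the mapped table flows in daily under the same translation.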

You can start ingesting precomputed metrics and raw events from Snowflake, Redshift, and BigQuery today, with support for Databricks coming soon. To request access, please hit us up!

8/31/2022

Explore Tab, Search Improvements, and More!

Some launch announcements to spice up your mid-week! As always, don't hesitate to reach out if you have questions or feature requests!

Explore Tab

Today, we’re rolling out two new surfaces within Pulse so you can more easily analyze custom cuts of Pulse via an improved Custom Query interface. The Explore tab enables quick, inline Custom Query explorations on your Pulse results and builds a history of queries authored across the team that anyone can re-use or modify. If a particular Custom Query is useful on an ongoing basis, you can easily schedule it to run automatically every day; scheduled queries live in the Scheduled tab.

Explore is currently live on Experiments only, and is coming soon for Feature Gates.

Search Improvements

You asked, we listened! As teams have scaled their usage of Statsig, we’ve seen the need for increasingly powerful search capabilities. Today, we’re starting to roll out Statsig Search 2.0, which adds the ability to search by Creator and Tag in addition to entity name, as well as a “Recently Searched” history as the default search null state, making it extra easy to get back to your recently viewed entities.

Adding Tags at Experiment & Gate Creation

Today, we’re opening up the ability to add tags to Experiments and Feature Gates at the point of creation, making it even easier to organize Experiments and Gates by team, company goal, etc. You can add existing tags or create new tags directly inline from the Experiment and Gate creation modals.

New Types of Custom Metrics

As more and more teams have started leveraging Custom Metrics, we’ve heard a consistent ask to support more metric types. Today, we’re debuting four new types of Custom Metrics:

  1. Composite Metrics: Ratios of two other already-existing metrics; this option exists within the “Ratio” metric type

  2. Event User (Count Any and Count All): Users that have any or all of a set of (non-filtered) events

  3. Event Count Custom: A count of a set of (non-filtered) events

  4. Event User Max Rollup: Users that have logged a target event at least once

Please note: previews do not yet exist for these new Custom Metric types, but they are coming soon.
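To make the composite (ratio) type concrete, here is an illustrative sketch of how such a metric combines two existing metrics; the metric names and totals are hypothetical, and the real computation happens inside Statsig:

```python
# Sketch of a composite ("Ratio") custom metric: the ratio of two
# already-existing metric totals, e.g. checkout_starts / sessions.
def ratio_metric(numerator_total, denominator_total):
    """Ratio of two existing metrics; undefined when the denominator is empty."""
    if denominator_total == 0:
        return None
    return numerator_total / denominator_total

print(ratio_metric(480, 1200))  # 0.4
```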

8/15/2022

Reviews at Various Levels, Metrics Bulk Management, New Users Tab, and More!

Starting the week off strong with a bevy of launches. As always, feel free to message us with questions/ ideas/ feedback. 

Enable Reviews at XP/ Gate/ Config Level

To date, reviews have been enabled at the Project level, with no ability to set more granular controls for specific configs. This week we’re rolling out the ability to require reviews at the entity level, even if reviews are not required at the Project level. This capability is controlled via the “…” menu and is available for Experiments, Gates, Segments, and Dynamic Configs.

Metrics Bulk Management

To make metrics management within the Metrics tab more streamlined, we’ve added bulk actioning on metrics. Bulk actions include tagging, hiding, or comparing all selected metrics. 

Using Tags in Experiment Scorecard Metrics

In addition to adding individual metrics to your Primary and Secondary Metrics sections within the Experiment Scorecard, you can now add metric tags directly. Adding a tag to the Scorecard will automatically add all metrics in the tag to your Pulse results. 

Exposures in Events Explorer

To continue making debugging easier with Events Explorer, we are adding the ability to attach Experiment/ Gate exposures to events. Exposure annotations are controlled in data settings: to specify which Gates/ Experiments you want events annotated with exposure logs for, go to the “Data Settings” dialog at the top of Events Explorer and select up to 5 gates/ experiments. You will then see pass/ fail/ group name on every event in Events Explorer for those gates/ experiments. Statsig starts logging these exposures with your application events after you update the Data Settings and does not backfill exposures for past events.

New Users Tab UX

The Users tab enables customers to diagnose issues for specific users. Previously, the Users tab listed your application's users as of the previous day. Now, you can query for a user who used your application within the last hour. To query for a user, simply tap the “+ Load User” CTA in the upper left-hand corner. A history of all queried users makes it easy to go back and find previously-queried users.

7/21/2022

Images for Experiment Groups

Just as critical as a good experiment creation experience is a good experiment viewer experience. To that end, we’re launching the ability to add images to each group in an experiment to better convey the changes between Control and Treatment(s).

Adding Group Images

For experiment creators, simply tap on the image icon next to each experiment group. You can add multiple images to each experiment group to convey the full context of that variant experience.

Viewing Group Images

To view the images associated with each group, simply tap on the “View Images for Each Test Group” CTA in the upper right-hand corner of the Metric Lifts unit above the Hypothesis.

7/12/2022

Launch Announcements: Metrics Logstream and Experiment Checklist

Metrics Logstream

Late last week, we launched an additional logstream on the “Metrics Catalog” tab within “Metrics” to provide more visibility and easier debugging for pre-computed metrics being ingested via our API or one of our integrations (Snowflake/ Redshift/ BigQuery, etc.). NOTE: this additional logstream will only show up if you're ingesting pre-computed metrics.

This is the first part of a multi-step project to improve our pre-computed metrics ingestion experience, from setup through to ongoing usage and debugging. Stay tuned for a slew of improvements in the coming weeks… (and if you have feedback on this process/ specific pain points, don't hesitate to ping me directly!) 

Experiment Checklist

As usage of the Statsig platform grows within teams, we’re seeing more and more first-time experiment creators. To this end, we’ve improved our “Setup Checklist” in the “Setup” tab of each experiment. The new checklist includes additional functionality to test your experiment variants inline using ID-based overrides, as well as the ability to test your experiment allocations as they will appear in Production before even starting your experiment.

Note that the new checklist is entirely optional and can be collapsed for our pro experimenters who have been around the block a few times. 

Product Updates: Experiment Scorecard and CUPED

Rounding out the week with two exciting product launches! As always, don't hesitate to reach out here or 1:1 with product feedback, ideas, questions, etc. We love to hear from folks! 

Experiment Scorecard

Today, we’re introducing the ability to include an experiment hypothesis and primary/ secondary metrics at experiment creation, which will manifest in the form of an experiment “Scorecard” in your results tab.

While these fields are optional, the hope is that this feature makes it easier to standardize your experiment design process within the Statsig console, as well as improves the experience for non-experiment-creators reading experiments, enabling them to more fully understand key experiment context. 

CUPED (Controlled-experiment Using Pre-Experiment Data)

As part of our bigger investment in a true Experiment “Scorecard”, we have implemented CUPED to automatically reduce variance and bias on all Scorecard metrics. CUPED is a statistical technique, first popularized for online testing by Microsoft in 2013, that leverages pre-experiment data to reduce variance and pre-exposure bias in experiment results. Tactically, CUPED can significantly shrink confidence intervals and p-values, ultimately reducing the sample size and duration required to run an experiment. This means you can run more experiments, faster!

CUPED will be applied by default to all Scorecard metrics (both Primary and Secondary), but you can toggle it on or off directly above your Pulse results in the Scorecard. CUPED will not be applied to non-Scorecard metrics.

To read more about CUPED, check out our data scientist Craig’s awesome blog post here.

6/23/2022

Metrics Tab Updates: Custom Metrics, Filtered Search, and more!

Today we are completing the rollout of a Metrics Tab refresh. As always, we love to get feedback & new feature requests from our community, so don't hesitate to reach out on this thread or 1:1! 

Metrics Tab Refresh

This refresh was aimed at streamlining the Metrics tab and increasing flexibility of how you view your Events and Metrics.

Key updates include-

  1. Custom metrics now live within the Metrics Catalog - To create a custom metric, tap the “+Create” button. Once the custom metric is created, it will live in the Metrics Catalog and is searchable + taggable inline, alongside all your other metrics.

  2. Filtered search - We’ve added filters to Metrics Catalog and Events to enable more easily drilling down to the set of metrics or events you care about most. Filter by Tag, Source (e.g. Statsig SDK vs. ingested metrics via integration), and Type (e.g. event_count, event_dau, funnel, etc.)

  3. Different views - Toggle between a listview for easy scanning and chart view to better understand trends in both the Metrics Catalog and Events tabs. The view toggle is in the upper right-hand corner of each tab.

  4. Lineage - Understand the “family tree” of any event or metric with the Lineage unit at the top of all Event Detail and Metric Detail pages.

  5. Funnels - We’ve improved our funnel UX, moving funnel views onto the funnel metric detail view itself (all funnels will also exist in the “Charts” tab). The Lineage unit at the top of the funnel metric detail view indicates which events are included in the funnel.
