Happy FRIDAY, Statsig Community! We've made it to the end of the week, which means it's time for another set of product launch announcements!
Today, we’re excited to debut a sleek new configuration UX for experiment groups and parameters. Easily see your layer allocation, any targeting gates you’re using, experiment parameters, groups, and group split percentages in one clear visual breakdown.
We believe this will make setting up experiments more intuitive for members of your team who are newer to Statsig, and give experiment creators and viewers alike a clear overview of how the experiment is configured.
It’s widely considered best practice to regularly verify the health of your stats engine and your metrics by running periodic A/A tests. We’ve made running these A/A tests at scale easy by setting up simulated A/A tests that run every day in the background for every company on the platform. Starting today, you can download the running history of your simulated A/A test performance via the “Tools” menu in your Statsig Console.
We run 10 tests/ day, and the download will include your last 30 days of test results. Please note that we only started running these simulations ~1 week ago, so a download today will only include ~70 sets of simulation results.
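If you want to sanity-check the download programmatically, here's a minimal sketch of the kind of health check A/A results support, assuming a pandas environment and a hypothetical p_value column (the actual CSV schema and file name may differ):

```python
import pandas as pd

# Load the simulated A/A results downloaded from the "Tools" menu.
# The file name and the "p_value" column are illustrative; adjust
# them to match the actual CSV you download.
results = pd.read_csv("simulated_aa_results.csv")

# In a healthy stats engine, roughly 5% of A/A comparisons should come
# back "significant" at alpha = 0.05 purely by chance.
alpha = 0.05
false_positive_rate = (results["p_value"] < alpha).mean()
print(f"Observed false-positive rate: {false_positive_rate:.1%} (expect ~{alpha:.0%})")

# P-values from true A/A tests should also be roughly uniform on [0, 1].
print(results["p_value"].describe())
```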
Happy Friday, Statsig Community! Ending the week on a high note with a few new product launches for y'all!
This past week we added support for Stable & Custom IDs in Autotune, broadening the set of use cases you can run an Autotune experiment on. To learn more about leveraging Autotune, check out our docs here.
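If you're using a server SDK, taking advantage of this generally just means populating the ID on your user object. Here's a rough Python sketch; the custom-ID key ("companyID"), experiment name, and retrieval call are illustrative, so check the Autotune docs linked above for the exact setup:

```python
from statsig import statsig, StatsigUser

statsig.initialize("server-secret-key")  # your server secret key

# A hypothetical user keyed by a custom ID; with this launch, Autotune
# can allocate on a Stable or Custom ID rather than only userID.
user = StatsigUser(
    user_id="user-123",
    custom_ids={"companyID": "company-456"},  # illustrative key/value
)

# Read the Autotune experiment's current allocation like a config.
# Both names below are placeholders.
variant = statsig.get_config(user, "my_autotune_experiment")
print(variant.get("button_color", "blue"))
```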
We’ve made double-clicking on data generated in Statsig even easier by enabling you to download your Events Explorer results in CSV format. Please note that this is only available for Table and Sample views.
P.S.- Keep your eyes peeled for something special in the Statsig Console…
Good morning, Statsig Community! Fun launch update to start off your Thursday- announcing your new Home Tab!
Today, we’re starting to roll out a brand new tab in your Statsig Console, the Home Tab. The Home Tab serves as a launchpad into all the most important things happening in your team’s Statsig project.
Key features include-
Velocity Charts- Help teams easily track their experimentation and launch velocity
Core Metrics- A preview of the metrics tagged "Core" will show up on your Home Tab. If you haven’t tagged any metrics with "Core" yet (or you want to change which metrics are marked "Core"), you can manage this tag via the Metrics tab.
Quick Links- Shortcut links to Statsig resources, ability to invite new team members to your Project, and one-tap creation of a new Experiment, Feature Gate, etc.
Feed- Surfaces recent activity, making it easy to keep tabs on what your team is testing and launching
We’re thrilled to announce that today we're rolling out native integrations with popular data warehouses (DWs) including Snowflake, Redshift, and BigQuery. Now you can import metrics directly from your existing DW tables and automatically include them in the results of all of your experiments, feature gates, and holdouts.
Ingesting precomputed metrics from DWs has been a common request from customers who have a well-established modern data stack. Our new native DW integrations enable you to immediately start measuring your team’s core metrics and KPIs for every product launch as you scale out product experimentation with Statsig.
You can now simply enter your DW connection string in the Statsig console, map data in your tables to Statsig fields, and start to automatically ingest precomputed metrics into your project to observe metric shifts for every product update.
To get started, navigate to Metrics from the left-hand navigation panel and click on the Ingestion tab to add new metrics. In addition to ingesting metrics daily, you can run an initial backfill to bootstrap your Metrics Catalog and validate that the integration is working as expected. To learn more, visit our documentation for Data Warehouse Ingestion.
You can start ingesting precomputed metrics and raw events from Snowflake, Redshift, and BigQuery today, with support for Databricks coming soon. To request access, please hit us up!
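To give a feel for what "precomputed" means here, below is an illustrative sketch of the general shape of a daily precomputed-metric table (one row per unit per metric per day). The column names are hypothetical; the exact schema and field mapping are covered in the Data Warehouse Ingestion docs:

```python
import pandas as pd

# Hypothetical daily precomputed-metric rows living in your DW; during
# setup you map columns like these to Statsig fields in the console.
precomputed = pd.DataFrame(
    {
        "unit_id":      ["user-1", "user-2", "user-1"],
        "date":         ["2022-06-01", "2022-06-01", "2022-06-02"],
        "metric_name":  ["revenue", "revenue", "revenue"],
        "metric_value": [12.50, 3.99, 0.00],
    }
)

# Once mapped, Statsig ingests rows like these daily (plus an initial
# backfill) and includes them in the results of your experiments,
# feature gates, and holdouts.
print(precomputed)
```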
Some launch announcements to spice up your mid-week! As always, don't hesitate to reach out if you have questions or feature requests!
Today, we’re rolling out two new surfaces within Pulse to enable you to more easily analyze custom cuts of Pulse via an improved Custom Query interface. The Explore tab enables quick, inline Custom Query explorations on your Pulse results, building a history of queries authored across the team that anyone leveraging the tab can re-use or modify. If a particular Custom Query is useful to look at on an ongoing basis, you can easily schedule it to run daily; scheduled queries live in the Scheduled tab.
Explore is currently live on Experiments only, and will be coming soon for Feature Gates.
You asked, we listened! As teams have scaled their usage of Statsig, we’ve seen the need for increasingly powerful search capabilities. Today, we’re starting to roll out Statsig Search 2.0, which adds the ability to search by Creator and Tag in addition to entity name, and surfaces a “Recently Searched” history as the default search null state to make getting back to your recently viewed entities extra easy.
Today, we’re opening up the ability to add tags to Experiments and Feature Gates at the point of creation, making it even easier to organize Experiments and Gates by team, company goal, etc. You can add existing tags or create new ones directly inline from the Experiment and Gate creation modals.
As more and more teams have started leveraging Custom Metrics, we’ve heard a consistent ask to support more metric types. Today, we’re debuting four new types of Custom Metrics:
Composite Metrics: Ratios of two other already-existing metrics; this option exists within the “Ratio” metric type
Event User (Count Any and Count All): Users that have logged any or all of a set of (non-filtered) events
Event Count Custom: A count of a set of (non-filtered) events
Event User Max Rollup: Users that have logged a target event at least once
Please note- previews do not yet exist for these new Custom Metric types, but they will be coming soon. For a rough illustration of how these metric types map onto raw events, see the sketch below.
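Here's a small hypothetical sketch of how each type could be computed from a raw event log (illustrative only; the column and event names are made up, and this is not how Statsig computes them internally):

```python
import pandas as pd

# A tiny hypothetical event log.
events = pd.DataFrame(
    {
        "user_id": ["a", "a", "b", "c", "c"],
        "event":   ["signup", "purchase", "signup", "purchase", "purchase"],
    }
)
target = {"signup", "purchase"}          # the (non-filtered) event set
per_user = events.groupby("user_id")["event"].agg(set)

# Event User (Count Any): users who logged ANY of the target events.
count_any = per_user.apply(lambda s: bool(s & target)).sum()

# Event User (Count All): users who logged ALL of the target events.
count_all = per_user.apply(lambda s: target <= s).sum()

# Event Count Custom: total count of the target events.
event_count = events["event"].isin(target).sum()

# Event User Max Rollup: users who logged a target event at least once.
max_rollup = events.loc[events["event"] == "purchase", "user_id"].nunique()

# Composite: a ratio of two already-existing metrics,
# e.g. total purchases per purchasing user.
purchases = (events["event"] == "purchase").sum()
composite = purchases / max(max_rollup, 1)

print(count_any, count_all, event_count, max_rollup, composite)
```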
Starting the week off strong with a bevy of launches. As always, feel free to message us with questions/ ideas/ feedback.
To date, reviews have been enabled at the Project level, with no ability to set more granular controls for specific configs. This week we’re rolling out the ability to require reviews at the entity level, even if reviews are not required at the Project level. This capability is controlled via the “…” menu and is available for Experiments, Gates, Segments, and Dynamic Configs.
To make metrics management within the Metrics tab more streamlined, we’ve added bulk actioning on metrics. Bulk actions include tagging, hiding, or comparing all selected metrics.
In addition to adding individual metrics to your Primary and Secondary Metrics sections within the Experiment Scorecard, you can now add metric tags directly. Adding a tag to the Scorecard will automatically add all metrics in the tag to your Pulse results.
To continue to make debugging easier using Events Explorer, we’re adding the ability to attach Experiment/ Gate exposures to events. Exposure annotations are controlled in data settings: to specify which Gates/ Experiments you want events annotated with exposure logs for, go to the “Data Settings” dialog at the top of Events Explorer and select up to 5 gates/ experiments. You’ll then see pass/ fail/ group name on every event in Events Explorer for those gates/ experiments. Statsig starts logging these exposures with your application events after you update the Data Settings and doesn’t backfill exposures for past events.
The Users tab enables customers to diagnose issues for specific users. Previously, the Users tab listed your application's users as of the previous day. Now, you can query for a user who used your application within the last hour. To query for a user, simply tap on the “+ Load User” CTA in the upper left-hand corner. A history of all queried users makes it easy to go back and find users you’ve previously looked up.
Just as critical as a good experiment creation experience is a good experiment viewer experience. To that end, we’re launching the ability to add images to each group in an experiment to better convey the changes between Control and Treatment(s).
If you’re an experiment creator, simply tap on the image icon next to each experiment group. You can add multiple images to each group to convey the full context of that variant experience.
To view the images associated with each group, simply tap on the “View Images for Each Test Group” CTA in the upper right-hand corner of the Metric Lifts unit above the Hypothesis.
Late last week, we launched an additional logstream on the “Metrics Catalog” tab within “Metrics” to provide more visibility and easier debugging for pre-computed metrics being ingested via our API or one of our integrations (Snowflake/ Redshift/ BigQuery, etc.). NOTE- this additional logstream will only show up if you're ingesting pre-computed metrics.
This is the first part of a multi-step project to improve our pre-computed metrics ingestion experience, from setup through to ongoing usage and debugging. Stay tuned for a slew of improvements in the coming weeks… (and if you have feedback on this process/ specific pain points, don't hesitate to ping me directly!)
As usage of the Statsig platform grows within teams, we’re seeing more and more first-time experiment creators. To this end, we’ve improved our “Setup Checklist” in the “Setup” tab of each experiment. The new checklist includes additional functionality to test your experiment variants inline using ID-based overrides, as well as the ability to test your experiment allocations as they will appear in Production before even starting your experiment.
Note that the new checklist is entirely optional and can be collapsed for our pro experimenters who have been around the block a few times.
Rounding out the week with two exciting product launches! As always, don't hesitate to reach out here or 1:1 with product feedback, ideas, questions, etc. We love to hear from folks!
Today, we’re introducing the ability to include an experiment hypothesis and primary/ secondary metrics at experiment creation, which will manifest in the form of an experiment “Scorecard” in your results tab.
While these fields are optional, the hope is that this feature makes it easier to standardize your experiment design process within the Statsig console, and improves the experience for non-creators reading experiments, enabling them to more fully understand key experiment context.
As part of our bigger investment in a true Experiment “Scorecard”, we have implemented CUPED to automatically reduce variance and bias on all Scorecard metrics. CUPED is a statistical technique first popularized for online testing by Microsoft in 2013 that leverages pre-experimental data to reduce variance and pre-exposure bias in experiment results. Tactically, CUPED can significantly shrink confidence intervals and p-values, ultimately reducing the sample size and duration required to run an experiment. Which means you can run more experiments, faster!
CUPED will be applied by default to all Scorecard metrics (both Primary and Secondary); however, you can toggle it on/ off directly above your Pulse results in the Scorecard. CUPED will not be applied to non-Scorecard metrics.
To read more about CUPED, check out our data scientist Craig’s awesome blog post here.
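If you want to see the core idea in code, here's a minimal numpy sketch of the CUPED adjustment on simulated data (illustrative only, not the production implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated metric: pre-exposure values x and in-experiment values y,
# correlated because users tend to behave consistently over time.
x = rng.normal(100, 20, size=10_000)           # pre-experiment covariate
y = 0.8 * x + rng.normal(0, 10, size=10_000)   # in-experiment metric

# CUPED: remove the component of y explained by x.
theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
y_cuped = y - theta * (x - x.mean())

# Same mean (no bias introduced), much lower variance, which is what
# shrinks confidence intervals and required sample sizes.
print(f"mean:     {y.mean():.2f} -> {y_cuped.mean():.2f}")
print(f"variance: {y.var():.1f} -> {y_cuped.var():.1f}")
```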
Today we are completing the rollout of a Metrics Tab refresh. As always, we love to get feedback & new feature requests from our community, so don't hesitate to reach out on this thread or 1:1!
This refresh was aimed at streamlining the Metrics tab and increasing the flexibility of how you view your Events and Metrics.
Key updates include-
Custom metrics now live within the Metrics Catalog - To create a custom metric, tap the “+Create” button. Once the custom metric is created, it will live in the Metrics Catalog and is searchable + taggable inline, alongside all your other metrics.
Filtered search - We’ve added filters to the Metrics Catalog and Events tabs to make it easier to drill down to the set of metrics or events you care about most. Filter by Tag, Source (e.g. Statsig SDK vs. ingested metrics via integration), and Type (e.g. event_count, event_dau, funnel, etc.)
Different views - Toggle between a list view for easy scanning and a chart view to better understand trends in both the Metrics Catalog and Events tabs. The view toggle is in the upper right-hand corner of each tab.
Lineage - Understand the “family tree” of any event or metric with the Lineage unit at the top of all Event Detail and Metric Detail pages.
Funnels - We’ve improved our funnel UX, moving funnel views onto the funnel metric detail view itself (all funnels will also exist in the “Charts” tab). The Lineage unit at the top of the funnel metric detail view indicates which events are included in the funnel.