Just as critical as a good experiment creation experience is a good experiment viewer experience. To that end, we’re launching the ability to add images to each group in an experiment to better convey the changes between Control and Treatment(s).
For experiment creators, simply tap the image icon next to each experiment group. You can add multiple images to each group to convey the full context of that variant's experience.
To view the images associated with each group, tap the “View Images for Each Test Group” CTA in the upper right-hand corner of the Metric Lifts unit, above the Hypothesis.
Late last week, we launched an additional logstream on the “Metrics Catalog” tab within “Metrics” to provide more visibility and easier debugging for pre-computed metrics ingested via our API or one of our integrations (Snowflake, Redshift, BigQuery, etc.). Note: this additional logstream will only show up if you're ingesting pre-computed metrics.
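If you're new to pre-computed metrics: the idea is that you compute metric values per unit in your own warehouse and send us the finished numbers rather than raw events. Here's a rough sketch of what that can look like in Python; the endpoint, header, and field names below are illustrative placeholders rather than the exact ingestion schema, so lean on the docs for the real contract.

```python
import requests  # assumes the `requests` package is installed

# Illustrative pre-computed metric rows: one metric value per unit per day.
# Field names here are placeholders, NOT the exact ingestion schema.
rows = [
    {
        "unit_id": "user_123",            # the user/unit this value was computed for
        "metric_name": "checkout_revenue",
        "metric_value": 42.50,
        "date": "2022-06-01",             # the day the value covers
    },
]

resp = requests.post(
    "https://api.statsig.example/precomputed_metrics",  # placeholder URL
    headers={"STATSIG-API-KEY": "server-secret-key"},   # placeholder auth header
    json={"rows": rows},
)
# Failed or partially rejected requests like this one are exactly what the new
# logstream is meant to make easier to spot and debug.
resp.raise_for_status()
```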
This is the first part of a multi-step project to improve our pre-computed metrics ingestion experience, from setup through ongoing usage and debugging. Stay tuned for a slew of improvements in the coming weeks… (and if you have feedback on this process or specific pain points, don't hesitate to ping me directly!)
As usage of the Statsig platform grows within teams, we’re seeing more and more first-time experiment creators. With that in mind, we’ve improved the “Setup Checklist” in the “Setup” tab of each experiment. The new checklist lets you test your experiment variants inline using ID-based overrides, and lets you test your experiment allocations as they will appear in Production before even starting your experiment.
Note that the new checklist is entirely optional and can be collapsed by our pro experimenters who have been around the block a few times.
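For anyone wondering what an ID-based override means in practice: you pin a specific user ID to a specific group, then fetch the experiment for that user to confirm each variant behaves as expected. Here's a rough sketch using the Python server SDK; the experiment and parameter names are hypothetical, and the exact import paths and method signatures may differ slightly, so treat this as illustrative and check the SDK docs.

```python
from statsig import statsig
from statsig.statsig_user import StatsigUser

statsig.initialize("server-secret-key")  # your server secret key

# Suppose the Setup Checklist has an ID-based override pinning "qa_user_1" to
# the Treatment group of a hypothetical experiment "new_checkout_flow".
# Fetching the experiment for that user should return Treatment's parameters.
user = StatsigUser("qa_user_1")
experiment = statsig.get_experiment(user, "new_checkout_flow")
print(experiment.get("button_color", "blue"))  # hypothetical parameter + default

statsig.shutdown()
```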
Rounding out the week with two exciting product launches! As always, don't hesitate to reach out here or 1:1 with product feedback, ideas, questions, etc. We love to hear from folks!
Today, we’re introducing the ability to include an experiment hypothesis and primary/secondary metrics at experiment creation, which will surface as an experiment “Scorecard” in your Results tab.
While these fields are optional, the hope is that this feature makes it easier to standardize your experiment design process within the Statsig console, and improves the reading experience for people who didn’t create the experiment, helping them more fully understand its key context.
As part of our bigger investment in a true Experiment “Scorecard”, we have implemented CUPED to automatically reduce variance and bias on all Scorecard metrics. CUPED is a statistical technique, first popularized for online testing by Microsoft in 2013, that leverages pre-experiment data to reduce variance and pre-exposure bias in experiment results. Tactically, CUPED can significantly shrink confidence intervals and p-values, ultimately reducing the sample size and duration required to run an experiment, which means you can run more experiments, faster!
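For the curious, here’s the core idea in a few lines of Python (a simplified sketch, not our production implementation): each unit’s in-experiment metric is adjusted by its de-meaned pre-experiment value, scaled by theta = cov(pre, post) / var(pre). The adjustment leaves the mean unchanged but strips out the variance explained by pre-experiment behavior.

```python
import numpy as np

def cuped_adjust(post: np.ndarray, pre: np.ndarray) -> np.ndarray:
    """CUPED-adjust in-experiment metric values using pre-experiment values.

    post: the metric observed during the experiment, one value per unit
    pre:  the same metric for the same units, measured before exposure
    """
    theta = np.cov(pre, post)[0, 1] / np.var(pre, ddof=1)  # optimal scaling factor
    return post - theta * (pre - pre.mean())               # same mean, lower variance

# Toy demonstration: when pre and post are correlated, variance drops sharply,
# which is what shrinks confidence intervals and required sample sizes.
rng = np.random.default_rng(0)
pre = rng.normal(10.0, 2.0, size=5_000)
post = pre + rng.normal(0.5, 1.0, size=5_000)
print(np.var(post), np.var(cuped_adjust(post, pre)))
```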
CUPED will be applied by default to all Scorecard metrics (both Primary and Secondary); however, you can toggle it on/off directly above your Pulse results in the Scorecard. CUPED will not be applied to non-Scorecard metrics.
To read more about CUPED, check out our data scientist Craig’s awesome blog post here.
Today we are completing the rollout of a Metrics Tab refresh. As always, we love to get feedback & new feature requests from our community, so don't hesitate to reach out on this thread or 1:1!
This refresh was aimed at streamlining the Metrics tab and giving you more flexibility in how you view your Events and Metrics.
Key updates include:
Custom metrics now live within the Metrics Catalog - To create a custom metric, tap the “+Create” button. Once created, the custom metric lives in the Metrics Catalog and is searchable and taggable inline, alongside all your other metrics.
Filtered search - We’ve added filters to the Metrics Catalog and Events tabs to make it easier to drill down to the set of metrics or events you care about most. Filter by Tag, Source (e.g. Statsig SDK vs. metrics ingested via integration), and Type (e.g. event_count, event_dau, funnel, etc.).
Different views - Toggle between a list view for easy scanning and a chart view for understanding trends, in both the Metrics Catalog and Events tabs. The view toggle is in the upper right-hand corner of each tab.
Lineage - Understand the “family tree” of any event or metric with the Lineage unit at the top of all Event Detail and Metric Detail pages.
Funnels - We’ve improved our funnel UX, moving funnel views onto the funnel metric detail page itself (all funnels will also continue to exist in the “Charts” tab). The Lineage unit at the top of the funnel metric detail page indicates which events are included in the funnel.