Have you ever set up a relatively complex Custom Metric and then realized you want another, similar metric with a slight tweak? Yep, we have too! To make that process easy, today we're introducing the ability to clone Custom Metrics.
To clone a Custom Metric, go to the "…" menu on a metric page, then select "Clone." You will have the opportunity to name your new metric, add a description and tags, and we will auto-fill all the inputs of the metric definition from the source metric. Customize to your liking and you're good to go!
Happy Friday, Statsig Community! To cap off a beautiful week here in Seattle, we have a number of exciting launch updates to share:
To date, when you launch a new feature rollout or experiment, you have had to wait 24 hours to start seeing your Pulse results. Today, we're very excited to shorten that time significantly with the launch of more real-time Pulse. Now, you will see Pulse results start to flow through within 10-15 minutes of starting your rollout or experiment.
A few things to consider:
For the first 24 hours, results do not include confidence intervals; early metric lifts are meant to help you ensure that things look roughly as expected and to verify the configuration of your gate/experiment, NOT to make launch decisions
The Pulse hovercard view will look a bit different; time-series and top-line impact estimates will not be available until the first 24-hour daily lift calculation
At some companies, a user may have a different ID in different environments, and hence you may want to specify the environment in which an override of a given ID applies. To enable this, we've added the ability to specify a target environment for Overrides in Experiments. For Gates, you can achieve this by creating an environment-specific rule.
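The idea can be illustrated with a small sketch. This is purely hypothetical resolution logic (the names `overrides` and `resolve_group` are ours, not Statsig's API): an override scoped to an environment only wins when the user is evaluated in that environment.

```python
# Hypothetical sketch of environment-scoped override resolution.
# An override with environment None applies in every environment;
# otherwise it only applies in the named environment.

def resolve_group(overrides, user_id, environment, default="control"):
    """Return the experiment group for a user, honoring overrides
    scoped to a specific environment."""
    for env, uid, group in overrides:
        if uid == user_id and env in (None, environment):
            return group
    return default

overrides = [
    ("staging", "user-42", "test"),  # applies only in staging
    (None, "user-7", "test"),        # applies everywhere
]
```

With this shape, `user-42` lands in "test" when evaluated in staging but falls back to the default in production, which is exactly the per-environment behavior described above.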
(vs. Strictly Time Duration)
We're introducing more flexibility into how you can measure & track experiment target duration. Now, you can choose between setting a target # of days or a target # of exposures an experiment needs to hit before a decision can be made.
To configure a target # of exposures, tap "Advanced Settings" in the Experiment Setup tab, then under "Experiment Measured In" select "Exposures" (vs. "Days"). The progress tracker at the top of your experiment will now show progress against hitting the target number of exposures.
See our docs for more details.
Statsig manages randomization during experiment assignment. In some B2B (or low-scale, high-variance) cases, the law of large numbers doesn't work in your favor. Here it is helpful to manually assign users to test and control to ensure both groups are comparable. Statsig now lets you do this. Learn More
What is Stratified Sampling?
Stratified sampling is a sampling method that ensures specific groups of data (or users) are properly represented. You can think of this like slicing a birthday cake. If sliced recklessly, some people may get too much frosting and others will get too little. But when sliced carefully, each slice is a proper representation of the whole. In Data Science, we commonly trust random sampling. The Law of Large Numbers ensures that a sufficiently-sized sample will be representative of the entire population. However, in some cases, this may not be true, such as:
When the sample size is small
When the samples are heterogeneous
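The cake-slicing intuition above can be made concrete with a short sketch. This is an illustrative implementation of stratified assignment, not Statsig's internal one: each stratum is split evenly between test and control, so both groups end up with the same composition even when strata are small or heterogeneous.

```python
import random
from collections import defaultdict

def stratified_split(users, stratum_of, seed=0):
    """Split users into test/control so that each stratum (e.g. company
    size tier) is represented equally in both groups."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for user in users:
        strata[stratum_of(user)].append(user)

    test, control = [], []
    for members in strata.values():
        rng.shuffle(members)           # randomize within the stratum
        half = len(members) // 2
        test.extend(members[:half])    # half of each stratum to test
        control.extend(members[half:]) # the rest to control
    return test, control
```

For example, with a B2B population of 4 "big" accounts and 16 "small" ones, a purely random split could easily put 3 or 4 big accounts on one side; the stratified split guarantees 2 land in each group.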
We gave our Warehouse Ingestion tab a total makeover so that you can have better visibility into your import status! Some key improvements include:
A simple visual display to track your import progress, with an extended date range
Verify your imported data with ease and confidence using our import volume chart and data samples
Take actions more easily and stay in control of your imports (use the "…" menu), whether you want to trigger a backfill or edit your daily ingestion schedule
We've heard from some folks that they want to explore metrics even outside an experiment's context. We've just started adding capabilities to do this. Now, when you're looking at a metric in the Metrics Catalog you can:
compare values to a prior period to look for anomalies
apply smoothing to understand trends
look at other metrics at the same time to see correlation (or lack thereof)
group by metric dimensions
save this exploration as a Dashboard to revisit/share with others
view current experiments and feature rollouts that impact this metric (also in Insights)
This starts rolling out March 31.
Now, you can specify which audience you want to calculate experimental power for by selecting any existing Feature Gate in the Power Calculator.
To do this, go to the Power Calculator (either under "Advanced Settings" in Experiment creation or via the "Tools & Resources" menu) and select "Population".
This will kick off an async power calculation based on the selected targeting gateâs historical metric value(s), and you will be notified via email and Slack once your power analysis is complete.
This gives you more free real estate to do your work in the console! This will now be the default setting, but you can switch back to manual collapse by using the "…" menu on the nav bar.
These can be found under the Types filter in your Gates catalog. While these gate types surface helpful information about your flags, they will not change anything about the functionality of the flags.
Permanent Gates (set by you) are gates that are expected to stay in your codebase for a long time (e.g. user permissions, killswitches). Statsig won't nudge you to clean up these gates.
You can set gates to be Permanent in the creation flow or by using the "…" menu within each gate page.
Stale Gates (set by Statsig) are good candidates for cleanup (and will be used to list out gates in email/Slack nudges)
On Monday morning, you'll receive your first monthly nudge (email + Slack) to take action on stale gates.
At a high level, stale gates are those that are 0%/100% rolled out or have had 0 checks in the last 30 days (excluding newly created and Permanent gates).
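That heuristic can be sketched as a simple predicate. The field names and the 30-day grace window for new gates below are assumptions drawn from this description, not Statsig's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Gate:
    rollout_pct: float       # 0-100
    checks_last_30d: int     # evaluation count in the last 30 days
    created_at: datetime
    permanent: bool = False

def is_stale(gate, now, new_gate_grace=timedelta(days=30)):
    """A gate is stale if it's fully decided (0% or 100% rollout) or
    unused (no checks in 30 days), unless it's Permanent or new."""
    if gate.permanent or now - gate.created_at < new_gate_grace:
        return False
    fully_decided = gate.rollout_pct in (0, 100)
    unused = gate.checks_last_30d == 0
    return fully_decided or unused
```

For example, a gate launched to 100% months ago would be flagged even if it is still being checked, while a 50% rollout with active traffic would not.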
Please see the permanent and stale gates documentation for more information.
Today we are introducing the ability to configure thresholds for your metrics that will automatically trigger an alert if breached in the context of any feature rollout or experiment.
This is especially useful for your company's suite of core Guardrail Metrics, as you can configure thresholds once and rest assured that you'll be notified whenever a new feature gate or experiment breaches the pre-set threshold. (As a reminder, you can also hook up Statsig to Datadog monitors; read more here!)
Go to the Metric Detail View page of the metric in question
Tap into the "Alerts" tab and tap "+ Create Alert"
Configure the threshold value and the minimum # of participating units required to trigger the alert (even though this second field is optional, we highly recommend configuring it to minimize alert noisiness)
When your metric alert fires, you will be notified via email (and Slack if you've configured the Statsig Slack bot) and directed to the "Diagnostics" tab of the offending gate or experiment.
Please note that these alerts are for metric values in the context of a specific gate or experiment and NOT on the top-line value of a metric. See our docs for more details on Metric Alerts, and don't hesitate to reach out if you have questions or feedback!
Composite Sums: You can now create an aggregation (sum) metric using other metrics from your catalog, whereas previously you were only able to sum up events. You can now do cool things such as adding up revenue across different categories or user counts across different regions.
Pass Rate filter: We heard your feedback and have added a Pass Rate filter in your Gates Catalog, in addition to the existing Roll Out Rate filter, to inform your launch, disable, and cleanup decisions!
What's the difference? Roll Out Rate is strictly based on the gate rules you've set, whereas Pass Rate shows what's happening in practice at the time of evaluation. For example, if you have a set of rules that effectively pass everyone besides a small holdout group, Roll Out Rate would be < 100% but Pass Rate could still be 100% if the holdout is small enough.
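The distinction can be sketched in a few lines. The rule shape and function names here are illustrative assumptions, not Statsig's API: roll-out rate is derived from what the rules are configured to do, while pass rate is measured from actual evaluations.

```python
def roll_out_rate(rules):
    """Configured share of the population the rules pass: each rule's
    pass percentage weighted by the fraction of users it targets."""
    return sum(r["population_share"] * r["pass_pct"] for r in rules)

def pass_rate(checks):
    """Share of actual gate evaluations that returned True."""
    return sum(checks) / len(checks)

rules = [
    {"name": "holdout", "population_share": 0.02, "pass_pct": 0.0},
    {"name": "everyone else", "population_share": 0.98, "pass_pct": 1.0},
]
# Configured roll-out is 98%, but if holdout users rarely trigger gate
# checks, the measured pass rate can sit at or near 100%.
```

This is why the two filters can disagree, and why Pass Rate is often the better signal for cleanup decisions.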
Permanent Gates: Permanent feature gates are expected to live in your codebase for an extended period of time, beyond a feature release, usually for operational or infrastructure control (examples: user permissions, circuit breakers/kill switches).
You can now mark your gates Permanent on Statsig, telling your team (and Statsig) that they should proceed with more caution if attempting to clean up these gates. This will not change anything functionally about the gate itself, but will allow us to surface and label them differently in the Statsig console for your convenience.
Hi everyone! Here are a few launch announcements to kick off our week.
Today, we're introducing a number of updates to Environments on Statsig:
Adding & Customizing Environments
Whereas previously Statsig only supported two pre-defined environments, we are now opening up the ability to create as many environments as you want and to customize their names and relative hierarchy. To customize the environments available in your Statsig Project, go to the "Environments & Keys" tab within Project Settings.
API Keys per Environment
We are also adding support for environment-specific API keys, enabling greater launch security and privacy through more API-key granularity. To configure environment-specific API keys, go to the "Environments & Keys" tab within Project Settings.
Improved Environments UX
You'll also notice that environments look and feel a bit different in the Console. We now show which environments a given rule targets on every rule header, with the ability to toggle rule-sets by target environment via the environment filters at the top of the gate.
Read more about how environments work via our Docs Page.
Today, Pulse makes a relatively naïve assumption that the world is simple: a metric increase = good = green, and a metric decrease = bad = red. However, for many metrics it's actually a good thing when the metric decreases, and similarly bad to see an increase (think page load times or many performance metrics!).
With metric directionality, you can set the desired direction you want to see a metric move, and Pulse will update the color-coding of metric lifts accordingly. To access this feature, go to any metric detail view page and tap the "…" menu to see the "Set Metric Directionality" option.
We hope you enjoy this new functionality! Please get in touch with us with any feedback, and join the Statsig Slack community to receive these updates first!