This gives you more free real estate to do your work in the console! This will now be the default setting, but you can switch back to manual collapse using the “…” menu on the nav bar.
This can be found under the Types filter in your Gates Catalog. While these types convey helpful information about your flags, they don’t change anything about the flags’ functionality.
Permanent Gates (set by you) are gates that are expected to stay in your codebase for a long time (e.g. user permissions, killswitches). Statsig won’t nudge you to clean up these gates.
You can set gates to be Permanent in the creation flow or by using the “…” menu within each gate page.
Stale Gates (set by Statsig) are good candidates to be cleaned up, and will be surfaced in email/Slack nudges.
On Monday morning, you’ll receive your first monthly nudge (email + Slack) to take action on stale gates.
At a high level, stale gates are those rolled out to 0% or 100%, or with 0 checks in the last 30 days (newly created and Permanent gates are excluded).
Please see the permanent and stale gates documentation for more information.
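The stale-gate criteria above can be sketched as a small predicate. This is an illustrative model only, not Statsig’s actual implementation; all names and the 30-day “newly created” window applied to `created_at` are assumptions drawn from the description.

```python
from datetime import datetime, timedelta

def is_stale(rollout_pct, checks_last_30d, created_at, is_permanent, now=None):
    """Illustrative stale-gate check based on the criteria described above.

    A gate is considered stale if it is fully off (0%) or fully rolled
    out (100%), or has had zero checks in the last 30 days -- unless it
    is newly created (under 30 days old) or marked Permanent.
    """
    now = now or datetime.utcnow()
    if is_permanent or (now - created_at) < timedelta(days=30):
        return False
    return rollout_pct in (0, 100) or checks_last_30d == 0
```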
Today we are introducing the ability to configure thresholds for your metrics that will automatically trigger an alert if breached in the context of any feature rollout or experiment.
This is especially useful for your company’s suite of core Guardrail Metrics, as you can configure thresholds once and rest assured that you’ll be notified whenever a new feature gate or experiment breaches the pre-set threshold. (As a reminder, you can also hook Statsig up to Datadog monitors; read more here!)
Go to the Metric Detail View page of the metric in question
Tap into the “Alerts” tab and tap “+ Create Alert”
Configure the threshold value and the minimum # of participating units required to trigger the alert (this second field is optional, but we highly recommend setting it to reduce alert noise)
When your metric alert fires, you will be notified via email (and Slack if you’ve configured the Statsig Slack bot) and directed to the “Diagnostics” tab of the offending gate or experiment.
Please note that these alerts are for metric values in the context of a specific gate or experiment and NOT on the top-line value of a metric. See our docs for more details on Metric Alerts, and don't hesitate to reach out if you have questions or feedback!
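The alert logic described above can be sketched as a simple predicate. This is a conceptual sketch, not Statsig’s implementation; the function name, the `direction` parameter, and the default values are assumptions for illustration.

```python
def should_fire_alert(observed_value, threshold, n_units, min_units=0, direction="above"):
    """Illustrative sketch of the metric-alert logic described above.

    Fires only when the metric breaches the configured threshold AND the
    number of participating units meets the (optional) minimum -- that
    minimum-units guard is what keeps small-sample noise from paging you.
    """
    if n_units < min_units:
        return False
    if direction == "above":
        return observed_value > threshold
    return observed_value < threshold
```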
➕ Composite Sums: You can now create an aggregation (sum) metric using other metrics from your catalog, whereas previously, you were only able to sum up events. You can now do cool things such as: adding up revenue across different categories or user counts across different regions.
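Conceptually, a composite sum is just the sum of its component metrics’ values. A minimal sketch (metric names here are made-up examples, not entries from your actual catalog):

```python
def composite_sum(component_values):
    """Illustrative composite (sum) metric: add up other metrics' values.

    e.g. total_revenue = revenue_us + revenue_eu + revenue_apac,
    where each component is itself a metric from your catalog.
    """
    return sum(component_values.values())
```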
☑️ Pass Rate filter: We heard your feedback and have added a Pass Rate filter in your Gates Catalog, in addition to the existing Roll Out Rate filter, to inform your launch, disable, and cleanup decisions!
What’s the difference? Roll Out Rate is strictly based on the gate rules you’ve set, whereas Pass Rate shows what’s happening in practice at the time of evaluation. For example, if you have a set of rules that effectively pass everyone besides a small holdout group, Roll Out Rate would be < 100% but Pass Rate could still be 100% if the holdout is small enough.
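The holdout example can be made concrete with a tiny sketch of the observed Pass Rate (names are illustrative, not Statsig internals):

```python
def pass_rate(evaluations):
    """Observed Pass Rate: share of actual gate checks that returned True."""
    return 100.0 * sum(evaluations) / len(evaluations)

# A 1-user holdout among 10,000 checks: the configured Roll Out Rate is
# below 100%, but the observed pass rate (99.99%) still rounds to 100.
evals = [True] * 9999 + [False]
rounded = round(pass_rate(evals))  # 100
```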
♾️ Permanent Gates: Permanent feature gates are expected to live in your codebase for an extended period of time, beyond a feature release, usually for operations or infrastructure control (examples: user permissions, circuit breakers/kill switches).
You can now mark your gates Permanent on Statsig, telling your team (and Statsig) that they should proceed with more caution if attempting to clean up these gates. This will not change anything functionally about the gate itself, but will allow us to surface and label them differently in the Statsig console for your convenience.
Hi everyone! Here are a few launch announcements to kick off our week.
Today, we’re introducing a number of updates to Environments on Statsig -
Adding & Customizing Environments
Whereas previously Statsig only supported two pre-defined environments, we are now opening up the ability to create as many environments as you want and to customize the name and relative hierarchy of those environments. To customize the environments available in your Statsig Project, go to the “Environments & Keys” tab within Project Settings.
API Keys per Environment
We are also adding support for environment-specific API keys, enabling greater launch security and privacy through more API key granularity. To configure environment-specific API keys, go to the “Environments & Keys” tab within Project Settings.
Improved Environments UX
You’ll also notice that environments look and feel a bit different in the Console. We now expose which environments a given rule is targeted at on every rule header, with the ability to toggle rule-sets by target environment via the environment filters at the top of the gate.
Read more about how environments work via our Docs Page.
Today, Pulse makes a relatively naïve assumption that the world is simple: a metric increase = good = green, and a metric decrease = bad = red. However, for many metrics it’s actually a good thing when the metric decreases, and similarly bad to see an increase (think page load times or many other performance metrics!)
With metric directionality, you can set the desired direction you want to see a metric move and Pulse will update the color-coding in metric lifts accordingly. To access this feature, go to any metric detail view page, and tap the “…” menu to see the “Set Metric Directionality” option.
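The color-coding behavior can be sketched as a small function. This is an illustrative model of the idea, not Statsig’s actual code; the parameter name is an assumption.

```python
def lift_color(delta_pct, increase_is_good=True):
    """Color a Pulse metric lift given the metric's desired direction.

    By default an increase shows green; for metrics like page-load time,
    set increase_is_good=False so decreases show green instead.
    """
    if delta_pct == 0:
        return "neutral"
    moved_in_desired_direction = (delta_pct > 0) == increase_is_good
    return "green" if moved_in_desired_direction else "red"
```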
We hope you enjoy this new functionality! Please get in touch with us with any feedback, and join the Statsig Slack community to receive these updates first!
In the past, we integrated Statsig with Datadog so that you could send events from Statsig to Datadog and use Datadog’s full suite of services to monitor those events. However, this was only one-directional: acting on any real-time observations in Datadog required manual intervention back in Statsig.
🎉 Introducing the Statsig and Datadog Trigger Integration
Now we’ve made it possible to leverage Datadog’s real-time monitoring to automatically toggle a feature gate on or off in Statsig.
Configuring Datadog and Statsig to monitor events and toggle feature gates is simple:
Create a “trigger” on Statsig
Create a Datadog webhook using the trigger URL
Configure your monitor to notify that webhook
Example
Imagine you are rolling out a new feature behind a Statsig feature gate. You can set up a Datadog monitor to detect anomalies in your operational metrics in correlation to changes to this gate.
Instead of having it send an alert to you or your team, you can create a trigger to disable this gate automatically. Now if the monitor fires an alert, by the time you are notified, you can rest assured the gate has already been turned off.
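The three steps above can be sketched as follows. Everything here is a placeholder: the trigger URL is hypothetical (in practice you copy it from the trigger you create in the Statsig console), and the webhook payload shape is illustrative rather than a documented schema.

```python
import json

# Hypothetical trigger URL -- copy the real one from the trigger you
# created in the Statsig console (step 1).
TRIGGER_URL = "https://example.statsig.com/triggers/disable-my-gate"

def datadog_webhook(name, url):
    """Step 2: a Datadog webhook definition pointing at the trigger URL.

    The payload shape is illustrative; Datadog substitutes $-variables
    such as $ALERT_TITLE when the monitor notifies the webhook.
    """
    return {
        "name": name,
        "url": url,
        "payload": json.dumps({"alert": "$ALERT_TITLE"}),
    }

webhook = datadog_webhook("statsig-disable-gate", TRIGGER_URL)
# Step 3: reference @webhook-statsig-disable-gate in the monitor's
# notification message so an alert automatically hits the trigger.
```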
With this new integration, Statsig and Datadog customers can take full advantage of the features of both services, with added benefits including:
🔬 Better monitoring of feature rollouts
🚨 Faster response time to metric regression
☮️ More reliable services overall, and peace of mind when launching new features
We hope you enjoy this new functionality! Please get in touch with us with any feedback, and don’t hesitate to join the Statsig Slack community!
Bringing you another highly anticipated launch - a new feature set that makes it easy to manage the lifecycle of your feature gates, including cleanup:
You can now use one of these 4 statuses to represent the different stages of your feature (updatable on each individual feature gate’s page):
In Progress: feature in the process of being rolled out and tested
Launched: feature has been rolled out to everyone
Disabled: feature has been rolled back from everyone
Archived: feature is now a permanent part of your codebase (i.e. flag reference has been removed)
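The four lifecycle statuses above can be modeled as a simple enum. This is an illustrative sketch of the lifecycle, not Statsig’s schema; the `needs_cleanup` helper is an assumption based on the cleanup framing above.

```python
from enum import Enum

class GateStatus(Enum):
    """Illustrative model of the four feature-gate lifecycle statuses."""
    IN_PROGRESS = "In Progress"  # being rolled out and tested
    LAUNCHED = "Launched"        # rolled out to everyone
    DISABLED = "Disabled"        # rolled back from everyone
    ARCHIVED = "Archived"        # flag reference removed from the codebase

def needs_cleanup(status):
    """Launched/Disabled gates have a decision made -- time to remove the flag."""
    return status in (GateStatus.LAUNCHED, GateStatus.DISABLED)
```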
New filters on the Gates Catalog to provide you useful views -
🚀 which gates do you need to make a launch decision for?
🧹 which gates should your team clean up from your codebase?
🎉 see all your launched features to celebrate the work your team has done!
Check out our docs for full details! We’ll continue to ramp up the rollout throughout the next 1-2 weeks.
📆 Follow-up features coming soon -
Nudges (emails, slack) to clean up feature gates
Mark your gates “Permanent” to opt out of the nudges above!
Hi everyone, coming at ya with an exciting launch announcement: we’ve started rolling out Metrics Archival + Deletion!
📦 (Updated) Archiving Metrics: your metric will no longer be computed, but its history will be retained.
🗑 (New) Delete Metrics: your metric (and its history) will be removed from Statsig.
We’ve provided a healthy amount of checks in this process to make these features safe to use (e.g. a 24-hour grace period, warnings about gate/experiment/metric dependencies, and notifications to impacted entity owners), so you can manage your metrics confidently without fearing unintended consequences. Please visit the docs page to find out more!
Our plan is to ramp the rollout up to 100% by the end of this week. Please let us know if you have any feedback as you start using these features!
Christmas came early here at Statsig, with some exciting features coming down the pike. Wishing everyone a happy holiday from snowy Seattle!
Sometimes it’s necessary to reset or reallocate an experiment, but you don’t want to lose access to previous Pulse results that have accrued up to that point. Now, we’ve made it easy to access historical Pulse results pre-reset via an Experiment’s “History”.
To access an old Pulse snapshot, go to “History” and find the reset event, then tap “View Pulse Snapshot”.
Following a tag will subscribe you to updates on any Experiments, Gates, and (soon) Metrics with that tag throughout your Project. This is an easy way to stay on top of anything happening in Statsig that’s relevant to your team or key initiatives.
To Follow a tag, go to “Project Settings” → “Tags”.
(Coming Soon) We’re excited to start rolling out a set of upgrades to our Custom Metric creation capabilities. These updates include -
Ability to edit Custom Metrics - Now, after you’ve created a Custom Metric, if you need to go back and tweak its setup, you can do so via the “Setup” tab of the metric detail view.
Ability to combine multiple, filtered events - By popular request, we have added support for building Custom Metrics using multiple, filtered events.
Include future ID types - At Custom Metric creation, you can now auto opt-in your new Custom Metric to include all future ID types you add to your Project.
Now you can check the status of your imports (succeeded, errored, loaded with no data, in progress, etc.) first thing when you log in to Statsig! With the status right on the homepage, you can now see any delays upfront and diagnose issues as early as possible.
Happy Friday, Statsig Community! We have a fun set of launch announcements for y'all this week… making every last day count as we come up on the final few weeks of 2022!
Today, we’re excited to add an explicit section into Feature Gates for Monitoring Metrics. This will enable gate creators to call out any metrics they want to monitor as part of a feature rollout, and make it easier for non-creators to know what launch impact to look for.
Note that by default the Core tag will be auto-added to Monitoring Metrics for all new gate creations.
Historically, we’ve supported sending in a Value and JSON metadata with every logged event, enabling you to break out Pulse results by a metric's Value inline within Pulse.
Today, we’re expanding the number of dimensions you can configure for an event, supporting up to 4 custom dimensions that you can define and send in with events to split your analysis by. To configure custom dimensions for your event, go to the Metrics tab → Events, select the event you want to configure and tap "Setup." Note that you cannot yet configure multiple dimensions for Custom Metrics.
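The “split your analysis by” idea amounts to grouping event values by one of the configured dimensions. A minimal sketch, assuming a made-up event record shape (the `page_view`/`browser` names are illustrative, not a real schema):

```python
from collections import defaultdict

def split_by_dimension(events, dim):
    """Group event values by one custom dimension to break out analysis."""
    groups = defaultdict(list)
    for event in events:
        key = event.get("dimensions", {}).get(dim, "(none)")
        groups[key].append(event["value"])
    return dict(groups)

# Hypothetical logged events carrying a "browser" custom dimension.
events = [
    {"name": "page_view", "value": 1, "dimensions": {"browser": "chrome"}},
    {"name": "page_view", "value": 1, "dimensions": {"browser": "safari"}},
    {"name": "page_view", "value": 1, "dimensions": {"browser": "chrome"}},
]
```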
Reviewing gate and experiment changes is a core part of the rollout process. Today, we’re making reviews even easier by providing a clearer Before/After experience for viewing changes, as well as introducing a new review mode called “Diff View”.
To view changes in Diff View, simply toggle the mode selector in the upper right-hand corner of the review unit from “Visual View” to “Diff View”. Voila!