Product Updates

We help you ship faster. And we walk the walk.

Managing Feature Gate Lifecycle, Including Cleanup

Bringing you another highly anticipated launch: a new feature set that makes it easy to manage the lifecycle of your feature gates, including cleanup.

You can now use one of these four statuses to represent the different stages of your feature (the status can be updated on each feature gate's page):

  • In Progress: feature in the process of being rolled out and tested

  • Launched: feature has been rolled out to everyone

  • Disabled: feature has been rolled back from everyone

  • Archived: feature is now a permanent part of your codebase (i.e. flag reference has been removed)
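The lifecycle above can be sketched as a small state machine. This is an illustrative model only (the transition rules are our assumptions about a sensible flow, not an API Statsig exposes):

```python
from enum import Enum

class GateStatus(Enum):
    IN_PROGRESS = "In Progress"
    LAUNCHED = "Launched"
    DISABLED = "Disabled"
    ARCHIVED = "Archived"

# Assumed transitions: a gate being rolled out is either launched to everyone
# or rolled back; either end state can flip, and a gate is archived once the
# flag reference has been removed from the codebase. Archived is terminal.
ALLOWED_TRANSITIONS = {
    GateStatus.IN_PROGRESS: {GateStatus.LAUNCHED, GateStatus.DISABLED},
    GateStatus.LAUNCHED: {GateStatus.DISABLED, GateStatus.ARCHIVED},
    GateStatus.DISABLED: {GateStatus.LAUNCHED, GateStatus.ARCHIVED},
    GateStatus.ARCHIVED: set(),
}

def can_transition(current: GateStatus, new: GateStatus) -> bool:
    """Return True if moving from `current` to `new` is a valid lifecycle step."""
    return new in ALLOWED_TRANSITIONS[current]
```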

New filters on the gate catalog provide useful views:

  • 🚀 which gates do you need to make a launch decision for?

  • 🧹 which gates should your team clean up from your codebase? 

  • 🎉 see all your launched features to celebrate the work your team has done!

Check out our docs for full details! We’ll continue to ramp up the rollout throughout the next 1-2 weeks. 

📆 Follow-up features coming soon:

  • Nudges (emails, slack) to clean up feature gates

  • Mark your gates as “permanent” to opt out of the nudges above!

Managing the Lifecycle of Statsig Feature Gates
1/31/2023

Metrics Archival, Deletion and More!

Hi everyone, coming at ya with an exciting launch announcement: we’ve started rolling out Metrics Archival + Deletion!

  • 📦 (Updated) Archiving Metrics: your metric will no longer be computed, but its history will be retained.

  • 🗑 (New) Delete Metrics: your metric (and its history) will be removed from Statsig.

We’ve built a healthy number of safeguards into this process to make these features safe to use (a 24-hour grace period, warnings about gate/experiment/metric dependencies, notifications to impacted entity owners, and more), so you can manage your metrics confidently without fearing unintended consequences. Please visit the docs page to find out more!

We plan to ramp the rollout up to 100% by the end of this week. Please let us know if you have any feedback as you start using these features!

Statsig Metric Archive
Statsig Deletion Undo
12/21/2022

Historical Pulse Results, Following Tags, and Custom Metrics Improvements

Christmas came early here at Statsig, with some exciting features coming down the pike. Wishing everyone a happy holiday from snowy Seattle!

🕰️ Historical Pulse Results

Sometimes it’s necessary to reset or reallocate an experiment, but you don’t want to lose access to previous Pulse results that have accrued up to that point. Now, we’ve made it easy to access historical Pulse results pre-reset via an Experiment’s “History”.

To access an old Pulse snapshot, go to “History” and find the reset event, then tap “View Pulse Snapshot”.

Historical Pulse Results

➕ Following Tags

Following a tag will subscribe you to updates on any Experiments, Gates, and (soon) Metrics with that tag throughout your Project. This is an easy way to stay on top of anything happening in Statsig that’s relevant to your team or key initiatives.

To Follow a tag, go to “Project Settings” → “Tags”.

Following Tags

⚒️ Custom Metrics Improvements

(Coming Soon) We’re excited to start rolling out a set of upgrades to our Custom Metric creation capabilities. These updates include:

  1. Ability to edit Custom Metrics - After you’ve created a Custom Metric, if you need to go back and tweak its setup, you can do so via the “Setup” tab of the metric detail view.

  2. Ability to combine multiple, filtered events - By popular request, we have added support for building Custom Metrics using multiple, filtered events.

  3. Include future ID types - At Custom Metric creation, you can now auto opt-in your new Custom Metric to include all future ID types you add to your Project.
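To make the second upgrade concrete, here is a minimal sketch of what a metric built from multiple, filtered events computes. The event names, metadata fields, and function are hypothetical illustrations, not Statsig's internal representation:

```python
# Illustrative: count events that match any (event-name, metadata-filter) spec,
# i.e. a custom metric combining multiple filtered events into one number.
def custom_metric_count(events, specs):
    """Count events matching any (name, metadata_filter) pair in `specs`."""
    total = 0
    for e in events:
        for name, meta_filter in specs:
            if e["name"] == name and all(
                e.get("metadata", {}).get(k) == v for k, v in meta_filter.items()
            ):
                total += 1
                break  # count each event at most once
    return total

events = [
    {"name": "purchase", "metadata": {"platform": "ios"}},
    {"name": "purchase", "metadata": {"platform": "android"}},
    {"name": "add_to_cart", "metadata": {"platform": "ios"}},
]
# Hypothetical metric: iOS purchases combined with iOS add-to-carts.
specs = [("purchase", {"platform": "ios"}), ("add_to_cart", {"platform": "ios"})]
```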

Custom Metrics Improvements
Custom Metrics Improvements 2
12/13/2022

✅ Daily Ingestion Status (on the homepage)

Now you can check the status of your imports (succeeded, errored, loaded with no data, in progress, etc.) first thing when you log in to Statsig! With the status right on the homepage, you can now see any delays upfront and diagnose issues as early as possible.

Daily Ingestion Status
12/9/2022

Monitoring Metrics & Explore in Feature Gates, Multiple Metric Dimensions, and Improved Review UX!

Happy Friday, Statsig Community! We have a fun set of launch announcements for y'all this week, making every last day count as we come up on the last few weeks of 2022!

Monitoring Metrics & Explore in Feature Gates

Today, we’re excited to add an explicit section into Feature Gates for Monitoring Metrics. This will enable gate creators to call out any metrics they want to monitor as part of a feature rollout, and make it easier for non-creators to know what launch impact to look for.

Note that by default the Core tag will be auto-added to Monitoring Metrics for all new gate creations.

Statsig Monitoring Metrics

Multiple Metric Dimensions (Up to 4)

Historically, we’ve supported sending in a Value and JSON metadata with every logged event, enabling you to break out Pulse results by a metric's Value inline within Pulse.

Today, we’re expanding the number of dimensions you can configure for an event, supporting up to 4 custom dimensions that you can define and send in with events to split your analysis by. To configure custom dimensions for your event, go to the Metrics tab → Events, select the event you want to configure and tap "Setup." Note that you cannot yet configure multiple dimensions for Custom Metrics.
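The kind of breakdown this enables can be sketched as follows. The event shape and dimension names here are hypothetical examples, not Statsig's data model:

```python
from collections import defaultdict

# Illustrative: split logged event values by one custom dimension before
# aggregating, the way Pulse can break out results per dimension value.
def sum_value_by_dimension(events, dimension):
    """Sum event values grouped by the given dimension's value."""
    totals = defaultdict(float)
    for e in events:
        key = e["dimensions"].get(dimension, "(none)")
        totals[key] += e["value"]
    return dict(totals)

events = [
    {"value": 10.0, "dimensions": {"plan": "free", "region": "us"}},
    {"value": 25.0, "dimensions": {"plan": "pro", "region": "us"}},
    {"value": 5.0,  "dimensions": {"plan": "free", "region": "eu"}},
]
```

The same event list can then be analyzed along any of its dimensions, e.g. by `"plan"` or by `"region"`.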

Statsig Multiple Metric Dimensions

Improved Review UX

Reviewing gate and experiment changes is a core part of the rollout process. Today, we’re making reviews even easier by providing a clearer Before/After experience to more easily view changes, as well as introducing a new review mode called “Diff View”.

To view changes in Diff View, simply toggle the mode selector in the upper right-hand corner of the review unit from “Visual View” to “Diff View”. Voila!

Statsig Diff View for Reviews
11/1/2022

New Slack Integration

Hey everyone, we’ve just released a new integration for receiving console notifications on Slack.

This is different from the current Slack integration which just sends audit logs.

To enable it, go to the “Account Settings” → “Notifications” tab.

For more information about the app, see https://statsigcommunity.slack.com/apps/A022AA315JN-statsig.

(FYI, we are working to get the app approved in Slack’s App Directory, but this may take some time.)

10/31/2022

v1 Dashboards, Discussion Tags, and Advanced Search (Halloween Edition 🎃)

Happy Monday (and Happy Halloween) Statsig Community! We've got some tricks AND some treats up our sleeve for you today, with an exciting set of new product updates:

v1 Dashboards

You may have noticed a new “Dashboards” tab in the left-hand nav of your Console! Last week, we quietly started rolling out the v1 of our new Dashboards product. Dashboards give you a flexible canvas to build views of the metrics, experiments, and rollouts your team cares most about.

Dashboards v1

With Dashboards, you can:

  • Create Custom Time Series - Create line or bar charts of your metrics, including dimension breakdowns for events.

  • Add Experiment and Rollout Monitoring - Add any Experiments or feature roll-outs that may impact your metrics inline on your Dashboard.

  • Organize and Label Widgets - Quickly and easily organize your widgets on the drag-and-drop canvas of the Dashboard. Add labels to clearly delineate grouped metrics, as well as caption individual charts to clarify metric definitions.

This is an early v1 foundation for our newest product offering, and something that will continue to evolve. If you have any feedback, we would love to hear it! Don’t hesitate to reach out with feature requests or suggestions for improvements.

Discussion Tags

To make it easier to bring relevant folks into the conversation on your Experiments and Gates, we’ve added the ability to tag team members in Discussions. Tagging team members in a Discussion comment will notify them via email (and soon Slack as well!).

Discussion Tagging

Advanced Search Capabilities

Powerful search capabilities are key to quickly navigating the Statsig Console. Today, we’re excited to announce the new search keywords “started”, “ended”, and “active”, each supporting either a single date or a date range.

Advanced Searches

The attached table shows how to use these. We've also added explicit filter options next to the search bar that let you filter by Status, Health Check Status, ID Type, Creator, and Tag (all of which are also supported directly inline in Search).

Advanced Search Cheat Sheet
10/21/2022

Deeper Amplitude Integration

New Integration: Incoming Events From Amplitude

Hey all, just wanted to announce that we have completed work on the Amplitude incoming integration. This will allow you to configure Amplitude to forward events to Statsig.

Statsig Docs: https://docs.statsig.com/integrations/data-connectors/amplitude

Amplitude Docs: https://www.docs.developers.amplitude.com/data/destinations/statsig/

10/4/2022

New Sequential Testing Capabilities

Sequential Testing

Today, we’re continuing to invest in our Stats Engine with the addition of Sequential Testing capabilities. In Sequential Testing, the p-values for each preliminary analysis window are adjusted to compensate for the increased false positive rate associated with peeking. The goal is to enable early decision-making when there's sufficient evidence, while limiting the risk of false positives.

sequential testing

To enable Sequential Testing on your experiment, we require setting a target duration (which is used to calculate the adjusted p-values). We provide a handy Power Analysis Calculator within Experiment Setup to enable quick and easy estimation of target duration.
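As a rough illustration of what a power analysis computes, here is the standard two-sample sample-size formula and a duration estimate derived from it. This is a textbook calculation under assumed defaults (alpha=0.05 two-sided, 80% power), not necessarily Statsig's exact calculator:

```python
from math import ceil

def required_sample_size(mde, sigma, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size to detect an absolute effect `mde` on a metric
    with standard deviation `sigma` (alpha=0.05 two-sided, 80% power)."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / mde ** 2)

def target_duration_days(n_per_group, users_per_day):
    """Days needed to enroll both groups at an assumed daily traffic rate."""
    return ceil(2 * n_per_group / users_per_day)
```

Halving the minimum detectable effect roughly quadruples the required sample size, which is why the target duration matters so much for sequential adjustments.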

sequential testing 2

Once a target duration is set, simply toggle on Sequential Testing to start seeing adjusted confidence intervals overlaid on the default 95% confidence interval within your Pulse results.
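To see the shape of the adjustment, here is a minimal sketch using an O'Brien-Fleming-style boundary, one common approach to sequential testing (Statsig's exact adjustment may differ). Early peeks require much stronger evidence; the boundary relaxes toward the fixed-horizon 1.96 as the target duration is reached:

```python
from math import sqrt

def adjusted_z_boundary(days_elapsed, target_duration_days, z_fixed=1.96):
    """O'Brien-Fleming-style z boundary at an interim peek: the fixed-horizon
    critical value scaled up by the inverse square root of the information
    fraction (approximated here as fraction of target duration elapsed)."""
    t = days_elapsed / target_duration_days  # information fraction, in (0, 1]
    return z_fixed / sqrt(t)

def adjusted_ci_halfwidth(standard_error, days_elapsed, target_duration_days):
    """Half-width of the widened confidence interval at an interim peek."""
    return adjusted_z_boundary(days_elapsed, target_duration_days) * standard_error
```

Halfway through a 14-day experiment, the boundary is about 2.77 instead of 1.96, so an effect must be correspondingly larger to call an early decision.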

9/30/2022

Experiment Setup Configuration UX, Automated A/A Test Reports, and more!

Happy FRIDAY, Statsig Community! We've made it to the end of the week, which means it's time for another set of product launch announcements!

New Experiment Setup Configuration UX

Today, we’re excited to debut a sleek new configuration UX for experiment groups and parameters. Easily see your layer allocation, any targeting gates you’re using, experiment parameters, groups, and group split percentages in one, clear visual breakdown.

We believe this will make setting up experiments more intuitive for members of your team who are newer to Statsig, as well as give experiment creators and viewers alike an intuitive overview of how the experiment is configured.

New Groups Params

Automated A/A Test Reports

It’s often considered best practice to regularly verify the health of your stats engine and your metrics by running periodic A/A tests. We’ve made running these A/A tests at scale easy by setting up simulated A/A tests that run every day in the background, for every company on the platform. Starting today, you can download the running history of your simulated A/A test performance via the “Tools” menu in your Statsig Console.

We run 10 tests/day, and the download will include your last 30 days of test results. Please note that we only started running these simulations about a week ago, so a download today will only include ~70 sets of simulation results.
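The idea behind a simulated A/A test can be sketched in a few lines: split users drawn from the same distribution into two groups and run a standard two-sample test. With a healthy setup, roughly 5% of such tests come back "significant" at alpha=0.05 by chance alone. This is our own toy illustration, not Statsig's simulation code:

```python
import random
from math import sqrt
from statistics import mean, stdev

def simulate_aa_test(n_per_group=1000, rng=None):
    """Run one A/A test on two samples from the SAME distribution and
    return True if a two-sample z-test (falsely) flags significance."""
    rng = rng or random.Random()
    a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return abs(z) > 1.96  # alpha = 0.05, two-sided

rng = random.Random(42)  # fixed seed so the run is reproducible
false_positives = sum(simulate_aa_test(rng=rng) for _ in range(200))
```

Across 200 simulated tests you'd expect on the order of 10 false positives; a rate far above that suggests something is off in the assignment or stats pipeline.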

AA Test Entry
