Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products.
GENERAL

Does Changing a Metric's Name Mid-Experiment Impact Results?

Date of slack thread: 6/17/24

Anonymous: Hello! Does changing a metric’s name mid-experiment impact the results? I tried it once in the past and got a warning in the experiment’s Pulse Results that the metric changed since the experiment started and may have introduced bias. I wouldn’t expect changing just the metric’s name to bias the results though?

Statbot: Changing a metric's name mid-experiment should not inherently impact the results or introduce bias in the data being collected. The warning in the experiment's Pulse Results is a caution that the system detected a change related to the metric, which could affect the interpretation of the results if the change was more than a rename. The documentation notes that Statsig proactively measures the pre-experiment values of all scorecard metrics for all experiment groups and checks whether those values differ significantly, which could cause misinterpretation. If bias is detected, users are notified and a warning is placed on the relevant Pulse results. If you are certain that only the metric's name changed and nothing else (such as its definition or how it is calculated), the warning may be overly cautious. Still, it's important to ensure the metric's underlying data and calculation method remain consistent throughout the experiment to avoid any actual bias. If the warning persists, review the experiment setup and the metric's configuration to confirm that no other changes have occurred.


Timothy Chan (Statsig): Hi Andrew, unfortunately changing the metric name will affect our ability to connect the metric's historical data to the current data, so the data will look disjointed. It will not introduce bias, though.

Timothy Chan (Statsig): Can you share a screenshot of where you see this warning? I’d like to pass the feedback to the team.

Anonymous: This was from a while ago and I don't remember the exact experiment, sorry! I was planning to change some metric names but then remembered seeing the warning a while back, so I decided to ask about it first.

Anonymous: Could you elaborate on this a bit? What do you mean by it will look disjointed? For context, our custom metric names are all over the place so I want to implement a naming convention. But I first want to know how changing a bunch of metric names will impact current experiments.
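To illustrate what "disjointed" means here, the following is a minimal sketch (not Statsig internals) assuming metric time series are keyed by metric name: renaming a metric mid-experiment splits one logical metric into two partial series, so charts and historical comparisons break at the rename, even though no statistical bias is introduced. All names and values below are hypothetical.

```python
from collections import defaultdict

# Hypothetical event log: the metric is renamed on day 3 as part of a
# naming-convention cleanup. Nothing about its definition changes.
events = [
    {"day": 1, "metric": "checkout_rate", "value": 0.21},
    {"day": 2, "metric": "checkout_rate", "value": 0.23},
    {"day": 3, "metric": "purchase_conversion_rate", "value": 0.22},
    {"day": 4, "metric": "purchase_conversion_rate", "value": 0.24},
]

# Group by metric name, as a system keyed on names would.
series = defaultdict(list)
for e in events:
    series[e["metric"]].append((e["day"], e["value"]))

# One logical metric now appears as two disconnected partial series:
for name, points in series.items():
    print(name, points)
```

Under this assumption, a dashboard plotting `checkout_rate` shows data ending at day 2 and a separate `purchase_conversion_rate` series starting at day 3, which is why renaming many custom metrics mid-flight makes running experiments harder to read rather than biased.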
