Frequently Asked Questions

A curated summary of the most common questions from our Slack community, covering implementation, functionality, and building better products generally.
GENERAL

How can we identify and track specific company IDs passing a feature gate during a scheduled rollout in Statsig?

Date of slack thread: 4/22/24

Anonymous: We’re looking to do a scheduled rollout of a feature gate with company ID as the unit ID, so e.g. phase 1 will be x% of company IDs for a feature. Can we easily pull which company IDs are passing for, say, the first phase? I see the View Pulses page, but couldn’t seem to extract this info from it.

Tore (Statsig): What do you need this for? If you increase the rollout %, only new company IDs will be included, the ones that were passing previously will continue to pass. There is no static list of companyIDs that will pass for a rollout - it’s a deterministic hash of the ID. So given an ID, I can tell you if it passes or not. And given a list of IDs, I can tell you which ones pass. But on its own, Statsig does not track a list of which IDs are in which group in real time. The best you could do is the next day, you can export exposures from the console to get the set of all passing IDs.
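The monotone-rollout behavior Tore describes follows from deterministic percentage bucketing, a pattern common to feature-flag systems: each ID hashes to a fixed bucket, and an ID passes when its bucket falls below the rollout threshold. The sketch below is illustrative only — the salt and hash details of Statsig's real implementation are internal, so these results will not match actual gate evaluations.

```python
import hashlib

def bucket(unit_id: str, salt: str) -> int:
    # Deterministic bucket in [0, 10000): hash the salted ID and reduce.
    # (Illustrative only; Statsig's actual salt and hashing are internal.)
    digest = hashlib.sha256(f"{salt}.{unit_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % 10000

def passes(unit_id: str, salt: str, rollout_pct: float) -> bool:
    # Pass when the fixed bucket is under the rollout threshold.
    return bucket(unit_id, salt) < rollout_pct * 100
```

Because each ID's bucket never changes, raising the rollout percentage only lowers the bar: any ID passing at 10% also passes at 25%, which is why previously passing IDs keep passing as the rollout expands.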

Anonymous: Thank you. Is that hash function something you expose, or no? The rationale is that for certain feature rollouts, our customer success teams can plan touch points if they know which customers are getting the feature and when — for example, if a rollout spans a week. We could certainly create manual buckets, but we were wondering if we could still leverage the pass % functionality and get that info.

Tore (Statsig): Are you able to persist it on your side? You could use an evaluation callback to track who has been exposed to which features. Alternatively, if you are okay just knowing which group they would be, you can use any SDK/HTTP API - just make sure you disable exposure logging for that check so it doesn’t record it in any experiment.
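For the customer-success use case, one option is to precompute a per-phase report: run the list of known company IDs through a gate check ahead of each phase and record who would pass. The sketch below uses a hypothetical stand-in hash (names like `phase_report` and the salt are invented here); in practice you would replace `bucket` with an SDK or HTTP API gate check that has exposure logging disabled, as Tore suggests.

```python
import hashlib

def bucket(company_id: str, salt: str) -> int:
    # Hypothetical stand-in for the gate's internal hash; real checks
    # should go through a Statsig SDK with exposure logging disabled.
    d = hashlib.sha256(f"{salt}.{company_id}".encode()).digest()
    return int.from_bytes(d[:8], "big") % 10000

def phase_report(company_ids, salt, phase_pcts):
    # For each rollout phase (as a percentage), list the company IDs
    # that would pass, so teams can plan outreach before each phase.
    return {
        pct: [c for c in company_ids if bucket(c, salt) < pct * 100]
        for pct in phase_pcts
    }

ids = [f"company_{i}" for i in range(100)]
report = phase_report(ids, "my_gate_salt", [10, 25, 50])
```

Since bucketing is deterministic, each phase's passing set is a superset of the previous one, so the report only ever adds companies between phases.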
