Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products generally.
GENERAL

How can I handle uninitialized events in Statsig when using StatsigProvider in a React app and understand their impact on event metering?

Date of Slack thread: 5/8/24

Anonymous: Hello! I've been testing Statsig with the React StatsigProvider for an upcoming A/B testing feature, and I have two questions about uninitialized events:

  1. I wrapped my app using the StatsigProvider, like this:
<StatsigProvider
    sdkKey={statsigKey}
    waitForInitialization={true}
    user={abTestingUser}
>
    {renderContent()}
</StatsigProvider>

What's happening is that I get one Uninitialized event for every view event in the experiment. Why is that? I thought waitForInitialization={true} was meant to prevent exactly this. (See the sketch after this message.)

  2. Second: the StatsigProvider wraps the rest of the app, and I had an experiment running but decided to stop it, so the experiment decision is now Abandon. If I go to the experiment in the console, I still see only Uninitialized events coming in. Do these count as events against our subscription? If so, how can I avoid initializing the SDK when no experiments are running, since initializing would trigger unwanted events?
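For the first question, a quick way to verify that waitForInitialization is actually gating the render is the provider's initializingComponent prop. A minimal sketch, assuming the legacy statsig-react SDK (LoadingFallback is a placeholder for your own component, not from the thread):

<StatsigProvider
    sdkKey={statsigKey}
    user={abTestingUser}
    waitForInitialization={true}
    // Rendered in place of children until the SDK finishes initializing,
    // which makes it easy to confirm renderContent() never runs early.
    initializingComponent={<LoadingFallback />}
>
    {renderContent()}
</StatsigProvider>

If children really never render before initialization completes, Uninitialized exposures must come from somewhere else, such as initialize resolving on a timeout (see below).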

Vijaye (Statsig): The bot is right. I’m including some SDK team members to help with #1. Could you post code snippets on how you are getting experiment variants?

Anonymous: Yea well the experiment variants are pretty straightforward. Inside a component rendered in the scope of the provider, I do:

const { config: zeroStateExperiment, isLoading: isExperimentLoading } =
    useExperiment(AbTestingExperiments.BalanceZeroStateVariations, false);

// Read the experiment's parameters, falling back to empty strings.
const zeroStateVariant = zeroStateExperiment.get(ZeroStateVariationsParams.BalanceVariant, '');
const zeroStateHtmlUrl = zeroStateExperiment.get(ZeroStateVariationsParams.BalanceHtmlUrl, '');

And regarding the bot's answer, how can I do that? It said: "To prevent initializing the SDK unnecessarily, you could conditionally wrap your app with the StatsigProvider only when there are active experiments or feature gates that need to be evaluated. This would require some logic in your app to determine when the SDK should be initialized. If you need further assistance with this, please share the link to the specific gates or experiments, and a Statsig team member will be able to help you further." I don't want to change the code each time I do or don't have active experiments; ideally I'd like to get that state from Statsig and then conditionally render the provider.

Vijaye (Statsig): That part about initializing the SDK conditionally isn’t right. I don’t know where the bot got that idea from. Your code looks right. Could it be that initialize is timing out? Regardless of the experiment state, you shouldn’t be seeing “uninitialized”.
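One way to test the timeout theory: statsig-js accepts initTimeoutMs and an initCompletionCallback in its options, which the provider passes through. A sketch, assuming the legacy SDKs; the 10-second timeout is arbitrary, and the exact callback signature may vary by version:

<StatsigProvider
    sdkKey={statsigKey}
    user={abTestingUser}
    waitForInitialization={true}
    options={{
        // Allow more time than the default before initialize() gives up
        // and resolves with uninitialized values.
        initTimeoutMs: 10000,
        // Record how long initialization took and whether it succeeded.
        initCompletionCallback: (initDurationMs, success, message) => {
            console.log('Statsig init', { initDurationMs, success, message });
        },
    }}
>
    {renderContent()}
</StatsigProvider>

If the first fetch in the deployed environment regularly exceeds the timeout, initialize resolves anyway, the first checks evaluate with reason Uninitialized, and the re-check after the fetch completes logs the expected event, which would match the behavior described below.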

Anonymous: Locally I don't see any uninitialized events, but when deployed I get the Uninitialized event, then immediately the expected event, and that's what shows up in the experiment.

Tore (Statsig): To your second question: checks to non-active experiments do not count towards metered events for your project: “Experiment checks that result in no allocation (e.g., the experiment hasn’t started, or has finished) or Feature Flags that have been disabled (fully shipped or abandoned with no rule evaluation) do not generate Metered Events.” (See the Pricing page.)

Tore (Statsig): To the first question - is your user object static, or is it changing?
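Context for that question: in React, building the user object inline recreates it on every render, and the provider can treat each new object as a user change and re-initialize mid-session, producing extra uninitialized checks. A sketch of keeping the object referentially stable; useMemo here and currentUserId are illustrative, not from the thread:

import { useMemo } from 'react';

// Rebuild the user object only when its inputs change, so the
// provider doesn't see a "new" user on every render.
const abTestingUser = useMemo(
    () => ({ userID: currentUserId }),
    [currentUserId]
);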

