Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products generally.
Statsig FAQs

Why am I getting a lot of "uninitialized" values in my experiment in a Server Side Rendering (SSR) project?

If you are using the synchronous provider with Server Side Rendering (SSR), the assignment reason chart should show 100% with the reason “Bootstrap”; there should be no “Network”, “Cache”, or other reasons. Seeing a lot of “Uninitialized” values points to a likely implementation issue.

One possible cause is a change in the user object that you pass into the synchronous provider. If any fields on the user object are loaded asynchronously, or set in an effect elsewhere, the SDK can be triggered to re-fetch values for the updated user.

To debug this, inspect each render pass of the StatsigSynchronousProvider and confirm there is only a single render with a static user object. If the user object changes and the provider re-renders, that can produce the behavior you are seeing.
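To make that failure mode concrete, here is a minimal sketch in plain TypeScript — not the real SDK; the `SyncProviderModel` class and its members are hypothetical — modeling a provider that re-fetches whenever the user object it receives differs from the previous render:

```typescript
// Hypothetical stand-in for the provider's user-change check (NOT the
// statsig-react implementation; a simplified model of the behavior).
type StatsigUser = { userID: string; email?: string };

// Detect a "changed" user via a simple deep comparison.
function sameUser(a: StatsigUser, b: StatsigUser): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

class SyncProviderModel {
  networkFetches = 0; // re-fetches that discard the SSR-bootstrapped values
  private prevUser: StatsigUser | null = null;

  // Called once per render pass with the current user prop.
  render(user: StatsigUser): void {
    if (this.prevUser !== null && !sameUser(this.prevUser, user)) {
      this.networkFetches++; // changed user -> fetch fresh values over network
    }
    this.prevUser = user;
  }
}

// A static user object keeps the provider bootstrapped: zero re-fetches.
const stable = new SyncProviderModel();
const staticUser: StatsigUser = { userID: "user-123" };
stable.render(staticUser);
stable.render(staticUser);

// A user mutated after an async load triggers a re-fetch on the next render.
const unstable = new SyncProviderModel();
unstable.render({ userID: "user-123" });
unstable.render({ userID: "user-123", email: "a@example.com" });
```

Running the model leaves `stable.networkFetches` at 0 and `unstable.networkFetches` at 1, which mirrors how a single asynchronously loaded field can silently downgrade a bootstrapped provider to the network flow.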

If you are calling useExperiment() in your code, make sure the user object passed into the provider is static and never changes. If the user object changes after you bootstrap the StatsigSynchronousProvider, the provider will re-fetch initialize values with a network request, effectively discarding the values passed to it through SSR.

Note also that when SSR is working correctly, you should never see a “Network” reason. If you do, it likely means those users are going through the Client Side Rendering (CSR) flow only.
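Putting the pieces together, the wiring might look like the following sketch. It assumes the statsig-react package; the provider props shown (sdkKey, user, initializeValues) come from that SDK, while the AppRoot component, its props, and the placeholder key are hypothetical. The key point is that the user object is memoized so it keeps the same identity across renders:

```tsx
import React, { useMemo } from "react";
import { StatsigSynchronousProvider } from "statsig-react";

type Props = {
  userID: string;
  // Values generated on the server and serialized into the page for SSR.
  initializeValues: Record<string, unknown>;
};

export function AppRoot({ userID, initializeValues }: Props) {
  // Memoize the user so it keeps the same object identity across renders.
  // Building a fresh object literal on every render would look like a
  // "changed" user and make the provider re-fetch over the network.
  const user = useMemo(() => ({ userID }), [userID]);

  return (
    <StatsigSynchronousProvider
      sdkKey="client-..." // placeholder client SDK key
      user={user}
      initializeValues={initializeValues}
    >
      {/* Children may call useExperiment()/useGate() immediately; values
          are already bootstrapped, so no loading state is needed. */}
    </StatsigSynchronousProvider>
  );
}
```

Hoisting the user to module scope, or memoizing it as above, are both ways to guarantee the provider sees a single static user object.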
