Frequently Asked Questions

A curated summary of the top questions asked on our Slack community, often relating to implementation, functionality, and building better products generally.
Statsig FAQs
Can multiple SDKs be used on a site without creating problems, such as a JavaScript site with a React app using both JavaScript and React SDKs?

Using multiple SDKs on a site is technically possible, but it can lead to complications. The Statsig JavaScript SDK and the React SDK are designed to be used independently. If you use both on the same site, they will each maintain their own state and won't share information.

This could lead to inconsistencies in feature gate evaluations and experiment assignments.

If your site is primarily in JavaScript but includes a React app, it's recommended to initialize Statsig with the JavaScript SDK and then pass the initialized Statsig object into the React app. This way, you ensure that both parts of your application are using the same Statsig state.

However, if your React app is substantial or complex, it might be more beneficial to use the React SDK for its additional features, like the StatsigProvider and React hooks. In this case, you should ensure that the JavaScript part of your application doesn't also initialize Statsig, to avoid the aforementioned issues.

The React SDK brings in the JS SDK as well, so you don't need to include it separately. Once the React SDK is initialized, you can use the statsig object directly.

However, be careful: the React SDK internally keeps track of the user's state, so if you update the user outside the provider, your React component tree won't rerender with the updates. Similarly, if you call methods before the React SDK has initialized the JS SDK instance internally, you could run into issues.
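One way to avoid double initialization when plain JavaScript and a React app share a page is to cache a single in-flight initialization promise that every call site reuses. This is a minimal sketch: initOnce and fakeInitialize are hypothetical names, not Statsig APIs, and the stand-in keeps the example self-contained.

```javascript
// Minimal sketch: cache one in-flight initialization promise so every
// call site (plain JS or the React app) reuses the same initialization.
let initPromise = null;

function initOnce(initializeFn) {
  // Subsequent callers get the same in-flight promise instead of re-initializing.
  if (!initPromise) initPromise = initializeFn();
  return initPromise;
}

// Stand-in for statsig.initialize(...) to keep the sketch self-contained:
let calls = 0;
const fakeInitialize = async () => { calls += 1; };

initOnce(fakeInitialize);
initOnce(fakeInitialize);
console.log(calls); // 1: initialization ran only once
```

In a real app, the function passed to initOnce would wrap the actual statsig.initialize call, and both the plain JS code and the React tree would await the returned promise.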

The best approach depends on the specifics of your application and its usage of Statsig.

How to address a vulnerability alert for Statsig Ruby SDK due to MPL-2.0 license in Snyk

When a vulnerability alert is triggered in Snyk for the Statsig Ruby SDK due to the MPL-2.0 license, it is important to understand the implications of the license and how it may affect your organization. The Mozilla Public License 2.0 (MPL-2.0) is a widely accepted open-source license that allows the code to be used in both open-source and proprietary projects.

Under MPL-2.0, if modifications are made to the original code, those changes must be disclosed when the software is distributed. However, the MPL-2.0 license permits the combination of the licensed code with proprietary code, which means that the SDK can be used in closed-source applications without requiring the entire application to be open-sourced.

It is important to note that while the MPL-2.0 license is generally compatible with other licenses, it may be flagged by security tools like Snyk because it requires review and understanding of its terms. Organizations should consult with their legal team or open-source compliance experts to ensure that the use of MPL-2.0 licensed software aligns with their policies and legal obligations.

If your organization has made no modifications to the original code, there should be no concerns regarding the need to disclose changes. Ultimately, the decision to use software under the MPL-2.0 license should be made by the decision-makers within the organization after careful consideration of the license terms and compliance requirements.

How to implement Statsig tracking on Shopify Custom Pixels?

To implement Statsig tracking on Shopify Custom Pixels, you need to ensure that the Statsig SDK is properly initialized and that events are logged correctly. Here's a step-by-step guide:

1. First, create a script element and set its source to the Statsig SDK, then append it to the document head.

```javascript
const statsigScript = document.createElement('script');
statsigScript.setAttribute('src', 'https://cdn.jsdelivr.net/npm/statsig-js/build/statsig-prod-web-sdk.min.js');
document.head.appendChild(statsigScript);
```

2. Next, initialize the Statsig SDK inside a promise that resolves once the script has loaded and initialization has completed.

```javascript
const statsigInit = new Promise((resolve) => {
  statsigScript.onload = async () => {
    await statsig.initialize("client-sdk-key", { userID: "user-id" });
    resolve();
  };
});
```

3. Subscribe to the event you want to track and log it with Statsig, waiting for the SDK to finish initializing before logging.

```javascript
analytics.subscribe("checkout_started", async () => {
  await statsigInit;
  statsig.logEvent("checkout_started");
});
```

Remember that statsig will be available as a global once the script loads, so there's no need to require it; just wait for initialization before logging. Also note that the SDK accumulates events internally and flushes them periodically. Make sure statsig.logEvent() is actually called, and that the application doesn't exit immediately afterward, so the SDK has enough time to flush the events.

Inspect the network requests and see what requests were made and what their responses were to verify if the events are being sent correctly.

How to use the `get<T>` method in the DynamicConfig typing class in TypeScript React-Native?

In TypeScript React-Native, the get<T> method in the DynamicConfig typing class is used to retrieve a specific parameter from a Dynamic Config. Here's how to use it:

1. **Passing the Key**: The key you should pass in is the name of the parameter you want to retrieve from the Dynamic Config. This parameter is defined in the Statsig Console when you create or edit a Dynamic Config.

2. **Understanding the Return Value**: The get method will return the defaultValue you provide if the parameter name does not exist in the Dynamic Config object. This can happen if there's a typo in the parameter name, or if the client is offline and the value has not been cached. If you're always getting the defaultValue, it's likely that the parameter name doesn't match what's in the Dynamic Config, or the client hasn't successfully fetched the latest config values. Please double-check your parameter names and your network status. If the issue persists, it might be a good idea to log this issue for further debugging.

3. **Retrieving the Entire Object**: If you want to get the entire object, you can use the getValue() method. For example, if you call config.getValue(), you will get the entire object.

4. **Example Usage**: The argument passed into getConfig is the dynamic config key itself, which returns a config object that implements the get method. You then pass a top-level object key into that get method. For instance, if the top-level key is clothing, you would call: statsig.getConfig('max_discount').get('clothing');

Remember, the get<T> method is designed to access individual properties of the dynamic config, not the entire object. For accessing the entire object, use the getValue() method.
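To make the fallback behavior concrete, here is a small stand-in that mimics the get/getValue semantics described above. This is only a sketch: the values are hypothetical, and the real DynamicConfig class comes from the Statsig SDK.

```javascript
// Minimal stand-in illustrating DynamicConfig get()/getValue() semantics.
class DynamicConfig {
  constructor(value) { this.value = value; }
  get(key, defaultValue) {
    // Missing keys fall back to the caller-provided default.
    return key in this.value ? this.value[key] : defaultValue;
  }
  getValue() { return this.value; }
}

const config = new DynamicConfig({ clothing: 0.1, electronics: 0.05 });
console.log(config.get('clothing', 0));   // 0.1
console.log(config.get('furniture', 0));  // 0: key missing, default returned
console.log(config.getValue());           // the entire object
```

If get keeps returning your default in a real app, it usually means the parameter name doesn't match the console, or the config values were never fetched, exactly as described above.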

Is there an admin CLI or SDK for creating and configuring gates and experiments in Statsig?

Currently, there is no admin Command Line Interface (CLI) or Software Development Kit (SDK) specifically designed for creating and configuring gates and experiments in Statsig. However, you can use the Statsig Console API for these tasks.

The Console API documentation provides detailed instructions and examples on how to use it, including an introduction to the API and specific guidance on creating and configuring gates.

While there are no immediate plans to build a CLI for these tasks, the Console API documentation includes curl command examples that might be helpful for developers looking to automate these tasks.
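As an illustration of that curl-based workflow, a request along these lines lists existing gates. This is a sketch based on the Console API's documented pattern; verify the exact endpoint and header against the current API reference, and substitute a real console API key for the placeholder.

```shell
# List feature gates via the Console API
# ("console-xxxx" is a placeholder key; check the docs for the current endpoint).
curl --request GET 'https://statsigapi.net/console/v1/gates' \
  --header 'STATSIG-API-KEY: console-xxxx'
```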

Please note that the Statsig SDKs are primarily used for checking the status of gates and experiments, not for creating or configuring them.

What happens if multiple Node.js processes initialize the Statsig SDK at the same time?

When multiple Node.js processes initialize the Statsig SDK simultaneously, they might all try to write to the file at the same time, which could lead to race conditions or other issues. To mitigate this, you can use a locking mechanism to ensure that only one process writes to the file at a time. This could be a file-based lock, a database lock, or another type of lock depending on your application's architecture and requirements.

Another approach could be to have a single process responsible for writing to the file. This could be a separate service or a designated process among your existing ones. That process would be the only one to initialize Statsig with the rulesUpdatedCallback, while the others would initialize with just the bootstrapValues. Remember to handle any errors that might occur during the file write to keep your application robust.

If you're using a distributed system, you might want to consider using something like Redis, which is designed to handle consumers across a distributed system. Statsig's Data Adapter is designed to handle this so you don't need to build this manually. You can find more information about the Data Adapter in the Data Adapter overview and the Node Redis data adapter.

However, for these types of cases, it's recommended to have only a single data adapter enabled for writes, so you don't run into concurrent-write issues. If your company doesn't currently use Redis for anything else and you use GCP buckets for file storage, there's no need to adopt Redis. The data adapter is the latest supported way to accomplish what you're trying to do.

In terms of performance, startup time is the primary concern, along with write() performance to an extent; if you limit writes to a single instance, they should be infrequent.

Using both the front-end and back-end SDKs in tandem is common. You can even use the server SDKs to bootstrap the client SDKs, so that there's no asynchronous request back to Statsig to get assignments. For more notes on resilience and best practices, refer to the Statsig reliability FAQs.

What is the impact of turning `waitForInitialization` off in Statsig's React SDK?

The waitForInitialization option in Statsig's React SDK is used to delay the rendering of your components until the SDK has finished initializing. This ensures that the correct feature gate values are available when your components are rendered. If you're seeing users in the "uninitialized" group, it could mean that the SDK initialization is not yet complete when the feature gate check is being made. This could be due to a slow network connection or other issues that delay the initialization process.

To resolve this, you could consider increasing the initTimeoutMS value, which gives the SDK more time to complete the network roundtrip before falling back to cached values. You could also use the init callback for debugging to check when it's returning. If the issue persists, it might be worth considering using a server SDK on one of your servers to bootstrap with. This way, if you already have a network request serving stuff to your clients, you can have it evaluate the user and pass those values back without needing a roundtrip to the SDK.

Remember, the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method. If you're trying to avoid unnecessary updateUser calls, consider building a statsigUser and only call for update if the local statsigUser is different from the one saved in the SDK instance.
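One way to skip redundant updateUser calls is to compare a serialized copy of the last user you passed in. This is a sketch: shouldUpdateUser is a hypothetical helper, and the JSON comparison assumes stable key order (a real app might compare specific fields instead).

```javascript
// Sketch: only call updateUser when the StatsigUser actually changed.
let lastUserJSON = null;

function shouldUpdateUser(user) {
  const json = JSON.stringify(user);
  if (json === lastUserJSON) return false; // unchanged: skip the update
  lastUserJSON = json;
  return true;
}

console.log(shouldUpdateUser({ userID: 'a' })); // true: first user
console.log(shouldUpdateUser({ userID: 'a' })); // false: unchanged
console.log(shouldUpdateUser({ userID: 'b' })); // true: user changed
```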

If you set waitForInitialization off, you should get the uninitialized check, and then once SDK initialization completes (within 3 seconds), you should get another check with the actual value (assuming the network request was successful for initialization).

Statsig is working on the metadata around these cases to make it easier to debug. In new SDK versions, it will be possible to differentiate them a bit more. There are also plans to make a change to React that won't even render the children until initialization has at least started so there won't be uninitialized checks at first. This is due to the ordering of effects, where SDK hooks will run before the SDK initialization path in the provider.

What is the recommended method for performing A/B tests for static pages in Next.js using Statsig?

There are two main methods for performing A/B tests for static pages in Next.js using Statsig.

The first method involves using getClientInitializeResponse and storing the initializeValues in a cookie. This approach is suitable if you want to avoid generating separate static pages for each variant. However, the cookie size is limited to 4KB, so this method might not be suitable if the initializeValues are large.
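A quick size check before writing the cookie can guard against silently exceeding that limit. This is a sketch: the 4096-byte threshold is an approximation of the common browser limit, and the field names are illustrative.

```javascript
// Sketch: verify serialized initializeValues fit in a ~4KB cookie.
const COOKIE_LIMIT_BYTES = 4096;

function fitsInCookie(initializeValues) {
  // Cookies are stored URL-encoded, so measure the encoded payload.
  const payload = encodeURIComponent(JSON.stringify(initializeValues));
  return payload.length <= COOKIE_LIMIT_BYTES;
}

console.log(fitsInCookie({ feature_gates: {}, dynamic_configs: {} })); // true
console.log(fitsInCookie({ blob: 'x'.repeat(5000) }));                 // false
```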

The second method involves generating a separate static page for each experiment's variant. This approach is suitable if you have a small number of variants and want to avoid the cookie size limitation. However, this method might require more setup and maintenance if you have a large number of variants.

If you're unsure which method to use, you can start with the method that seems easier to implement and switch to the other method if you encounter issues.

If you're concerned about the size of initializeValues, there are a couple of ways to bring down the response size. One way is to use target apps to limit the gates/experiments/etc included in the response. Another way is to add an option to getClientInitializeResponse to specify which gates/experiments/etc to include in the response.

If you plan on stitching together multiple cookies, a simple string splice might be easier. An alternative that doesn't involve stitching together multiple initializeValues is to use multiple client SDK instances. This wouldn't be supported in React, but using the JS SDK you could have multiple Statsig instances, each with its own set of configs. You would have to keep track of which instance to use for which experiment, but this may be a "cleaner" approach.

The JS SDK can be synchronously loaded using initializeValues similarly to how the StatsigSynchronousProvider works. So you should be able to just call statsig.initialize(..., {initializeValues}) without needing to worry about awaiting.

Finally, you can also use the local evaluation SDK to fetch the whole config before the page becomes interactive and then pass it to the synchronous SDK. This is a client SDK, but it solves the "flickering" issues because you don't need to wait for the experiment(s) data to be fetched on the fly.

Why am I getting `RULE` as `Default` and `REASON` as `Unrecognized` when deploying to an environment in Statsig?

When deploying to an environment in Statsig, if you encounter an issue where the RULE is always Default and the REASON is Unrecognized, it typically means that the SDK was initialized, but the config or feature gate you're trying to evaluate did not exist in the set of values. This could be due to a few reasons:

1. The feature gate or dynamic config you're trying to evaluate does not exist or is not correctly spelled in your code. Please double-check the spelling and case-sensitivity.

2. The SDK might not have been able to fetch the latest rules from the Statsig server. This could be due to network issues or if the SDK initialization did not complete successfully.

3. If you're using a server SDK, it's possible that the SDK is outdated and doesn't recognize new types of gate conditions. In this case, upgrading the SDK might resolve the issue.

Remember, the Unrecognized reason is only given when the SDK is initialized, but the config or feature gate did not exist in the set of values.

It's also important to ensure that you wait for initialize to finish before making evaluations, for instance by calling checkGate inside the initialization callback, or only after you are sure the callback has been triggered.

Additionally, check if the environment you've deployed to is able to make requests to the relevant URLs. In some cases, these requests might be blocked by the client in your production environment.

If you're still having trouble, please provide more details about your setup and the issue, and a Statsig team member will assist you shortly.
