Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, often relating to implementation, functionality, and building better products in general.

How to configure holdouts to measure the impact of different product teams?

How to ensure real-time logging of events works via the HTTP API?

When utilizing the HTTP API to log events in real-time, it is crucial to ensure that the events are properly formatted to be recognized by the system. If events are not appearing in the Metrics Logstream, it may be necessary to wrap the events in an array using the `events: []` syntax. This is particularly important because events are typically sent in bulk, and the correct formatting is essential for them to be processed and displayed in the stream.
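As a concrete illustration, here is a minimal sketch of a log_event request with a single event wrapped in the `events` array (the endpoint, key header, and event fields shown are typical of the HTTP API, but verify them against the docs for your setup; the event itself is hypothetical):

```javascript
// Minimal sketch: log events over HTTP, wrapping even a single event in an `events` array.
// The endpoint, key header, and field values below are assumptions; adjust to your setup.
fetch("https://events.statsigapi.net/v1/log_event", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "statsig-api-key": "server-secret-key",
  },
  body: JSON.stringify({
    events: [
      {
        eventName: "checkout_started",
        user: { userID: "user-id" },
        time: Date.now(),
        metadata: { currency: "USD" },
      },
    ],
  }),
});
```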

Additionally, it is recommended to use the SDKs provided for various languages, as they handle a lot of the heavy lifting, including performance and reliability improvements. For cases where the SDKs are not an option, such as with certain desktop applications or specific frameworks like Qt, it may be possible to use language bindings to integrate with the SDKs of supported languages. This can potentially save significant development effort and ensure more reliable event logging.

Furthermore, for the best practice regarding experiments and configurations, it is advised to fetch the experiment config at the time when it is needed to ensure accurate exposure logging. The SDKs typically fetch all evaluations for a single user in one shot and cache it for the session duration, allowing for local and instantaneous experiment config or gate checks. If using the HTTP API directly, it is possible to use an 'initialize' endpoint, which is typically reserved for SDK use.

This endpoint fetches all evaluated results for a given user without generating exposure logs at the same time, thus avoiding overexposure issues.

How to implement Statsig tracking on Shopify Custom Pixels?

To implement Statsig tracking on Shopify Custom Pixels, you need to ensure that the Statsig SDK is properly initialized and that events are logged correctly. Here's a step-by-step guide:

1. First, create a script element and set its source to the Statsig SDK. Append this script to the document head.

```javascript
const statsigScript = document.createElement('script');
statsigScript.setAttribute('src', 'https://cdn.jsdelivr.net/npm/statsig-js/build/statsig-prod-web-sdk.min.js');
document.head.appendChild(statsigScript);
```

2. Next, initialize the Statsig SDK. This should be done within an asynchronous function that is called when the Statsig script has loaded.

```javascript
const statsigInit = new Promise((resolve) => {
  statsigScript.onload = async () => {
    await statsig.initialize("client-sdk-key", { userID: "user-id" });
    resolve();
  };
});
```

3. Subscribe to the event you want to track and log it with Statsig. Make sure to wait for the Statsig SDK to initialize before logging the event.

```javascript
analytics.subscribe("checkout_started", async () => {
  await statsigInit;
  statsig.logEvent("checkout_started");
});
```

Remember that `statsig` will be available as a global, so there is no need to require it; just wait for initialization before logging. Also note that the SDK accumulates events internally and flushes them periodically, so ensure that `statsig.logEvent()` is actually called and that the page doesn't close immediately afterward, giving the SDK enough time to flush the events.

To verify that events are being sent correctly, inspect the network requests and their responses.

How to roll out and monitor multiple new features simultaneously using Statsig?

To roll out and monitor multiple new features simultaneously using Statsig, you can utilize the platform's Feature Gates and Experiments.

For each new feature you plan to roll out, create a corresponding Feature Gate. This approach automatically converts a feature roll-out into an A/B test, allowing you to measure the impact of the roll-out on all your product and business metrics as the roll out proceeds.

If you wish to test hypotheses between product variants, create an Experiment. An Experiment can offer multiple variants and returns a JSON config to help you configure your app experience based on the group that the user is assigned to.
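For example, with the JS client SDK an experiment's config can be read with a default fallback for users who are not in the experiment (the experiment and parameter names below are hypothetical):

```javascript
// Minimal sketch: read a parameter from an experiment's JSON config,
// falling back to a default for control/unallocated users.
const experimentConfig = statsig.getExperiment("new_checkout_flow");
const buttonColor = experimentConfig.get("button_color", "blue");
```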

To show features randomly to different sets of customers, use the 'Pass%' in a Feature Gate and 'Allocation%' in an Experiment. This allows you to control the percentage of users who see each feature.

Statsig's Experiments offer more capabilities for advanced experiment designs. For example, you can analyze variants using stable IDs for situations when users have not yet signed up (or signed in), or using custom IDs to analyze user groups, pages, sessions, workspaces, cities, and so on. You can also run multiple isolated experiments concurrently.

Remember to define your company goals and key performance indicators (KPIs) to measure the success of your features. You can break down these strategic goals into actionable metrics that you can move with incremental, iterative improvements.

If you use three different feature gates, you will find out how each feature performed individually against the baseline. If you want combinatorial impact analysis (A vs B vs C vs AB vs BC vs AC vs ABC), you will need to set up an experiment with 7 variants, specify the combinations via parameters, and measure each.

However, in practice, this level of combinatorial testing isn't always fruitful and will consume a lot of time. A pragmatic recommendation is to use feature gates to launch and measure the impact of each feature individually, launch the ones that improve metrics, and wind down the ones that don't.

Is there an admin CLI or SDK for creating and configuring gates and experiments in Statsig?

Currently, there is no admin Command Line Interface (CLI) or Software Development Kit (SDK) specifically designed for creating and configuring gates and experiments in Statsig. However, you can use the Statsig Console API for these tasks.

The Console API documentation provides detailed instructions and examples on how to use it, including an introduction to the Console API and specific guidance on creating and configuring gates.

While there are no immediate plans to build a CLI for these tasks, the Console API documentation includes curl command examples that might be helpful for developers looking to automate these tasks.
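For illustration, the same kind of request can also be made from code; here is a sketch of creating a gate via the Console API (the endpoint path, header name, and payload fields are assumptions to verify against the Console API docs, and the key must be a Console API key, not an SDK key):

```javascript
// Sketch: create a feature gate programmatically via the Console API.
// Endpoint, header name, and payload fields are assumptions; verify against the docs.
await fetch("https://statsigapi.net/console/v1/gates", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "STATSIG-API-KEY": "console-xxxx",
  },
  body: JSON.stringify({
    name: "new_homepage_design",
    description: "Gate created via the Console API",
  }),
});
```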

Please note that the Statsig SDKs are primarily used for checking the status of gates and experiments, not for creating or configuring them.

What is the best practice for using one Statsig account for both dev/test and production environments?

Statsig currently supports metrics calculation for a single production environment. If you have multiple environments set up, you will only be able to use one as your production environment for metrics and experiments.

When running an experiment, the default behavior is that the experiment will run in all environments. This means that if you try to access the config values of the experiment, you will get a valid value, even if you are in a non-production environment.

If you want your experiment to only run on production, you can set a targeting gate. This will ensure that only users in the production environment will pass the gate and be included in the experiment.
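Environment information is communicated to Statsig at SDK initialization via the environment tier option, which environment-based targeting rules can then use. A minimal sketch with the JS client SDK (the tier name is illustrative):

```javascript
// Minimal sketch: tag this deployment's traffic with an environment tier
// so targeting rules can include or exclude non-production environments.
await statsig.initialize("client-sdk-key", { userID: "user-id" }, {
  environment: { tier: "staging" },
});
```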

Here is an example of how you might access the config values of an experiment:

```javascript
useExperiment(XXX).config.get(YYY, false);
```

In this example, XXX is the experiment you are running, and YYY is the config value you are trying to access. If the experiment is not running in the current environment, you will get the default fallback value, which is false in this case.

Remember, once you start an experiment, it will run in all environments unless you set a targeting gate to restrict it to specific environments.

The best practice is to use the experiment checklist and diagnostics tab to instrument the test, enable it in lower environments, and validate that exposures are flowing through. Then, when you’ve validated these, you click “Start” to go to production. This workflow is typically adequate for most users.

Please note that this is subject to change as Statsig continues to evolve and add new features. Always refer to the official Statsig documentation for the most up-to-date information.

What is the impact of turning `waitForInitialization` off in Statsig's React SDK?

The waitForInitialization option in Statsig's React SDK is used to delay the rendering of your components until the SDK has finished initializing. This ensures that the correct feature gate values are available when your components are rendered. If you're seeing users in the "uninitialized" group, it could mean that the SDK initialization is not yet complete when the feature gate check is being made. This could be due to a slow network connection or other issues that delay the initialization process.

To resolve this, you could consider increasing the `initTimeoutMS` value, which gives the SDK more time to complete the network roundtrip before falling back to cached values. You could also use the init callback for debugging to check when it's returning. If the issue persists, it might be worth using a server SDK on one of your servers to bootstrap with. This way, if you already have a network request serving data to your clients, you can have it evaluate the user and pass those values back, without the client SDK needing a roundtrip to Statsig.
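For reference, here is a sketch of a React provider setup that waits for initialization and allows a longer network timeout (the 5000 ms value and component names are illustrative):

```javascript
// Minimal sketch: wait for initialization before rendering children,
// and give the SDK more time before it falls back to cached values.
import { StatsigProvider } from "statsig-react";

function App({ children }) {
  return (
    <StatsigProvider
      sdkKey="client-sdk-key"
      user={{ userID: "user-id" }}
      waitForInitialization={true}
      options={{ initTimeoutMS: 5000 }}
    >
      {children}
    </StatsigProvider>
  );
}
```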

Remember, `initCalled: true` doesn't necessarily mean the initialization succeeded; it's important to check for any errors thrown from the initialization method. If you're trying to avoid unnecessary `updateUser` calls, consider building a StatsigUser and only calling `updateUser` if the local user is different from the one saved in the SDK instance.

If you turn `waitForInitialization` off, you should get the uninitialized check first, and then once SDK initialization completes (within 3 seconds), you should get another check with the actual value (assuming the initialization network request was successful).

Statsig is working on the metadata around these cases to make them easier to debug; newer SDK versions will make it possible to differentiate them a bit more. There are also plans to change the React SDK so that it doesn't render the children until initialization has at least started, so there won't be uninitialized checks at first. The current behavior is due to the ordering of effects: SDK hooks run before the SDK initialization path in the provider.

What is the recommended way to roll out a feature customer by customer in a B2B context using Statsig?

In a B2B context, the recommended way to roll out a feature customer by customer is by using feature gates. You can create a feature gate and add a list of customer IDs to the conditions of the gate. This way, only the customers in the list will pass the gate and have access to the feature, while the rest will go to the default behavior.

Here's an example of how you can do this:  

```javascript
const user = {
  userID: '12345',
  email: '12345@gmail.com',
  // ...
};

const showNewDesign = await Statsig.checkGate(user, 'new_homepage_design');
if (showNewDesign) {
  // New feature code here
} else {
  // Default behavior code here
}
```

In this example, 'new_homepage_design' is the feature gate, and '12345' is the customer ID. You can replace these with your own feature gate and customer IDs.

On the other hand, Dynamic Configs are more suitable when you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, such as country.

Remember to follow best practices for feature gates, such as managing for ease and maintainability, selecting the gating decision point, and focusing on one feature per gate.

Alternatively, you can target your feature by CustomerID. You could either use a Custom Field and pass it to the SDK (for example, `{ custom: { customer: "xyz123" } }`), or create a new Unit Type of customerID and then target by Unit ID. For more information on creating a new Unit Type, refer to the Statsig documentation.
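For illustration, here is a sketch of what the user object might look like for each approach (field names and values are illustrative, and the customerID Unit Type is assumed to already exist in your project):

```javascript
// Option 1: target on a custom field passed with the user.
const userWithCustomField = {
  userID: '12345',
  custom: { customer: 'xyz123' },
};

// Option 2: target on a dedicated customerID unit type
// (assumes a customerID Unit Type has been created in the Statsig console).
const userWithCustomID = {
  userID: '12345',
  customIDs: { customerID: 'xyz123' },
};
```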

Why am I getting `RULE` as `Default` and `REASON` as `Unrecognized` when deploying to an environment in Statsig?

When deploying to an environment in Statsig, if you encounter an issue where the RULE is always Default and the REASON is Unrecognized, it typically means that the SDK was initialized, but the config or feature gate you're trying to evaluate did not exist in the set of values. This could be due to a few reasons:

1. The feature gate or dynamic config you're trying to evaluate does not exist or is not correctly spelled in your code. Please double-check the spelling and case-sensitivity.

2. The SDK might not have been able to fetch the latest rules from the Statsig server. This could be due to network issues or if the SDK initialization did not complete successfully.

3. If you're using a server SDK, it's possible that the SDK is outdated and doesn't recognize new types of gate conditions. In this case, upgrading the SDK might resolve the issue.

Remember, the Unrecognized reason is only given when the SDK is initialized, but the config or feature gate did not exist in the set of values.

It's also important to ensure that you are waiting for initialize to finish before making evaluations: for instance, call checkGate inside the initialization callback, or only after you are sure the callback has been triggered.
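For example, with a server SDK the pattern looks roughly like this (the gate name is hypothetical):

```javascript
// Minimal sketch: wait for initialization to finish before evaluating gates,
// so checks aren't made against an empty set of config values.
const Statsig = require("statsig-node");

async function main() {
  await Statsig.initialize("server-secret-key");
  const passes = await Statsig.checkGate({ userID: "user-id" }, "my_gate");
  console.log("gate passes:", passes);
}

main();
```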

Additionally, check if the environment you've deployed to is able to make requests to the relevant URLs. In some cases, these requests might be blocked by the client in your production environment.

If you're still having trouble, please provide more details about your setup and the issue, and a Statsig team member will assist you shortly.

Why am I getting a lot of "uninitialized" values in my experiment in a Server Side Rendering (SSR) project?

If you are using the synchronous provider and Server Side Rendering (SSR), the assignment reasons chart should show 100% with the reason “Bootstrap”; there should not be any “Network”, “Cache”, or other reasons. If you are seeing a lot of uninitialized values, it points to a potential implementation issue.

One possible cause could be changes in the user object that you pass into the synchronous provider. If there are fields on the user object that you are loading asynchronously or in an effect elsewhere, it could trigger the SDK to update the values for the user.

To debug this issue, you should verify each of the render passes on the StatsigSynchronousProvider and ensure that there is only a single render with a static user object. If the user object changes and it rerenders, it could lead to the issue you are experiencing.

If you are calling useExperiment() in your code, make sure that the user object passed into the provider is static and never changes. If the user object changes after you bootstrap the StatsigSynchronousProvider, it will cause the provider to re-fetch initialize values using a network request, effectively discarding the results passed to it through SSR.
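For illustration, here is a sketch of keeping the user object stable across renders when bootstrapping the synchronous provider (the component shape and the source of the server-generated values are assumptions about your setup):

```javascript
// Minimal sketch: memoize the user so its identity never changes across renders,
// and pass the server-generated values to the synchronous provider.
import { useMemo } from "react";
import { StatsigSynchronousProvider } from "statsig-react";

function App({ userID, ssrInitializeValues, children }) {
  // Recreated only if userID actually changes, so the provider never re-fetches.
  const user = useMemo(() => ({ userID }), [userID]);

  return (
    <StatsigSynchronousProvider
      sdkKey="client-sdk-key"
      user={user}
      initializeValues={ssrInitializeValues}
    >
      {children}
    </StatsigSynchronousProvider>
  );
}
```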

It's also important to note that if you are using SSR correctly, you should never have a “network” reason. If you are seeing a “network” reason, it could indicate that your users are only going through the Client Side Rendering (CSR) flow.

