Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, often relating to implementation, functionality, and building better products generally.
Statsig FAQs

How to address a vulnerability alert for Statsig Ruby SDK due to MPL-2.0 license in Snyk

When a vulnerability alert is triggered in Snyk for the Statsig Ruby SDK due to the MPL-2.0 license, it is important to understand the implications of the license and how it may affect your organization. The Mozilla Public License 2.0 (MPL-2.0) is a widely accepted open-source license that allows the code to be used in both open-source and proprietary projects.

Under MPL-2.0, if modifications are made to the original code, those changes must be disclosed when the software is distributed. However, the MPL-2.0 license permits the combination of the licensed code with proprietary code, which means that the SDK can be used in closed-source applications without requiring the entire application to be open-sourced.

It is important to note that while the MPL-2.0 license is generally compatible with other licenses, it may be flagged by security tools like Snyk because it requires review and understanding of its terms. Organizations should consult with their legal team or open-source compliance experts to ensure that the use of MPL-2.0 licensed software aligns with their policies and legal obligations.

If your organization has made no modifications to the original code, there should be no concerns regarding the need to disclose changes. Ultimately, the decision to use software under the MPL-2.0 license should be made by the decision-makers within the organization after careful consideration of the license terms and compliance requirements.

How to ensure consistent experiment results across CSR and SSR pages with Statsig

Ensuring consistent experiment results across Client-Side Rendered (CSR) and Server-Side Rendered (SSR) pages when using Statsig involves establishing a single source of truth for user assignments. The challenge arises when independent network requests on different pages result in varying initialization times, potentially leading to different experiment variants being presented to the user.

To address this, it is recommended to have one consistent source of truth for the user's assignment. This can be achieved by taking the assignments fetched on the first page (CSR) and carrying them over to the second page (SSR), rather than making a new initialization request on the second page.

This approach ensures that the user experiences a consistent variant throughout their session, regardless of the page they navigate to. It is important to note that the user's assignment for any getExperiment call is determined at the time of initialization, not when getExperiment is called. The getExperiment call simply returns the value fetched during initialization and logs the user's participation in the experiment and their assigned group.

To implement this, developers should ensure that the assignments are fetched from the network request on the first page and then passed to the second page, avoiding the need for another initialization request and the management of an ever-growing list of parameters.
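One way to establish that single source of truth, sketched below, is to generate the assignment payload once (for example, server-side with statsig-node's getClientInitializeResponse) and bootstrap both pages from it via StatsigSynchronousProvider, so neither page issues its own initialization request. The key placeholders, env variable, and component boundaries here are illustrative, not prescriptive:

```tsx
// Sketch only: one server-generated payload reused to bootstrap every page
// (CSR and SSR), so assignments come from a single source of truth.
// Key names and component structure are placeholders.
import type { ReactNode } from "react";
import Statsig from "statsig-node";
import { StatsigSynchronousProvider } from "statsig-react";

// Server side: compute the user's assignments once and reuse the payload.
export async function getBootstrapValues(user: { userID: string }) {
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET!);
  return Statsig.getClientInitializeResponse(user);
}

// Client side (both pages): bootstrap synchronously with that payload
// instead of letting each page fire its own initialization request.
export function PageShell(props: {
  user: { userID: string };
  bootstrapValues: Record<string, unknown>;
  children: ReactNode;
}) {
  return (
    <StatsigSynchronousProvider
      sdkKey="client-YOUR_CLIENT_KEY"
      user={props.user}
      initializeValues={props.bootstrapValues}
    >
      {props.children}
    </StatsigSynchronousProvider>
  );
}
```

Because both pages bootstrap from the same payload, getExperiment resolves against identical values and the user sees the same variant everywhere.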

Why am I getting `RULE` as `Default` and `REASON` as `Unrecognized` when deploying to an environment in Statsig?

When deploying to an environment in Statsig, if you encounter an issue where the RULE is always Default and the REASON is Unrecognized, it typically means that the SDK was initialized, but the config or feature gate you're trying to evaluate did not exist in the set of values. This could be due to a few reasons:

1. The feature gate or dynamic config you're trying to evaluate does not exist or is not correctly spelled in your code. Please double-check the spelling and case-sensitivity.

2. The SDK might not have been able to fetch the latest rules from the Statsig server. This could be due to network issues or if the SDK initialization did not complete successfully.

3. If you're using a server SDK, it's possible that the SDK is outdated and doesn't recognize new types of gate conditions. In this case, upgrading the SDK might resolve the issue.

Remember, the Unrecognized reason is only returned when the SDK has initialized but the config or feature gate you are evaluating did not exist in the set of values it received.

It's also important to ensure that you wait for initialize to finish before making evaluations, for instance by calling checkGate inside the initialization callback, or only after you are sure the callback has been triggered.
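For example, a minimal sketch with statsig-js, assuming a placeholder client key and gate name:

```typescript
// Sketch: evaluate gates only after initialize() has resolved (statsig-js).
// The client key and gate name are placeholders.
import statsig from "statsig-js";

async function setup(): Promise<void> {
  try {
    await statsig.initialize("client-YOUR_CLIENT_KEY", { userID: "user-123" });
  } catch (e) {
    // If initialization fails (network, ad blocker, timeout), evaluations
    // fall back to defaults and reasons like Unrecognized can appear.
    console.error("Statsig initialize failed", e);
  }
  // Safe to evaluate now: the value was fetched during initialization.
  const enabled = statsig.checkGate("my_feature_gate");
  console.log("my_feature_gate:", enabled);
}

setup();
```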

Additionally, check if the environment you've deployed to is able to make requests to the relevant URLs. In some cases, these requests might be blocked by the client in your production environment.

If you're still having trouble, please provide more details about your setup and the issue, and a Statsig team member will assist you shortly.

Why am I getting a lot of "uninitialized" values in my experiment in a Server Side Rendering (SSR) project?

If you are using the synchronous provider with Server Side Rendering (SSR), the assignment reasons chart should show 100% with the reason “Bootstrap”; there should not be any “Network”, “Cache”, or other reasons. If you are seeing a lot of uninitialized values, it points to a potential implementation issue.

One possible cause could be changes in the user object that you pass into the synchronous provider. If there are fields on the user object that you are loading asynchronously or in an effect elsewhere, it could trigger the SDK to update the values for the user.

To debug this issue, you should verify each of the render passes on the StatsigSynchronousProvider and ensure that there is only a single render with a static user object. If the user object changes and it rerenders, it could lead to the issue you are experiencing.

If you are calling useExperiment() in your code, make sure that the user object passed into the provider is static and never changes. If the user object changes after you bootstrap the StatsigSynchronousProvider, it will cause the provider to re-fetch initialize values using a network request, effectively discarding the results passed to it through SSR.
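A minimal sketch of keeping the user object stable, assuming statsig-react's StatsigSynchronousProvider (the key and prop wiring below are illustrative):

```tsx
// Sketch: keep the user object referentially stable so the provider never
// re-fetches over the network and discards the SSR-bootstrapped values.
import { useMemo, type ReactNode } from "react";
import { StatsigSynchronousProvider } from "statsig-react";

export function StableStatsigProvider(props: {
  userID: string;
  initializeValues: Record<string, unknown>;
  children: ReactNode;
}) {
  // Memoize so the same object is passed on every render; an inline object
  // literal would look like a "changed" user and trigger a refetch.
  const user = useMemo(() => ({ userID: props.userID }), [props.userID]);

  return (
    <StatsigSynchronousProvider
      sdkKey="client-YOUR_CLIENT_KEY"
      user={user}
      initializeValues={props.initializeValues}
    >
      {props.children}
    </StatsigSynchronousProvider>
  );
}
```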

It's also important to note that if you are using SSR correctly, you should never have a “network” reason. If you are seeing a “network” reason, it could indicate that your users are only going through the Client Side Rendering (CSR) flow.

Why am I seeing failures in gate evaluation on React Native despite the gate being set to 100% pass and Statsig being initialized?

If you encounter unexpected diagnostic results when evaluating a gate on React Native, despite the gate being set to 100% pass and Statsig being initialized, there are several potential causes to consider.

The "Uninitialized" status in evaluationDetails.reason can occur even after calling .initialize() and awaiting the result. This issue can be due to several reasons:

1. **Ad Blockers**: Ad blockers can interfere with the initialization network request, causing it to fail.

2. **Network Failures**: Any network issues that prevent the initialization network request from completing successfully can result in an "Uninitialized" status.

3. **Timeouts**: The statsig-js SDK applies a default 3-second timeout on the initialize request. This can lead to more frequent initialization timeouts on mobile clients where users may have slower connections.

If you encounter this issue, it's recommended to investigate the potential causes listed above. Check for the presence of ad blockers, network issues, and timeouts. This will help you identify the root cause and implement the appropriate solution.

It's worth noting that the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method.

If you're still experiencing issues, it might be helpful to use the debugging tools provided by Statsig. These tools can help you understand why a certain user got a certain value. For instance, you can check the diagnostics tab for higher-level pass/fail/bucketing population sizes over time, and for debugging specific checks, the logstream at the bottom is useful and shows both production and non-production exposures in near real-time.

One potential solution to this issue is to use the waitForInitialization option. Without this option, your components render immediately, regardless of whether Statsig has initialized, which can result in ‘Uninitialized’ exposures. By setting waitForInitialization=true, you defer the rendering of those components until after Statsig has initialized, guaranteeing that they aren’t rendered until initialize has completed and that no ‘Uninitialized’ exposures are logged. You can find more details in the Statsig documentation.
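A minimal sketch, assuming the statsig-react-native provider and a placeholder navigation root:

```tsx
// Sketch: defer rendering until initialization completes so no
// 'Uninitialized' exposures are logged. AppNavigator stands in for your
// own navigation root; the client key is a placeholder.
import { StatsigProvider } from "statsig-react-native";
import { AppNavigator } from "./navigation"; // hypothetical module

export function Root() {
  return (
    <StatsigProvider
      sdkKey="client-YOUR_CLIENT_KEY"
      user={{ userID: "user-123" }}
      waitForInitialization={true}
    >
      {/* Gate and experiment checks in here only run after initialize. */}
      <AppNavigator />
    </StatsigProvider>
  );
}
```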

However, if you can't use waitForInitialization because it remounts the navigation stack and changes the navigation state, you can check for initialization through initCompletionCallback instead.

You can also verify initialization by checking the value of isLoading returned by useGate and useConfig, as well as initialized and initStarted from StatsigContext. If the issue persists, please reach out to the Statsig team for further assistance.
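As a rough illustration of the hook-level check above, assuming statsig-react-native's useGate and a placeholder gate name:

```tsx
// Sketch: gate rendering on the hook's isLoading flag instead of remounting
// the navigation stack. The gate name is a placeholder.
import { Text } from "react-native";
import { useGate } from "statsig-react-native";

export function NewCheckoutEntry() {
  const { isLoading, value } = useGate("new_checkout_flow");
  if (isLoading) {
    // Avoids acting on (and logging) an Uninitialized evaluation.
    return null;
  }
  return <Text>{value ? "New checkout" : "Old checkout"}</Text>;
}
```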

Why is my experiment only showing overridden values and not running as expected?

If you're only seeing overridden values in the Exposure Stream for your experiment, there could be several reasons for this. Here are some steps you can take to troubleshoot:

1. Check the Initialization Status: Each hook exposes an isLoading flag, and the StatsigProvider supplies StatsigContext, which includes the initialization status as a field. Use these to avoid checking a gate until you know the account has been passed in and the SDK has reinitialized.

2. Check the Data Flow: Ensure that the id_type is set correctly and that your ids match the format of ids logged from SDKs. You can check this on the Metrics page of your project.

3. Check the Query History: If your data is still not showing up in the console, check your query history for the user to understand which data is being pulled, and if queries are not executing or are failing.

4. Check the Exposure Counts: If you're seeing lower than expected exposure counts, it could be because the SDK was initialized with a StatsigUser object that did not have the userID set, with the userID provided on a subsequent render. This causes the SDK to refetch values, but it logs an exposure for the “empty” userID first. To prevent this, ensure the userID is set on the StatsigUser object before initializing (see the sketch after this list).
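A minimal sketch of point 4, assuming statsig-react and a hypothetical hook that loads the current user's ID:

```tsx
// Sketch: wait until the userID is known before mounting the provider, so
// the SDK never initializes (and logs exposures) for an "empty" user.
// useCurrentUserID is a hypothetical hook; the client key is a placeholder.
import type { ReactNode } from "react";
import { StatsigProvider } from "statsig-react";
import { useCurrentUserID } from "./auth"; // hypothetical module

export function Root(props: { children: ReactNode }) {
  const userID = useCurrentUserID();
  if (!userID) {
    return null; // or a loading state; don't initialize with an empty userID
  }
  return (
    <StatsigProvider
      sdkKey="client-YOUR_CLIENT_KEY"
      user={{ userID }}
      waitForInitialization={true}
    >
      {props.children}
    </StatsigProvider>
  );
}
```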

If you've checked all these and the issue persists, it might be best to reach out for further assistance.

In some cases, users may still qualify for the overrides based on the attributes you’re sending on the user object. This may be due to caching, or to the user qualifying for other segments that control the overrides.

For example, if users have "first_utm_campaign": "34-kaiser-db" being sent on the user object, they would qualify for a segment that’s being used in the overrides.

It's also important to note that overridden users will see the assigned variant but will be excluded from experiment results. We have an option to include overridden users in results for experiments that are not in layers, but that option isn’t available for experiments in layers.

Lastly, consider why you are using overrides in this scenario instead of a targeting gate. Overrides can be used to test the Test variant in a staging environment before starting the experiment on prod. However, if some of your customers have opted out of being experimented on, a targeting gate might be a more suitable option.

Why is there a discrepancy between experiment allocation counts and server side pageview metric counts?

The discrepancy between the experiment allocation counts and the ssr_search_results_page_view (DAU) counts could be due to several reasons:

1. **User Activity**: Not all users who are allocated to an experiment will trigger the ssr_search_results_page_view event. Some users might not reach the page that triggers this event, leading to a lower count for the event compared to the allocation.

2. **Event Logging**: There might be issues with the event logging. Ensure that the statsig.logEvent() function is being called correctly and that there are no errors preventing the event from being logged.

3. **Timing of Allocation and Event Logging**: If the event is logged before the user is allocated to the experiment, the event might not be associated with the experiment. Ensure that the allocation happens before the event is logged (see the sketch after this list).

4. **Multiple Page Views**: If a user visits the page multiple times in a day, they will be counted once in the allocation but multiple times in the ssr_search_results_page_view event.
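A minimal sketch of points 2 and 3, assuming statsig-js and placeholder experiment, parameter, and event names:

```typescript
// Sketch: allocate the user (getExperiment logs the exposure) before the
// pageview metric is logged, on the same user. All names are placeholders.
import statsig from "statsig-js";

export async function onSearchResultsPageLoad(): Promise<void> {
  await statsig.initialize("client-YOUR_CLIENT_KEY", { userID: "user-123" });

  // Allocation: returns the value fetched at initialization and logs the
  // user's exposure to the experiment.
  const experiment = statsig.getExperiment("search_results_layout");
  const layout = experiment.get("layout", "control");

  // Metric: logged after allocation so the event can be attributed to the
  // user's experiment group.
  statsig.logEvent("ssr_search_results_page_view", layout);
}
```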

Another possible reason for the discrepancy could be latency. If there is a significant delay between the experiment allocation and the event logging, users might abandon the page before the event is logged, leading to a lower count for the ssr_search_results_page_view event compared to the allocation.

If you've checked these potential issues and the discrepancy still exists, it might be a good idea to reach out to the Statsig team for further assistance.

Why is there cross-contamination in experiment groups and discrepancies in funnel vs summary data?

When conducting experiments, it is crucial to ensure that there is no cross-contamination between control and treatment groups and that data is accurately reflected in both funnel and summary views.

Cross-contamination can occur due to implementation issues, such as a race condition with tracking. This happens when users, particularly those with slower network connections, land on a control page and a page-view event is tracked before the redirect occurs.

To mitigate this, it is recommended to adjust the placement of tracking scripts. The Statsig redirect script should be positioned high in the head of the page, ensuring that it executes as early as possible. Meanwhile, page tracking calls should be made later in the page load lifecycle to reduce the likelihood of premature event tracking. This adjustment is expected to decrease discrepancies in tracking and improve the accuracy of experiment results.
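As a rough illustration of deferring the tracking call (trackPageView below is a placeholder for your own analytics call, not a Statsig API):

```typescript
// Sketch: keep the redirect script early in <head>, and defer the page-view
// tracking call until the page has fully loaded, so users who are being
// redirected are less likely to record a control page view first.
// trackPageView is a placeholder for whatever tracking call you use.
declare function trackPageView(path: string): void;

// Fire tracking on the load event instead of during the initial parse.
window.addEventListener("load", () => {
  trackPageView(window.location.pathname);
});
```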

Additionally, it is important to confirm that there are no other entry points to the control URL that could inadvertently affect the experiment's integrity. Ensuring that the experiment originates from the correct page and that redirects are functioning as intended is essential for maintaining the validity of the test.

Lastly, make sure the code includes explicit calls to track page views accurately. Together, these measures help ensure that the experiment data is reliable and that the funnel and summary views are consistent.

Why is there no exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching?

If you're not seeing any exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching, there are a few things you might want to check:

1. Ensure that your Server Secret Key is correct. You can find this in the Statsig console under Project Settings > API Keys.

2. Make sure that the name of the feature gate in your function matches exactly with the name of the feature gate you've created in the Statsig console.

3. Verify that the user ID is being correctly set and passed to the StatsigUser object.

4. Check if your environment tier matches the one you've set in the Statsig console.

If all these are correct and you're still not seeing any data in the Diagnostics/Pulse Results tabs, it might be a technical issue on our end.

The Statsig SDK batches events and flushes them periodically as well as on shutdown or flush. If you are using the SDK in your middleware, it's recommended to call flush to guarantee events are flushed. For more information, refer to the Statsig documentation.
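A minimal sketch of flushing from short-lived server code, assuming the statsig-node SDK and placeholder gate, event, and env variable names:

```typescript
// Sketch: flush batched events before short-lived middleware/serverless code
// exits, so queued events are not lost. Names here are placeholders.
import Statsig from "statsig-node";

export async function handleRequest(userID: string): Promise<boolean> {
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET!);

  const passed = await Statsig.checkGate({ userID }, "my_feature_gate");
  Statsig.logEvent({ userID }, "gate_checked", String(passed));

  // Events are batched in memory; flush before the runtime is frozen or
  // torn down so they actually reach Statsig.
  await Statsig.flush();
  return passed;
}
```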

If you're still not seeing any data, it's possible that there's an issue with event compression. In some cases, disabling event compression can resolve the issue. However, this should be done with caution and only as a last resort, as it may impact performance.

If you're using a specific version of the SDK, you might want to consider downgrading to a previous version, such as v5.13.2, which may resolve the issue.

Remember, if you're still experiencing issues, don't hesitate to reach out to the Statsig team for further assistance.
