In an A/A test, where both groups receive the same experience, you would generally expect to see no significant difference in metrics results. However, statistical noise can sometimes lead to significant results purely due to random chance. For example, if you're using a 95% confidence interval (5% significance level), you can expect to see one statistically significant metric out of twenty purely due to random chance. This number goes up if you start to include borderline metrics.
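A quick back-of-the-envelope calculation makes this concrete:

```javascript
// With 20 independent metrics each tested at a 5% significance level, you
// expect one false positive on average, and there's roughly a 64% chance of
// seeing at least one "significant" metric in an A/A test by chance alone.
const alpha = 0.05;
const numMetrics = 20;

const expectedFalsePositives = alpha * numMetrics;
const pAtLeastOneSignificant = 1 - Math.pow(1 - alpha, numMetrics);

console.log(expectedFalsePositives);            // 1
console.log(pAtLeastOneSignificant.toFixed(2)); // "0.64"
```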
It's also important to note that the results can be influenced by factors such as within-week seasonality, novelty effects, or differences between early adopters and slower adopters. If you're seeing a significant result, it's crucial to interpret it in the context of your hypothesis and avoid cherry-picking results. If the result doesn't align with your hypothesis or doesn't have a plausible explanation, it could be a false positive.
If you're unsure, it might be helpful to run the experiment again to see if you get similar results. If the same pattern continues to appear, it might be worth investigating further.
In the early days of an experiment, the confidence intervals are so wide that these results can look extreme. There are two solutions to this:
1. Decisions should be made at the end of a fixed-duration experiment. This ensures you get full experimental power on your metrics. Peeking at results on a daily basis is a known challenge with experimentation, and it's strongly suggested that you take premature results with a grain of salt.
2. You can use sequential testing. Sequential testing is a solution to the peeking problem: it inflates the confidence intervals during the early stages of the experiment, which dramatically cuts down the false positive rate from peeking, while still providing a statistical framework for identifying notable results. More information on this feature can be found here.
It's important to keep in mind that experimentation is an imprecise science that's dealing with a lot of noise in the data. There's always a possibility of getting unexpected results by sheer random chance. If you're doing experiments strictly, you would make a decision based on the fixed-duration data. However, pragmatically, the newer data is always better (more data, more power) and it's okay to use as long as you're not cherry-picking and waiting for a borderline result to turn green.
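To see why peeking is risky, here is a rough simulation, not Statsig's methodology: it runs A/A experiments (no true effect) and compares rejecting only at the final look against stopping the first time any interim peek shows |z| > 1.96.

```javascript
// Seeded PRNG so the simulation is reproducible.
function makeRng(seed) {
  let s = seed >>> 0;
  return function () {
    s = (Math.imul(s, 1664525) + 1013904223) >>> 0;
    return (s + 1) / 4294967297; // uniform in (0, 1)
  };
}

function gaussian(rng) {
  // Box-Muller transform: two uniforms to one standard normal draw.
  return Math.sqrt(-2 * Math.log(rng())) * Math.cos(2 * Math.PI * rng());
}

function simulate(numExperiments, peeks, samplesPerPeek, seed) {
  const rng = makeRng(seed);
  let peekingRejections = 0;
  let finalLookRejections = 0;
  for (let e = 0; e < numExperiments; e++) {
    let sum = 0;
    let n = 0;
    let z = 0;
    let rejectedAtAnyPeek = false;
    for (let p = 0; p < peeks; p++) {
      for (let i = 0; i < samplesPerPeek; i++) {
        sum += gaussian(rng);
        n += 1;
      }
      z = (sum / n) * Math.sqrt(n); // z-statistic for the mean of N(0,1) data
      if (Math.abs(z) > 1.96) rejectedAtAnyPeek = true;
    }
    if (rejectedAtAnyPeek) peekingRejections += 1;
    if (Math.abs(z) > 1.96) finalLookRejections += 1;
  }
  return {
    peekRate: peekingRejections / numExperiments,
    fixedRate: finalLookRejections / numExperiments,
  };
}

const { peekRate, fixedRate } = simulate(2000, 10, 50, 1234);
// fixedRate stays near the nominal 5%; peekRate comes out several times higher.
console.log(`fixed-duration false positive rate: ${fixedRate}`);
console.log(`false positive rate with daily peeking: ${peekRate}`);
```

The gap between the two rates is exactly what sequential testing's inflated early confidence intervals are designed to close.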
In the scenario where you have two experiments running for two different groups of users (for instance, free users and paid users), and a user transitions from one experiment to another (like from a free user to a paid user), there isn't a direct way to ensure that this user will be placed in the same group (e.g., the test group) in the new experiment. The assignment of users to experiment groups is randomized to maintain the integrity of the experiment results.
However, if you want to maintain consistency in the user experience, you might consider using the Stable ID as the experiment's unit type. This ID persists on the user's device, allowing them to have the same experience across different states (like logged out to logged in, or free to paid). It's important not to change the experiment type midway. If the experiment spans different user states, it's best to stick with the Stable ID.
In addition, we offer a feature called Layers, which allows you to ensure experiments are mutually exclusive, so that a user is only assigned to one of the tests within the Layer. We also support Targeting Gates, which determine whether a user should be allocated to an experiment based on some criteria (e.g., targeting paid vs. free users).
Once a user qualifies for an experiment, we randomize that user into either Test or Control by default. So it's possible for a user to be in Test in ExperimentA and Control in ExperimentB.
Statsig does not support sticky results for A/B tests based on IP address. The primary identifiers used for consistency in experiments are the User ID and the Stable ID. The User ID is used for signed-in users, ensuring consistency across different platforms like mobile and desktop. For anonymous users, a Stable ID is generated and stored in local storage.
While the IP address can be included in the user object, it's not used as a primary identifier for experiments. The main reason is that multiple users might share the same IP address (for example, users on the same network), and a single user might have different IP addresses (for example, when they connect from different networks). Therefore, using the IP address for sticky results in A/B tests could lead to inconsistent experiences for users.
If you want to maintain consistency across a user's devices, you might consider using a method to identify the user across these devices, such as a sign-in system, and then use the User ID for your experiments.
For scenarios where users revisit the site multiple times without logging in, there are two potential options:
1. Run the test sequentially, only control at first, then only test group. This is known as Switchback testing. You can learn more about it in this blog post and the technical documentation.
2. Offer a way to switch between control/test group visually for the user so they can bounce back to the behavior they'd expect from being on another device.
However, if there's a lengthy effect duration, Switchback may not be ideal. If you are able to infer the IP address, you can use this as the user identifier (maybe even as a custom identifier) and randomize on this. But be aware that skew in the number of users per IP address may introduce a significant amount of noise. You may want to exclude certain IP addresses from the experiment to get around this.
The skew comes from IP addresses that represent dozens if not hundreds of users. This can skew some of the stats when we try to infer confidence intervals. For example, instead of a conversion rate of 0/1, or 1/1, this metric looks like 36/85. This overweights both the numerator and denominator for this "user" which can skew the results.
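The mechanics of randomizing on an IP-based identifier can be sketched as deterministic hashing. FNV-1a is used here purely for illustration; Statsig's actual bucketing algorithm may differ.

```javascript
// Hashing an identifier with an experiment-specific salt decides the group, so
// every visit from the same IP lands in the same bucket.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash; // unsigned 32-bit
}

function assignGroup(identifier, salt) {
  // Use the hash's high bit so the split is well mixed.
  return fnv1a(`${salt}:${identifier}`) < 0x80000000 ? 'test' : 'control';
}

// The same IP always maps to the same group within an experiment:
console.log(assignGroup('203.0.113.7', 'expA')); // stable across visits

// The caveat from above: one office or carrier IP may represent dozens of real
// users, so its metrics (e.g. 36 conversions over 85 sessions) are overweighted
// compared to a genuine single user's 0/1 or 1/1.
```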
To initialize the client SDK without retrieving configuration data you don't need (for metrics-only use), you can use the `statsig.initializeAsync` method. Call it with your user object, client key, and options. The important option to pass here is `initializeValues`: pass an empty object (`{}`). This instructs the SDK not to make a network request for values and to serve them from that empty object instead. You don't need to await the call, but if you do, it should return immediately anyway. All the event logging APIs will work just fine, but you won't have values for any feature gate or experiment.
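A sketch of what this looks like in code. A stub stands in for the real SDK here so the call shape is runnable; the option name `initializeValues` is the one described above, but treat the exact signature as an assumption and check the SDK docs for your version.

```javascript
// Stub illustrating the metrics-only initialization pattern. With the real
// statsig-js SDK, passing initializeValues: {} skips the network fetch and
// serves gate/experiment values from the empty object.
const statsig = {
  lastInitUsedNetwork: null,
  initializeAsync(sdkKey, user, options = {}) {
    // The SDK only fetches values when initializeValues is not provided.
    this.lastInitUsedNetwork = !('initializeValues' in options);
    return Promise.resolve();
  },
};

statsig.initializeAsync(
  'client-YOUR_KEY',
  { userID: 'user-123' },
  { initializeValues: {} } // no network request for values
);

console.log('made a network request:', statsig.lastInitUsedNetwork); // false
```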
If you want a subset of data, that's definitely possible. We have something called “target apps” which allows you to associate a subset of your configs with a specific target app, and then associate an SDK key with a target app. That key will only fetch those configs with the same target app. More information on this can be found in the Target Apps documentation.
To manage rules for a dynamic config in Statsig, the Python SDK does not offer the functionality to set up these rules directly.
Instead, the management of dynamic config rules should be performed through the Statsig Console API. The Console API provides the capability to programmatically create and modify Dynamic configs, tests, and feature gates.
This allows for the automation of configurations and the integration of Statsig features into your development workflow. For detailed instructions on how to use the Console API, please refer to the official Statsig Console API documentation.
It is technically possible for two different web projects to share the same Statsig API keys. However, it is generally recommended to create separate Statsig projects for distinct websites with their own userIDs and metrics. This approach aids in managing each product independently. If you aim to track success across multiple websites, you may want to manage them in the same project. The decision ultimately depends on your specific use case and goals.
As for the impact on billing, it would depend on your usage. Statsig's pricing is based on Monthly Active Users (MAUs), which are unique users that interact with Statsig in a calendar month, regardless of how many API keys are used. If the same users are interacting with both projects, it would not increase your MAUs. However, if different sets of users are interacting with each project, it could potentially increase your MAUs.
When considering whether to create/use a new Statsig project, it's important to understand when it's appropriate to do so. You can refer to the guidance provided in the Statsig documentation. If you decide to create a new project, remember that the API keys are unique per project.
In conclusion, while it's technically possible to share API keys between projects, it's generally better to have separate keys for each project for easier debugging and management. The impact on billing is based on the number of unique users interacting with Statsig, not the number of API keys used.
In Statsig, you can schedule experiments using a targeting gate and a Scheduled Rollout.
To do this, you need to control who qualifies for the test using the rules on the targeting gate. Initially, set the gate to Everyone 0%. Then, use the schedule tool to increase allocation at the specified date(s).
For more detailed instructions on how to set up a Scheduled Rollout, you can refer to the Scheduled Rollout documentation.
Remember, the scheduling of experiments in Statsig is a powerful tool that allows you to control the rollout of your experiments and analyze the results in a timely and efficient manner.
Permanent gates do count towards billable events. An event is recorded when your application calls the Statsig SDK to check whether a user should be exposed to a feature gate or experiment, and this includes permanent gates. However, if a permanent gate is set to 'Launched' or 'Disabled', it will always return the default value and stop generating billable exposure events.
During the rollout or test period of a permanent gate, exposures will be collected and results will be measured. This is when the gate is billable. Once you Launch or Disable the gate, it is no longer billable. The differentiation with permanent gates is that it tells our system not to nudge you to clean it up, and that it will end up living in your codebase long term. More details can be found in the permanent and stale gates documentation.
If you want to launch a feature flag but only set a subset group to true, you can achieve this with a permanent, non-billable gate that targets a specific set of users: toggle off “Measure Metric Lifts” but keep the gate enabled. You don't need to click “Launch” as you would in the standard rollout workflow.
With “Measure Metric Lifts” off, a permanent gate set up this way effectively stops generating billable events. This is useful if you want to target a specific set of users without running up billable events.
Please note that we are continuously working on streamlining this process and improving the user experience. Your feedback is always appreciated.
Dynamic Config usage does not count towards the 1M free metered events.
When it comes to the propagation of changes in Dynamic Config, it is officially stated as "near real-time". While there is no precise time frame, changes are typically reflected in the services within 30 seconds of updating, based on anecdotal evidence. However, this is not a guaranteed service level agreement (SLA).
For server SDKs such as Node.js, the update happens automatically: calls to getConfig will start returning the new value shortly after the change. For client SDKs, updates do not occur in the middle of a session, to maintain a consistent user experience. If you believe forcing a refresh is necessary, you might have to call Statsig.initialize again.
Please note that the above information is based on expert observations and official documentation, which can be found here.
When you reset an experiment, it does not erase the previous data from the experiment. Instead, it puts the experiment into an unstarted state, and every user will receive the default experience. The "salt" used to randomize a user's group will also be changed. This means that when you start the experiment again, users will be randomly assigned to a group that is not necessarily the same group they were in before the reset. This ensures that a group that underperformed due to an issue in the previous run doesn't carry that bias into the new run once the issue is fixed.
Note that the analysis restarts when you start the new run, because users' group assignments are reshuffled. All results, including primary and secondary metrics, start fresh when you reset the experiment.
The previous data from the experiment will still be available. You can find the results from the previous run in the experiment’s history. You can refer back to it by clicking on the link provided in the history section.
For more information, you can refer to the Config History Guide.
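The salt mechanism described above can be sketched with a toy bucketing hash. FNV-1a stands in for the real bucketing hash, purely for illustration.

```javascript
// Group assignment is a deterministic hash of (salt, userID); resetting an
// experiment changes the salt, which re-randomizes everyone.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

function assignGroup(salt, userID) {
  return fnv1a(`${salt}:${userID}`) < 0x80000000 ? 'test' : 'control';
}

let changed = 0;
const totalUsers = 1000;
for (let i = 0; i < totalUsers; i++) {
  const before = assignGroup('salt-v1', `user-${i}`);
  const after = assignGroup('salt-v2', `user-${i}`);
  if (before !== after) changed++;
}

// Roughly half of all users land in a different group after the salt changes.
console.log(`${changed} of ${totalUsers} users changed groups`);
```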
Statsig does indeed support experimenting with different elements such as email subject lines. You can create an experiment in Statsig and define multiple variants, each with a different subject line. This is essentially an A/B/n test where 'n' represents the number of different subject lines you want to test.
As for the use of multi-armed bandits or other selection algorithms to rotate through a pool of copy, Statsig does support multi-armed bandit tests. However, it's not explicitly stated in the documentation if this can be applied to rotating through a pool of copy.
You can use Experiments or Autotune in your email campaigns. Autotune is Statsig's implementation of the multi-armed bandit approach. For more information on Autotune, you can refer to the Autotune documentation.
For a practical example of how to use Experiments or Autotune in email campaigns, you can refer to this walkthrough guide.
For more specific guidance on your use case, it would be best to reach out to the Statsig team directly. They can provide more detailed information and help you set up your experiment in the most effective way.
Statsig provides the capability to target experiments based on user properties, which can include actions users take within an application. When a user performs an action, such as clicking a button, this information can be passed to Statsig as a user property. This property can then be used as a targeting criterion for experiments or feature gates.
To implement this, developers can utilize a 'custom field' as described in the Statsig documentation. This field can be set up to reflect user actions or attributes, enabling real-time targeting based on these criteria.
It is important to note that Statsig operates on the properties of the user that are passed to it, and while it does not store the state of a user, it can act upon the properties provided. For instance, if a 'page_url' property is passed, it can be used to target users who land on a specific page.
Similarly, if an action is taken by the user, such as a button click, this can be communicated to Statsig and used for targeting. For best practices, it is advisable to map different events as different custom fields to avoid overwriting and ensure precise targeting.
For more details on setting up custom fields for targeting, refer to the Statsig documentation on Custom Fields.
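To make the shape concrete, here is a sketch of a user object with custom fields. The field names (`plan`, `clicked_upgrade`) are hypothetical examples, not a required schema, and the local evaluation function is only an illustration of the property check Statsig performs on its side.

```javascript
// A user object with custom fields that targeting conditions can reference.
const user = {
  userID: 'user-123',
  custom: {
    plan: 'paid',            // hypothetical field name
    clicked_upgrade: true,   // hypothetical field name
  },
};

// Local illustration of how a custom-field condition evaluates: the real
// evaluation happens inside Statsig, but the logic is a simple property check.
function passesCondition(u, field, expected) {
  return (u.custom ?? {})[field] === expected;
}

console.log(passesCondition(user, 'plan', 'paid'));          // true
console.log(passesCondition(user, 'clicked_upgrade', true)); // true
```

Note the best-practice point above: keeping different events in different custom fields avoids one action overwriting another.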
The Ruby Server SDK for Statsig is designed with an in-memory local cache to store configurations for gates and experiments. This cache enables the SDK to evaluate rules even in the event of Statsig service disruptions. The cache is updated by polling the Statsig server at a default interval of 10 seconds, which is configurable through the `rulesets_sync_interval` initialization option.
The memory footprint of the cache is typically small, as it consists of a JSON string representing the configurations. However, developers should be cautious when using large ID lists for targeting, as this can significantly increase the size of the cached data and is generally not recommended.
It is important to note that the SDK does not cache user-specific assignments and parameters, but rather the configuration specifications. Monitoring application memory usage is always advisable to ensure efficient operation. For more detailed guidance on managing large ID lists and memory usage, refer to the official documentation.
To conduct Quality Assurance (QA) for your experiment while another experiment is active on the same page with an identical layer ID, you can use two methods:
1. Creating a New Layer: You can create a new layer for the new experiment. Layers allow you to run multiple landing page experiments without needing to update the code on the website for each experiment. When you run experiments as part of a layer, you should update the script to specify the `layerid` instead of the `expid`. Here's an example of how to do this:

```html
<script src="https://cdn.jsdelivr.net/npm/statsig-landing-page-exp?apikey=[API_KEY]&layerid=[LAYER_NAME]"></script>
```
By creating a new layer for your new experiment, you can ensure that the two experiments do not interfere with each other. This way, you can conduct QA for your new experiment without affecting the currently active experiment.
2. Using Overrides: For pure QA, you can use overrides to get users into the experiences of your new experiment in that layer. Overrides take total precedence over what experiment a user would have been allocated to, what group the user would have received, or if the user would get no experiment experience because it is not started yet. You can override either individual user IDs or a larger group of users. The only caveat is a given userID will only be overridden into one experiment group per layer. For more information, refer to the Statsig Overrides Documentation.
When you actually want to run the experiment on real users, you will need to find some way to get allocation for it. This could involve concluding the other experiment or lowering its allocation.
In the Statsig React SDK, the `useLayer` hook is used to obtain a Statsig Experiment, which is represented as a Layer. The `allocatedExperimentName` is a property of this Layer object.
The `allocatedExperimentName` is set to a hash of the experiment's display name for client SDKs. This is expected behavior, not a bug in the SDK.
It's important to note that you should not need to access any of the properties on the Layer object directly. Instead, use the `useLayer` and `useExperiment` hooks. You can refer to the Statsig documentation for more details on how to use these methods.
If you observe any unexpected behavior, it's recommended to review the experiment setup in the Statsig console. If everything appears correct there, the issue could be more technical in nature.
When using the Ruby SDK, the `Statsig.initialize` method is the only operation that makes a network request. Once initialization is complete, all other SDK operations, including checking gate values, are synchronous and do not make additional network requests.
The SDK fetches updates from Statsig in the background, independently of your API calls. This means that checking multiple gate values in your worker will not result in multiple API calls. Instead, gate and experiment checks trigger a lazy, batched exposure log, but nothing synchronous.
For more information about the server SDK approach, you can refer to this article about SDKs.
If you need to change the owner of your account and upgrade your tier, but the previous owner has left the company, you can reach out to our support team for assistance. Please email them at support@statsig.com from an account that has administrative privileges.
Our support team can help you change the owner of your account and upgrade your tier. To do this, you will need to provide the email of the person you would like to change the owner to.
Please note that this process requires backend changes, which our support team can handle for you. Ensure that you have the necessary permissions and information before reaching out to the support team.
Yes, it is possible to check a user's experiment assignment using the JavaScript SDK. However, it's important to note that the SDK methods only show how users would be or have been allocated, not whether they have actually been exposed to the experiment.
To get a list of users who have actually been exposed to an experiment, you can use the Daily Reports feature of the Statsig console API. This feature allows you to download lists of users that have been exposed to the experiment.
Additionally, you can navigate to the users page and type in a specific user ID. This will show the list of recent exposures. You can click on any of them, and it will show all the associated metadata, including the SDK type that was used for assignment.
For more detailed information, you can refer to the following documentation: Daily Reports | Statsig Docs.
To measure the cumulative impact of different product teams' work, you can create separate holdouts for each team. Here's how you can do it:
1. Navigate to the Holdouts section on the Statsig console.
2. Click the "Create New" button and enter the name and description of the holdout that you want to create for the first team.
3. Select a Holdout size in terms of a percentage of all users. A small holdout percentage, typically between 1% and 5%, is recommended.
4. If there are any existing features that are already gated by the first team, you can select those gates at the bottom to make sure they respect the holdout moving forward.
5. Repeat the process to create a separate holdout for the second team.
Remember, you should not make these holdouts global. Instead, each team should add their gates, experiments, etc. to the appropriate holdout as they create them. If there are gates/features that should be attributed to both teams, you can add that gate to both holdouts; it will simply keep out the union of their holdout audiences.
The percentage you set your holdout at will depend on how many users you expect to be impacted by the changes of each team. General guidance is in the neighborhood of 1-5%, but you can use the power analysis calculator to generate some intuition if you’re already logging those KPI metrics.
For further information, you can refer to the following resources:
- Getting in on Holdouts - Statsig Holdouts Documentation - Power Analysis Calculator
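To build intuition for the 1-5% guidance, the standard two-sample power calculation behind such a calculator can be sketched as follows. This is the textbook formula for a conversion metric, not Statsig's exact implementation.

```javascript
// Per-group sample size needed to detect an absolute lift `delta` on a
// baseline conversion rate `p`, at 80% power and two-sided alpha = 0.05:
//   n = 2 * (z_alpha/2 + z_beta)^2 * p(1 - p) / delta^2
const zAlpha = 1.96; // two-sided 5% significance
const zBeta = 0.84;  // 80% power

function sampleSizePerGroup(baselineRate, absoluteLift) {
  const variance = baselineRate * (1 - baselineRate);
  return Math.ceil((2 * (zAlpha + zBeta) ** 2 * variance) / absoluteLift ** 2);
}

// Detecting a 1pp lift on a 10% conversion rate needs ~14,000 users per group,
// one way to sanity-check whether a 1-5% holdout is large enough for your KPIs.
console.log(sampleSizePerGroup(0.10, 0.01));
```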
When reviewing experimentation results, it is crucial to understand the significance of pre-experiment data. This data serves to highlight any potential pre-existing differences between the groups involved in the experiment. Such differences, if not accounted for, could lead to skewed results by attributing these inherent discrepancies to the experimental intervention.
To mitigate this issue, a technique known as CUPED (Controlled-experiment Using Pre-Experiment Data) is employed.
CUPED is instrumental in reducing variance and pre-exposure bias, thereby enhancing the accuracy of the experiment results. It is important to recognize, however, that CUPED has its limitations and cannot completely eliminate bias. Certain metrics, particularly those like retention, do not lend themselves well to CUPED adjustments.
In instances where bias is detected, users are promptly notified, and a warning is issued on the relevant Pulse results. The use of pre-experiment data is thus integral to the process of identifying and adjusting for pre-existing group differences, ensuring the integrity of the experimental outcomes.
In order to make the review required criteria narrower for different actions and environments, you can follow these steps:
1. Enabling a flag for yourself: this can be done through overrides on the gate. You can uncheck the requirement for review in the Roles & Access Control settings.
2. Disabling review requirements for lower environments: You can configure which environments require reviews via your Project Settings. To do this, go to Project Settings --> Keys & Environments --> tap Edit on Environments.
As a Project Admin, you can also allow yourself and other Project Admins to self-approve review requests. To turn on this setting, navigate to the Project Settings page, click on the Edit button next to Config Review Requirements, and click the checkbox to Allow Project Admins to self-approve reviews.
Please note that any changes to these settings will require approval from currently designated reviewers.
Also, it's important to note that the role-based access control setting is only available for customers in our enterprise tier. You can find more information about our pricing and tiers on our pricing page.
In the Statsig PHP SDK, the dependency `mockery/mockery` was initially included as a core dependency, which was not ideal for production environments.
To address this, the Statsig PHP SDK has been updated to move `mockery/mockery` to a dev dependency. This change ensures that `mockery/mockery` is not installed in production environments, adhering to best practices for dependency management. The update was made in version 2.3.0 of the Statsig PHP SDK.
Developers looking to pick up this change can do so by updating their version of the Statsig PHP SDK to v2.3.0.
For more details on the release and the changes included, refer to the official release notes provided by Statsig at the following link: Statsig PHP SDK Release 2.3.0.
You can indeed send the environment information using the HTTP API in Statsig. The process involves logging an event with a custom environment. Here's an example of how to do this:
```bash
curl \
  --header "statsig-api-key: <YOUR-SDK-KEY>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"events": [{"user": { "userID": "42", "statsigEnvironment": {"tier": "staging"} }, "time": 1616826986211, "eventName": "test_api_event"}]}' \
  "https://events.statsigapi.net/v1/log_event"
```
In this example, the `statsigEnvironment` field is included in the user object, and it contains a `tier` field set to "staging". You can replace "staging" with your desired environment. For more information, you can refer to the Statsig HTTP API documentation.
The `error_callback` is a parameter in the `initialize` method of the Ruby SDK that is triggered with an error message if the network request to initialize the SDK fails. This can be used for basic error handling.
Here is an example of how to use the `error_callback`:

```ruby
Statsig.initialize(api_key, {
  data_adapter: Statsig::InMemoryDataAdapter.new,
  error_callback: lambda { |error| puts "Error: #{error}" }
})
```

In this example, if the SDK fails to initialize due to a network request failure, the `error_callback` will be triggered and the error message will be printed to the console.
You can also refer to the test case that verifies this behavior in the Ruby SDK's GitHub repository here.
Please note that the documentation is being updated to include this information for future reference.
In Autotune experiments, there isn't a specific way to conduct pre-launch testing without starting the experiment. However, you can set up the experiment and thoroughly review its configuration before initiating it.
To test the experiment, you need to click the "Start" button to launch it. If you find that adjustments are necessary after the experiment has started, you have the option to pause the experiment, make the necessary changes, and then restart it.
Remember, the integrity of your experiment relies on careful setup and review before launching. Always ensure that your configuration is correct and meets your requirements before starting the experiment.
In the Statsig React SDK, you can use the `initCompletionCallback` option during SDK initialization to verify whether initialization succeeded within a specified timeframe. This callback is invoked when the initialization process completes. It provides three parameters: `initDurationMs`, `success`, and `message`.
If initialization was not successful within the specified timeframe, the `success` parameter will be `false` and the `message` parameter will provide additional information.
Here's an example of how to use it:
```javascript
statsig.initialize('<CLIENT_SDK_KEY>', user, {
  initCompletionCallback: (initDurationMs, success, message) => {
    if (success) {
      console.log('Statsig has been initialized successfully.');
    } else {
      console.log('Statsig initialization failed:', message);
    }
  },
});
```
Please note that this option is supported in v4.13.0+ of the JavaScript SDK. For more information, you can refer to the Statsig JavaScript SDK documentation under Options.
When conducting multiple experiments, the decision to run them in the same layer versus different layers has significant implications.
Placing experiments in the same layer ensures that there is no overlap between participants in different experiments. This is beneficial for eliminating interaction effects between experiments, as no user will be part of more than one experiment at a time. However, a critical consideration is that using layers divides the user base, which can substantially reduce the experimental power and sample size.
This division of the user base means that, with two experiments sharing a layer, the number of participants available to each is at best halved. Consequently, this reduction can limit the number of experiments that can be conducted simultaneously and may prolong the duration required to achieve statistically significant results.
When experiments run in a layer and therefore have a smaller sample size, confidence intervals are wider, so only larger effects will reach significance in the same amount of time.
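To put a number on that power cost:

```javascript
// Confidence interval width scales with 1/sqrt(n), so halving the sample size
// widens the interval by a factor of sqrt(2), about 41% wider.
function ciHalfWidth(stdDev, n) {
  return (1.96 * stdDev) / Math.sqrt(n); // 95% CI half-width for a mean
}

const full = ciHalfWidth(1, 10000);  // all users in one experiment
const halved = ciHalfWidth(1, 5000); // two experiments sharing a layer
console.log((halved / full).toFixed(3)); // "1.414"
```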
For a more in-depth discussion on the topic, including the trade-offs between isolating experiments and embracing overlapping A/B tests, refer to the article Embracing Overlapping A/B Tests and the Danger of Isolating Experiments.
In Statsig, there is no hard limit to the number of dynamic configs you can create. However, the number of configs can have practical implications, particularly on the response size and latency.
Having a large number of dynamic configs can impact the initialization for both server and client SDKs. For Server SDKs, they will have to download every single config and all of their payloads during initialization, and on each polling interval if there’s an update available. This won't necessarily impact user experience, but it does mean large payloads being downloaded and stored in memory on your servers. You can find more information on Server SDKs here.
On the other hand, Client SDKs, where 'assignment' takes place on Statsig’s servers by default, will have to download user-applicable configs and their payloads to the user’s device during initialization. This increases the initialization latency and could potentially impact user experience. More details on Client SDKs can be found here.
In conclusion, while there is no explicit limit to the number of dynamic configs, having a large number can increase complexity and affect performance due to the increased payload size and latency. Therefore, it's important to consider these factors when creating and managing your dynamic configs in Statsig.
At present, Statsig does not offer a direct integration with Sentry for forwarding events. However, there are alternative methods available for event tracking and forwarding.
Statsig supports a wide range of data connectors and integrations, including Segment, Snowflake, Amplitude, Bugsnag, Fivetran, Google Analytics, Heap, Mixpanel, RevenueCat, mParticle, RudderStack, and a generic Webhook.
If you're using a service that we don't have an official integration for, you can use our Generic Webhook integration. This integration sends raw events to the provided webhook URL.
For those using Sentry, the recommended approach is to use Segment (if available) or a generic webhook. More details on how to use the generic webhook can be found in our documentation.
Please note that we are continuously expanding our range of integrations, and Sentry could be considered for future development. We will provide updates as they become available.
In Statsig, you can use Dynamic Configs to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes. This is similar to Feature Gates, but you get an entire JSON object you can configure on the server and fetch typed parameters from it. Here's an example from the documentation:
var config = Statsig.GetConfig("awesome_product_details");
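The typed-parameter behavior can be sketched without the SDK; this toy class mirrors the get-with-default pattern, where reads fall back to a caller-supplied default when a key isn't configured (the class and parameter names are illustrative, not the SDK's):

```javascript
// Minimal stand-in for a Dynamic Config's get-with-default semantics
// (class and parameter names are illustrative, not the SDK's).
class DynamicConfigSketch {
  constructor(values) { this.values = values; }
  get(key, defaultValue) {
    // Return the configured value when present, else the caller's default.
    return Object.prototype.hasOwnProperty.call(this.values, key)
      ? this.values[key]
      : defaultValue;
  }
}

const config = new DynamicConfigSketch({ product_name: 'Awesome Product', price: 10 });
const productName = config.get('product_name', 'fallback'); // configured value wins
const discount = config.get('discount', false);             // not configured: default used
```

Because every read carries its own default, code that consumes the config stays safe even when a parameter hasn't been set up on the server yet.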
You can also use Layers/Experiments to run A/B/n experiments. We offer two APIs, but we recommend the use of layers to enable quicker iterations with parameter reuse.
However, if you're looking to dynamically set a series of flags for a user, you might need to replicate your current system using Statsig's rules. You can create rules based on user attributes and set the value of the flags accordingly.
Remember to provide a StatsigUser object whenever possible when initializing the SDK, passing as much information as possible in order to take advantage of advanced gate and config conditions.
If you're running tests in a full-stack environment and using the Ruby SDK on the server, you can override the flag value locally. Here's the relevant documentation: Local Overrides. This allows you to control the flag values based on your setup.
Currently, there is no admin Command Line Interface (CLI) or Software Development Kit (SDK) specifically designed for creating and configuring gates and experiments in Statsig. However, you can use the Statsig Console API for these tasks.
The Console API documentation provides detailed instructions and examples on how to use it. You can find the introduction to the Console API there; for specific information on creating and configuring gates, refer to this link.
While there are no immediate plans to build a CLI for these tasks, the Console API documentation includes curl command examples that might be helpful for developers looking to automate these tasks.
Please note that the Statsig SDKs are primarily used for checking the status of gates and experiments, not for creating or configuring them.
When multiple Node.js processes initialize the Statsig SDK simultaneously, they might all try to write to the file at the same time, which could lead to race conditions or other issues. To mitigate this, you can use a locking mechanism to ensure that only one process writes to the file at a time. This could be a file-based lock, a database lock, or another type of lock depending on your application's architecture and requirements.
Another approach could be to have a single process responsible for writing to the file. This could be a separate service or a designated process among your existing ones. This process would be the only one to initialize Statsig with the rulesUpdatedCallback, while the others would initialize it with just the bootstrapValues. Remember to handle any errors that might occur during the file writing process to ensure your application's robustness.
If you're using a distributed system, you might want to consider using something like Redis, which is designed to handle consumers across a distributed system. Statsig's Data Adapter is designed to handle this so you don't need to build this manually. You can find more information about the Data Adapter in the Data Adapter overview and the Node Redis data adapter.
However, for these types of cases, it's recommended to have only a single data adapter enabled for writes, so you don't run into concurrent-write issues. If your company doesn't currently use Redis for anything else and you use GCP buckets for file storage, there's no need to adopt Redis; the data adapter is the latest supported way to accomplish what you're trying to do.
In terms of performance, startup time is the primary concern here, and write() performance to an extent, though if you limit writes to a single instance they should be infrequent.
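The single-writer pattern above can be sketched with a toy in-memory store; the adapter roles and keys below are illustrative, not Statsig's Data Adapter API:

```javascript
// Sketch of the single-writer pattern for a shared config store: every
// instance can read, but only the designated writer may update values,
// which avoids concurrent-write conflicts. Roles and keys are illustrative.
class SharedStoreSketch {
  constructor() { this.data = new Map(); }
  makeAdapter(role) {
    return {
      get: (key) => this.data.get(key),
      set: (key, value) => {
        if (role !== 'writer') throw new Error('adapter is read-only');
        this.data.set(key, value);
      },
    };
  }
}

const store = new SharedStoreSketch();
const writer = store.makeAdapter('writer');
const reader = store.makeAdapter('reader');

writer.set('config_specs', '{"gates":[]}');
const specs = reader.get('config_specs'); // readers see the writer's update

let writeBlocked = false;
try { reader.set('config_specs', '{}'); } catch (e) { writeBlocked = true; }
```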
Using the front-end and back-end SDKs in tandem is common. You can even use the server SDKs to bootstrap the client SDKs (so that there's no asynchronous request back to Statsig to get assignments). For more notes on resilience and best practices, you can refer to the Statsig reliability FAQs.
When using the useGate hook in the React SDK, if the provider does not wait for initialization and useGate is called before initialization completes, it will return false on the initial read. However, once the client eventually initializes, it will cause a re-render of the component that is using the useGate hook.
This re-render is triggered because the useGate hook updates its state with the actual value of the gate once initialization is complete. It's important to note that this re-render will occur regardless of whether the gate value is true or false. The key point is that the state of the gate has updated, which triggers the re-render.
For handling loading states while the Statsig client initializes, you can use the isLoading value. Once the Statsig client state changes, your component will be called again and you can handle the true/false gate state as desired. For more details, refer to the Statsig React SDK documentation.
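The before/after-initialization behavior can be sketched without React at all; the class below is a toy stand-in for the client's gate store, not the SDK's actual implementation:

```javascript
// Toy stand-in for the client-side gate lifecycle (not the SDK's actual
// implementation): before initialization completes, gate reads fall back
// to false; afterwards the fetched value is returned.
class GateStoreSketch {
  constructor() { this.initialized = false; this.values = {}; }
  checkGate(name) {
    if (!this.initialized) return false; // uninitialized reads default to false
    return this.values[name] === true;
  }
  finishInit(values) { this.initialized = true; this.values = values; }
}

const gateStore = new GateStoreSketch();
const beforeInit = gateStore.checkGate('new_homepage_design'); // not initialized yet
gateStore.finishInit({ new_homepage_design: true });
const afterInit = gateStore.checkGate('new_homepage_design');  // value arrived
```

In React, it's this transition from the fallback to the fetched value that updates the hook's state and triggers the re-render.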
The Statsig JavaScript SDK is designed to be as lightweight as possible while supporting as many browsers as possible. The primary feature that our SDK relies on, which may not be supported by all browsers, is a JavaScript Promise. You may wish to polyfill a Promise library to ensure maximum browser compatibility. We recommend taylorhakes/promise-polyfill for its small size and compatibility.
Please note that the SDK has not been tested on Internet Explorer. Microsoft is retiring IE11 in June 2022. For more detailed information, you can refer to the Statsig JavaScript Client SDK documentation.
For more specific compatibility details, you can refer to the Statsig Compatibility page. This page provides detailed information about the compatibility of Statsig with different versions of browsers like Chrome.
For latency details, please refer to the official Statsig documentation or reach out to support.
In a B2B context, the recommended way to roll out a feature customer by customer is by using feature gates. You can create a feature gate and add a list of customer IDs to the conditions of the gate. This way, only the customers in the list will pass the gate and have access to the feature, while the rest will go to the default behavior.
Here's an example of how you can do this:
const user = { userID: '12345', email: '12345@gmail.com', ... };
const showNewDesign = await Statsig.checkGate(user, 'new_homepage_design');
if (showNewDesign) {
  // New feature code here
} else {
  // Default behavior code here
}
In this example, 'new_homepage_design' is the feature gate, and '12345' is the customer ID. You can replace these with your own feature gate and customer IDs. On the other hand, Dynamic Configs are more suitable when you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, such as country.
Remember to follow best practices for feature gates, such as managing for ease and maintainability, selecting the gating decision point, and focusing on one feature per gate.
Alternatively, you can target your feature by CustomerID. You could either use a Custom Field and pass a custom field to the SDK, e.g. {custom: {customer: 'xyz123'}}, or create a new Unit Type of customerID and then target by Unit ID. For more information on creating a new Unit Type, refer to the Statsig documentation.
When an experiment is in the "Unstarted" state, the code will revert to the 'default values' in the code. This refers to the parameter you pass to our get calls as documented here.
You have the option to enable an Experiment in lower environments such as staging or development, by toggling it on in those environments prior to starting it in Production. This allows you to test and adjust the experiment as needed before it goes live.
Remember, the status of the experiment is determined by whether the "Start" button has been clicked. If it hasn't, the experiment remains in the "Unstarted" state, allowing you to review and modify the experiment's configuration as needed.
When you encounter the error "statsigSDK> Event metadata is too large (max 4096). Some attributes may be stripped." in your logs, it indicates that the size of the metadata for a particular event has exceeded the maximum limit of 4096 characters when stringified. This limit is set by Statsig to ensure efficient data handling.
To resolve this issue, you should review the events you're logging and the associated metadata. You might be logging more information than necessary or there could be large data structures being included unintentionally. If you're unsure about what metadata is being sent, you can add debugging statements in your code to print out the metadata before it's sent to Statsig. This will help you identify any unusually large pieces of data. Once you've identified the cause, you can then modify your logging to reduce the size of the metadata. The goal is to log only the information that is necessary for your metrics and analyses.
Event metadata is usually included to filter your Metrics during analysis and define more contextual metrics. The topic of event-metadata and this limit are covered in the Statsig documentation.
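A quick way to find the offending fields is to measure the stringified size before logging. The helper below is a sketch of that debugging step (the 512-character per-field threshold is arbitrary, chosen only to surface the largest attributes):

```javascript
// Sketch: pre-flight check for event metadata size. Statsig strips
// attributes once the stringified metadata exceeds 4096 characters, so
// measuring before logging helps pinpoint oversized fields.
const MAX_METADATA_CHARS = 4096;

function oversizedMetadataKeys(metadata) {
  if (JSON.stringify(metadata).length <= MAX_METADATA_CHARS) return [];
  // Over the limit: report the largest offenders so you know what to trim.
  return Object.entries(metadata)
    .map(([key, value]) => [key, JSON.stringify(value).length])
    .filter(([, size]) => size > 512) // per-field threshold is arbitrary here
    .map(([key]) => key);
}

const ok = oversizedMetadataKeys({ page: 'checkout', step: 2 });      // [] — under limit
const bad = oversizedMetadataKeys({ page: 'checkout', blob: 'x'.repeat(5000) }); // ['blob']
```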
This error can also be thrown when the SDK logs internally-used performance events; in that case it's not related to events your service logged with log_event, and a patch will be released to fix it.
In the event of encountering unexpected diagnostic results when evaluating a gate on React Native, despite the gate being set to 100% pass and Statsig being initialized, there are several potential causes to consider.
The "Uninitialized" status in evaluationDetails.reason can occur even after calling .initialize() and awaiting the result. This issue can be due to several reasons:
1. **Ad Blockers**: Ad blockers can interfere with the initialization network request, causing it to fail.
2. **Network Failures**: Any network issues that prevent the initialization network request from completing successfully can result in an "Uninitialized" status.
3. **Timeouts**: The statsig-js SDK applies a default 3-second timeout on the initialize request. This can lead to more frequent initialization timeouts on mobile clients where users may have slower connections.
If you encounter this issue, it's recommended to investigate the potential causes listed above. Check for the presence of ad blockers, network issues, and timeouts. This will help you identify the root cause and implement the appropriate solution.
It's worth noting that the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method.
If you're still experiencing issues, it might be helpful to use the debugging tools provided by Statsig. These tools can help you understand why a certain user got a certain value. For instance, you can check the diagnostics tab for higher-level pass/fail/bucketing population sizes over time, and for debugging specific checks, the logstream at the bottom is useful and shows both production and non-production exposures in near real-time.
One potential solution to this issue is to use the waitForInitialization option. When you don't use this option, any of your components will be rendered immediately, regardless of whether Statsig has initialized. This can result in 'Uninitialized' exposures. By setting waitForInitialization=true, you can defer the rendering of those components until after Statsig has initialized. This guarantees the components aren't rendered until initialize has completed, and you won't see any of those 'Uninitialized' exposures being logged. You can find more details in the Statsig documentation.
However, if you can't use waitForInitialization because it remounts the navigation stack and changes the navigation state, you can check for initialization through initCompletionCallback.
You can also verify the initialization by checking the value of isLoading from useGate and useConfig, as well as initialized and initStarted from StatsigContext. If the issue persists, please reach out to the Statsig team for further assistance.
If you're not seeing any exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching, there are a few things you might want to check:
1. Ensure that your Server Secret Key is correct. You can find this in the Statsig console under Project Settings > API Keys.
2. Make sure that the name of the feature gate in your function matches exactly with the name of the feature gate you've created in the Statsig console.
3. Verify that the user ID is being correctly set and passed to the StatsigUser object.
4. Check if your environment tier matches the one you've set in the Statsig console.
If all these are correct and you're still not seeing any data in the Diagnostics/Pulse Results tabs, it might be a technical issue on our end.
The Statsig SDK batches events and flushes them periodically, as well as on shutdown or flush. If you are using the SDK in your middleware, it's recommended to call flush to guarantee events are delivered. For more information, refer to the Statsig documentation.
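The batching behavior can be sketched with a toy queue; this is an illustration of why an explicit flush matters in short-lived middleware, not the SDK's internal implementation:

```javascript
// Sketch of why flush matters: events are batched in memory and only
// delivered periodically or on shutdown/flush. If a short-lived process
// exits without flushing, the batch is lost.
class EventQueueSketch {
  constructor(send) { this.queue = []; this.send = send; }
  logEvent(event) { this.queue.push(event); } // batched, not delivered yet
  flush() {
    const batch = this.queue.splice(0); // drain the queue
    if (batch.length) this.send(batch); // deliver everything at once
  }
}

const delivered = [];
const events = new EventQueueSketch((batch) => delivered.push(...batch));
events.logEvent({ name: 'request_handled' });
const pendingBeforeFlush = events.queue.length; // still sitting in the batch
events.flush();
const pendingAfterFlush = events.queue.length;  // batch delivered
```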
If you're still not seeing any data, it's possible that there's an issue with event compression. In some cases, disabling event compression can resolve the issue. However, this should be done with caution and only as a last resort, as it may impact performance.
If you're using a specific version of the SDK, you might want to consider downgrading to a previous version, such as v5.13.2, which may resolve the issue.
Remember, if you're still experiencing issues, don't hesitate to reach out to the Statsig team for further assistance.