Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products.
Statsig FAQs

Can a significant result in an A/A test be attributed to random chance?

In an A/A test, where both groups receive the same experience, you would generally expect to see no significant differences in metric results. However, statistical noise can sometimes produce significant results purely by random chance. For example, if you're using a 95% confidence interval (5% significance level), you can expect one statistically significant metric out of twenty purely due to random chance. This number goes up if you start to include borderline metrics.

It's also important to note that the results can be influenced by factors such as within-week seasonality, novelty effects, or differences between early adopters and slower adopters. If you're seeing a significant result, it's crucial to interpret it in the context of your hypothesis and avoid cherry-picking results. If the result doesn't align with your hypothesis or doesn't have a plausible explanation, it could be a false positive.

If you're unsure, it might be helpful to run the experiment again to see if you get similar results. If the same pattern continues to appear, it might be worth investigating further.

In the early days of an experiment, the confidence intervals are so wide that these results can look extreme. There are two solutions to this:

1. Make decisions at the end of a fixed-duration experiment. This ensures you get full experimental power on your metrics. Peeking at results on a daily basis is a known challenge with experimentation, and it's strongly suggested that you take premature results with a grain of salt.

2. Use sequential testing. Sequential testing is a solution to the peeking problem. It inflates the confidence intervals during the early stages of the experiment, which dramatically cuts down the false positive rate from peeking, while still providing a statistical framework for identifying notable results. More information on this feature can be found here.

It's important to keep in mind that experimentation is an imprecise science that's dealing with a lot of noise in the data. There's always a possibility of getting unexpected results by sheer random chance. If you're doing experiments strictly, you would make a decision based on the fixed-duration data. However, pragmatically, the newer data is always better (more data, more power) and it's okay to use as long as you're not cherry-picking and waiting for a borderline result to turn green.

Can a user be consistently placed in the same experiment group when transitioning from a free to a paid user?

Can I create a feature gate or user segment based on a specific event occurrence in Statsig?

In the current setup of Statsig, there is no direct method to create a feature gate or a user segment based on a specific event occurrence. Feature gates and user segments are typically created based on user attributes or conditions.

However, you can log custom events using the Statsig SDK and then analyze these events in the Statsig console. While this doesn't directly influence the feature gate or user segment, it can provide valuable insights into user behavior that can inform your decision-making process.

For a more automated solution, you might consider using the Console API. While it doesn't directly support event-based targeting, it can help streamline the process of checking for specific user attributes in gates or segments.

It's important to note that the Statsig SDKs are designed to be performant, meaning they make bucketing decisions based on rules/config they have, and can’t make async calls to look up state about a user as part of making this decision.

If you need to gate based on a user's state, you would need to manage the state of a user property (for example, isSubscriber?) and pass this along in the user object for the SDK to be able to gate based on this state.
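
As a rough sketch of that pattern (the gate name here is hypothetical, and the server-side checkGate call follows the same shape used elsewhere in these FAQs):

```javascript
// Sketch only: 'subscriber_feature_gate' is a hypothetical gate configured
// to check the custom isSubscriber field.
const user = {
  userID: '12345',
  custom: {
    isSubscriber: true, // state you manage yourself and pass on every check
  },
};

const showSubscriberFeature = await Statsig.checkGate(user, 'subscriber_feature_gate');
```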

For further questions or more specific assistance, it is recommended to reach out to the Statsig team directly. They may be able to provide more detailed guidance or discuss potential future features.

Can I force run updating metrics on an ongoing experiment in Statsig?

When updating log events for a feature gate in Statsig, there is no need to restart the feature gate to see the revised metrics data, as changes should take effect immediately. However, if the updated metrics are not reflecting as expected, it is advisable to verify that the events are being logged correctly.

To obtain cleaner data for metrics lifts after modifying the code for a log event, you can adjust the distribution of users between the control and experiment groups or 'resalt' the feature to reshuffle users without changing the percentage distribution.

For experiments with delayed events, setting the experiment allocation to 0% after the desired period ensures that delayed events still count towards the analytics. It is important to note that the sizes of variants cannot be adjusted during an ongoing experiment to maintain the integrity of the results.

To increase the exposure of a variant, the current experiment must be stopped, and a new one with the desired percentage split should be started.

In the context of managing experiments with Terraform, the status field can be updated to reflect one of four possible values: setup, active, decision_made, and abandoned, aiding in the management of the experiment's lifecycle.

For those utilizing Statsig Warehouse Native, creating an Assignment Source and a new experiment allows for the definition of the experiment timeline and subsequent calculation of results.

Statsig pipelines typically run in PST, with data landing by 9am PST, although enterprise companies have the option to switch this to UTC.

Statsig Cloud calculates new or changed metrics going forward, and Statsig Warehouse Native offers the flexibility to create metrics after experiments have started or finished and to reanalyze data retrospectively.

Can I get sticky results for an A/B test based on IP address?

Statsig does not support sticky results for A/B tests based on IP address. The primary identifiers used for consistency in experiments are the User ID and the Stable ID. The User ID is used for signed-in users, ensuring consistency across different platforms like mobile and desktop. For anonymous users, a Stable ID is generated and stored in local storage.

While the IP address can be included in the user object, it's not used as a primary identifier for experiments. The main reason is that multiple users might share the same IP address (for example, users on the same network), and a single user might have different IP addresses (for example, when they connect from different networks). Therefore, using the IP address for sticky results in A/B tests could lead to inconsistent experiences for users.

If you want to maintain consistency across a user's devices, you might consider using a method to identify the user across these devices, such as a sign-in system, and then use the User ID for your experiments.

For scenarios where users revisit the site multiple times without logging in, there are two potential options:

1. Run the test sequentially, only control at first, then only test group. This is known as Switchback testing. You can learn more about it in this blog post and the technical documentation.

2. Offer a way to switch between control/test group visually for the user so they can bounce back to the behavior they'd expect from being on another device.

However, if there's a lengthy effect duration, Switchback may not be ideal. If you are able to infer the IP address, you can use this as the user identifier (maybe even as a custom identifier) and randomize on this. But be aware that skew in the number of users per IP address may introduce a significant amount of noise. You may want to exclude certain IP addresses from the experiment to get around this.
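
For illustration, here is a sketch of passing an inferred IP address as a custom identifier; it assumes you have created a matching custom unit type (here called ipAddress) in the console, and inferredIp is a placeholder for however you derive the address:

```javascript
const user = {
  customIDs: {
    ipAddress: inferredIp, // placeholder for the inferred IP
  },
};

// Randomization then happens on the ipAddress unit type configured for the experiment.
const experiment = await Statsig.getExperiment(user, 'ip_randomized_experiment');
```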

The skew comes from IP addresses that represent dozens if not hundreds of users. This can skew some of the stats when we try to infer confidence intervals. For example, instead of a conversion rate of 0/1, or 1/1, this metric looks like 36/85. This overweights both the numerator and denominator for this "user" which can skew the results.

Can multiple SDKs be used on a site without creating problems, such as a JavaScript site with a React app using both JavaScript and React SDKs?

Using multiple SDKs on a site is technically possible, but it can lead to complications. The Statsig JavaScript SDK and the React SDK are designed to be used independently. If you use both on the same site, they will each maintain their own state and won't share information.

This could lead to inconsistencies in feature gate evaluations and experiment assignments.

If your site is primarily in JavaScript but includes a React app, it's recommended to initialize Statsig with the JavaScript SDK and then pass the initialized Statsig object into the React app. This way, you ensure that both parts of your application are using the same Statsig state.
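
A minimal sketch of that approach, with the key, user, and gate names as placeholders:

```javascript
import statsig from 'statsig-js';

// Initialize once, in the plain JavaScript entry point.
await statsig.initialize('client-sdk-key', { userID: 'user-id' });

// Plain JavaScript parts of the site can check gates directly.
const showBanner = statsig.checkGate('new_banner');

// Pass the same initialized `statsig` object (or import it from a shared
// module) into the React app instead of initializing a second instance.
```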

However, if your React app is substantial or complex, it might be more beneficial to use the React SDK for its additional features, like the StatsigProvider and React hooks. In this case, you should ensure that the JavaScript part of your application doesn't also initialize Statsig, to avoid the aforementioned issues.

The React SDK brings in the JS SDK as well, so you don't need to include it separately. Once it is initialized, you can use the 'statsig' object directly.

However, you'll have to be careful: the React SDK internally keeps track of the state of the user, so if you try to update the user outside the provider, your React component tree won't rerender with updates. Similarly, if you try to call methods before the React SDK initializes the JS SDK instance internally, you could run into issues.

The best approach depends on the specifics of your application and its usage of Statsig.

Can Statsig analyze the impact of an experiment on CLV after a longer period, like 6 months?

In a subscription-based business model, understanding the long-term impact of experiments on Customer Lifetime Value (CLV) is crucial.

Statsig provides the capability to analyze the impact of an experiment on CLV over extended periods, such as 6 months. To facilitate this, Statsig allows for the setup of an experiment to run for a specific duration, such as 1 month, and then decrease the allocation to 0%, effectively stopping new user allocation while continuing to track the analytics for the users who were part of the experiment.

This tracking can continue for an additional 5 months or more, depending on the requirements. It is important to note that the experience delivered to users during the experiment will not continue after the allocation is set to 0%. However, there are strategies to address this, which can be discussed based on specific concerns or requirements.

Additionally, Statsig experiments by default expire after 90 days, but there is an option to extend the experiment duration multiple times for additional 30-day periods. Users will receive notifications or emails as the expiration date approaches, prompting them to extend the experiment if needed.

This functionality is available on the Pro plan, so businesses can measure the long-term impact of their experiments on CLV without a direct data warehouse integration, for example by updating CLV through integrations such as Hightouch.

Can two different web projects share the same Statsig API keys and what is the impact on billing?

Do permanent gates count towards billable events and how to launch a feature flag for a subset group without running up billable events?

Permanent gates do count towards billable events. An event is recorded when your application calls the Statsig SDK to check whether a user should be exposed to a feature gate or experiment, and this includes permanent gates. However, if a permanent gate is set to 'Launched' or 'Disabled', it will always return the default value and stop generating billable exposure events.

During the rollout or test period of a permanent gate, exposures will be collected and results will be measured. This is when the gate is billable. Once you Launch or Disable the gate, it is no longer billable. The differentiation with permanent gates is that it tells our system not to nudge you to clean it up, and that it will end up living in your codebase long term. More details can be found in the permanent and stale gates documentation.

If you want to launch a feature flag but only set a subset group to true, you can achieve this with a permanent, non-billable gate that targets a specific set of users. Toggle off “Measure Metric Lifts” while keeping the gate enabled; you don’t need to click “Launch” as you would in the standard rollout workflow.

Configured this way, marking a gate as permanent and turning off metric lifts effectively stops billable events. This is useful if you want to target a specific set of users without running up billable events.

Please note that we are continuously working on streamlining this process and improving the user experience. Your feedback is always appreciated.

How are p-values of experiments calculated and is it always assumed that the underlying distribution is a normal distribution?

In the context of hypothesis testing, the p-value is the probability of observing an effect equal to or larger than the measured metric delta, assuming that the null hypothesis is true. A p-value lower than a pre-defined threshold is considered evidence of a true effect.

The calculation of the p-value depends on the number of degrees of freedom (ν). For most experiments, a two-sample z-test is appropriate. However, for smaller experiments with ν < 100, Welch's t-test is used. In both cases, the p-value is dependent on the metric mean and variance computed for the test and control groups.

The z-statistic of a two-sample z-test is calculated using the formula: Z = (Xt - Xc) / sqrt(var(Xt) + var(Xc)). The two-sided p-value is then obtained from the standard normal cumulative distribution function.
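
As a concrete sketch of that calculation (not Statsig's internal code): the variance terms below are the variances of the group means, i.e. s²/n for each group, and the normal CDF uses a standard numerical approximation.

```javascript
// Sketch: two-sample z-test as described above. The variance arguments are
// the variances of the group means (s^2 / n for each group).
function twoSidedPValue(meanTest, meanControl, varMeanTest, varMeanControl) {
  const z = (meanTest - meanControl) / Math.sqrt(varMeanTest + varMeanControl);
  // Two-sided p-value from the standard normal CDF.
  return 2 * (1 - standardNormalCdf(Math.abs(z)));
}

// Standard normal CDF via the Abramowitz & Stegun 26.2.17 approximation.
function standardNormalCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp((-x * x) / 2);
  const p =
    d *
    t *
    (0.3193815 +
      t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - p : p;
}
```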

For smaller sample sizes, Welch's t-test is the preferred statistical test due to its lower false positive rates in cases of unequal sizes and variances. The t-statistic is computed in the same way as the two-sample z-test, and the degrees of freedom ν are computed using a specific formula.

While the normal distribution is often used in these calculations due to the central limit theorem, the specific distribution used can depend on the nature of the experiment and the data. For instance, in Bayesian experiments, the posterior probability distribution is calculated, which can involve different distributions depending on the prior beliefs and the likelihood.

It's important to note that it's typically assumed that the sample means are normally distributed. This is generally true for most metrics thanks to the central limit theorem, even if the distribution of the metric values themselves is not normal.

How can we conduct QA for an experiment if another experiment is active on the same page with an identical layer ID?

To conduct Quality Assurance (QA) for your experiment while another experiment is active on the same page with an identical layer ID, you can use two methods:

1. Creating a New Layer: You can create a new layer for the new experiment. Layers allow you to run multiple landing page experiments without needing to update the code on the website for each experiment. When you run experiments as part of a layer, you should update the script to specify the layerid instead of expid. Here's an example of how to do this:

```html
<script src="https://cdn.jsdelivr.net/npm/statsig-landing-page-exp?apikey=[API_KEY]&layerid=[LAYER_NAME]"></script>
```

By creating a new layer for your new experiment, you can ensure that the two experiments do not interfere with each other. This way, you can conduct QA for your new experiment without affecting the currently active experiment.

2. Using Overrides: For pure QA, you can use overrides to get users into the experiences of your new experiment in that layer. Overrides take total precedence over what experiment a user would have been allocated to, what group the user would have received, or if the user would get no experiment experience because it is not started yet. You can override either individual user IDs or a larger group of users. The only caveat is a given userID will only be overridden into one experiment group per layer. For more information, refer to the Statsig Overrides Documentation.

When you actually want to run the experiment on real users, you will need to find some way to get allocation for it. This could involve concluding the other experiment or lowering its allocation.

How does Statsig differentiate data from different environments and can non-production data be used in experiments?

Statsig differentiates data from different environments by the environment tier you specify during the SDK initialization. You can set the environment tier to "staging", "development", or "production". By default, all checks and event logs are considered "production" data if the environment tier is unset.

Experiments and metrics primarily factor in production data. Non-production events are visible in diagnostics, but they are not included in Pulse results. This is because most companies do not want non-production test data being included. If you want to include these, you can log them as regular events. However, non-production data is filtered out of the warehouse and there is no other way to include it.

When initializing, if you add { environment: { tier: "production" } }, this would set your environment to "production", not "staging" or "development". If you want to set your environment to "staging" or "development", you should replace "production" with the desired environment tier.
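
For example, a client-side initialization targeting a non-production tier might look like the following (key and user are placeholders; server SDKs accept an equivalent option):

```javascript
await statsig.initialize('client-sdk-key', { userID: 'user-id' }, {
  environment: { tier: 'staging' },
});
```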

Pulse results are only computed for “Production” tier events. To see Pulse results, you need:

- An experiment that is “Started” (that is, enabled in production)
- Exposures and events in the production tier
- Exposures for users in all test groups
- Events/metrics associated with users in test groups

As long as you have initialized using { environment: { tier: "production" } }, your Pulse will compute. This means that even if your code is deployed to staging, as long as you initialize with the production tier, you will be able to see Pulse results.

How to address a vulnerability alert for Statsig Ruby SDK due to MPL-2.0 license in Snyk

When a vulnerability alert is triggered in Snyk for the Statsig Ruby SDK due to the MPL-2.0 license, it is important to understand the implications of the license and how it may affect your organization. The Mozilla Public License 2.0 (MPL-2.0) is a widely accepted open-source license that allows the code to be used in both open-source and proprietary projects.

Under MPL-2.0, if modifications are made to the original code, those changes must be disclosed when the software is distributed. However, the MPL-2.0 license permits the combination of the licensed code with proprietary code, which means that the SDK can be used in closed-source applications without requiring the entire application to be open-sourced.

It is important to note that while the MPL-2.0 license is generally compatible with other licenses, it may be flagged by security tools like Snyk because it requires review and understanding of its terms. Organizations should consult with their legal team or open-source compliance experts to ensure that the use of MPL-2.0 licensed software aligns with their policies and legal obligations.

If your organization has made no modifications to the original code, there should be no concerns regarding the need to disclose changes. Ultimately, the decision to use software under the MPL-2.0 license should be made by the decision-makers within the organization after careful consideration of the license terms and compliance requirements.

How to change the owner of our account and upgrade our tier when the previous owner has left the company?

If you need to change the owner of your account and upgrade your tier, but the previous owner has left the company, you can reach out to our support team for assistance. Please email them at support@statsig.com from an account that has administrative privileges.

Our support team can help you change the owner of your account and upgrade your tier. To do this, you will need to provide the email of the person you would like to change the owner to.

Please note that this process requires backend changes, which our support team can handle for you. Ensure that you have the necessary permissions and information before reaching out to the support team.

How to configure holdouts to measure the impact of different product teams?

How to ensure consistent experiment results across CSR and SSR pages with Statsig

Ensuring consistent experiment results across Client-Side Rendered (CSR) and Server-Side Rendered (SSR) pages when using Statsig involves establishing a single source of truth for user assignments. The challenge arises when independent network requests on different pages result in varying initialization times, potentially leading to different experiment variants being presented to the user.

To address this, it is recommended to have one consistent source of truth for the user's assignment. This can be achieved by using the assignments from the first page (CSR) and carrying them over to the second page (SSR), rather than initializing a new request on the second page.

This approach ensures that the user experiences a consistent variant throughout their session, regardless of the page they navigate to. It is important to note that the user's assignment for any getExperiment call is determined at the time of initialization, not when getExperiment is called. The getExperiment call simply returns the value fetched during initialization and logs the user's participation in the experiment and their assigned group.

To implement this, developers should ensure that the assignments are fetched from the network request on the first page and then passed to the second page, avoiding the need for another initialization request and the management of an ever-growing list of parameters.

How to ensure real-time logging of events works via the HTTP API?

When utilizing the HTTP API to log events in real-time, it is crucial to ensure that the events are properly formatted to be recognized by the system. If events are not appearing in the Metrics Logstream, it may be necessary to wrap the events in an array using the events:[] syntax. This is particularly important because events are typically sent in bulk, and the correct formatting is essential for them to be processed and displayed in the stream.
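
As an illustration of that wrapping, here is a sketch of a log request; it assumes the v1 log_event endpoint and a server secret key, and the event fields shown are placeholders:

```javascript
await fetch('https://events.statsigapi.net/v1/log_event', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'STATSIG-API-KEY': 'server-secret-key',
  },
  body: JSON.stringify({
    // Events must be wrapped in an array, even when sending a single event.
    events: [
      {
        user: { userID: 'user-id' },
        eventName: 'checkout_started',
        value: null,
        metadata: { source: 'http-api' },
        time: Date.now(),
      },
    ],
  }),
});
```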

Additionally, it is recommended to use the SDKs provided for various languages, as they handle a lot of the heavy lifting, including performance and reliability improvements. For cases where the SDKs are not an option, such as with certain desktop applications or specific frameworks like Qt, it may be possible to use language bindings to integrate with the SDKs of supported languages. This can potentially save significant development effort and ensure more reliable event logging.

Furthermore, for the best practice regarding experiments and configurations, it is advised to fetch the experiment config at the time when it is needed to ensure accurate exposure logging. The SDKs typically fetch all evaluations for a single user in one shot and cache it for the session duration, allowing for local and instantaneous experiment config or gate checks. If using the HTTP API directly, it is possible to use an 'initialize' endpoint, which is typically reserved for SDK use.

This endpoint fetches all evaluated results for a given user without generating exposure logs at the same time, thus avoiding overexposure issues.

How to implement Statsig tracking on Shopify Custom Pixels?

In order to implement Statsig tracking on Shopify Custom Pixels, you need to ensure that the Statsig SDK is properly initialized and that events are logged correctly. Here's a step-by-step guide on how to do this:

1. First, create a script element and set its source to the Statsig SDK. Append this script to the document head.

```javascript
const statsigScript = document.createElement('script');
statsigScript.setAttribute('src', 'https://cdn.jsdelivr.net/npm/statsig-js/build/statsig-prod-web-sdk.min.js');
document.head.appendChild(statsigScript);
```

2. Next, initialize the Statsig SDK. This should be done within an asynchronous function that is called when the Statsig script has loaded.

```javascript
const statsigInit = new Promise(async (resolve) => {
  statsigScript.onload = async () => {
    await statsig.initialize("client-sdk-key", { userID: "user-id" });
    resolve();
  };
});
```

3. Subscribe to the event you want to track and log it with Statsig. Make sure to wait for the Statsig SDK to initialize before logging the event.

```javascript
analytics.subscribe("checkout_started", async () => {
  await statsigInit;
  statsig.logEvent("checkout_started");
});
```

Remember that statsig will be the global object; wait for initialization before logging, and there is no need to require it. Please note that the SDK accumulates events internally and flushes them periodically. Ensure that statsig.logEvent() was actually called, and that the application didn't exit right away afterward, so the SDK has enough time to flush the events.

Inspect the network requests and see what requests were made and what their responses were to verify if the events are being sent correctly.

How to incorporate an app version check into Statsig experiment variants?

Incorporating an app version check into Statsig experiment variants can be achieved using Feature Gates or Segments. Here are the steps you need to follow:

1. Capture the Initial App Version: The first step is to capture and store the initial app version that a user starts using. This information is crucial as it will be used to determine whether a user is new and started using the app from a specific version onwards.

2. Use Custom Fields for Targeting: The next step is to use Custom Fields for targeting. This will require some code on the client-side that passes in user upgrade/create timestamps as a custom field.

3. Pass the Initial App Version as a Custom Field Key: Pass the version from which a user first started using the app as a new user as the value under that Custom Field Key.

4. Configure the Custom Field Key: Once you create the key, configure it using the "Greater than or equal to version" operator. This operator checks the current version of the user's app.

For your specific case, you can create two separate experiments or feature gates. One for users on app version 1, where the variants are v1 and v2, and another for users on app version 2, where the variants are v1 to v4. You can then use the app version as a custom field for targeting.
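
A sketch of what passing that information from the client might look like; the field and variable names are illustrative:

```javascript
const user = {
  userID: 'user-id',
  appVersion: currentAppVersion, // the version the user is running now
  custom: {
    initialAppVersion: storedInitialVersion, // captured the first time the user ran the app
  },
};

await statsig.initialize('client-sdk-key', user);
```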

Please note that the Key set in the Custom Field will be included in the events called by Feature Gates. However, it's not a chargeable event. It's just attributes that will be in the payload for events you're already tracking.

How to interpret pre-experiment results in experimentation data

When reviewing experimentation results, it is crucial to understand the significance of pre-experiment data. This data serves to highlight any potential pre-existing differences between the groups involved in the experiment. Such differences, if not accounted for, could lead to skewed results by attributing these inherent discrepancies to the experimental intervention.

To mitigate this issue, a technique known as CUPED (Controlled-experiment Using Pre-Experiment Data) is employed.

CUPED is instrumental in reducing variance and pre-exposure bias, thereby enhancing the accuracy of the experiment results. It is important to recognize, however, that CUPED has its limitations and cannot completely eliminate bias. Certain metrics, particularly those like retention, do not lend themselves well to CUPED adjustments.
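
For intuition, here is the standard CUPED adjustment in miniature (a sketch, not Statsig's internal implementation): each user's in-experiment value is shifted by a multiple of their pre-experiment value, removing the variance explained by pre-existing differences.

```javascript
// y: in-experiment metric values per user; x: the same users' pre-experiment values.
function cupedAdjust(y, x) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x);
  const my = mean(y);
  let cov = 0;
  let varX = 0;
  for (let i = 0; i < x.length; i++) {
    cov += (x[i] - mx) * (y[i] - my);
    varX += (x[i] - mx) ** 2;
  }
  const theta = cov / varX; // theta = cov(x, y) / var(x)
  // Group means of the adjusted values have lower variance than the raw means.
  return y.map((yi, i) => yi - theta * (x[i] - mx));
}
```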

In instances where bias is detected, users are promptly notified, and a warning is issued on the relevant Pulse results. The use of pre-experiment data is thus integral to the process of identifying and adjusting for pre-existing group differences, ensuring the integrity of the experimental outcomes.

How to optimize landing page loading without waiting for experiment configurations

When optimizing landing page loading without waiting for experiment configurations, it is recommended to use a custom script approach if you need to pass in specific user identifiers, such as Segment's anonymous ID.

This is because the standard landing page script provided by Statsig does not allow for the initialization with a different user ID. Instead, it automatically generates a stableID for the user, which is used for traffic splitting and consistent user experience upon revisits. This stableID is stored in a cookie and is used by Statsig to identify users in experiments.

However, if you need to synchronize with metrics from Segment using the Segment anonymous ID, you may need to deconstruct the landing page tool and implement a custom version that allows you to set the userObject with your Segment anonymous ID.

Additionally, you can enrich the Segment User with an additional identifier, such as statsigStableID, which can be obtained in JavaScript using statsig.getStableID(). This ID can then be mapped from the Segment event payload to Statsig's stableID. If performance is a concern, you can bootstrap your SDK with values so you don't have to wait for the network request to complete before rendering.

This can help mitigate performance issues related to waiting for experiment configurations. For more information on bootstrapping the SDK, you can refer to the official documentation on bootstrapping.
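
As a sketch of the enrichment step mentioned above (assuming Segment's analytics.js is already loaded on the page):

```javascript
// Attach the Statsig stable ID as a trait on the Segment user.
const statsigStableID = statsig.getStableID();
analytics.identify({ statsigStableID });
```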

How to roll out and monitor multiple new features simultaneously using Statsig?

To roll out and monitor multiple new features simultaneously using Statsig, you can utilize the platform's Feature Gates and Experiments.

For each new feature you plan to roll out, create a corresponding Feature Gate. This approach automatically converts a feature rollout into an A/B test, allowing you to measure the impact of the rollout on all your product and business metrics as the rollout proceeds.

If you wish to test hypotheses between product variants, create an Experiment. An Experiment can offer multiple variants and returns a JSON config to help you configure your app experience based on the group that the user is assigned to.

To show features randomly to different sets of customers, use the 'Pass%' in a Feature Gate and 'Allocation%' in an Experiment. This allows you to control the percentage of users who see each feature.

Statsig's Experiments offer more capabilities for advanced experiment designs. For example, you can analyze variants using stable IDs for situations when users have not yet signed-up (or signed-in), or using custom IDs to analyze user groups, pages, sessions, workspaces, cities, and so on. You can also run multiple isolated experiments concurrently.

Remember to define your company goals and key performance indicators (KPIs) to measure the success of your features. You can break down these strategic goals into actionable metrics that you can move with incremental, iterative improvements.

If you use three different feature gates, you will find out how each feature, individually, performed against the baseline. If you want combinatorial impact analysis, such as A vs B vs C vs AB vs BC vs AC vs ABC, then you will need to set up an experiment with 7 variants, specify the combinations via parameters, and measure.

However, in practice, this level of combinatorial testing isn’t always fruitful and will consume a lot of time. A pragmatic recommendation would be to use feature gates to individually launch and measure the impact of a single feature, launch the ones that improve metrics and wind down the ones that don’t.

How to send heads-up emails to users before exposing them to a new feature?

In order to notify users before they are exposed to a new feature, you can create a separate feature gate to control the rollout percentage and a segment to contain the account IDs that can be exposed to the feature. This approach allows you to effectively manage the rollout process and ensure that users are notified in advance.

The main feature gate will only pass for the accounts in the segment that also pass the separate feature gate. This provides a clear distinction between the users who are eligible and those who have been exposed to the feature.

Here's a brief overview of the process:

1. Create a main feature gate (rollout_feature_gate). The users that pass this gate will be exposed to the feature.

2. Create a separate feature gate (exposure_eligibility_gate) to control the rollout percentage. The users that pass this gate are the ones eligible to be exposed to the feature.

3. Create a segment (eligible_accounts) that contains all the account IDs that can be exposed to the feature.

The rollout_feature_gate will return pass only for the accounts in the eligible_accounts segment that also pass the exposure_eligibility_gate. After a certain amount of time, export all account IDs in the exposure_eligibility_gate to the eligible_accounts segment, and increase the exposure_eligibility_gate percentage.

This approach gives you a distinction between the eligible users who were exposed to the feature (the eligible_accounts segment) and those who are eligible but potentially not yet exposed to it (the exposure_eligibility_gate). You can manage additional rules and environment conditions in the main feature gate (rollout_feature_gate).

Remember to test your setup thoroughly in a pre-production environment before rolling it out to ensure everything works as expected.

How to use the `get<T>` method in the DynamicConfig typing class in TypeScript React-Native?

In TypeScript React-Native, the get<T> method in the DynamicConfig typing class is used to retrieve a specific parameter from a Dynamic Config. Here's how to use it:

1. **Passing the Key**: The key you should pass in is the name of the parameter you want to retrieve from the Dynamic Config. This parameter is defined in the Statsig Console when you create or edit a Dynamic Config.

2. **Understanding the Return Value**: The get method will return the defaultValue you provide if the parameter name does not exist in the Dynamic Config object. This can happen if there's a typo in the parameter name, or if the client is offline and the value has not been cached. If you're always getting the defaultValue, it's likely that the parameter name doesn't match what's in the Dynamic Config, or the client hasn't successfully fetched the latest config values. Please double-check your parameter names and your network status. If the issue persists, it might be a good idea to log this issue for further debugging.

3. **Retrieving the Entire Object**: If you want to get the entire object, you can use the getValue() method. For example, if you call config.getValue(), you will get the entire object.

4. **Example Usage**: The argument passed into getConfig will be the dynamic config key itself which returns a config object that implements the get method. You pass a top-level object key into that get method. For instance, if the top level key is clothing, you would pass that into the get method accordingly, like so: statsig.getConfig('max_discount').get('clothing');
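
Putting these together, a short sketch with illustrative config and parameter names:

```javascript
const config = statsig.getConfig('max_discount');

// get(key, defaultValue): returns the parameter, or the default if the key
// is missing or the latest values haven't been fetched.
const clothingDiscount = config.get('clothing', 0.1);

// getValue(): returns the entire config object.
const allDiscounts = config.getValue();
```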

Remember, the get<T> method is designed to access individual properties of the dynamic config, not the entire object. For accessing the entire object, use the getValue() method.

Is there a limit to the number of dynamic configs in Statsig and what are the effects of having a large number?

In Statsig, there is no hard limit to the number of dynamic configs you can create. However, the number of configs can have practical implications, particularly on the response size and latency.

Having a large number of dynamic configs can impact the initialization for both server and client SDKs. For Server SDKs, they will have to download every single config and all of their payloads during initialization, and on each polling interval if there’s an update available. This won't necessarily impact user experience, but it does mean large payloads being downloaded and stored in memory on your servers. You can find more information on Server SDKs here.

On the other hand, Client SDKs, where 'assignment' takes place on Statsig’s servers by default, will have to download user-applicable configs and their payloads to the user’s device during initialization. This increases the initialization latency and could potentially impact user experience. More details on Client SDKs can be found here.

In conclusion, while there is no explicit limit to the number of dynamic configs, having a large number can increase complexity and affect performance due to the increased payload size and latency. Therefore, it's important to consider these factors when creating and managing your dynamic configs in Statsig.

Is there an admin CLI or SDK for creating and configuring gates and experiments in Statsig?

Currently, there is no admin Command Line Interface (CLI) or Software Development Kit (SDK) specifically designed for creating and configuring gates and experiments in Statsig. However, you can use the Statsig Console API for these tasks.

The Console API documentation provides detailed instructions and examples on how to use it, including an introduction to the Console API and specific guidance on creating and configuring gates.

While there are no immediate plans to build a CLI for these tasks, the Console API documentation includes curl command examples that might be helpful for developers looking to automate these tasks.

Please note that the Statsig SDKs are primarily used for checking the status of gates and experiments, not for creating or configuring them.

Understanding the difference between daily participation rate and one time event metrics in Statsig

In Statsig, the Daily Participation Rate and One Time Event metrics are used to track user behavior in experiments. The choice between these two metrics depends on the nature of the event you're tracking.

1. Daily Participation Rate: This metric is calculated as the total number of days that a user has the selected event, divided by the number of days the user is in the experiment. This is done for each user in the experiment. The mean event_dau, or the average active days per user, is then calculated by aggregating this average event_dau for each user in the experiment, with each user weighted equally. This metric is more suitable for events that are expected to occur repeatedly for a given user.

2. One Time Event: This metric is ideal for events that are only expected once per user, such as booking events. If the event is expected to occur only once per user during the experiment or holdout period, then the One Time Event metric would be suitable.

For longer experiments and holdouts, the choice of metric would still depend on the frequency of the event. If the event is expected to occur approximately once a month or less frequently, the One Time Event metric should be appropriate. However, if the event is expected to occur approximately weekly or more frequently, the Daily Participation Rate metric might be more appropriate as it captures recurring behavior.

When reviewing experiments, consider all related metrics:

- One-time events best capture the number of unique users who participated in the event.
- Daily participation rate is an effective proxy for "how much" people are participating in the event.
- Total events (event_count) is a better proxy for revenue or downstream metrics.

For holdouts, it can be helpful to use different rollups. For example, looking at one-time metrics for the 7-day or 28-day rollup would tell you what % of users participated (at all) within the last 7-day or 28-day window. This can be an effective way to get past the history issue.

What happens if multiple Node.js processes initialize the Statsig SDK at the same time?

When multiple Node.js processes initialize the Statsig SDK simultaneously, they might all try to write to the file at the same time, which could lead to race conditions or other issues. To mitigate this, you can use a locking mechanism to ensure that only one process writes to the file at a time. This could be a file-based lock, a database lock, or another type of lock depending on your application's architecture and requirements.

Another approach could be to have a single process responsible for writing to the file. This could be a separate service or a designated process among your existing ones. This process would be the only one to initialize Statsig with the rulesUpdatedCallback, while the others would initialize with just the bootstrapValues. Remember to handle any errors that might occur during the file writing process to ensure your application's robustness.
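
A rough sketch of that single-writer setup with the Node SDK; the file path and environment flag are placeholders:

```javascript
const fs = require('fs');
const Statsig = require('statsig-node');

const BOOTSTRAP_PATH = '/tmp/statsig_rules.json'; // placeholder path

async function initStatsig() {
  if (process.env.STATSIG_WRITER === 'true') {
    // Designated writer: persist rules whenever Statsig reports an update.
    await Statsig.initialize('server-secret-key', {
      rulesUpdatedCallback: (rulesJSON) => {
        try {
          fs.writeFileSync(BOOTSTRAP_PATH, rulesJSON);
        } catch (e) {
          console.error('Failed to persist Statsig rules', e);
        }
      },
    });
  } else {
    // Everyone else bootstraps from the persisted rules file.
    await Statsig.initialize('server-secret-key', {
      bootstrapValues: fs.readFileSync(BOOTSTRAP_PATH, 'utf8'),
    });
  }
}
```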

If you're using a distributed system, you might want to consider using something like Redis, which is designed to handle consumers across a distributed system. Statsig's Data Adapter is designed to handle this so you don't need to build this manually. You can find more information about the Data Adapter in the Data Adapter overview and the Node Redis data adapter.

However, for these types of cases, it's recommended to have only a single data adapter enabled for writes, so you don't run into concurrent write issues. If your company doesn't currently use Redis for anything else and you use GCP buckets for file storage, there's no need to adopt Redis. The data adapter is the latest supported way to accomplish what you're trying to do.

In terms of performance, startup time is the primary concern here, along with write() performance to an extent; if you limit writes to a single instance, they should be infrequent.

In the case of using both the front end and the back end SDKs in tandem, this is common. You can even use the server SDKs to bootstrap the client SDKs (so that there’s no asynchronous request back to statsig to get assignments). For more notes on resilience and best practices, you can refer to the Statsig reliability FAQs.

What is the best practice for using one Statsig account for both dev/test and production environments?

Statsig currently supports metrics calculation for a single production environment. If you have multiple environments set up, you will only be able to use one as your production environment for metrics and experiments.

When running an experiment, the default behavior is that the experiment will run in all environments. This means that if you try to access the config values of the experiment, you will get a valid value, even if you are in a non-production environment.

If you want your experiment to only run on production, you can set a targeting gate. This will ensure that only users in the production environment will pass the gate and be included in the experiment.

Here is an example of how you might access the config values of an experiment:

```javascript
useExperiment(XXX).config.get(YYY, false);
```

In this example, XXX is the experiment you are running, and YYY is the config value you are trying to access. If the experiment is not running in the current environment, you will get the default fallback value, which is false in this case.

Remember, once you start an experiment, it will run in all environments unless you set a targeting gate to restrict it to specific environments.

The best practice is to use the experiment checklist and diagnostics tab to instrument the test, enable it in lower environments, and validate that exposures are flowing through. Then, when you’ve validated these, you click “Start” to go to production. This workflow is typically adequate for most users.

Please note that this is subject to change as Statsig continues to evolve and add new features. Always refer to the official Statsig documentation for the most up-to-date information.

What is the impact of turning `waitForInitialization` off in Statsig's React SDK?

The waitForInitialization option in Statsig's React SDK is used to delay the rendering of your components until the SDK has finished initializing. This ensures that the correct feature gate values are available when your components are rendered. If you're seeing users in the "uninitialized" group, it could mean that the SDK initialization is not yet complete when the feature gate check is being made. This could be due to a slow network connection or other issues that delay the initialization process.

To resolve this, you could consider increasing the initTimeoutMS value, which gives the SDK more time to complete the network roundtrip before falling back to cached values. You could also use the init callback for debugging to check when it's returning. If the issue persists, it might be worth considering using a server SDK on one of your servers to bootstrap with. This way, if you already have a network request serving stuff to your clients, you can have it evaluate the user and pass those values back without needing a roundtrip to the SDK.
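
For reference, here is a sketch of the provider setup with waitForInitialization and a longer initTimeoutMS; the values are illustrative:

```javascript
import { StatsigProvider } from 'statsig-react';

function Root() {
  // waitForInitialization defers rendering children until the SDK is ready;
  // initTimeoutMS gives slower connections more time before falling back to cached values.
  return (
    <StatsigProvider
      sdkKey="client-sdk-key"
      user={{ userID: 'user-id' }}
      waitForInitialization={true}
      options={{ initTimeoutMS: 5000 }}
    >
      <App />
    </StatsigProvider>
  );
}
```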

Remember, the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method. If you're trying to avoid unnecessary updateUser calls, consider building a statsigUser and only call for update if the local statsigUser is different from the one saved in the SDK instance.

If you turn waitForInitialization off, you will first get the uninitialized check, and then, once SDK initialization completes (within 3 seconds), you will get another check with the actual value (assuming the initialization network request was successful).

Statsig is working on the metadata around these cases to make them easier to debug. In new SDK versions, it will be possible to differentiate them a bit more. There are also plans to change the React SDK so that it won't render the children until initialization has at least started, so there won't be uninitialized checks at first. This is due to the ordering of effects, where SDK hooks run before the SDK initialization path in the provider.

What is the recommended method for performing A/B tests for static pages in Next.js using Statsig?

There are two main methods for performing A/B tests for static pages in Next.js using Statsig.

The first method involves using getClientInitializeResponse and storing the initializeValues in a cookie. This approach is suitable if you want to avoid generating separate static pages for each variant. However, the cookie size is limited to 4KB, so this method might not be suitable if the initializeValues are large.

The second method involves generating a separate static page for each experiment's variant. This approach is suitable if you have a small number of variants and want to avoid the cookie size limitation. However, this method might require more setup and maintenance if you have a large number of variants.

If you're unsure which method to use, you can start with the method that seems easier to implement and switch to the other method if you encounter issues.

If you're concerned about the size of initializeValues, there are a couple of ways to bring down the response size. One way is to use target apps to limit the gates/experiments/etc included in the response. Another way is to add an option to getClientInitializeResponse to specify which gates/experiments/etc to include in the response.

If you plan on stitching together multiple cookies, a simple string splice might be easier. An alternative that doesn't involve stitching together multiple initializeValues is to use multiple client SDK instances. This wouldn't be supported in React, but using the JS SDK, you could have multiple Statsig instances, each with its own set of configs. You would have to keep track of which instance to use for which experiment, but this may be a "cleaner" approach.

The JS SDK can be synchronously loaded using initializeValues similarly to how the StatsigSynchronousProvider works. So you should be able to just call statsig.initialize(..., {initializeValues}) without needing to worry about awaiting.
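
A sketch of that flow end to end; the method names follow the server and client SDKs referenced above, and how you transport initializeValues to the client (embedded in the page, a cookie, etc.) is up to you:

```javascript
// Server (Node): compute the user's values during static generation or SSR.
const initializeValues = Statsig.getClientInitializeResponse(user);

// Client: initialize synchronously from those values, with no network round trip.
statsig.initialize('client-sdk-key', user, { initializeValues });
```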

Finally, you can also use the local evaluation SDK to fetch the whole config before the page becomes interactive and then pass it to the synchronous SDK. This is a client SDK, but it solves the "flickering" issues because you don't need to wait for the experiment(s) data to be fetched on the fly.

What is the recommended way to roll out a feature customer by customer in a B2B context using Statsig?

In a B2B context, the recommended way to roll out a feature customer by customer is by using feature gates. You can create a feature gate and add a list of customer IDs to the conditions of the gate. This way, only the customers in the list will pass the gate and have access to the feature, while the rest will go to the default behavior.

Here's an example of how you can do this:  

```javascript
const user = {
  userID: '12345',
  email: '12345@gmail.com',
  // ...
};

const showNewDesign = await Statsig.checkGate(user, 'new_homepage_design');
if (showNewDesign) {
  // New feature code here
} else {
  // Default behavior code here
}
```

In this example, 'new_homepage_design' is the feature gate, and '12345' is the customer ID. You can replace these with your own feature gate and customer IDs. On the other hand, Dynamic Configs are more suitable when you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, such as country.

Remember to follow best practices for feature gates, such as managing for ease and maintainability, selecting the gating decision point, and focusing on one feature per gate.

Alternatively, you can target your feature by CustomerID. You could either use a Custom Field and pass it to the SDK, such as {custom: {customer: 'xyz123'}}, or create a new Unit Type of customerID and then target by Unit ID. For more information on creating a new Unit Type, refer to the Statsig documentation.

What is the status of the experiment if start is not clicked and what actions can I take?

When an experiment is in the "Unstarted" state, the code will revert to the 'default values' in the code. This refers to the parameter you pass to our get calls as documented here.

You have the option to enable an Experiment in lower environments such as staging or development, by toggling it on in those environments prior to starting it in Production. This allows you to test and adjust the experiment as needed before it goes live.

Remember, the status of the experiment is determined by whether the "Start" button has been clicked. If it hasn't, the experiment remains in the "Unstarted" state, allowing you to review and modify the experiment's configuration as needed.

Why am I getting `RULE` as `Default` and `REASON` as `Unrecognized` when deploying to an environment in Statsig?

When deploying to an environment in Statsig, if you encounter an issue where the RULE is always Default and the REASON is Unrecognized, it typically means that the SDK was initialized, but the config or feature gate you're trying to evaluate did not exist in the set of values. This could be due to a few reasons:

1. The feature gate or dynamic config you're trying to evaluate does not exist or is not correctly spelled in your code. Please double-check the spelling and case-sensitivity.

2. The SDK might not have been able to fetch the latest rules from the Statsig server. This could be due to network issues or if the SDK initialization did not complete successfully.

3. If you're using a server SDK, it's possible that the SDK is outdated and doesn't recognize new types of gate conditions. In this case, upgrading the SDK might resolve the issue.

Remember, the Unrecognized reason is only given when the SDK is initialized, but the config or feature gate did not exist in the set of values.

It's also important to ensure that you are waiting for initialize to finish before making evaluations. For instance, calling checkGate inside the callback or after you are sure the callback has been triggered.
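
For example, with a server SDK (the key and gate name are placeholders):

```javascript
await Statsig.initialize('server-secret-key');
const passes = await Statsig.checkGate({ userID: 'user-id' }, 'my_gate');
```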

Additionally, check if the environment you've deployed to is able to make requests to the relevant URLs. In some cases, these requests might be blocked by the client in your production environment.

If you're still having trouble, please provide more details about your setup and the issue, and a Statsig team member will assist you shortly.

Why am I getting a lot of "uninitialized" values in my experiment in a Server Side Rendering (SSR) project?

If you are using the synchronous provider and Server Side Rendering (SSR), the assignment reasons chart should be 100% with the reason “Bootstrap”. There should not be any “network”/“Cache”/etc. If you are seeing a lot of uninitialized values, it could point to a potential implementation issue.

One possible cause could be changes in the user object that you pass into the synchronous provider. If there are fields on the user object that you are loading asynchronously or in an effect elsewhere, it could trigger the SDK to update the values for the user.

To debug this issue, you should verify each of the render passes on the StatsigSynchronousProvider and ensure that there is only a single render with a static user object. If the user object changes and it rerenders, it could lead to the issue you are experiencing.

If you are calling useExperiment() in your code, make sure that the user object passed into the provider is static and never changes. If the user object changes after you bootstrap the StatsigSynchronousProvider, it will cause the provider to re-fetch initialize values using a network request, effectively discarding the results passed to it through SSR.
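
One way to keep the user object referentially stable is to memoize it; a sketch, with prop names as placeholders:

```javascript
import React from 'react';
import { StatsigSynchronousProvider } from 'statsig-react';

function Root({ userID, initializeValues, children }) {
  // Memoize so the provider receives the same user object across renders.
  const user = React.useMemo(() => ({ userID }), [userID]);

  return (
    <StatsigSynchronousProvider
      sdkKey="client-sdk-key"
      user={user}
      initializeValues={initializeValues}
    >
      {children}
    </StatsigSynchronousProvider>
  );
}
```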

It's also important to note that if you are using SSR correctly, you should never have a “network” reason. If you are seeing a “network” reason, it could indicate that your users are only going through the Client Side Rendering (CSR) flow.

Why am I seeing failures in gate evaluation on React Native despite the gate being set to 100% pass and Statsig being initialized?

In the event of encountering unexpected diagnostic results when evaluating a gate on React Native, despite the gate being set to 100% pass and Statsig being initialized, there are several potential causes to consider.

The "Uninitialized" status in evaluationDetails.reason can occur even after calling .initialize() and awaiting the result. This issue can be due to several reasons:

1. **Ad Blockers**: Ad blockers can interfere with the initialization network request, causing it to fail.

2. **Network Failures**: Any network issues that prevent the initialization network request from completing successfully can result in an "Uninitialized" status.

3. **Timeouts**: The statsig-js SDK applies a default 3-second timeout on the initialize request. This can lead to more frequent initialization timeouts on mobile clients where users may have slower connections.

If you encounter this issue, it's recommended to investigate the potential causes listed above. Check for the presence of ad blockers, network issues, and timeouts. This will help you identify the root cause and implement the appropriate solution.

It's worth noting that the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method.

If you're still experiencing issues, it might be helpful to use the debugging tools provided by Statsig. These tools can help you understand why a certain user got a certain value. For instance, you can check the diagnostics tab for higher-level pass/fail/bucketing population sizes over time, and for debugging specific checks, the logstream at the bottom is useful and shows both production and non-production exposures in near real-time.

One potential solution to this issue is to use the waitForInitialization option. When you don’t use this option, any of your components will be called immediately, regardless of if Statsig has initialized. This can result in 'Uninitialized' exposures. By setting waitForInitialization=true, you can defer the rendering of those components until after statsig has already initialized. This will guarantee the components aren’t called until initialize has completed, and you won’t see any of those ‘Uninitialized’ exposures being logged. You can find more details in the Statsig documentation.

However, if you can't use waitForInitialization due to it remounting the navigation stack resulting in a changed navigation state, you can check for initialization through initCompletionCallback.

You can also verify initialization by checking the value of isLoading from useGate and useConfig, as well as initialized and initStarted from StatsigContext. If the issue persists, please reach out to the Statsig team for further assistance.
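For example, a sketch of gating on isLoading (the gate name and the banner components are hypothetical):

import { useGate } from 'statsig-react';

function Banner() {
  const { isLoading, value } = useGate('new_banner');
  if (isLoading) {
    return null; // don't trust the gate value until initialization has finished
  }
  return value ? <NewBanner /> : <OldBanner />;
}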

Why are entries not showing up in the "Exposure Stream" tab of the experiment?

If you're not seeing data in the "Exposure Stream" tab, there could be a few reasons for this. Here are some steps you can take to troubleshoot:

1. Check Data Flow: Ensure that the id_type is set correctly and that your ids match the format of ids logged from SDKs. You can check this on the Metrics page of your project.

2. Query History: If your data is still not showing up in the console, check your query history for the user to understand which data is being pulled, and if queries are not executing or are failing.

3. Data Processing Time: If data is flowing in but not yet visible, it's likely still being processed. For new metrics, give it a day to catch up. If data isn't loaded after a day or two, please check in with us. The most common reason for metrics catalog failures is an id_type mismatch.

4. Initialization Status: Each hook has an isLoading check, which you can use, or the StatsigProvider provides the StatsigContext which has initialization status as a field as well. This can be used to prevent a gate check unless you know the account has already been passed in and reinitialized.

If you've checked all these and are still experiencing issues, please let us know. We're here to help!

In some cases, the issue might be due to the way you are accessing the layer config and getting the value of the config. When you manually reach into the layer config and get the value of the config, we won’t know when to generate a layer exposure. For example, if you have a line of code like this:

const config = statsig?.getLayer('landing_page_gg')?.value;

While this will give you the value of the config, it won’t generate exposures and your experiment won’t work. In order to generate the correct exposures, you will need to get the Layer object and ask for the values using the “get” method. Like this:

const layer = statsig?.getLayer('landing_page_gg');
const block_1 = layer.get('block_1', '');

And similarly for other parameters you have defined in your layer.

If you are pulling multiple values at once and they all control the same experience, it is recommended to pack them into a single parameter of the json type and treat them as one object value. If the parameters affect experiences in multiple locations, keep them separate. You can also simply add a new parameter to the current experiment that holds all the other values; there's no need to create a new experiment if you don't want to.
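For instance, a sketch of packing related values into a single json-typed parameter (the 'hero_content' parameter name and its shape are illustrative):

const layer = statsig?.getLayer('landing_page_gg');
// One json parameter holding all values that control the same experience;
// calling layer.get() still generates the layer exposure.
const hero = layer.get('hero_content', { headline: '', cta_text: '', cta_color: '' });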

Why is my experiment only showing overridden values and not running as expected?

If you're only seeing overridden values in the Exposure Stream for your experiment, there could be several reasons for this. Here are some steps you can take to troubleshoot:

1. Check the Initialization Status: Each hook has an isLoading check, which you can use, or the StatsigProvider provides the StatsigContext which has initialization status as a field as well. This can be used to prevent a gate check unless you know the account has already been passed in and reinitialized.

2. Check the Data Flow: Ensure that the id_type is set correctly and that your ids match the format of ids logged from SDKs. You can check this on the Metrics page of your project.

3. Check the Query History: If your data is still not showing up in the console, check your query history for the user to understand which data is being pulled, and if queries are not executing or are failing.

4. Check the Exposure Counts: If you're seeing lower than expected exposure counts, it could be due to initializing with a StatsigUser object that does not have the userID set and then providing it on a subsequent render. This causes the SDK to refetch values, but it logs an exposure for the "empty" userID first. To prevent this, make sure the userID is set on the StatsigUser object before initializing (see the sketch after this list).
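A sketch of that ordering with statsig-js (loadSignedInUser is a hypothetical helper; the key is a placeholder):

import statsig from 'statsig-js';

async function bootstrapStatsig() {
  // Wait until the real userID is known, then initialize once with it,
  // so no exposure is logged for an "empty" user first.
  const signedInUser = await loadSignedInUser(); // hypothetical helper
  await statsig.initialize('client-xxxx', { userID: signedInUser.id });
}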

If you've checked all these and the issue persists, it might be best to reach out for further assistance.

In some cases, users may still qualify for the overrides based on the attributes you're sending on the user object. This may be due to caching, or it may be due to the user qualifying for other segments that control overrides.

For example, if users have "first_utm_campaign": "34-kaiser-db" being sent on the user object, they would qualify for a segment that’s being used in the overrides.

It's also important to note that overridden users will see the assigned variant but will be excluded from experiment results. For experiments that are not in layers, there is a way to include overridden users in results, but that option does not appear to be available for experiments in layers.

Lastly, consider why you are using overrides in this scenario instead of a targeting gate. Overrides can be used to test the Test variant in a staging environment before starting the experiment on prod. However, if some of your customers have opted out of being experimented on, a targeting gate might be a more suitable option.

Why is there a discrepancy between experiment allocation counts and server side pageview metric counts?

The discrepancy between the experiment allocation counts and the ssr_search_results_page_view (dau) counts could be due to several reasons:

1. **User Activity**: Not all users who are allocated to an experiment will trigger the ssr_search_results_page_view event. Some users might not reach the page that triggers this event, leading to a lower count for the event compared to the allocation.

2. **Event Logging**: There might be issues with the event logging. Ensure that the statsig.logEvent() function is being called correctly and that there are no errors preventing the event from being logged.

3. **Timing of Allocation and Event Logging**: If the event is logged before the user is allocated to the experiment, the event might not be associated with the experiment. Ensure that the allocation happens before the event is logged (see the sketch after this list).

4. **Multiple Page Views**: If a user visits the page multiple times in a day, they will be counted once in the allocation but multiple times in the ssr_search_results_page_view event.
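As referenced in point 3, a sketch of the intended ordering with statsig-js (the experiment name is hypothetical): fetching the experiment logs the allocation exposure before the page-view event is sent.

import statsig from 'statsig-js';

// Getting the experiment logs the allocation exposure first...
const experimentConfig = statsig.getExperiment('search_results_layout'); // hypothetical name
// ...so the page-view event that follows can be attributed to the experiment.
statsig.logEvent('ssr_search_results_page_view');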

If you've checked these potential issues and the discrepancy still exists, it might be a good idea to reach out to the Statsig team for further assistance.

Another possible reason for the discrepancy could be latency. If there is a significant delay between the experiment allocation and the event logging, users might abandon the page before the event is logged. This could lead to a lower count for the ssr_search_results_page_view event compared to the allocation.

Why is there cross-contamination in experiment groups and discrepancies in funnel vs summary data?

When conducting experiments, it is crucial to ensure that there is no cross-contamination between control and treatment groups and that data is accurately reflected in both funnel and summary views.

Cross-contamination can occur due to implementation issues, such as a race condition with tracking. This happens when users, particularly those with slower network connections, land on a control page and a page-view event is tracked before the redirect occurs.

To mitigate this, it is recommended to adjust the placement of tracking scripts. The Statsig redirect script should be positioned high in the head of the page, ensuring that it executes as early as possible. Meanwhile, page tracking calls should be made later in the page load lifecycle to reduce the likelihood of premature event tracking. This adjustment is expected to decrease discrepancies in tracking and improve the accuracy of experiment results.

Additionally, it is important to confirm that there are no other entry points to the control URL that could inadvertently affect the experiment's integrity. Ensuring that the experiment originates from the correct page and that redirects are functioning as intended is essential for maintaining the validity of the test.

Lastly, it is necessary to have specific calls in the code to track page views accurately. These measures will help ensure that the experiment data is reliable and that the funnel and summary views are consistent.

Why is there no exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching?

If you're not seeing any exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching, there are a few things you might want to check:

1. Ensure that your Server Secret Key is correct. You can find this in the Statsig console under Project Settings > API Keys.

2. Make sure that the name of the feature gate in your function matches exactly with the name of the feature gate you've created in the Statsig console.

3. Verify that the user ID is being correctly set and passed to the StatsigUser object.

4. Check if your environment tier matches the one you've set in the Statsig console.

If all these are correct and you're still not seeing any data in the Diagnostics/Pulse Results tabs, it might be a technical issue on our end.

The Statsig SDK batches events and flushes them periodically, as well as on shutdown or when flush is called explicitly. If you are using the SDK in your middleware, it's recommended to call flush to guarantee events are sent. For more information, refer to the Statsig documentation.
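A sketch of that pattern with the Node server SDK, assuming statsig-node in a short-lived middleware or serverless context (the key, user, and event name are placeholders):

const statsig = require('statsig-node');

async function main() {
  await statsig.initialize('secret-xxxx'); // server secret key placeholder
  statsig.logEvent({ userID: 'a-user' }, 'ssr_search_results_page_view');
  // In short-lived middleware or serverless handlers, flush before exiting
  // so batched events aren't dropped.
  await statsig.flush();
}

main();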

If you're still not seeing any data, it's possible that there's an issue with event compression. In some cases, disabling event compression can resolve the issue. However, this should be done with caution and only as a last resort, as it may impact performance.

If you're using a specific version of the SDK, you might want to consider downgrading to a previous version, such as v5.13.2, which may resolve the issue.

Remember, if you're still experiencing issues, don't hesitate to reach out to the Statsig team for further assistance.
