However, Sample Ratio Mismatch (SRM) can sometimes occur in setups like this, leading to uneven splits between user groups. For instance, in an experiment targeting a 50/50 split between control and test groups, a company might expose 1,000 users. Instead of roughly 500 users in each group, Statsig may only receive exposure data for 200 in control and 500 in test: a roughly 29/71 split.
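To make that concrete, the standard way to decide whether an observed split deviates from the targeted ratio by more than chance is a chi-squared goodness-of-fit test. Here is a minimal sketch in TypeScript; this is the textbook check applied to the numbers above, not Statsig's internal implementation:

```typescript
// Chi-squared goodness-of-fit test for a two-group split.
// A minimal sketch of the standard SRM check -- not Statsig's
// internal implementation.
function srmChiSquare(
  control: number,
  test: number,
  targetControlRatio = 0.5
): number {
  const total = control + test;
  const expectedControl = total * targetControlRatio;
  const expectedTest = total * (1 - targetControlRatio);
  return (
    (control - expectedControl) ** 2 / expectedControl +
    (test - expectedTest) ** 2 / expectedTest
  );
}

// 200 control vs. 500 test exposures against a 50/50 target:
const chi2 = srmChiSquare(200, 500); // ~128.6
// With 1 degree of freedom, the p < 0.001 critical value is ~10.83,
// so this split is far beyond what random assignment would produce.
console.log(chi2 > 10.83 ? "SRM detected" : "split looks fine");
```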
Why does this happen?
Issues like website crashes when serving the control version could prevent the SDK from sending exposure events to Statsig.
Currently, Statsig provides debugging tools that help identify suspect dimensions passed through the SDK. For example, if most control exposures come from the US while the test group is evenly split between the EU and US, the issue is likely tied to the SDK integration in the EU release.
However, these tools have been limited to analyzing a preset list of dimensions, such as sdk_type, browser, country, and os.
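One way to see how this kind of dimensional breakdown points at a culprit is to run the same chi-squared check on each slice of a dimension. The sketch below is hypothetical; the exposure record and its field names are illustrative, not Statsig's actual event schema, and it reuses srmChiSquare from above:

```typescript
// Hypothetical exposure record -- field names are illustrative,
// not Statsig's actual event schema.
interface Exposure {
  group: "control" | "test";
  dimensions: Record<string, string>; // e.g. { country: "US", os: "iOS" }
}

// Run the SRM check separately on each value of one dimension,
// flagging slices whose split is wildly off the 50/50 target.
function srmByDimension(exposures: Exposure[], dimension: string): void {
  const counts = new Map<string, { control: number; test: number }>();
  for (const e of exposures) {
    const value = e.dimensions[dimension] ?? "unknown";
    const c = counts.get(value) ?? { control: 0, test: 0 };
    c[e.group] += 1;
    counts.set(value, c);
  }
  for (const [value, { control, test }] of counts) {
    const chi2 = srmChiSquare(control, test);
    if (chi2 > 10.83) {
      console.log(`${dimension}=${value}: ${control}/${test} -- likely SRM source`);
    }
  }
}
```

If only the EU slices trip the check while the US slices look healthy, the investigation narrows to the EU release, mirroring the example above.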
SRM in an experiment is highly problematic: it skews results and renders the findings unreliable. Debugging SRM is crucial, especially for customers with complex release setups, where pinpointing the source of the issue can be challenging.
The ability to analyze additional, custom dimensions provides much-needed granularity and flexibility, enabling customers to diagnose and resolve SRM more effectively.
We’ve expanded our SRM debugging capabilities to allow customers to define custom user dimensions for analysis. With this update, Statsig will run its SRM Analysis on these custom dimensions, providing deeper insights tailored to individual customer needs.
Step 1: In your project settings, list the custom dimensions you want to analyze (see the SDK sketch after these steps for how these values reach Statsig).
Step 2: Navigate to the Diagnostics tab and open the Experiment Health Checks.
Step 3: Use the SRM Debugger to review group metrics. This tool highlights any custom dimensions that are likely contributing to SRM issues.
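For custom dimensions to be analyzable, they need to be present on the user object when exposures are logged. Here is a minimal sketch assuming the statsig-js client SDK; the release_channel and plan_tier fields and the gate name are hypothetical examples, and the exact shape of your user object may differ:

```typescript
import statsig from "statsig-js";

// Pass custom dimensions on the user object so exposure events carry them.
// "release_channel" and "plan_tier" are hypothetical example dimensions --
// use the names you listed in your project settings (Step 1).
await statsig.initialize("client-sdk-key", {
  userID: "user-123",
  country: "DE",
  custom: {
    release_channel: "beta",
    plan_tier: "enterprise",
  },
});

// Checking a gate or experiment now logs an exposure that includes the
// custom fields, making them available to the SRM Debugger.
const inTest = statsig.checkGate("new_checkout_flow");
```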
With this added flexibility, customers can debug SRM with precision, ensuring their experiments produce trustworthy results.