Date of Slack thread: 4/12/24
Anonymous: Hello team,
We’re currently experiencing an issue with pre-experiment bias detection for one of our metrics: the completion rate of Step1. This metric is binary, yet CUPED was enabled when the experiment was set up, even though, as far as I know, CUPED is typically used for continuous metrics rather than binary ones like completion rates.
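For reference, CUPED is just a covariate adjustment of the outcome, so nothing in the math requires the metric to be continuous; a binary completion metric can still be adjusted, though the variance reduction depends on how predictive the pre-period covariate is. A minimal sketch with made-up data (an illustration, not Statsig's actual implementation):

```python
# A minimal CUPED sketch on a binary outcome (made-up data; not Statsig's
# actual implementation). x is a per-user pre-experiment covariate, y the
# in-experiment binary outcome (Step1 completed or not).
import numpy as np

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Return y - theta * (x - mean(x)), with theta = cov(x, y) / var(x).

    This is plain covariate adjustment: nothing requires y to be
    continuous, so a binary completion metric can still be adjusted.
    """
    theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(0)
x = rng.random(10_000)                                   # pre-period covariate
y = (rng.random(10_000) < 0.3 + 0.4 * x).astype(float)   # correlated binary outcome
y_adj = cuped_adjust(y, x)
print(np.var(y, ddof=1), np.var(y_adj, ddof=1))          # adjusted variance is smaller
```

Because the adjustment term has mean zero, y_adj has the same mean as y: the variance shrinks without biasing the estimate.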
Statsig has flagged pre-experiment bias in Step1 with a concerning p-value of 0.004 (see the attached screenshot). Notably, there have been no changes that should affect this metric. Additionally, Step1’s completion rate is used in the manual calculation of another metric’s rate, substituting for STS exposures. We have also changed the experiment’s traffic allocation a few times.
Could you please help with the following questions:
Vijaye (Statsig): Someone from the Data team will be able to help. To my untrained eye, this looks small enough to ignore. <@U01RGJZBTLL>
Craig (Statsig): CUPED is unrelated to this warning. They both leverage pre-experiment data, but this warning just compares the two groups’ pre-experiment data and determines whether they differed in a statistically significant way.
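In other words, the warning is roughly a two-sample test on pre-experiment values. A sketch of that idea (Statsig's exact test statistic may differ; the data here is simulated):

```python
# A rough sketch of what a pre-experiment bias check does (Statsig's exact
# test may differ; the data here is simulated): compare the two groups'
# pre-experiment values of the metric and test whether they differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_test    = rng.binomial(1, 0.53, size=5_000)  # pre-period Step1 completion, test
pre_control = rng.binomial(1, 0.50, size=5_000)  # pre-period Step1 completion, control

t_stat, p_value = stats.ttest_ind(pre_test, pre_control, equal_var=False)
print(f"p = {p_value:.3f}")  # an imbalance like this yields a small p and fires the warning
```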
Per the docs linked in the "Learn more", our general recommendation is to look at the days-since-exposure chart and make a call on whether the trend there seems concerning. In this case, I think what probably happened is that the warning triggered on an early day of the experiment, when there was a stronger trend that has since been diluted by new users.
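A toy simulation of that dilution effect (assumed mechanics for illustration only, not Statsig's code):

```python
# A toy illustration of the dilution point (assumed mechanics, not Statsig's
# code): an imbalance confined to day-1 exposures washes out as later,
# balanced cohorts enter the cumulative comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
test, control = [], []
for day in range(1, 15):
    p_test = 0.56 if day == 1 else 0.50             # only day 1 is imbalanced
    test.append(rng.binomial(1, p_test, 1_000))
    control.append(rng.binomial(1, 0.50, 1_000))
    _, p = stats.ttest_ind(np.concatenate(test), np.concatenate(control))
    print(f"day {day:2d}: cumulative p = {p:.3f}")  # p generally rises as new users dilute day 1
```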
Anonymous: Thanks