Technical insights for a scalable experimentation system

Wed Aug 28 2024

Yuzheng Sun, PhD

Data Scientist, Statsig

1. Introduction

As Ron et al. (2022) highlighted, establishing trust in experimental results is challenging.

As a vendor of experimentation platforms, we have observed that in the tech industry, the success of an experimentation program depends less on the sophistication of the tests and more on the trustworthiness and scalability of the program.

Sophisticated tests can undermine the overall trust in an experimentation program by introducing more degrees of freedom, thereby increasing the risk of p-hacking. This issue is exacerbated by incentive structures that reward individuals for conducting tests with significant results.

In most tech companies, the experimentation system is easy to start but difficult to scale due to information and managerial complexities. Because these factors are less tangible, they are often overlooked, leading to wasted resources invested in an unscalable system. In such cases, the cost of maintaining more experiments increases super-linearly, while the benefits increase sub-linearly.

We serve thousands of companies with over 2 billion end users per month. In this paper, we distill our learnings and lessons into four key technical insights to ensure the scalability of experimentation systems by addressing information overload and managerial complexity through thoughtful system design.

2. Two intangible factors limiting experimentation scalability

For any system to be scalable, the cost of operation must increase sub-linearly with scale. With modern advancements in databases, compute, and storage, tangible costs are not a primary concern for most experimentation systems. However, two intangible factors limit the scalability of these systems: information overload and managerial complexity.

2.1 Information overload

Experimentation generates a vast amount of information, which typically increases polynomially:

  • Parameters: The parameter space correlates with the number of experiments, metrics, variants, and user segments. These dimensions all expand rapidly as experiments become more complex.

  • Historical Relevance: Experiments serve both decision-making and learning purposes, requiring a comprehensive understanding of both current and past experiments.

  • System Chaos: The rigorous process of experimentation involves many potential pitfalls, including sample ratio mismatches, multiple comparisons, peeking, network effects, underpowered studies, logging errors, and pipeline mistakes.
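To make the polynomial growth concrete, here is a back-of-the-envelope calculation; the per-experiment counts are hypothetical, and the product is simply the number of metric readouts someone could be asked to review.

```python
# Hypothetical per-experiment counts (metrics x variants x segments).
metrics, variants, segments = 20, 3, 5
for experiments in (10, 100, 1000):
    readouts = experiments * metrics * variants * segments
    print(f"{experiments:>5} experiments -> {readouts:>8,} metric readouts")
# 10 experiments -> 3,000 readouts; 100 -> 30,000; 1,000 -> 300,000
```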

If companies do not have a system to process and synthesize this information, they often rely on personnel to manage the complexity, which is an inherently unscalable solution.

2.2 Managerial complexity

Managerial overhead is even less tangible than information overload, yet it cannot be ignored:

  1. Most engineers and product managers lack the statistical knowledge necessary to interpret experimental results and make correct inferences from observed effects to true effects (Cunningham, 2023).

  2. Managerial incentives often encourage detrimental behaviors, such as p-hacking.

  3. Experiments may result in technical debt by leaving configurations within the codebase.

While these challenges are solvable, mid-level managers typically lack the incentives to address them due to the principal-agent problem and resource constraints. These factors should be carefully considered when designing the system.

2.3 The dilemma: Marginal costs increasing faster than marginal returns

Without a well-designed system, the return on investment (ROI) for experimentation will decrease with scale because:

  • The marginal return of experiments increases linearly or sub-linearly with scale, as less effort is available to turn information into impact.

  • The marginal cost of experiments increases super-linearly with scale due to information and managerial overhead.

These two factors create a dilemma for many experimentation teams—they become victims of their own success. Fortunately, this dilemma is solvable. If the system is designed correctly from the start, the cost of running more experiments can increase sub-linearly, thereby freeing up more resources to drive impact from the results of experiments.

3. Technical insights for a scalable experimentation system

Through our practice, we have identified four essential insights for making experimentation scalable.

3.1 Default-on AB testing via feature flag integration

AB testing requires treatment experiences, randomization, and an Overall Evaluation Criterion (OEC). By integrating randomization and a metrics system with feature flags, we can automate these elements, enabling AB testing to be triggered automatically with each feature launch without much additional engineering effort. There are three side benefits: 1) engineers can self-serve experiments; 2) low-code experiments become possible; 3) exposure data and logging data are both native to the experimentation process, making it much easier to observe the entire system, conduct additional analyses, and debug.
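A minimal sketch of this pattern follows; the gate name, hashing scheme, and logging callback are illustrative assumptions rather than Statsig's actual SDK. The point is that a single flag check both assigns a variant and emits the exposure event, so every feature launch is an experiment by default.

```python
import hashlib
import json
import time

def assign_variant(user_id: str, feature: str, variants=("control", "treatment")) -> str:
    # Deterministic bucketing: hashing (feature, user) keeps a user's variant stable.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def check_gate(user_id: str, feature: str, log_event) -> bool:
    # Every flag check doubles as an experiment exposure: the assignment and the
    # exposure event come from the same call, so the AB test is "on by default".
    variant = assign_variant(user_id, feature)
    log_event({"event": "exposure", "feature": feature, "user": user_id,
               "variant": variant, "ts": time.time()})
    return variant == "treatment"

exposures = []
if check_gate("user-42", "new_checkout_flow", exposures.append):
    print("serve the new checkout flow")   # treatment experience
else:
    print("serve the existing flow")       # control experience
print(json.dumps(exposures, indent=2))
```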

3.2 Separation of metrics, logging, and experiments

Metrics are proxies for business outcomes and will evolve as business priorities shift. However, the underlying logging data and pipelines should remain stable. Separating the definition of metrics from logging ensures that experiments can use pre-existing setups, and metrics can be adjusted easily without affecting the integrity of the logs. We will share our experimentation architecture and the directed acyclic graph (DAG) for our pipeline in our presentation.
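To illustrate the separation (the event names and the metric specification format are assumptions for this sketch, not our production schema): raw events stay a stable stream, while metric definitions are declarative and can be edited without touching the logs or the pipeline that produced them.

```python
from collections import defaultdict

# Stable logging layer: raw events do not change when business priorities shift.
raw_events = [
    {"user": "u1", "event": "checkout_completed", "value": 30.0},
    {"user": "u2", "event": "checkout_completed", "value": 12.5},
    {"user": "u1", "event": "page_view", "value": 1.0},
]

# Metric layer: definitions reference events by name and can be revised freely.
metric_definitions = {
    "revenue_per_user": {"event": "checkout_completed", "agg": "sum"},
    "views_per_user":   {"event": "page_view",          "agg": "count"},
}

def compute_metric(name: str, events: list) -> dict:
    spec = metric_definitions[name]
    per_user = defaultdict(float)
    for e in events:
        if e["event"] == spec["event"]:
            per_user[e["user"]] += e["value"] if spec["agg"] == "sum" else 1.0
    return dict(per_user)

print(compute_metric("revenue_per_user", raw_events))  # {'u1': 30.0, 'u2': 12.5}
```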

3.3 Data is monolithic

The chain from data to decisions is long: logging, the pipeline DAG, metric definitions, and how people interpret and use them. Each step can easily introduce discrepancies and misconceptions, worsening information overload. The single source of truth, diagnostics, and context should live in one place and be visible end to end.
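As a rough sketch of what "one place" can mean in practice (the field names here are hypothetical), the hypothesis, metric references, diagnostics, and final decision can hang off a single experiment record that every tool along the chain reads and writes, rather than each step keeping its own copy of the context.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExperimentRecord:
    # Single source of truth: hypothesis, metric references, diagnostics, and
    # the eventual decision all live on one record visible end to end.
    name: str
    hypothesis: str
    metric_ids: list
    diagnostics: dict = field(default_factory=dict)  # e.g. {"srm_p_value": 0.83}
    decision: Optional[str] = None                   # filled in at readout time

record = ExperimentRecord(
    name="new_checkout_flow",
    hypothesis="The new flow increases revenue_per_user without hurting views_per_user",
    metric_ids=["revenue_per_user", "views_per_user"],
)
record.diagnostics["srm_p_value"] = 0.83
record.decision = "ship"
print(record)
```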

3.4 Automate checks around business decisions

There is often a gap between what is statistically correct and what is useful for business decisions. For example, a differential baseline between groups prior to treatment is not statistically biased, but it is undesirable for making business decisions and usually requires resetting the test. An automated system should not only detect errors like sample ratio mismatches but also detect, flag, and mitigate noise such as heterogeneous treatment effects, interaction effects, and skewed sampling. Additionally, the system should encourage best practices through the user interface (UI), such as using sequential testing to discourage peeking, enforcing hypothesis formulation before testing, offering multiple comparison corrections upfront, and discouraging changes to the p-value threshold during an experiment.
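As one concrete example, a sample ratio mismatch check can run automatically on exposure counts. The sketch below uses a standard chi-square test; the 0.001 threshold is a common convention rather than a universal rule.

```python
from scipy.stats import chisquare

def srm_check(observed_counts, expected_ratios, alpha=0.001):
    """Flag a sample ratio mismatch if observed group sizes deviate from the
    intended split by more than chance would allow."""
    total = sum(observed_counts)
    expected = [r * total for r in expected_ratios]
    stat, p_value = chisquare(observed_counts, f_exp=expected)
    return {"p_value": p_value, "srm_detected": p_value < alpha}

# A 50/50 experiment that collected 10,120 vs 9,880 users: within chance, no SRM.
print(srm_check([10_120, 9_880], [0.5, 0.5]))
# A 50/50 experiment that collected 10,600 vs 9,400 users: flagged as an SRM.
print(srm_check([10_600, 9_400], [0.5, 0.5]))
```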

4. What does scalable experimentation look like?

We identified seven key characteristics of a scalable experimentation system:

  1. Default-on experiments on all new features.

  2. Define metrics once, use everywhere.

  3. Reliable, traceable, and transparent data.

  4. Trustworthy, practical statistics engine—no "magical" math.

  5. Automated checks that catch errors (e.g., SRM) and flag warnings (e.g., differential baselines).

  6. Intentionally layered experimentation information for product decisions.

  7. Collaborative context around experiment results.

Beyond reducing the cost of running more experiments, systems with these characteristics enable two main outcomes:

4.1 Experimentation as a collaborative effort

Different roles contribute their strengths: engineers manage the system with best practices; product managers generate hypotheses, foster collaboration, and provide qualitative evidence; and data scientists focus on experiment design, review, and deeper analysis. This approach allows data scientists to concentrate on their expertise rather than overseeing every aspect of the AB testing lifecycle, enhancing the overall value of experiments.

4.2 Continuous value extraction from experimentation

Experimentation provides credible causal evidence, but it cannot generate returns without good ideas and good execution. By treating experimentation as a collaborative effort, the goal is to elevate the entire product development team to measure, learn, and improve, ultimately creating higher returns over time.

5. Additional reading

In addition to this abstract, we have a polished presentation that has been presented to hundreds of data scientists at various companies, helping them succeed in their experimentation efforts. We also have podcast interviews with industry practitioners providing anecdotal evidence for the points discussed here.
