Getting started with experimentation is a bit like getting started with authentication. It's not difficult, and you have a sense that you'll figure it out. But just like with authentication, mistakes with experiments can be costly. Here is a short list of common mistakes to watch out for with experimentation.
A common mistake in designing experiments is setting assignments too early. Ideally, you want to set the assignment at the point you need to render the experience. Setting the assignment too early dilutes the effect and leads to more neutral or inconclusive experiments. For example, say 10 visitors land on your product page and only 1 of those 10 goes from the product page to the pricing page. If you're running an experiment on the pricing page, make the assignment when the visitor reaches the pricing page, not when they land on the product page; otherwise the other 9 visitors are counted in the experiment even though they never see the change.
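As a minimal sketch of what this looks like in code (using a hash-based stand-in for whatever assignment SDK you actually use, not a specific API), the assignment call belongs in the pricing-page handler, not the product-page handler:

```python
import hashlib

# Illustrative stand-in for an experimentation SDK: deterministically
# buckets a user into "control" or "new_layout" for a given experiment.
def get_experiment_variant(user_id: str, experiment_name: str) -> str:
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    return "new_layout" if int(digest, 16) % 2 == 0 else "control"

def render_product_page(user_id: str) -> str:
    # No assignment here: most visitors never reach the pricing page,
    # and assigning them early only dilutes the measured effect.
    return "<product page>"

def render_pricing_page(user_id: str) -> str:
    # Assign at the moment the tested experience is actually rendered.
    variant = get_experiment_variant(user_id, "pricing_page_layout")
    return "<new pricing page>" if variant == "new_layout" else "<current pricing page>"
```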
Have you ever heard yourself say, "I'm sure this action is tracked", and then realized after starting the experiment that it's actually not? Me too! This means I have to go back, add the instrumentation, and restart the experiment. There go 2 of the 14 days I'd allocated to this experiment. As you test your instrumentation, check for missing events and missing data within events (especially unit identifiers).
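Here's a minimal sketch of that kind of pre-experiment audit, assuming events arrive as plain dicts with a name and a user_id unit identifier; the required event names are made up for illustration:

```python
# Events you expect the experiment to rely on (illustrative names).
REQUIRED_EVENTS = {"pricing_page_view", "plan_selected", "checkout_started"}

def audit_events(events: list[dict]) -> dict:
    seen = {e.get("name") for e in events}
    missing_events = REQUIRED_EVENTS - seen
    missing_unit_ids = [e for e in events if not e.get("user_id")]
    return {
        "missing_events": missing_events,            # actions you thought were tracked but aren't
        "events_without_unit_id": missing_unit_ids,  # events that can't be joined to an assignment
    }

# Example: run this against a sample of logged events before starting the experiment.
report = audit_events([
    {"name": "pricing_page_view", "user_id": "u123"},
    {"name": "plan_selected"},  # missing user_id: can't be attributed to a variant
])
print(report)
```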
Statistical power is the probability of detecting a true effect. When an experiment has low power, a true effect is hard to find, and a result that does reach statistical significance is more likely to be a false positive than a true effect. To ensure the experiment is sufficiently powered, have the patience to let the experiment run and achieve the required sample size!
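A rough pre-experiment power calculation makes "the required sample size" concrete. Here is a minimal sketch using the standard two-proportion normal approximation with the usual alpha = 0.05 and 80% power defaults; the baseline rate and lifts are made-up numbers for illustration:

```python
from scipy.stats import norm

# Rough per-group sample size for detecting a lift in a conversion rate,
# using the standard two-proportion normal approximation.
def sample_size_per_group(p_baseline: float, p_treatment: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_baseline * (1 - p_baseline) + p_treatment * (1 - p_treatment)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_treatment) ** 2
    return int(n) + 1

# A small lift on a 10% baseline needs far more traffic than a larger one:
print(sample_size_per_group(0.10, 0.105))  # small effect -> tens of thousands per group
print(sample_size_per_group(0.10, 0.12))   # larger effect -> a few thousand per group
```

Note how the required sample size blows up as the effect you want to detect shrinks; this is also why the small tweaks discussed below need so much more traffic to reach significance.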
Low-powered experiments can also overestimate the strength of the effect (assuming it's a true effect). As Pinterest discovered, this can lead to engagement bias, where engaged users show up first and dominate the experiment results. "If you trust the short-term results without accounting for and trying to mitigate this bias, you risk being trapped in the present: building a product for the users you've already activated instead of the users you want to activate in the future." To avoid getting trapped by engagement bias, try different experiments for users in different stages.
Folks getting started with experimentation frequently associate experiments primarily with the "growth" function in the company that focuses on signing up new users. Broadening your scope to connect more users to the core value of your app can open up a lot more surface area for experimentation. For example, Netflix found that it has 90 seconds before viewers abandon the service, making personalization experiments incredibly valuable to their engagement and retention metrics. [Question: Do you know when new users experience moments of joy in your app?]
A lot of website optimization work focuses on changing button colors and rearranging deck chairs on the Titanic. Throwing stuff at the wall to see what sticks isn't a plan. Tie your experiments to your product strategy. For example, if you know latency is important to your product engagement but don't know to what extent, test your hypothesis and let the data define your product strategy. Facebook learned that engagement increases significantly with faster message delivery; using this data, they rebuilt the Messenger app from the ground up with Project Lightspeed to start twice as fast, focusing on core features and stripping away the rest.
While not as bad as experimenting without a plan, a related trap is focusing on small tweaks that lead to small results. Testing for small improvements also tends to need a much larger sample size to reach statistical significance. Focus on the intersection of low-hanging fruit and high impact. As more people in the organization realize that the cost of an incremental experiment is approaching zero¹, they'll naturally want to turn every feature into an A/B test, like Uber and Booking.com do.
To recap, there are two types of mitigations for mistakes in experimentation.
Tactically, you want to assign users at the right point, instrument each user action, and let the experiment run.
Strategically, you want to run experiments for different cohorts, identify the moments of joy in your app, tie experiments to a strategic objective, find opportunities for higher impact, and encourage rapid, ubiquitous experimentation.
If you can see the journey from having no plan -> having a strategy -> testing every feature, you're already way ahead of most people.
Join the Statsig Slack channel to discuss and get help with your experiments. Whether you're running on Statsig or not, we want to see your experiments succeed!
Ok, here are moar mistakes…
Having tunnel vision: In the early days of experimentation, you might hear folks on your team say, "you're only looking at data" or "you're only looking at one set of metrics". Experimentation isn't just about getting data to make a decision; it's about forming the full picture. Use experiment results with other quantitative and qualitative data to complete the picture for your business.
Missing the learning: Whether an experiment yields statsig results or not, it's fertile ground to generate new hypotheses and insights. In this example, Roblox wanted to determine the causal impact of their Avatar Shop on community engagement and found their missing piece in an experiment that they'd run a year ago!
Burning out a non-measurable resource: You can avoid unintentionally burning out your valuable resources by using guardrail metrics. Say you've discovered a new channel for push notifications, and it's showing great results in improving engagement. However, you'll burn out the channel if you flood it with excessive notifications. You might set a guardrail threshold for push notifications, for instance requiring >8% CTR, before ramping up on the channel. If your experiment is missing guardrail metrics, ask yourself: What trade-off am I missing? How can I model that trade-off as a guardrail metric?
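As a minimal sketch (the get_metric helper and the stubbed CTR value below are hypothetical; in practice this check lives in your experimentation or analytics platform), a guardrail check before ramping might look like:

```python
# Guardrail from the example above: require at least 8% CTR on push
# notifications before increasing volume on the channel.
GUARDRAIL_MIN_CTR = 0.08

def get_metric(name: str) -> float:
    # Hypothetical: query your analytics store for the current value.
    metrics = {"push_notification_ctr": 0.095}  # stubbed for illustration
    return metrics[name]

def can_ramp_up_push_channel() -> bool:
    return get_metric("push_notification_ctr") >= GUARDRAIL_MIN_CTR

if can_ramp_up_push_channel():
    print("Guardrail passing: safe to increase push notification volume.")
else:
    print("Guardrail failing: hold the ramp and investigate.")
```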
[1] With the right experimentation platform, you can run thousands of experiments without worrying about the escalating grunt work of managing data pipelines or the infrastructure cost of processing reams of data every day. The ideal experimentation platform will also ensure that these thousands of experiments run clear of one another without impacting each other's results.