Well, I have. At work. From a generic support email address. To a group of our top customers. Facepalm.
In March of 2018, I was working on the games team at Facebook. You may remember that month as a tumultuous one for Facebook: it's when the news about Cambridge Analytica first broke, and Facebook struggled with its response and the resulting fallout.
But inside Facebook, every team was reevaluating its relationship with data and external developers, the games team in particular: game developers and Cambridge Analytica had used the same Facebook developer platform to build social experiences.
Every team within Facebook was doing a complete audit of its external APIs. Were they still used? Still needed? Deprecated with a long timeline? Could they return a more limited scope of data? We did this exercise across games. I was working on Instant Games at the time, which was a separate set of APIs from the Graph API at the heart of the Cambridge Analytica incident. Through this exercise, we identified a number of APIs we decided to shut down, and others whose deprecation timelines we would accelerate.
Facebook took this time as an opportunity to announce a slew of API and platform changes. The Graph API typically follows a two-year deprecation cycle, but this time around Facebook announced the changes, notified developers via blog posts and email, and accelerated the deprecation timelines. You can still find the sets of changes on developers.facebook.com, like this one, titled “API and Other Platform Product Changes.”
After this exercise, I was tasked with sending emails to all Instant Games developers: a generic one announcing the set of changes, and a more targeted one that threatened to remove games from the platform if their developers didn't make changes within the next week.
I had the entire list of company IDs to email, plus the targeted subset that needed the sternly worded “YOUR APP WILL BE REMOVED” message. So I created a feature gate! What better way to safely check which email a company should get? I created a gate with a condition listing the companies that should pass and get the generic email. While I was testing, I set the pass percentage for that group to 0%. In Statsig, the gate I set up would have looked something like this:
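(A minimal sketch of that setup, written as a TypeScript object since the original screenshot is long gone; the gate name, condition field, and company IDs are all invented for illustration, not real Statsig config.)

```typescript
// Hypothetical sketch of the gate as I configured it.
const genericEmailGate = {
  name: "send_generic_announcement_email",
  rule: {
    // Condition: companies in this list should pass and get the generic email.
    conditionField: "companyID",
    conditionValues: new Set(["1001", "1002" /* ...the rest of the list */]),
    // Left at 0% from testing, and never updated. Remember this detail.
    passPercentage: 0,
  },
  // Failing the gate meant the strict "YOUR APP WILL BE REMOVED" email.
  defaultPass: false,
};
```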
I used the gate’s Allowlist and Blocklist to force my test app into both experiences and verified that I got the right email message in each. Good to go, right? So I ran the job and sent out the emails. I went to spot-check a few of them, only to find I had sent the super-strict message to the wrong group! As I looked further, I found I had sent that message to the entire set of companies.
How could this be?! I was SO careful. I had tested the gating. I knew the Feature Gate would prevent this from happening, and yet I had managed to threaten some well-meaning partners with removal from our platform.
As I looked into it and tested the gate again, I was shocked to see that apps I expected to pass were failing, even though they were in the list of passing IDs! WHAT? Had the infrastructure failed me?
Turns out I had made a simple mistake. I had configured the gate correctly and tested it correctly, except for one part: updating the pass percentage to 100%. Since I had left it at 0%, none of the IDs the job checked passed the gate, and I sent a sternly worded letter to each and every one of them.
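Here's why my testing never caught it: like most feature-flag systems, Statsig evaluates explicit overrides before rules, so my allowlisted test app never touched the pass percentage at all. Below is a self-contained sketch of that evaluation order; the names and the hash are illustrative, not Statsig's actual internals.

```typescript
type Gate = {
  allowlist: Set<string>;
  blocklist: Set<string>;
  conditionIDs: Set<string>; // companies matched by the rule's condition
  passPercentage: number;    // 0..100
};

function checkGate(gate: Gate, companyID: string): boolean {
  // 1. Overrides win first. This is the path my test app took, so it
  //    passed and failed exactly as I forced it to.
  if (gate.allowlist.has(companyID)) return true;
  if (gate.blocklist.has(companyID)) return false;

  // 2. Real traffic hits the rule: the condition must match AND the company
  //    must land inside the percentage rollout.
  if (gate.conditionIDs.has(companyID)) {
    const bucket = stableHash(companyID) % 100; // stable bucket in [0, 100)
    return bucket < gate.passPercentage;        // 0% => everyone fails, list or not
  }

  // 3. No rule matched: fall through to the default (fail).
  return false;
}

// A toy deterministic hash, just enough to keep the sketch runnable.
function stableHash(s: string): number {
  let h = 0;
  for (const ch of s) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}
```

With the pass percentage left at 0, step 2 returns false for every company on the list, which is exactly what happened to me.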
In hindsight, the way I set this up was foolish. By default, the job should have sent the generic email; only if a company was in the special set should it have gotten the updated messaging. Had I built it that way and still failed to update the gate’s pass percentage, I would have sent the more generic message to everyone, which is a much easier mistake to fix: we could have sent a follow-up email with the additional changes that affected the smaller subset of developers.
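In the same sketch, the inverted setup gates the risky path and lets everything else fall through to the harmless default (the gate contents are again hypothetical, and checkGate is the function from above):

```typescript
// Safer inversion: the gate guards the *risky* email, so failing it is benign.
const removalWarningGate: Gate = {
  allowlist: new Set(),
  blocklist: new Set(),
  conditionIDs: new Set(["2001", "2002"]), // only companies that must act this week
  passPercentage: 100,                     // the step I forgot last time
};

function pickEmail(companyID: string): "removal-warning" | "generic-announcement" {
  // Now a forgotten pass percentage sends the generic email to everyone,
  // which is recoverable with a follow-up rather than an apology.
  return checkGate(removalWarningGate, companyID)
    ? "removal-warning"
    : "generic-announcement";
}
```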
So I hope you take away two things from my mistake: always consider your default values, and remember to update your pass percentages. Happy feature gating!