There’s a risk that you’ll buy the shirt, wait for it to be delivered, and then absolutely hate it and be stuck with a bad shirt.
But what if the shirt company offered free returns? That wouldn’t entirely eliminate the risk of you hating the shirt, but it would eliminate the risk of being stuck with it.
More of a reason to give the shirt a try, eh? Well, the same goes for feature gates:
Product builders have an idea they think will be great for users, but understand that there is a risk when launching a new feature.
Feature gates roll a feature out to either 0% or 100% of users, making it easy for teams to turn a feature off if and when needed. These binary toggles act as a failsafe against risk and as a way to make universal changes to an app or website (like turning off a feature that’s causing app crashes, or removing sale prices after Black Friday).
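In code, a binary gate can be as simple as a lookup in a configuration map that someone can flip without redeploying. Here’s a minimal sketch in Python; the gate names and helper are illustrative, not Statsig’s actual SDK:

```python
# A binary feature gate: every user sees the feature, or nobody does.
# (Hypothetical gate names; in practice this config would live in a
# feature-management service, not a hardcoded dict.)
FEATURE_GATES = {
    "new_checkout_flow": True,     # rolled out to 100% of users
    "black_friday_prices": False,  # turned off after the sale
}

def check_gate(gate_name: str) -> bool:
    """Return True if the feature is on for everyone, False otherwise."""
    # Unknown gates fail closed: an undefined feature is an off feature.
    return FEATURE_GATES.get(gate_name, False)

# Usage: branch on the gate wherever the feature would appear.
if check_gate("black_friday_prices"):
    banner = "Black Friday sale!"
else:
    banner = "Regular prices"
```

The point of the indirection is the kill switch: if the new checkout flow starts crashing, flipping one boolean turns it off for everyone at once.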
From getting to know our customers, I’ve noticed that for builders (especially engineers), feature management is crucial to how new features are shipped and monitored.
Ideally, every feature should be behind a feature gate and feature launches should be a partial rollout.
A partial rollout is the practice of showing a feature to a percentage of users that is between 0% and 100%. This allows product teams to see the impact a feature has on a sample of users without affecting ALL users.
This also helps to mitigate risk and confirm or reject builders’ hypotheses on the impact.
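Under the hood, a partial rollout is typically implemented by hashing each user into a stable bucket and comparing that bucket to the rollout percentage. Here’s a hedged sketch of that idea (a hypothetical helper, not Statsig’s implementation):

```python
import hashlib

def in_rollout(user_id: str, gate_name: str, percentage: float) -> bool:
    """Deterministically decide whether a user is in a partial rollout.

    Hashing user_id together with gate_name maps each user to a stable
    bucket in [0, 100), so the same user always gets the same answer for
    the same gate, and different gates bucket users independently.
    """
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # stable value in [0, 100)
    return bucket < percentage

# Ramping a rollout (say 2% -> 10% -> 50% -> 100%) only ever adds users:
# anyone whose bucket is below 2 is also below 10, so early users keep
# the feature as the percentage grows.
exposed = sum(in_rollout(f"user-{i}", "new_checkout_flow", 10.0)
              for i in range(10_000))
```

With 10,000 users at a 10% rollout, `exposed` lands close to 1,000, and raising the percentage strictly grows the exposed group rather than reshuffling it.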
⏪ Rewind to the shirt analogy: Imagine buying the shirt, and then wearing it for some pre-work coffee with a friend. They tell you the shirt is a hideous monstrosity and should be immolated on the spot. “Whoever sold you that shirt deserves life in prison,” they say.
Yikes. While changing in your car, you realize there is a silver lining: You performed a partial rollout pertaining to your torso regalia. Your candid friend was the sample population, and your attire failed the experience test. Good thing you didn’t show the shirt to more users. Into the incinerator it goes…
This exact same thing can happen with web and mobile apps too: If teams partially roll out a feature and the app starts crashing, they can turn that gate off immediately, having ruined only the sample population’s day instead of every user’s.
And when I say ruined their day, I’m not exaggerating.
A partial rollout allows Statsig to measure the delta in metrics between users with and without the feature, effectively generating an A/B test that produces Pulse results. Statsig builders are shown reports based on the partial rollout: red if a target metric was negatively impacted, green if positively impacted.
From those results, builders can make decisions on continuing to roll out the feature to more users, or simply killing it. Teams can use their own experience as well as the Statsig data to make better hypotheses and decisions.
Furthermore, Statsig suggests a 2/10/50/100 partial rollout cadence: Roll a feature out to 2% of all users, then 10%, then 50%, then 100%, measuring the impact on metrics each time.
P.S. I hope the shirt looks great! 😛