With new features and functionality popping up left and right, developers need a way to roll out updates without breaking everything in the process. Enter feature flagging: the superhero of software development that allows developers to control individual features like a boss.
Feature flagging allows developers to enable or disable specific features in their application without deploying a new version of the software, which saves time and resources. It also enables easier testing and iteration, which leads to faster development cycles and better user experiences. Many software development teams at giant companies like Apple, Amazon, and Google have adopted feature flagging as a best practice for their development processes.
Fun fact: Feature flags are also called feature gates, feature switches, feature flippers, feature toggles, and conditional features.
The benefits of feature flagging are plentiful, and all relate to the ability to control and gather metrics around individual software features rather than an application build as a whole. Some of the key benefits:
One of the most significant benefits of feature flagging is that it mitigates risk. By enabling or disabling features on demand, developers can test specific features in testing clusters or with a small set of users before rolling them out to the entire user base. This reduces the risk of bugs and issues that could impact the user experience.
Feature flagging can also help reduce the risk of security vulnerabilities by allowing developers to quickly disable a feature if a vulnerability is discovered. This can prevent attackers from exploiting the vulnerability and accessing sensitive data or systems.
Another advantage of feature flagging is that it can speed up development. By enabling specific features, developers can test them in real time without waiting for a new release to be deployed. This allows for quicker iteration and testing, leading to a faster development cycle.
Also, feature flagging can help reduce the risk of delays caused by dependencies between features. Developers can enable and test features independently, rather than having to wait for all dependencies to be completed before testing can begin.
Feature flagging also allows for customization of the application. By enabling specific features, developers can create a customized experience for a specific user or group of users. This can lead to a more personalized user experience, which helps increase user engagement and satisfaction.
Good example: A developer could enable a feature that only displays certain content to users who have previously shown an interest in that topic, like displaying shoes on an e-commerce clothing site. This customization can help create a more tailored experience for users and increase the likelihood of user retention.
Feature flagging works by allowing developers to enable or disable specific features within their application. These flags are often set at the code level and can be toggled on or off depending on the needs of the developer.
When a new feature is added to an application, it is often disabled by default. This allows developers to test and iterate on the feature without impacting the user experience. Once the feature is ready for rollout, the developer can enable it for a small set of users or testing clusters.
If the feature performs well in testing, the developer can gradually roll it out to a larger user base. This can be done by enabling the feature for specific users or groups of users, such as beta testers or early adopters. Once the feature has been thoroughly tested and deemed stable, it can be enabled by default for all users.
Getting started with feature flags on Statsig is simple, and can be done by following these steps:
Create a new feature gate
Start by creating a new Feature Gate and give it a name that describes the feature you're building. It's also good practice to write a description that makes it easy for people other than you to understand what the feature is about.
Add a new rule to the feature gate
By default, your new Feature Gate has no rules, so it will return false for all checks on the client side. To turn the feature on, you need to target it at a specific set of users. You do that by adding a new rule to the Feature Gate.
Create a new client API key
Navigate to the API Keys section of the Statsig console (console.statsig.com/api_keys) and create a new Client SDK Key for use in your app or website.
Use the Statsig SDK to initialize and check the feature gate
Initialize the Statsig SDK using your newly created Client API key and check the Feature Gate to see if it's enabled or not.
To test the feature gate, change the environment to match the targeting criteria
If the Feature Gate is targeted to a specific environment, such as mobile devices, you can change the environment to test if the Feature Gate is enabled or not.
To update the environment, call updateUser on Statsig
If you change the environment, you need to update the user using the updateUser method to reevaluate the Feature Gate against the new environment.
Try a different check to see how the feature gate responds
You can try a different check, such as using an email address, to see if the Feature Gate responds differently.
Use pseudocode to implement the feature gate in production
Sketch the implementation in pseudocode, then wire the Feature Gate into your app or website, using the checkGate method to enable or disable the feature based on the Feature Gate's status.
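Putting the steps above together, the flow looks roughly like this. This is pseudocode; the method names (initialize, checkGate, updateUser) follow Statsig's client SDKs, but exact signatures vary by SDK, so check the SDK docs for your platform:

```
// Initialize once with the Client SDK Key created earlier
initialize(CLIENT_SDK_KEY, { userID: "some-user" })

// Gate the new code path behind the Feature Gate
if checkGate("my_new_feature"):
    showNewFeature()
else:
    showExistingExperience()

// If targeting depends on user attributes (environment, email, etc.),
// update the user so the gate is reevaluated
updateUser({ userID: "some-user", email: "person@example.com" })
checkGate("my_new_feature")  // may now return a different result
```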
Pro tip: Check out Statsig’s setup documentation for a deeper dive into each of these steps.
Enabling features can be done in a variety of ways, depending on the needs of the developer. One common method is to use a configuration file that contains all of the feature flags for an application. This file can be easily modified to enable or disable specific features.
Another method is to use a dashboard or control panel that allows developers to toggle feature flags on or off. This can provide a more user-friendly interface for developers who may not be familiar with the codebase.
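The configuration-file approach can be sketched as follows. The flag names and config contents here are made up for illustration:

```python
import json

# A flags config maps feature names to on/off values.
EXAMPLE_CONFIG = '{"similar_products": true, "one_click_checkout": false}'

def parse_flags(config_text: str) -> dict:
    """Parse a JSON flags config into a name -> bool mapping."""
    return json.loads(config_text)

def is_enabled(flags: dict, name: str) -> bool:
    # Unknown flags default to off, so dormant code paths stay dark.
    return flags.get(name, False)

flags = parse_flags(EXAMPLE_CONFIG)
print(is_enabled(flags, "similar_products"))    # True
print(is_enabled(flags, "one_click_checkout"))  # False
```

Defaulting unknown flags to off is a deliberate safety choice: code guarded by a flag that hasn't been added to the config yet simply stays disabled.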
Let's say an e-commerce site is considering launching a new module that displays products similar to the one the user is shopping for. The product team responsible for this feature wants to make sure that it doesn't negatively impact the site's performance or user experience.
To mitigate the risk, they can use a feature flag to roll out the new module to a small percentage of users initially, and gradually increase the percentage over time if everything looks good. This approach allows the team to test the feature in a safe and controlled manner, and make adjustments if necessary before fully launching it to all users.
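One common way to implement such a percentage rollout (a sketch of the general technique; platforms like Statsig handle this bucketing for you) is to hash each user's ID into a stable bucket, so the same user always gets the same experience as the percentage ramps up:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user: same user + feature -> same result."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000  # 0..9999
    return bucket < percent * 100  # percent in [0, 100]

# Ramping "similar_products" from 5% to 50%: every user who was in at
# 5% stays in at 50%, because their bucket number never changes.
in_rollout("user-42", "similar_products", 5)
in_rollout("user-42", "similar_products", 50)
```

Because the bucket is derived from the feature name as well as the user ID, different features get independent 5% populations rather than always hitting the same slice of users.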
In terms of metrics, the team can measure the impact of the new module on key performance indicators (KPIs) such as:
Engagement: Are users interacting with the new module? Are they clicking on the suggested products?
Conversion rate: Does the new module help users find products they're interested in and lead to more purchases?
Revenue: Is the new module driving more revenue for the e-commerce site?
User satisfaction: Are users happy with the new module? Are they leaving positive feedback?
By analyzing these metrics, the team can determine whether the new module is having a positive impact on the e-commerce site and adjust it as necessary before fully launching it to all users.