Switching feature flagging platforms—whether from LaunchDarkly to Statsig, or in general—is a significant move that can streamline workflows, improve feature management, and give your team's experimentation culture a fresh start. Like any migration, though, it requires careful planning and execution.
In this guide, we'll walk you through the process of migrating your feature flags from LaunchDarkly to Statsig, including additional considerations and basic instructions to ensure a smooth transition.
Migration is more than just a technical task; it's a strategic opportunity to refine your processes and embrace a new platform that can streamline your feature flag management. Here's what you need to know.

A migration is not:
A fully automated process
A direct transfer of every single flag from your old platform to the new one
A process that requires an immediate, complete switch-off of the old platform

A migration is:
A chance to clean up tech debt and optimize your feature flagging strategy
An opportunity to educate and empower your teams for faster adoption of the new platform
A process that involves taking stock of existing metrics and data sources for measuring feature impact
A project that requires technical setup and integration of necessary data sources
Statsig deliberately distinguishes between feature flags (for immediate action) and experiments (for deeper inquiry). They can be used together or separately.
On LaunchDarkly, everything is a feature flag, and feature flags may have variants and return different types.
On Statsig, feature gates are purely boolean. When deciding to ship a feature, this becomes a matter of flipping a switch. There are no variables within the feature that must be altered or replaced.
For the multi-type flags you have on LaunchDarkly, Statsig supports two important structures:
Dynamic Configurations for pure configurations or entitlements types of use cases. Supports multi-type return values.
Experiments for measuring performance between different variations. Supports multi-type return values.
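To make the distinction concrete, here is a minimal sketch in plain Python. These are stand-ins, not the actual Statsig SDK, and the gate and config names are hypothetical; the point is the shape of the return values.

```python
# Illustrative stand-ins for the two primitives discussed above.
# A feature gate resolves to a plain boolean; a dynamic config can
# carry multi-type values (strings, numbers, lists).

def check_gate(user: dict, gate_name: str, gates: dict) -> bool:
    """A feature gate is a single switch: ship or don't ship."""
    return bool(gates.get(gate_name, False))

def get_config(user: dict, config_name: str, configs: dict) -> dict:
    """A dynamic config returns structured, multi-type values."""
    return configs.get(config_name, {})

gates = {"new_checkout": True}
configs = {"checkout_settings": {"button_color": "blue", "max_items": 50}}
user = {"userID": "user-123"}

if check_gate(user, "new_checkout", gates):          # boolean: flip a switch
    settings = get_config(user, "checkout_settings", configs)  # multi-type values live here
```

Keeping the boolean decision (the gate) separate from the multi-type values (the config) is what makes shipping a matter of flipping a switch.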
In comparison to boolean flags, multi-type flags introduce a layer of complexity that can obscure the path to full implementation.
When the time comes to transition to full deployment and remove the flag, references to those multi-type configurations need to be replaced, introducing potential points of failure and delaying the shipping process.
This necessitates extra diligence and refactoring that a simple boolean check would have avoided. And, as we all know, teams are often slow to clean up feature gates as it is.
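The difference in cleanup cost is easy to see in a small sketch (hypothetical function and flag names):

```python
# Removing a boolean gate: delete the check and keep the winning branch.
def render_checkout_boolean(gate_on: bool) -> str:
    if gate_on:                    # after full rollout, this whole conditional
        return "new checkout"      # collapses to a single return statement
    return "old checkout"

# Removing a multi-type flag: every consumed value must be hunted down
# and inlined or replaced when the flag is retired.
def render_checkout_multitype(config: dict) -> str:
    color = config.get("button_color", "gray")    # each reference is a
    max_items = config.get("max_items", 10)       # separate cleanup site
    return f"checkout ({color}, {max_items})"
```

Deleting the boolean check is mechanical; retiring the multi-type config requires deciding, per call site, which value becomes the permanent one.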
More than a matter of convenience, this is a strategic approach that enhances decision-making clarity, accelerates the release process, and helps cultivate true experiment culture without slowing down development speed.
In Statsig, the hierarchy is designed with a single project that contains multiple environments, such as development, staging, and production. This structure allows for centralized management of feature flags and experiments across different stages within the same project, thus simplifying governance. Here's an example of a flag that is enabled only in the development environment, imported using our open-source migration script from LaunchDarkly:
Within the same workspace and feature flag, you can then filter the rules based on environment:
LaunchDarkly, by contrast, also nests environments within projects, but each environment carries its own SDK keys and its own flag targeting, so each environment effectively behaves as a separate workspace.
The key difference lies in the centralization versus separation of environments: Statsig centralizes environments within a project for streamlined management, while LaunchDarkly treats each environment as a distinct workspace.
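Statsig's centralized model can be sketched as one gate whose rules are scoped per environment. This is an illustrative data model only, not how the SDK stores rules internally:

```python
# One gate, rules scoped per environment, evaluated in order.
def evaluate_gate(rules: list, environment: str) -> bool:
    """Return the first matching rule's value for the given environment."""
    for rule in rules:
        envs = rule.get("environments")      # None means "all environments"
        if envs is None or environment in envs:
            return rule["pass"]
    return False  # matches the global default in Statsig SDKs

dev_only_flag = [
    {"environments": ["development"], "pass": True},   # on in development
    {"environments": None, "pass": False},             # off everywhere else
]
```

One set of rules governs every environment, which is what makes filtering by environment within a single flag possible.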
Currently, false is the global default option for feature flags in Statsig's SDKs. If you want the default to be true, consider inverting the gate check logic. For experiments in Statsig, defaults are provided in code.
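One common way to invert the logic is to ship a "disable" gate instead of an "enable" gate, so the SDK's global default of false maps to the feature being on. A minimal sketch (the gate name is hypothetical, and the gate check is a stand-in, not the SDK):

```python
# Stand-in for an SDK gate check: unknown gates resolve to False.
def gate_default(gates: dict, name: str) -> bool:
    return bool(gates.get(name, False))

def new_search_enabled(gates: dict) -> bool:
    # Inverted check: if the "disable" gate is unknown or off (False),
    # the feature stays enabled -- an effective default of True.
    return not gate_default(gates, "disable_new_search")
```

The trade-off is readability: "disable_new_search is off" is a double negative, so reserve this pattern for features that genuinely must fail open.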
Are you using SSO? Are you assigning custom roles via SSO? If so, read on.
We haven't yet hooked role definitions into auto-provisioning. For automatic provisioning with SSO, new users authenticated by Okta can be automatically provisioned into a Statsig organization.
If your project within the organization is set to open, users will default to having access. For private projects, they must request access.
Each JIT-provisioned user will have member access; you will need to update their roles manually or via the API. Teams and roles follow a similar pattern right now.
This is on our roadmap. (Inquire if you're interested in early access.)
Start by reviewing your current feature flags in LaunchDarkly, and shedding some technical debt:
Temporary flags: Identify flags used for rollouts that are no longer active. These can be removed to reduce tech debt.
Permanent flags: Determine which flags are essential for ongoing operations, such as kill switches or targeted functionality, and plan to migrate these to Statsig gates.
LaunchDarkly doesn't have a UI or out-of-the-box way to review flag usage; you must build custom logging on top of their API.
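A first triage pass can lean on the flag metadata LaunchDarkly's REST API already returns: flag objects carry a `temporary` boolean alongside the `key`. The sketch below assumes you have already fetched the flag list; the flag names are invented for illustration.

```python
# Audit pass over flags exported from LaunchDarkly's REST API.
def triage_flags(flags: list) -> dict:
    """Split flags into removal candidates vs. flags to migrate to Statsig."""
    remove = [f["key"] for f in flags if f.get("temporary")]
    migrate = [f["key"] for f in flags if not f.get("temporary")]
    return {"remove_candidates": remove, "migrate_to_statsig": migrate}

flags = [
    {"key": "new-onboarding-rollout", "temporary": True},   # finished rollout
    {"key": "payments-kill-switch", "temporary": False},    # permanent safety valve
]
```

The `temporary` field is only as accurate as your team kept it, so treat this as a starting list to review, not a final verdict.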
Ensure that your key metrics and event tracking are ported over to Statsig. Statsig supports integrations with various data warehouses, CDPs, and analytics providers, making it easy to continue measuring the success of your features and experiments.
When considering the migration steps and timeline, it can be helpful to break it down into distinct phases:
Clean up: Remove any unnecessary temporary flags from your codebase. LaunchDarkly's code references can be a big help here. This may take a week or two.
Migrating feature flags: This is relatively quick; our import tools are capable of migrating ~1k flags from LaunchDarkly in approximately 10 minutes. Consider how many multi-type flags you have, as those need slightly more planning to map to experiments or dynamic configs in Statsig. We offer two solutions to help automate/kick-start this process:
A UI-based wizard to help you import LaunchDarkly feature flags and segments into Statsig. It will tell you which gates and segments were migrated and which weren’t. It only imports the "production" environment at the moment.
An open-source script template that migrates feature flags from LaunchDarkly to Statsig. This is a good option if you want to customize the integration logic. It will also spit out a CSV of all of your LaunchDarkly flags, along with migration status and relevant URLs to the flag in LaunchDarkly and the gate in Statsig. This imports all of your environments.
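The CSV the script produces is useful as a migration tracker. Here is a rough sketch of generating that kind of report; the column names, statuses, and truncated URLs are illustrative, not the script's exact output.

```python
import csv
import io

# One row per LaunchDarkly flag: migration status plus both URLs,
# so reviewers can jump between the old flag and the new gate.
def write_migration_report(rows: list) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["flag_key", "status", "ld_url", "statsig_url"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

report = write_migration_report([
    {"flag_key": "new-checkout", "status": "migrated",
     "ld_url": "https://app.launchdarkly.com/...",
     "statsig_url": "https://console.statsig.com/..."},
    {"flag_key": "pricing-config", "status": "needs-review (multi-type)",
     "ld_url": "https://app.launchdarkly.com/...", "statsig_url": ""},
])
```

A "needs-review" status column is a handy place to surface the multi-type flags that require a decision between experiments and dynamic configs.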
Enablement: Educate your team on the differences between LaunchDarkly and Statsig. Update internal wikis, schedule enablement sessions with the Statsig team, share recordings across the team, etc. This can take 2-4 weeks, with ongoing maintenance for a month or so after that to answer any lingering questions. We'll have a shared Slack channel to collaborate on this.
Flag code cleanup: The duration depends on whether your feature flag code is wrapped. Wrapped code simplifies the process significantly. This may take days to weeks.
Splitting/rollout: Testing and gradually rolling out new references is crucial. We recommend a canary-type rollout to monitor the transition effectively.
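A canary-type rollout for switching call sites can be as simple as deterministic bucketing by user ID, so the same user always gets the same code path while you ramp the percentage up. A minimal sketch (the salt and helper name are invented for illustration):

```python
import hashlib

def in_canary(user_id: str, percent: int, salt: str = "ld-to-statsig") -> bool:
    """True if this user falls inside the canary percentage (0-100)."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100       # stable bucket in [0, 100)
    return bucket < percent

# At 0% every user stays on the old LaunchDarkly references;
# at 100% every user is served by the new Statsig references.
```

Because the bucket is derived from a hash rather than randomness, users don't flip-flop between code paths as the percentage increases, which keeps the transition observable and debuggable.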
Migrating from LaunchDarkly to Statsig is a strategic move that can lead to more efficient feature flag management and a stronger experimentation culture. By following this guide, you'll be well-equipped to make the transition with confidence.
Remember, Statsig's team is always ready to assist you throughout the process, ensuring your migration is successful.