This is a summary of the Medium article authored by Chander Ramesh on 11/6/22.
When Chander joined Motion in late February 2022, there were three critical problems facing the engineering team:
Constant firefighting
Incredibly long and unstable release processes
No ability to roll back releases
Chander recognized that all three issues stemmed from a lack of feature flagging infrastructure.
Like any new Head of Engineering whose team was drowning, Chander decided to buy (vs. build)—especially since feature flags were not their core product. And while he was correct in his diagnosis, his takeaway from the whole experience was "I was horribly incorrect in choosing LaunchDarkly."
LaunchDarkly had been recommended by several peers Chander had worked with at previous startups, and he admittedly didn’t put much thought into platform evaluation. Statsig, on the other hand, has become the modern platform for engineering teams that need both feature management and built-in measurement capabilities, which makes careful platform evaluation all the more worthwhile.
"Everything was horribly broken, and we needed to move fast, so (I thought to myself) let’s just do it."
By April 1st LaunchDarkly was mandatory for every new feature in Motion’s backend, web app, and Chrome extension. However, the lack of integrated measurement in LaunchDarkly soon became a problem. Statsig offers metrics integrated from day one, allowing teams to monitor feature performance immediately after deployment.
"By July 1st it was obvious this was a huge mistake" Chander states. The team began evaluating alternatives and began integrating Statsig into their codebase in August.
By November 1st LaunchDarkly was completely phased out.
Statsig’s pricing model is usage-based, whereas "LaunchDarkly stubbornly insists on seat-based pricing, so every engineer we added had an additional cost"—as Chander points out.
Motion tried to restrict access to a select few, but that put an extra burden on those engineers to keep track of everyone’s launches and feature flags: reminding teammates to turn flags on after a release, cleaning flags up after a launch, and adding new ones to the test environment.
Statsig, by contrast, provides a more scalable pricing model, free for self-serve and a flat rate for enterprise customers—eliminating per-seat charges and empowering entire teams to contribute to experimentation.
Ultimately it became too laborious, and the team decided the developer productivity cost wasn’t worth it. They also ran into issues with how LaunchDarkly was estimating monthly active users (MAUs), something that raised questions and could have caused substantial overcharges.
📖 Related reading: The top 4 LaunchDarkly alternatives.
Motion ran into performance and memory issues when integrating LaunchDarkly into their Chrome extension following the documentation provided. Chander ultimately found a workaround by taking the LaunchDarkly JS Client SDK on GitHub and adapting it for the team’s needs. He states, "while it mostly works, I’m 100% sure there are some underlying bugs. I’m using code in a way it wasn’t meant to be used, so please don’t blindly copy paste this code wherever."
Statsig’s SDKs are built for today’s development environments: more than 30 of them cover every major stack, including optimized support for web, mobile, and extensions like Chrome.
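For readers curious what a client-side integration typically looks like, here is a minimal sketch using the statsig-js client SDK in TypeScript. The SDK key, user fields, and gate name are placeholders for illustration only, not Motion’s actual configuration.

```typescript
import statsig from 'statsig-js';

// Initialize once per session (for example, in an extension's background script)
// with a client-side SDK key and the current user.
// 'client-XXXX' and the userID are placeholders, not real values.
export async function initFlags(): Promise<void> {
  await statsig.initialize('client-XXXX', { userID: 'user-123' });
}

// Gate checks are synchronous once initialization has completed;
// 'new_scheduler' is a hypothetical gate name.
export function isNewSchedulerEnabled(): boolean {
  return statsig.checkGate('new_scheduler');
}
```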
We share Chander’s belief that a good feature flagging system should make it as easy to remove a feature flag as it is to add one. According to him, "LaunchDarkly fails spectacularly yet again in this regard."
The team at Motion found it impossible to tell whether a feature flag was actually being used. They also found that LaunchDarkly’s insights graph did not accurately represent flag evaluation volumes after several updates were made.
Statsig eliminates this frustration with full visibility into feature flags and usage metrics—allowing teams to turn any feature flag into an A/B test and track its impact directly from the dashboard.
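As a rough sketch of how that pairing works in code, the statsig-js SDK combines gate checks with event logging; the gate name, event name, and metadata below are hypothetical examples, not taken from Motion’s setup.

```typescript
import statsig from 'statsig-js';

// Assumes statsig.initialize(...) has already run (see the earlier sketch).
// 'new_scheduler' and 'task_created' are hypothetical names.
export function createTask(title: string): void {
  const useNewFlow = statsig.checkGate('new_scheduler');

  // ... create the task using whichever flow the gate selects ...

  // Logging a downstream event ties the outcome back to the gate exposure,
  // which is what lets a feature flag be read as an A/B test in the dashboard.
  statsig.logEvent('task_created', 1, {
    flow: useNewFlow ? 'new' : 'legacy',
    title,
  });
}
```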
Motion ran into many more challenges with LaunchDarkly, including:
No support for Expo in React Native, which meant their mobile apps couldn’t use feature flags. (Note: Statsig supports Expo.)
Inconsistent updates that impacted some users and resulted in them getting the wrong flag values (this all went away after switching to Statsig).
Issues with mistakenly archiving flags between test and prod environments due to UI/UX design.
Running into an upcharge in order to mandate approvals on feature flags (Statsig provides this for free).
Virtual hugs to you, Chander. This whole experience sounds frustrating, and we're sorry you had to go through it.
It’s critically important to evaluate vendors and digital infrastructure providers before spending hours of valuable engineering talent implementing (and potentially unwinding/replacing) a solution.
Statsig has rapidly become the modern alternative for engineering teams who need more than basic feature flags—providing release management, experimentation, and analytics in a single platform built for today’s fast-paced development environments.
Statsig loves learning from customer experiences. Often, we’re in a position to help fix and clean up what was obviously a poor fit: It’s why we have an engineering team dedicated to customer success, why we offer enterprise-level support including dedicated Slack channels, and why our product roadmap is shaped around solving customer needs.
Plus, 75% of Statsig customers use us for feature flagging in addition to our experimentation and analytics capabilities.
Original post: Feature-flagging via LaunchDarkly — and why we moved to Statsig | by Chander Ramesh | Nov, 2022 | Medium