How to lose half a billion dollars with bad feature flags

Vineeth Madhusudanan
Wed Jul 14 2021

The demise of Knight Capital

Knight Capital was the largest trader in US equities in 2012 (~$21B/day), thanks to its high-frequency trading algorithms. It also executed trades on behalf of retail brokers like TD Ameritrade and E*Trade.

Their demise came in 2012, when they built a new feature into their Smart Market Access Routing System (SMARS) to handle transactions for a new NYSE program, the Retail Liquidity Program.

[Figure: Knight Capital’s stock price in 2012. KC had ~1.4k employees.]

To control this new feature, they repurposed a feature gate created for a different trading algorithm called “Power Peg”. Power Peg was never meant to be used in the real world to process transactions. It was a test algorithm, specifically designed to move stock prices in test environments to enable verification of other proprietary trading algorithms.

Unfortunately, when they deployed this new code, it succeeded on only seven of their eight servers. Unaware of this, they flipped the feature on. The code on those seven servers worked as expected; on the eighth, the legacy Power Peg feature came online and began executing the trades routed to that server.

Deployments do fail, occasionally.
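Because partial deploys happen, one common safeguard is to verify that every server in the fleet reports the same build before enabling a flag. The sketch below is a minimal, hypothetical illustration of that pre-flight check (all names and version strings are invented, not Knight's or Statsig's actual tooling):

```python
# Hypothetical pre-flight check: refuse to enable a flag unless every
# server in the fleet is running the expected build.
EXPECTED_VERSION = "2012.07.31-smars"  # invented build tag for illustration

def safe_to_enable(server_versions: dict[str, str]) -> bool:
    """Return True only if every server reports the expected build."""
    stale = [host for host, ver in server_versions.items()
             if ver != EXPECTED_VERSION]
    if stale:
        print(f"Refusing to enable flag; stale servers: {stale}")
        return False
    return True

# Seven servers on the new build, one still running legacy code:
fleet = {f"server-{i}": EXPECTED_VERSION for i in range(1, 8)}
fleet["server-8"] = "legacy-powerpeg"
safe_to_enable(fleet)  # prints a refusal naming server-8
```

A check like this turns a silent partial deploy into a loud, actionable failure before any trades are routed.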

In a matter of minutes, Knight Capital accumulated unintended positions worth roughly $7 billion, which produced a $440M loss when unwound. With only $360M in assets, the firm was insolvent; it had to be restructured and rescued by a group of external investors.

This proved to be a very expensive lesson in managing dead code and creating unique, well-named feature gates. Feature gates are cheap to create; never reuse them! Read more about the Knight Capital saga here, or check out unlimited feature flags in our free tier at Statsig.
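One way to enforce the "never reuse a gate" lesson in code is to make reuse fail loudly. This is a minimal sketch, not Statsig's API; the registry class and gate names are hypothetical:

```python
# Hypothetical gate registry that refuses to re-register an existing name,
# so stale wiring behind an old flag can never be silently revived.
class GateRegistry:
    def __init__(self) -> None:
        self._gates: dict[str, bool] = {}

    def create(self, name: str, enabled: bool = False) -> None:
        if name in self._gates:
            raise ValueError(f"gate '{name}' already exists; create a new one")
        self._gates[name] = enabled

    def is_enabled(self, name: str) -> bool:
        # Unknown gates default to off.
        return self._gates.get(name, False)

gates = GateRegistry()
gates.create("power_peg")  # the long-forgotten legacy test gate
gates.create("rlp_order_routing_2012", enabled=True)  # unique, descriptive name

# Attempting to repurpose the old gate fails instead of flipping dead code on:
try:
    gates.create("power_peg")
except ValueError as err:
    print(err)
```

Defaulting unknown gates to off is the other half of the safety story: code paths stay dark unless someone deliberately creates and enables a fresh, well-named gate.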

Do you have a horror story with using feature gates? I’d love to hear from you!

Try Statsig Today

Explore Statsig’s smart feature gates with built-in A/B tests, or create an account instantly and start optimizing your web and mobile applications. You can also schedule a live demo or chat with us to design a custom package for your business.


