In a startup, everybody builds stuff (code, websites, sales lists, etc.), and part of the building process is accepting that not everything you make is good. But switching from an outcome-oriented mindset to a learning-oriented one can boost productivity and make it easier to identify good ideas.
Speaking from personal experience here: it sucks to admit that your baby is ugly. But what’s worse is spending a ton of time on a bad idea and only realizing months later that it’s not useful. Not that I’ve ever done that… ;)
But, this is when I learned the value of minimum viable products (MVPs).
Our lead data scientist and experimentation wizard Tim talks a lot about building experimentation cultures: test everything, not just the product you build. One of Tim’s suggestions is to do the least amount of work needed to get an MVP out and start learning. Even if a product or idea “fails,” what matters more is how it informs future direction. Does crossing this idea off the list mean we can cross some other ideas off too? Or, even if the idea fails, are there salvageable elements we can iterate on?
For me, part of the magic of Statsig has been working with super efficient people, and it’s been impressive to see how quickly people ditch their egos and focus on learnings. In high-performing teams, nobody blames anybody.
Our CEO Vijaye mentioned this in a LinkedIn post a couple of months ago about code reviews at Facebook: when there are big issues in code, the team gets together to prevent similar issues in the future. The secret ingredient to these reviews? Nobody asks who created the problem. As Vijaye says, “throwing blame is not productive and will only disincentivize taking bold initiatives.”
Having an idea fail doesn’t mean that you failed too, but sometimes it can feel that way. I’m also realizing that a lot of pain can be avoided by building an MVP, instead of going straight for my dream end-state.
Learning to fail fast means accepting that in order to find a prince, you have to kiss a lot of frogs, so you’d better round up some frogs and get really fast at kissing.