In a startup, everybody builds stuff (code, websites, sales lists, etc.), and part of the building process is accepting that not everything you make is good. But shifting from an outcome-oriented to a learning-oriented mindset can speed you up and make it easier to identify good ideas.
Speaking from personal experience here: it sucks to admit that your baby is ugly. What’s worse is spending a ton of time on that bad idea and realizing months later that it wasn’t useful. Not that I’ve ever done that… ;)
That’s when I learned the value of minimum viable products (MVPs).
Our lead data scientist and experimentation wizard Tim talks a lot about building experimentation cultures: test everything, not just the product you build. One of Tim’s suggestions is to do the least amount of work possible to get to an MVP and some learnings. Even if a product or idea “fails,” what matters more is how it informs future direction. Does crossing this idea off the list mean we can also cross off some other ideas? Or, even if the idea fails, are there salvageable elements we can iterate on?
For me, part of the magic of Statsig has been working with super-efficient people, and it’s been impressive to see how quickly people can ditch their egos and focus on learnings. In high-performing teams, nobody blames anybody.
Our CEO Vijaye mentioned this in a LinkedIn post a couple of months ago about code reviews at Facebook: when there are big issues in code, the team gets together to figure out how to prevent similar issues in the future. The secret ingredient to these reviews? Nobody asks who created the problem. As Vijaye says, “throwing blame is not productive and will only disincentivize taking bold initiatives.”
Having an idea fail doesn’t mean that you failed too, but sometimes it can feel that way. I’m also realizing that a lot of pain can be avoided by building an MVP instead of going straight for my dream end state.
Learning to fail fast is accepting that in order to find a prince, you have to kiss a lot of frogs, so you better round up some frogs and get really fast at kissing.