Since then, my level of understanding has graduated from preschool to elementary school. Nice!
In marketing and sales, we often advocate for platforms and tools that we don't use ourselves. After all, we're not engineers. But that means folks like me have a lot of learning to do. I'm fortunate enough at Statsig to be surrounded by really, really smart people, and after picking their brains for the last month, I'd like to pass along some of that learning to my fellow non-technical people.
A/B testing is, in short, the reason why the apps and websites you use work so well. Building great products used to be a lot of guesswork, and the problem with guesswork is that we always think our ideas are good ideas. It turns out only about a third of them actually are. This is where A/B testing matters.
Let me give you an example: for a while, Facebook was testing pimple-popping videos to get more folks engaged with video. It's clickbaity, but people can't help but watch (read: I can't help but watch). The issue, though, is that after a few minutes, people are disgusted with themselves and stop using the platform altogether.
On the backend, Facebook was running an A/B test: some people got the pimple videos (the A group), and some didn't (the B group). Once enough folks have been exposed to the videos, you can make a data-informed decision about which ideas are good and which aren't.
It’s easier to call someone’s baby ugly when you have data to back it up.
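For the curious (I had to ask the engineers about this part), here's a rough sketch of how that A/B split tends to work under the hood. This is just an illustration with made-up names, not Statsig's actual code: the idea is that each user gets hashed into one of two groups, so the assignment is effectively random but stays the same every time that user shows up.

```python
import hashlib

def assign_group(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into the A or B group.

    Hashing the user ID together with the experiment name gives every
    user a stable, pseudo-random assignment: the same user always lands
    in the same group for the same experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"  # 50/50 split

# Hypothetical example: this user sees the videos only if they're in group A
print(assign_group("user_12345", "pimple_videos"))
```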
A/B testing means we can try out more features and quickly see which ones work. Instead of holding onto one idea and running with it for six months, companies can progressively roll out new ideas all the time, and build better products faster.
This is why Statsig is cool: product teams normally spend a ton of time building out their testing infrastructure, so every feature you test (think: a new button, a different color, a change to search) carries a big cost. Statsig takes care of the back-end infrastructure, which means your team spends minimal time wiring up tests and more time coming up with new ideas.
If you’re interested in learning more, join our Slack community or sign up for a demo account.