Since then, my level of understanding has graduated from preschool to elementary school. Nice!
In marketing and sales, we often advocate for platforms and tools that we don't use ourselves. After all, we're not engineers. But that means folks like me have a lot of learning to do. I'm fortunate enough at Statsig to be surrounded by really, really smart people, and after picking their brains for the last month, I'd like to pass along some of what I've learned to my fellow non-technical people.
A/B testing is, in short, the reason why the apps and websites you use work so well. Building great products used to involve a lot of guesswork, and the problem with guesswork is that we always think our ideas are good ones. Turns out, only about a third of our ideas actually are. This is where A/B testing matters.
Let me give you an example: for a while, Facebook was testing out pimple-popping videos to engage more folks on video. It's clickbait-y, but people can't help but watch (read: I can't help but watch). The issue, though, is that after a few minutes, people are disgusted with themselves and stop using the platform altogether.
On the backend, Facebook was running an A/B test: some people got pimple videos (the A group), and some didn't (the B group). Once enough folks have been exposed to the videos, we can make data-informed decisions about which ideas are good and which aren't.
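For the curious, here's roughly what that comparison looks like under the hood. This is a minimal sketch of a standard two-proportion z-test with entirely made-up numbers, not Facebook's or Statsig's actual methodology: we compare how many people in each group kept using the product and check whether the gap is bigger than random noise would explain.

```python
import math

# Hypothetical A/B test results (numbers are invented for illustration):
# group A saw the pimple videos, group B (control) did not.
a_users, a_retained = 10_000, 4_100  # A group: exposed to the videos
b_users, b_retained = 10_000, 4_400  # B group: control

p_a = a_retained / a_users  # retention rate in group A
p_b = b_retained / b_users  # retention rate in group B

# Pooled two-proportion z-test: is the difference real, or just noise?
p_pool = (a_retained + b_retained) / (a_users + b_users)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_users + 1 / b_users))
z = (p_a - p_b) / se

print(f"A retention: {p_a:.1%}, B retention: {p_b:.1%}, z = {z:.2f}")
# A |z| above about 1.96 means the gap is statistically significant
# at the 95% confidence level.
```

With these invented numbers, the video group retains noticeably fewer people than the control group, and the z-score is large enough to conclude the drop isn't a fluke; this is exactly the kind of evidence that kills a bad idea early.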
It’s easier to call someone’s baby ugly when you have data to back it up.
A/B testing means we can try out more features and quickly see which ones work. Instead of betting on one idea and running with it for six months, companies that progressively roll out new ideas all the time build better products faster.
This is why Statsig is cool: product teams normally spend a ton of time building out their testing infrastructure, because every feature you test (think: a new button, a different color, a change to search) matters a lot. Statsig takes care of the back-end infrastructure, which means your team spends minimal time on testing plumbing and more time coming up with new ideas.
If you’re interested in learning more, join our Slack community or sign up for a demo account.