Of course, time isn’t actually passing at a different pace. But since each new year represents a smaller share of your life, every new year feels a little less significant.
Well, that might be true for people - but it’s certainly not true for Statsig.
2024 was our biggest year yet. From launching new products to hosting our first annual conference to adding thousands of new customers, our team has been busier than ever - and it still feels like we’re just getting started. Some things we're particularly proud of...
Sharpening our experimentation product with new test types, updated methodologies, and improved core workflows (full details below)
Expanding our product via multiple new product lines - including our full product analytics suite, a visual experiment editor, session replays, and more
Adding thousands of new self-service customers, by making our platform even more generous for small companies
Partnering with the experimentation community around the world, from hosting meetups in Berlin, NYC, London, SF, and Seattle to our first annual conference in San Francisco (SigSum)
Working with even more amazing companies (including Bloomberg, Xero, EA, Grammarly, plus many others)
Scaling our infrastructure to a level that we didn’t think was possible (officially processing over 1 trillion events/day)
Growing our team to 100+ people - all of whom are world-class in their domain, and ready to keep building like crazy
As we look back on 2024, it’s hard to believe how much ground we covered. As always, it’s only possible because of our customers, team, investors, and everyone else who has taken part in our journey - so thank you to everyone who’s been a part of it.
Now, let’s dig into some of the highlights from 2024!
At the start of 2024, all the pieces of our experimentation product were in place: a world-class stats engine running on two deployment models, robust logging and config management infrastructure, and an array of advanced experimentation tools (like layers, holdouts, and more). But as anyone who works in experimentation knows - you can always go deeper, and add more.
And wow, did we ever do that!
As we work on these features, we’re constantly collaborating with experimentation leaders and experts - some of whom are external, and some of whom are customers. This includes people like Ronny Kohavi, Lukas Vermeer, Kevin Anderson, Dylan Lewis, and others. Real world feedback from practitioners and our customers alike is constantly shaping our roadmap and prioritization.
We’re also working with our customers - and there are a lot of them! Some cool stats on our experimentation scope today:
Our customers have used Statsig to create over 50,000 unique experiments
We have dozens of companies running 100+ experiments per quarter and over a dozen companies running over 1,000 experiments per year
More than 6,000 unique experiment creators have used Statsig to launch a new experiment
When you have this many users at so many sophisticated companies, you get a lot of ideas for ways to improve your experimentation product. Our full list of 2024 experimentation releases is kind of staggering (you can see all of them here), but generally, we bucket our experimentation roadmap into 3 categories:
Core experimentation functionality: Features or methodologies that unlock new use cases and techniques
Stats engine: Statistical tools that reduce variance and prevent false positives, and deep configurability of the statistical tools and calculations used
Team collaboration: Features that make it easier for large organizations to manage hundreds of experiments across many teams and products (experiment summaries, experiment discussions)
Some of the highlights in each category include…
Experimentation
🎂 Stratified Sampling: Make sure Control and Test are balanced - particularly relevant in B2B use cases when there are small sample sizes
🎓 Meta Analysis: Learn across the corpus of experiments, in addition to learning from each one
👤 ID Resolution++: Link anonymous users with their signed-in activity to measure activity post conversion
🦾🦾 CMAB: Introducing contextual bandits to our Autotune product, giving teams an easy way to personalize user experiences
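To make the stratified sampling idea concrete, here is a minimal sketch of balanced assignment within strata. This is a hypothetical illustration, not Statsig's actual API or algorithm - the `stratified_assign` helper and the account-tier stratum are assumptions for the example:

```python
import random
from collections import defaultdict

def stratified_assign(users, stratum_of, seed=42):
    """Split users into 'control' and 'test', balancing within each stratum.

    `users` is a list of user IDs; `stratum_of` maps a user to its stratum
    (e.g. a B2B account-size tier). Within each stratum, users are shuffled
    and split in half, so small strata can't end up lopsided by chance.
    """
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for u in users:
        buckets[stratum_of(u)].append(u)
    assignment = {}
    for stratum, members in buckets.items():
        rng.shuffle(members)
        half = len(members) // 2
        # First half of the shuffled stratum goes to control, rest to test.
        for u in members[:half]:
            assignment[u] = "control"
        for u in members[half:]:
            assignment[u] = "test"
    return assignment

# Example: 20 accounts, 4 enterprise and 16 SMB - each tier ends up ~50/50.
users = [f"acct_{i}" for i in range(20)]
tier = lambda u: "enterprise" if int(u.split("_")[1]) < 4 else "smb"
groups = stratified_assign(users, tier)
```

With a plain random split, a handful of large enterprise accounts could easily all land in one arm and skew the readout; stratifying first removes that risk.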
Stats engine
📊 Benjamini-Hochberg: An alternative to Bonferroni correction, preferred by teams running a large number of experiments
% Percentile Metrics: Often used to measure app performance or resource utilization
🧢 Capped Metrics: A very simple but effective variance reduction technique popular in ecommerce scenarios
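As a quick illustration of the Benjamini-Hochberg item above, here is a standard, textbook implementation of the procedure. This is a generic sketch of the method itself, not Statsig's stats-engine code - the function name and example p-values are assumptions:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected
    under the Benjamini-Hochberg false discovery rate procedure."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            k_max = rank
    # Reject every hypothesis ranked at or below k_max.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

# Example: four metric p-values from one experiment readout.
print(benjamini_hochberg([0.001, 0.008, 0.035, 0.41], alpha=0.05))
```

On this example, BH rejects the first three hypotheses, while a Bonferroni correction (requiring each p-value below 0.05/4 = 0.0125) would reject only the first two - which is why teams testing many metrics or experiments often prefer it.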
Team collaboration
🕰️ Experiment Timeline View: A complete view of all the experiments your team has run, sorted by time period, with summary metrics
🤼 Teams: Configure permissions and settings (including default templates or metrics) for groups of users
📊 Storytelling with Experiments: Create rich experiment summaries with live, custom scorecards inline
The most exciting part about continuing to improve our experimentation product is that we get to work with increasingly sophisticated customers. We’re working with more and more companies who are deploying Statsig to augment or replace in-house experimentation systems - systems that were developed for years and optimized for a company’s specific use case.
Nearly 4 years ago, our team started Statsig with a clear vision: give every company in the world access to a great set of data tools for building products.
While experimentation has always been our core focus, we’ve never shied away from adding new products that give builders better ways to understand their product and make smarter decisions.
Feature management was our first new product outside of experimentation (launched in 2022), and we’ve continued to deepen our investment in this product over time. Companies have used Statsig to release over 100,000 features, reaching billions of unique users. Today, Statsig is one of the most trusted and reliable feature flagging solutions on the market - not bad for our first expansion product!
This year, we added two new product lines to the Statsig platform: Product Analytics and Session Replay. Both of these products have a deep overlap with experimentation and feature management, allowing teams to track long-term trends, dig into metric data, debug experiments, and much more.
After launching product analytics in March, we’ve made a ton of incremental improvements. Some particularly exciting features include:
Adding five new chart types (Metric Drilldown, Funnel, Retention, Distribution, and User Journey)
Implementing advanced filtering and cohorting options within each chart
Extending all features and functionalities to both Cloud and Warehouse Native customers
Pulling feature flag & experiment data into Metrics Explorer (enabling you to explore topline metric trends within a feature rollout or experiment group)
Our customers have been putting it all to good use! This year, we had tens of thousands of users run an analytics query, and many companies using Statsig as their primary analytics tool.
Session Replay has been a bit quieter, as the core functionality of watching a session speaks for itself. But we’ve added many quality-of-life features around filtering, data privacy, and more.
It’s been amazing to see so many people leverage our new products - and there’s a lot more coming in both product lines in 2025.
As we improved our core products and launched new offerings, we kept an eye on our core mission: making data-driven decision making accessible to as many companies as possible.
In 2024, we realized that to do this, we needed to get serious about our self-service offering. For us, this meant making it easier to use and even more affordable. Some changes we made this year:
Including all of our products in our free tier (experimentation, feature flags, analytics, and session recordings)
Maintaining free use of feature flags and configs in our self-service tiers
Increasing the events included in our free tier to 2 million per month (up from 1 million per month)
Adding 50,000 free session recordings per month to our free tier (and 100,000 per month to our pro tier)
Including 1 billion events in our startup program (a $50,000 value)
Streamlining our docs and writing dozens of guides on getting started with our product
Today, all these changes make Statsig the most generous self-service product on the market for experimentation, feature flags, analytics, and session recording - and you get it all in one place. Pretty nice if you’re a startup!
Just in case you don’t believe us, we also created some handy guides on our pricing for each product line (experimentation, feature flags, product analytics, session replays).
And it seems to be working! Thousands of companies started using Statsig for free this year, including ~200 companies who joined via our startup program.
2024 was not only a big year for our product - it was a great year for our community. We had the opportunity to do everything from meetups to large-scale events to our first annual conference.
This year, the Statsig event engine went international. Our team traveled to San Francisco, New York, London, Berlin, Argentina, and more to connect with our communities of product builders. Thousands joined us at these gatherings, where we exchanged ideas, shared insights, and learned from some of the industry's finest—including leaders from Graphite, Monzo, HelloFresh, Babbel, Common Room, Captions, Rec Room, and more.
We also hosted our first annual conference: Significance Summit (SigSum). You can see the full highlight reel here, but it was certainly a memorable event - and one we’re extremely excited to grow in the next few years.
Our event calendar this year is even more ambitious, so stay tuned! We’d love to have you join us for more events in the coming year. Stay up to date on what we’re planning here.
The best part of our jobs is getting to work with the world’s leading technology companies. Fortunately, we had a lot of new companies join our platform in 2024 (including some highlighted below).
As we’ve continued to add customers, we’ve worked hard to make sure that our world-class support stays world-class. More customers mean more questions, but we’ve done our best to stay ahead of it - from adding an AI-powered support bot to growing our enterprise engineering and customer data science teams.
So far, it seems to be working - but if we can do anything better, just ping us on Slack. Our CEO just might answer!
We’re also continuing to grow our sales team to keep up with all the new customers we want to add. Most recently, we welcomed our first CRO (William Da Cunha) to the team. If you’re a seller looking to join a great team, please reach out!
With more customers and more products comes even more demand on our infrastructure. At the end of last year, we were excited about processing 6T events per month. Now, we’re doing well over 1T events per day, with volumes ~10x those of the same time last year.
If you’re curious how we did it, you can check out a deep dive we produced with ByteByteGo (thanks Alex Xu).
Most importantly, even as volume has scaled, our team has kept up our commitments for reliability and uptime (which you can always monitor here).
Last but not least, our team continued to grow. We crossed the 100 employee mark in December (woah) and are still hiring in every function.
Even though we’ve grown, we’re still the same fun, scrappy company - complete with ping-pong tables, great swag, and a ton of in-office food.
If you (or anyone you know) are looking to work at an amazing in-person startup in Seattle, go to statsig.com/careers to learn more.
2024 was our best year yet, but we’re even more excited about what’s to come.
From all of us on the Statsig team, thanks for your interest, support, questions, requests, ideas, and collaboration. We couldn’t have done it without you.
Happy building! 🥂