I had the pleasure of chatting with our exceptional Software Engineers, Pierre Estephan and Alex Coleman. Our discussion revolved around metrics and analytics within the context of experimentation.
We delved into real-world examples showcasing how companies like Facebook and some of our valued Statsig customers harnessed user insights to propel product-led growth. In light of our recent launch of Metrics Explorer, we wanted to discuss how an integrated platform for experiments and analytics can empower organizations to unlock insights that drive growth. You can watch the recording of the event here:
It's challenging to directly impact high-level KPIs like retention or engagement. Therefore, it's crucial to understand the key drivers that lead to these goals. In all the examples we covered, including the famous Slack example where "messages sent" was the key driver of retention, companies identified what mattered most to their users and worked toward that, ultimately driving growth.
We discussed the concept of novelty effects, using notification fatigue as an example, to emphasize the importance of going beyond instant gratification. It's not just about improving DAU in the short term; it's about observing the long-term impact with metrics like 28-day retention.
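To make the 28-day retention idea concrete, here is a minimal sketch of how such a metric might be computed from a simple event log. The data shapes (a signup date per user and a set of active dates) and the "active on or after day 28" definition are illustrative assumptions; real analytics platforms typically use bucketed windows over warehouse tables.

```python
from datetime import date, timedelta

def d28_retention(signups, activity):
    """Fraction of users who were active again on or after day 28 post-signup.

    signups:  dict mapping user_id -> signup date
    activity: dict mapping user_id -> set of dates the user was active
    """
    retained = 0
    for user, signup in signups.items():
        window_start = signup + timedelta(days=28)
        # A short-term DAU bump (activity in week 1 or 2) doesn't count here;
        # only activity at or past the 28-day mark does.
        if any(d >= window_start for d in activity.get(user, ())):
            retained += 1
    return retained / len(signups)

# Hypothetical cohort: three signups, two of whom return after day 28.
signups = {
    "a": date(2024, 1, 1),
    "b": date(2024, 1, 1),
    "c": date(2024, 1, 2),
}
activity = {
    "a": {date(2024, 1, 30)},  # day 29 -> retained
    "b": {date(2024, 1, 10)},  # only active in week 2 -> not retained
    "c": {date(2024, 2, 5)},   # day 34 -> retained
}
print(round(d28_retention(signups, activity), 2))  # → 0.67
```

The point of the sketch is the contrast in the comments: a feature that spikes week-one activity (like aggressive notifications) can leave this number unchanged or worse.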
Correlation doesn't imply causation, so it's not as simple as blindly pursuing a metric that appears to be correlated with retention. Instead, it's about understanding how users realize value. We emphasized the importance of understanding the qualitative story behind a metric: Facebook's famous "7 friends in 10 days" wouldn't have worked by automatically adding friends to new accounts; it required relevant friend suggestions.
Leading PLG companies avoid the biggest operational challenges of running experiments at scale (such as having too many tools in their stack, siloed data, and disagreements between teams on feature results) by maintaining a single platform for consistency. Everyone across the company, including non-technical users, has visibility into metrics. This approach keeps engineering and product teams agile, free from cumbersome processes, and able to build, measure, and learn rapidly.
We discussed examples where companies were able to achieve double-digit growth in core metrics by identifying drop-off points in the customer journey, such as the login or checkout funnel. They used these insights to capture low-hanging fruit by testing features to close these gaps. For more stories about our own customers who achieved remarkable growth, you can visit our customer stories page.
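As a rough illustration of the drop-off analysis described above, here is a sketch that turns ordered funnel step counts into per-step conversion and drop-off rates. The step names and counts are hypothetical; the largest drop-off step is where a team would look first for features to test.

```python
def funnel_dropoff(step_counts):
    """Given ordered (step_name, user_count) pairs, return, for each step
    after the first, its conversion rate from the previous step and the
    corresponding drop-off rate."""
    report = []
    for (_, prev_n), (name, n) in zip(step_counts, step_counts[1:]):
        conversion = n / prev_n
        report.append((name, conversion, 1 - conversion))
    return report

# Hypothetical checkout funnel.
checkout = [
    ("viewed_cart", 1000),
    ("started_checkout", 600),
    ("entered_payment", 450),
    ("completed_order", 360),
]
for step, conv, drop in funnel_dropoff(checkout):
    print(f"{step}: {conv:.0%} converted, {drop:.0%} dropped off")
# started_checkout: 60% converted, 40% dropped off
# entered_payment: 75% converted, 25% dropped off
# completed_order: 80% converted, 20% dropped off
```

In this made-up funnel, the 40% drop between viewing the cart and starting checkout would be the low-hanging fruit to experiment against.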