Earlier this year, we announced Statsig Product Analytics to expand our product lines beyond feature flags and experimentation. Now, customers can make data-driven decisions throughout their development cycle, not just when rolling out features.
Over the past few months, we've been adding functionality to strengthen our Product Analytics offering, as the value of a unified platform for release management, experimentation, and analytics has resonated with more and more customers.
Visualizing key product events in charts such as funnels, retention, and user journeys is just the tip of the iceberg. Metrics often vary significantly among user segments. For example, a metric like DAU or purchases over time can look very different for regular users versus power users.
A cohort is essentially any group of users that share common properties, actions, or behaviors (set of events) within a specific time frame. Events can be anything logged or ingested into Statsig, from visiting a page to completing a specific onboarding task to making a purchase.
Common cases include:
Resurrected users: Those who performed a specific action after a period of inactivity.
Power or Core users: Those who perform more than a set threshold of actions within a time frame.
Churned users: Those who became inactive after a period of sustained usage.
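For a concrete picture, here's roughly what logging the events behind these cohorts looks like with the statsig-js client SDK (a minimal sketch; the event names and metadata below are hypothetical):

```typescript
import statsig from 'statsig-js';

// Initialize the client SDK for the current user
await statsig.initialize('client-sdk-key', { userID: 'user-123' });

// A simple page visit
statsig.logEvent('page_view', '/onboarding/step-2');

// An onboarding milestone, with metadata you can segment on later
statsig.logEvent('onboarding_task_completed', 'connect_data_source', {
  step: '2',
});

// A purchase, with the order total as the event value
statsig.logEvent('purchase', 29.99, { currency: 'USD' });
```

Any of these events can then serve as the building block for a cohort definition.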
Improving metrics like retention directly can be challenging. However, if you identify specific actions that drive engagement and retention, you can build features to encourage those behaviors. For example, early Facebook discovered that adding 7 friends within 10 days was a key driver of stickiness and experimented with features such as "People You May Know" to boost this metric.
Statsig's cohort analysis helps you compare your metrics of interest across different groups of users. For instance, in a food delivery app, you can validate whether users who place an order within the first 24 hours of downloading the app become long-term users.
You can easily define these two cohorts: "users who placed an order within 24 hours of account creation" vs. "users who didn't place an order within 24 hours of account creation" and leverage retention charts to measure their 180-day retention.
The best part is that you can use the same platform (with the same data) to test various features via experiments to drive the Activation metric of placing an order within 24 hours, ultimately boosting long-term retention.
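As a sketch of what that loop can look like in code, here's a hypothetical activation experiment checked via the statsig-js SDK (the experiment and parameter names are made up for illustration):

```typescript
import statsig from 'statsig-js';

await statsig.initialize('client-sdk-key', { userID: 'user-123' });

// Fetch the user's assigned variant for a hypothetical activation experiment
const experiment = statsig.getExperiment('first_order_nudge');

// Read a parameter controlling the treatment, with a safe default
if (experiment.get('show_first_order_discount', false)) {
  // Render the discount banner that encourages an order in the first 24 hours
}

// The 'purchase' events you already log become the activation metric
// the experiment is measured against.
```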
Below, we'll look at three key features we recently launched that simplify how you can conduct cohort analysis in Statsig:
With multi-event cohorts, you can include users who have engaged in various combinations of activities, such as users who both completed a purchase and subscribed to a newsletter, or users who viewed a product but did not add it to their cart. You can create these cohorts easily while drilling into any metric of interest.
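Conceptually, a multi-event cohort is a boolean combination of event predicates over a user's activity. The sketch below is illustrative logic only, not Statsig's API; in the product, you build these definitions in the UI:

```typescript
// Illustrative only: what a multi-event cohort evaluates for each user
type LoggedEvent = { name: string; timestamp: number };

const did = (events: LoggedEvent[], name: string) =>
  events.some((e) => e.name === name);

// "Completed a purchase AND subscribed to the newsletter"
const purchasedAndSubscribed = (events: LoggedEvent[]) =>
  did(events, 'purchase') && did(events, 'newsletter_subscribe');

// "Viewed a product but did NOT add it to their cart"
const viewedButNoCart = (events: LoggedEvent[]) =>
  did(events, 'product_view') && !did(events, 'add_to_cart');
```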
After you find an interesting user group this way, you may want to continuously track how that segment performs over time. Reusable cohorts let you define, save, and reuse detailed definitions of user groups based on their interactions with your product. Again, these can be based on simple behavioral patterns or complex combinations of activities and shared user properties.
Say you're looking at users who re-engaged after a period of dormancy (they performed an event in the last 7 days but not in the 7 days prior). You can easily save this definition for future reuse.
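That dormancy definition boils down to two time-window checks over the same event. Again, this is illustrative logic rather than Statsig's API:

```typescript
// Illustrative only: the "re-engaged" cohort as two time-window predicates
type LoggedEvent = { name: string; timestamp: number };

const DAY_MS = 24 * 60 * 60 * 1000;

const didBetween = (events: LoggedEvent[], name: string, from: number, to: number) =>
  events.some((e) => e.name === name && e.timestamp >= from && e.timestamp < to);

const reEngaged = (events: LoggedEvent[], name: string, now: number) =>
  didBetween(events, name, now - 7 * DAY_MS, now) && // active in the last 7 days
  !didBetween(events, name, now - 14 * DAY_MS, now - 7 * DAY_MS); // dormant in the 7 days prior
```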
In general, as you explore your metrics and uncover valuable insights, you can easily save the query directly, allowing you to access and refine your findings at any future time.
You can make these Personal (accessible only to you) or Published (shared across the Project for collaboration), making it easy to preserve your insights and build on them with your team.
Try these features today and reach out to us on Slack if you have any feedback and use cases you'd like to discuss!