Earlier this year, we announced Statsig Product Analytics to expand our product lines beyond feature flags and experimentation. Now, customers can make data-driven decisions throughout their development cycle, not just when rolling out features.
Over the past few months, we've added new functionality to strengthen our Product Analytics offering, as more customers embrace the value of a unified platform for release management, experimentation, and analytics.
Visualizing key product events in charts such as funnels, retention, and user journeys is just the tip of the iceberg. Metrics often vary significantly among user segments. For example, you may look at a metric like DAU or purchases over time, but this can differ greatly between regular and power users.
A cohort is essentially any group of users that share common properties, actions, or behaviors (set of events) within a specific time frame. Events can be anything logged or ingested into Statsig, from visiting a page to completing a specific onboarding task to making a purchase.
Common cases include:
Resurrected users: Those who performed a specific action after a period of inactivity.
Power or Core users: Those who perform more than a set threshold of actions within a time frame.
Churned users: Those who became inactive after a period of sustained usage.
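Conceptually, each of these cohorts is just a predicate over a user's event timestamps. A minimal sketch in Python (the event log, window sizes, and thresholds here are illustrative assumptions, not Statsig's API):

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user_id, timestamp) pairs for one tracked event.
events = [
    ("u1", datetime(2024, 5, 1)), ("u1", datetime(2024, 5, 20)),
    ("u2", datetime(2024, 5, 18)), ("u2", datetime(2024, 5, 19)),
    ("u2", datetime(2024, 5, 21)),
    ("u3", datetime(2024, 4, 2)),
]
now = datetime(2024, 5, 22)
recent = now - timedelta(days=7)   # activity window
prior = now - timedelta(days=14)   # look-back window before that

def timestamps(user):
    return [t for u, t in events if u == user]

def is_resurrected(user):
    # Active in the last 7 days, but inactive in the 7 days before that.
    ts = timestamps(user)
    return any(t >= recent for t in ts) and not any(prior <= t < recent for t in ts)

def is_power(user, threshold=3):
    # Performed the action at least `threshold` times in the activity window.
    return sum(t >= recent for t in timestamps(user)) >= threshold

def is_churned(user):
    # Was active at some point, but has gone quiet in the activity window.
    ts = timestamps(user)
    return bool(ts) and all(t < recent for t in ts)
```

The exact windows and thresholds (7 days, 3 actions) are tunable per product; the point is that each cohort reduces to a simple rule over logged events.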
Improving metrics like retention directly can be challenging. However, if you identify specific actions that drive engagement and retention, you can build features to encourage those behaviors. For example, early Facebook discovered that adding 7 friends within 10 days was a key driver of stickiness and experimented with features such as "People You May Know" to boost this metric.
Statsig's cohort analysis helps you compare how different groups of users perform on your metrics of interest. For instance, in a food delivery app, you can validate whether users who place an order within the first 24 hours of downloading the app become long-term users.
You can easily define these two cohorts: "users who placed an order within 24 hours of account creation" vs. "users who didn't place an order within 24 hours of account creation" and leverage retention charts to measure their 180-day retention.
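The underlying comparison can be sketched in a few lines of Python (the user records, field names, and activation rule below are illustrative assumptions, not Statsig's data model):

```python
from datetime import datetime, timedelta

# Hypothetical records: user_id -> (signed_up, first_order, last_active).
users = {
    "a": (datetime(2023, 1, 1), datetime(2023, 1, 1, 12), datetime(2023, 8, 1)),
    "b": (datetime(2023, 1, 1), datetime(2023, 1, 5), datetime(2023, 2, 1)),
    "c": (datetime(2023, 1, 1), None, datetime(2023, 1, 2)),
    "d": (datetime(2023, 1, 1), datetime(2023, 1, 1, 20), datetime(2023, 1, 10)),
}

def activated(signed_up, first_order):
    # Cohort rule: placed an order within 24 hours of account creation.
    return first_order is not None and first_order - signed_up <= timedelta(hours=24)

def retained_180d(signed_up, last_active):
    # Still active at least 180 days after signup.
    return last_active - signed_up >= timedelta(days=180)

def retention_rate(cohort):
    kept = [u for u in cohort if retained_180d(users[u][0], users[u][2])]
    return len(kept) / len(cohort) if cohort else 0.0

activated_cohort = [u for u, (s, f, _) in users.items() if activated(s, f)]
control_cohort = [u for u in users if u not in activated_cohort]

print(retention_rate(activated_cohort), retention_rate(control_cohort))
```

A large gap between the two rates is correlational evidence that early ordering predicts retention; an experiment is still needed to show the relationship is causal.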
The best part is that you can use the same platform (with the same data) to test various features via experiments to drive the Activation metric of placing an order within 24 hours, ultimately boosting long-term retention.
Below, we'll look at three key features we recently launched that simplify how you can conduct cohort analysis in Statsig:
With multi-event cohorts, you can include users who have engaged in various combinations of activities, such as users who both completed a purchase and subscribed to a newsletter, or users who viewed a product but did not add it to their cart. These cohorts can easily be created while you're drilling down into any metric of interest.
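Multi-event cohorts are essentially set operations over the users who performed each event. A minimal sketch (the event names and user sets are hypothetical):

```python
# Hypothetical event log: event name -> set of user ids who performed it.
events = {
    "purchase": {"u1", "u2"},
    "newsletter_signup": {"u2", "u3"},
    "product_view": {"u1", "u2", "u3", "u4"},
    "add_to_cart": {"u1", "u2"},
}

# Users who both completed a purchase and subscribed to the newsletter.
purchased_and_subscribed = events["purchase"] & events["newsletter_signup"]

# Users who viewed a product but did not add it to their cart.
viewed_not_carted = events["product_view"] - events["add_to_cart"]
```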
After you find an interesting user group, you may want to continuously track how that segment performs over time. Reusable cohorts let you define, save, and reuse detailed definitions of user groups based on their interactions with your product. Again, these can be based on simple behavioral patterns or complex combinations of activities and shared user properties.
Say you're looking at users who re-engaged after a period of dormancy (they performed an event in the last 7 days but not in the 7 days prior). You can easily save this definition for reuse in the future.
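Saving a definition amounts to giving a name to a predicate so it can be evaluated again later against fresh data. A small sketch of that idea (the registry and function names are illustrative, not Statsig's API):

```python
from datetime import datetime, timedelta

# Hypothetical registry of reusable cohort definitions: name -> predicate
# over a user's event timestamps, evaluated relative to `now`.
cohorts = {}

def define_cohort(name, predicate):
    cohorts[name] = predicate

def re_engaged(timestamps, now):
    # Performed the event in the last 7 days, but not in the 7 days prior.
    recent, prior = now - timedelta(days=7), now - timedelta(days=14)
    return any(t >= recent for t in timestamps) and not any(
        prior <= t < recent for t in timestamps
    )

define_cohort("re-engaged", re_engaged)

# Reuse the saved definition later against any user's event timestamps.
now = datetime(2024, 6, 1)
is_member = cohorts["re-engaged"]([datetime(2024, 5, 30), datetime(2024, 5, 1)], now)
```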
In general, as you explore your metrics and uncover valuable insights, you can easily save the query directly, allowing you to access and refine your findings at any future time.
You can make these Personal (accessible only to you) or Published (shared across the Project for collaboration) to build on your analysis and collaborate effectively. This lets you preserve your insights more easily and share them with your team.
Try these features today and reach out to us on Slack if you have any feedback and use cases you'd like to discuss!