Customer lifecycle and marketing automation platforms like Braze, Marketo, Salesforce Marketing Cloud, and HubSpot offer native A/B testing capabilities that empower marketers to design and run experiments on their customers.
Here are links to the relevant A/B testing documentation for these providers: (Braze, SFMC, Marketo, HubSpot).
While these platforms provide the essential tools for configuring, designing, and launching email and push notification experiments, their measurement and analysis features are comparatively bare-bones. They lack the sophistication businesses need to confidently understand the impact of their tests and make data-driven decisions.
This is where Statsig comes in. It lets customers apply the rigor of experiment analysis both to the simple engagement metrics these campaigns generate and to downstream business metrics captured later in the customer journey.
Most marketing platforms provide simple analytics that focus on engagement metrics, such as email opens and click-through rates.
However, these tools don’t incorporate metrics from subsequent phases in the journey, including web and mobile app interactions, purchase behavior, and other business outcomes. This gap can lead to a fragmented view of campaign success and make it difficult for marketers to understand the true impact of their experiments beyond surface-level engagement.
You can do better than this! 👇🏼
Statsig’s Warehouse Native Platform is uniquely positioned to sit on top of the data associated with your marketing campaigns and provide deep analysis of user metrics. These metrics can come from any application: Statsig is entirely agnostic to how the data was produced, and as long as it lives in your data warehouse, it can be used for test analysis.
Businesses have rich datasets about their customers in their warehouses, going far beyond basic clickstream metrics. Leveraging your data warehouse for analysis allows you, for example, to understand how an email campaign impacts customer revenue and to segment the results during analysis.
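To make that concrete, here is a minimal sketch of the kind of join this enables, written in plain pandas rather than any particular platform's API. The table and column names (assignments, purchases, user_id, test_group, revenue) are hypothetical stand-ins for data you would already have in your warehouse: one table of campaign test assignments exported from your marketing platform, and one of downstream purchase events.

```python
import pandas as pd

# Hypothetical export of campaign test assignments (e.g., synced from your
# marketing platform into the warehouse).
assignments = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "test_group": ["control", "treatment", "control", "treatment"],
})

# Hypothetical purchase events that already live in the warehouse.
purchases = pd.DataFrame({
    "user_id": [1, 2, 2, 4],
    "revenue": [20.0, 35.0, 15.0, 60.0],
})

# Join assignments to downstream revenue; users with no purchases count as $0.
revenue_per_user = (
    assignments
    .merge(purchases, on="user_id", how="left")
    .fillna({"revenue": 0.0})
    .groupby(["user_id", "test_group"], as_index=False)["revenue"]
    .sum()
)

# Compare average revenue per user across test groups.
print(revenue_per_user.groupby("test_group")["revenue"].mean())
```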
A very common use case with Warehouse Native is incorporating customer cohorts into the analysis, such as spend segments (high, medium, low). Instead of just understanding a topline click-through metric per test group (all that marketing tools report), you can also see how the campaign impacted revenue and how your customer spend segments behaved in response to it.
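Continuing the sketch above, a hypothetical spend-segment table can be joined in so the same revenue comparison is broken out by cohort instead of reported as a single topline number; the segment labels here are made up for illustration.

```python
# Hypothetical customer spend segments maintained in the warehouse.
segments = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "spend_segment": ["high", "medium", "low", "high"],
})

# Break the per-user revenue out by test group *and* spend segment.
segmented = revenue_per_user.merge(segments, on="user_id", how="left")
print(
    segmented
    .groupby(["spend_segment", "test_group"])["revenue"]
    .mean()
    .unstack("test_group")
)
```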