Customer lifecycle and marketing automation platforms like Braze, Marketo, Salesforce Marketing Cloud, and HubSpot offer native A/B testing capabilities that empower marketers to design and run experiments on their customers.
Here are links to the relevant A/B test documentation for each provider: Braze, SFMC, Marketo, and HubSpot.
While these platforms provide the essential tools for configuring, designing, and launching email and push notification experiments, they offer only barebones tools for measuring and analyzing the results.
These measurement capabilities lack the sophistication businesses need to confidently understand the impact of their tests and make data-driven decisions.
This is where Statsig comes in. It allows customers to apply the rigor of experimentation analysis both to the simple engagement metrics associated with these campaigns and to downstream business metrics captured later in the customer journey.
Most marketing platforms provide simple analytics that focus on engagement metrics, such as email opens and click-through rates.
However, these tools don’t incorporate metrics from subsequent phases of the journey, such as web and mobile app interactions, purchase behavior, and other business outcomes. This gap can lead to a fragmented view of campaign success and make it difficult for marketers to understand the true impact of their experiments.
You can do better than this! 👇🏼
Statsig’s Warehouse Native Platform is uniquely positioned to sit on top of the data associated with your marketing campaigns and provide deep analysis of user metrics. These metrics can be derived from any application: Statsig is entirely agnostic to how the data was produced. As long as it lives in your data warehouse, it can be used for test analysis.
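To make that concrete, the only real requirement is a shared user identifier between the campaign tool’s exposure records and whatever metric tables already live in your warehouse. Here is a minimal sketch of that data contract; the names and fields are hypothetical illustrations, not Statsig’s actual schema:

```python
# Hypothetical sketch of the two kinds of warehouse tables needed for analysis.
# The only hard requirement is that both share the same user identifier.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CampaignExposure:
    """One row per user per campaign, exported from the marketing platform."""
    user_id: str          # must match the ID used in your warehouse tables
    campaign_id: str      # e.g. the campaign or canvas identifier
    variant: str          # "control", "treatment_a", ...
    exposed_at: datetime  # when the email or push was sent/delivered


@dataclass
class WarehouseMetricEvent:
    """Any downstream event your business already lands in the warehouse."""
    user_id: str          # same identifier, so the two sources can be joined
    metric_name: str      # e.g. "purchase_revenue", "app_session"
    value: float
    occurred_at: datetime
```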
Businesses have rich datasets about their customers in their warehouses, going well beyond basic clickstream metrics. Leveraging your data warehouse allows you, for example, to understand how an email campaign impacts customer revenue and to segment results during analysis.
A very common warehouse-native use case is incorporating customer cohorts into the analysis, such as spend segments (high, medium, low). So instead of just a topline click-through metric per test group, which is all the marketing tools give you, you can also understand how the campaign impacted revenue and how each of your spend segments responded to it.
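As an illustration of what that segmented analysis might look like once the assignment export and warehouse data are joined, here is a minimal sketch in pandas. The data, column names, and the Welch’s t-test are assumptions for illustration, not Statsig’s implementation:

```python
import pandas as pd
from scipy import stats

# Variant assignments exported from the marketing platform (hypothetical data).
assignments = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "variant": ["control", "treatment", "control", "treatment",
                "control", "treatment", "control", "treatment"],
})

# Downstream outcomes and cohorts already in the warehouse (hypothetical data).
outcomes = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "revenue_28d": [0.0, 5.0, 4.0, 9.0, 20.0, 32.0, 1.0, 40.0],
    "spend_segment": ["low", "low", "medium", "medium",
                      "high", "high", "low", "high"],
})

# Join campaign exposure to business outcomes on the shared user ID.
df = assignments.merge(outcomes, on="user_id", how="left")

# Topline revenue impact: treatment vs. control, with a Welch's t-test.
control = df.loc[df.variant == "control", "revenue_28d"]
treatment = df.loc[df.variant == "treatment", "revenue_28d"]
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"topline lift: {treatment.mean() - control.mean():.2f}, p={p_value:.3f}")

# Per-cohort breakdown: how each spend segment responded to the campaign.
for segment, group in df.groupby("spend_segment"):
    c = group.loc[group.variant == "control", "revenue_28d"]
    t = group.loc[group.variant == "treatment", "revenue_28d"]
    print(f"{segment}: lift={t.mean() - c.mean():.2f} (n={len(group)})")
```

In practice you would read these tables from the warehouse with SQL rather than construct them inline; the point is that once assignments and outcomes share a user ID, any downstream metric can be analyzed per variant and per cohort.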