Customer lifecycle and marketing automation platforms like Braze, Marketo, Salesforce Marketing Cloud, and HubSpot offer native A/B testing capabilities that empower marketers to design and run experiments on their customers.
Here are links to the relevant A/B test documentation for these providers: (Braze, SFMC, Marketo, HubSpot).
While these platforms provide the essential tools for configuring, designing, and launching email and push notification experiments, their measurement and analysis features are barebones.
The A/B test measurement capabilities these platforms offer lack the sophistication businesses need to confidently understand the impact of their tests and make data-driven decisions.
This is where Statsig comes in. It allows customers to apply the rigor of experimentation analysis to both the simple engagement metrics associated with these campaigns and downstream business metrics that take place later in the customer journey.
Most marketing platforms provide simple analytics that focus on engagement metrics, such as email opens and click-through rates.
However, these tools don’t incorporate metrics from subsequent phases in the journey, including web and mobile app interactions, purchase behavior, and other business outcomes. This gap can lead to a fragmented view of campaign success and make it difficult for marketers to understand the true impact of their experiments below the surface.
You can do better than this! 👇🏼
Statsig’s Warehouse Native platform is uniquely positioned to sit on top of the data associated with your marketing campaigns and provide deep analysis of user metrics. These metrics can be derived from any application: Statsig is entirely agnostic to how the data was produced. As long as it lives in your data warehouse, it can be used for test analysis.
Businesses have rich datasets about their customers in their warehouses, going well beyond basic clickstream metrics. Leveraging your data warehouse for analysis lets you, for example, understand how an email campaign impacts customer revenue and segment results during analysis.
A very common warehouse-native use case is incorporating customer cohorts into the analysis, such as spend segments (high, medium, low). Instead of just a topline click-through rate per test group (the limit of most marketing tools), you can also see how the campaign impacted revenue and how each customer spend segment responded to it.
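To make the idea concrete, here is a minimal sketch of that segment-level cut, in plain Python. The dataset and column names (`group`, `spend_segment`, `revenue`) are illustrative assumptions, not a Statsig schema: imagine each row is a user joining the marketing platform's exposure log to revenue already sitting in your warehouse.

```python
from collections import defaultdict

# Hypothetical joined dataset: test-group assignment from the marketing
# platform, spend segment and revenue from the data warehouse.
# (user_id, group, spend_segment, revenue) -- illustrative values only.
rows = [
    (1, "control", "high",   120.0),
    (2, "test",    "high",   150.0),
    (3, "control", "medium",  40.0),
    (4, "test",    "medium",  55.0),
    (5, "control", "low",      5.0),
    (6, "test",    "low",      8.0),
]

def mean_revenue(rows, key):
    """Average revenue per bucket, where `key` maps a row to its bucket."""
    totals = defaultdict(lambda: [0.0, 0])
    for row in rows:
        bucket = key(row)
        totals[bucket][0] += row[3]  # sum of revenue
        totals[bucket][1] += 1       # user count
    return {k: s / n for k, (s, n) in totals.items()}

# Topline: mean revenue per test group -- the revenue analogue of the
# single per-group number a marketing tool reports for click-through.
topline = mean_revenue(rows, key=lambda r: r[1])

# Segment-level: mean revenue per (segment, group), so you can see
# which spend cohorts actually moved as a result of the campaign.
by_segment = mean_revenue(rows, key=lambda r: (r[2], r[1]))

# Relative lift for the high-spend cohort: test vs. control.
lift_high = by_segment[("high", "test")] / by_segment[("high", "control")] - 1
print(f"topline: {topline}")
print(f"high-spend lift: {lift_high:.0%}")  # 150/120 - 1 = 25%
```

In practice this aggregation happens as SQL inside your warehouse rather than in application code, and Statsig layers statistical significance testing on top of the per-group comparisons; the sketch only shows the shape of the cut.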