We kicked off the new year with our first virtual meetup, hosted by John Wilke with guests Craig Sexauer and Pierre Estephan. This new format offers insight into how the Statsig team thinks about incorporating experimentation into project plans, for anyone who considers themselves a builder. If you weren’t able to join us live, catch the on-demand conversation below!
Product teams are responsible for hitting aggressive annual KPI targets, often with limited resources. Tune in for an inside look at why scaling and building with experimentation is essential to meeting demanding product targets.
Statsig Data Scientist Craig Sexauer and Engineering Manager Pierre Estephan covered:
How to pick the “right” metrics
Why measuring all product changes matters
Why smaller, scrappy experiments are crucial in H2 to prioritize product features
What insights long-term Holdouts offer that a single experiment does not
Enjoy this on-demand viewing, and we hope you can join us live in the future!

Morgan Scalzo
Community and Event Manager, Statsig