Over the last couple of months, our customer conversations about AI experimentation have increased dramatically (as you may have guessed), so we decided now is the time to dive deep into how experimentation can benefit the development of AI features.
Enjoy this on-demand viewing and we hope you can join us live in the future!
Got AI on the brain? Statsig can help you run experiments on AI apps, using our recent launch of Statbot. During this Learning Lab we'll teach you how to record important model inputs and outputs, such as prompts, model choices, cost, and latency.
Additionally, we'll provide some tips on how to measure your application's performance and interpret the results to make data-driven decisions.
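To give a flavor of what that instrumentation can look like, here is a minimal Python sketch that wraps an LLM call and records the prompt, model choice, cost, and latency as event metadata. The pricing table, model name, and `log_model_event` helper are illustrative assumptions rather than Statsig's actual SDK; in practice you would forward the same payload to your experimentation platform's event-logging API.

```python
import time

# Hypothetical pricing table (USD per 1K tokens); real rates depend on your provider.
PRICING = {"gpt-4o-mini": {"input": 0.00015, "output": 0.0006}}


def log_model_event(user_id: str, metadata: dict) -> None:
    """Illustrative stand-in for an event-logging call to your
    experimentation platform (e.g., an SDK's log_event method)."""
    print(f"log_event user={user_id} metadata={metadata}")


def call_and_record(client, user_id: str, model: str, prompt: str) -> str:
    """Call the model, then record prompt, model choice, cost, and latency."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency_ms = (time.perf_counter() - start) * 1000

    # Estimate cost from token usage reported by the provider.
    usage = response.usage
    price = PRICING.get(model, {"input": 0.0, "output": 0.0})
    cost_usd = (
        usage.prompt_tokens / 1000 * price["input"]
        + usage.completion_tokens / 1000 * price["output"]
    )

    log_model_event(user_id, {
        "prompt": prompt,
        "model": model,
        "latency_ms": round(latency_ms, 1),
        "cost_usd": round(cost_usd, 6),
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
    })
    return response.choices[0].message.content
```

Logging these fields per request is what lets you later compare cost, latency, and quality metrics across model or prompt variants in an experiment.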