Guide to onboarding with Statsig

Tue Jan 13 2026

New to Statsig and not sure where to begin? You’re not alone.

Most teams have a similar set of questions when they get started: Where do I begin? What do I set up first? How do I know it’s working?

Onboarding is not just about implementation. It is about putting the right foundations in place and enabling the broader team to use Statsig with confidence. Closing the gap between “we installed it” and “we’re getting value” takes clarity in sequencing, ownership, and success criteria.

This guide outlines a simple three-step approach we’ve seen work across teams as they successfully onboard onto Statsig.

Step 1: Set up your tooling in Statsig

Getting set up correctly is the best way to avoid confusion later. The goal is not to configure every feature on day one (this is where some of our customers get overwhelmed), but to establish a baseline foundation so teams can ship safely, measure impact, and trust the results. To successfully set up your Statsig Console, we recommend the following three steps:

1. Install & initialize the Statsig SDK

Configure your SDK, define your user object and targeting attributes, and validate assignment using testing and diagnostics.

When deciding between client and server SDKs, pick the one that best matches your system’s architecture. You can always layer in more advanced configuration later as usage scales.
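
To make this concrete, here is a minimal server-side sketch using the statsig-node SDK. The environment variable, user fields, and gate name are placeholders, and the exact API surface varies by SDK and version, so treat this as a sketch rather than copy-paste setup; the docs for your language are the source of truth.

```typescript
// Minimal statsig-node sketch: initialize, define a user, validate an assignment.
// STATSIG_SERVER_SECRET and 'new_checkout_flow' are placeholder names.
import Statsig from 'statsig-node';

async function main() {
  // Initialize once at process startup with your server secret key
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? '');

  // The user object carries a stable ID plus any attributes you target on
  const user = {
    userID: 'user-123',
    email: 'jane@example.com',
    custom: { plan: 'pro' },
  };

  // Validate assignment: evaluate a gate and inspect the result
  const enabled = await Statsig.checkGate(user, 'new_checkout_flow');
  console.log('new_checkout_flow enabled:', enabled);

  // Flush queued events and exposures before the process exits
  await Statsig.shutdown();
}

main().catch(console.error);
```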

2. Connect your events and metrics to Statsig

Next, connect the data you’ll use to measure impact. Confirm that key events and metric inputs are flowing into Statsig correctly, and validate the setup end-to-end before expanding usage to a broader team.

If you’re using Statsig Warehouse Native, this is also the point where you’ll connect your warehouse and define a metric catalog. If you’re cloud-based, you’ll focus on event ingestion and validation.
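
For cloud-based setups, event ingestion can look something like the sketch below (again with statsig-node; the event name and metadata fields are hypothetical). The validation step that matters is confirming these events show up in the console's metrics and diagnostics views.

```typescript
// Hypothetical event-logging sketch with statsig-node.
// 'purchase_completed' and its metadata fields are example names only.
import Statsig from 'statsig-node';

function recordPurchase(userID: string, amountUsd: number) {
  // logEvent(user, eventName, value, metadata): the value feeds numeric
  // metrics, and metadata supports slicing in product analytics
  Statsig.logEvent({ userID }, 'purchase_completed', amountUsd, {
    currency: 'USD',
    surface: 'checkout_page',
  });
}
```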

3. Set organization and project settings

Finally, set up basic organization and project structure so teams can work consistently as usage grows, especially if multiple teams will be shipping in Statsig.

Teams that establish naming conventions, templates, and review policies early tend to stay organized and enforce consistent practices as experimentation scales. This is also where you’ll configure things like SSO, member access, teams, projects, and governance.

What "set up" looks like

You’re set up when your SDKs are installed and correctly configured, your events and metrics are connected, and you’ve validated each step with testing and diagnostics. At that point, your team is ready to run experiments, ship behind feature gates, and/or use product analytics with confidence.

Recommended resources

To make adoption easy beyond initial setup, we recommend starting with Statsig University.

Statsig University is Statsig’s dedicated onboarding and training hub. It features step-by-step walkthrough videos and practical how-tos from the team building the product. Hundreds of Statsig users have taken these courses to onboard faster and standardize setup knowledge across their teams, reducing dependence on just a few experts.

  • Primary resource: Statsig University - Setting Up Statsig

    This course is intended for the Statsig implementation owner. It walks through the three setup steps outlined above, including SDK setup, data connection, and organization settings, so you can confirm your tooling is working end-to-end before bringing a broader group of teammates into the Statsig platform.

  • Supporting reference: Statsig Docs

    Use the docs for detailed written instructions, SDK references, configuration options, and edge cases, especially as you expand to more advanced setups over time.


Step 2: Get your teams enabled on Statsig

Once setup is in place, the next unlock is adoption: getting the broader team shipping, measuring, and making decisions in Statsig without relying on a single expert. To enable your teams effectively, focus on the following three steps:

1. Train the broader team on core Statsig console workflows

Ensure new users can confidently navigate the console and complete the core product workflows end-to-end: creating feature gates and experiments, configuring targeting and measurement, reading results, and using diagnostics to build trust in outcomes. The goal is for day-to-day shipping and decision-making to be unblocked without funneling everything through a single Statsig expert. To make this repeatable, Statsig University offers a dedicated Product Onboarding course with modules for each core product. This lets everyone learn the workflows on their own time while still starting from the same shared foundation.
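
As one hedged illustration of where those console workflows meet code, here is what reading an experiment parameter can look like with statsig-node (the experiment and parameter names are made up). Everything else in the workflow, creating the experiment, choosing metrics, and reading results, happens in the console around a call like this:

```typescript
// Hypothetical sketch of reading an experiment parameter in code.
// 'onboarding_flow' and 'cta_copy' are stand-in names for illustration.
import Statsig from 'statsig-node';

async function getCtaCopy(userID: string): Promise<string> {
  // getExperiment returns the user's assigned variant as a config object;
  // .get() reads a parameter with a safe default for unassigned users
  const experiment = await Statsig.getExperiment({ userID }, 'onboarding_flow');
  return experiment.get('cta_copy', 'Get started');
}
```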

2. Align on shared workflows and measurement standards

To give teams a consistent understanding of what “good” looks like, experimentation and product development leads should define clear requirements, then share them broadly. This includes how to ship behind feature gates, when to run an experiment versus a rollout, which metrics to use, and how results should be interpreted. A small amount of upfront alignment prevents confusion and mismatched readouts later. For example, one of our customers, Mirage (formerly Captions), set a simple standard early on: every code change ships behind a feature gate or experiment.
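
In code, that standard is a small pattern, sketched below with statsig-node (the gate name and both code paths are hypothetical): the new behavior runs only when the gate passes, and the existing behavior stays as the fallback.

```typescript
// Hypothetical "ship behind a gate" pattern; 'new_search_ranking' and the
// two ranking functions are stand-ins for a real code change.
import Statsig from 'statsig-node';

function newRanking(query: string): string[] {
  return [`new-results-for:${query}`]; // the change being shipped
}

function legacyRanking(query: string): string[] {
  return [`old-results-for:${query}`]; // existing behavior as the fallback
}

async function search(user: { userID: string }, query: string) {
  // Gate the new code path; rolling back is a console toggle, not a deploy
  const useNew = await Statsig.checkGate(user, 'new_search_ranking');
  return useNew ? newRanking(query) : legacyRanking(query);
}
```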

3. Reinforce adoption with lightweight governance and ongoing support

As usage grows, establish simple norms that keep teams consistent, such as naming conventions, templates, and experiment review practices, plus a clear support path for questions. This helps adoption spread while maintaining quality and trust.

What “enabled” looks like

You’re enabled when the broader team can navigate the Statsig console confidently, understand where tools live, and run the core product workflows end-to-end. Runna is a strong example of what scaled adoption can look like. As Meehir from Runna put it: “Experimentation has become self-serve now, and teams ship features behind feature gates by default.”

Recommended resources

To make team adoption repeatable, we recommend using Statsig University as the default training path so everyone builds the same foundation in Statsig’s core workflows.

  • Primary resource: Statsig University — Product Onboarding

    This course is designed for the broader team. It walks through the Statsig console and core product areas, including experiments, feature gates, and product analytics, helping teams build a shared mental model of the workflows available. You don’t need to take every module; prioritize the ones most relevant to your team as you get started.

    • Module 1: Experiments deep dive — Create and validate experiments, incorporate stats methodologies, ensure experiments are set up correctly, and analyze results

    • Module 2: Feature gates deep dive — Configure targeting rules, environments, overrides, and rollout schedules, as well as monitoring and gate health

    • Module 3: Product analytics deep dive — Dashboards, funnels, journeys, retention, and drilldowns to understand behavior and measure impact

    • Module 4: Statsig best practices — Collaboration, governance, and decision-making patterns for scaling experimentation and feature delivery

  • Supporting resources

    • Statsig University Resource Library: Bite-sized guides, videos, and references for common Statsig topics and workflows that are useful for sharing best practices asynchronously

    • Statsig Docs: Written documentation for deeper technical details, configuration options, and edge cases

    • Statsig Community Slack: A place to ask questions, sanity-check setups, and learn how other teams run experiments and rollouts. As Omar from Lime put it: “We’ve taken a lot of advice on experimentation from the Slack channel and that has helped us coach engineers on the team to run better experiments.”

Step 3: Measure your onboarding success

Onboarding is easier to manage when success is clearly defined. Broad goals like “get the team enabled” are hard to act on. Clear success criteria help to align expectations, prioritize work, and make progress visible.

Effective onboarding goals answer a few concrete questions: How many people should be using Statsig? What should they be able to do on their own? What decisions should they be comfortable making within a given time frame? Writing these down turns onboarding into a set of milestones the team can work toward.

Onboarding success in practice

One pattern we’ve seen work well is focusing on small, well-scoped wins early. For example, within a month of onboarding, Bed Bath & Beyond, Inc. ran five targeted experiments that led to measurable improvements in user engagement and conversion. These weren’t major redesigns or high-risk changes; they were focused experiments on specific parts of the customer experience, backed by clear hypotheses and prior learnings.

When we work with teams as they onboard, we recommend anchoring these milestones to a simple 30/60/90-day framework. Here is an example of what these milestones can look like:

[Image: 30/60/90 day framework]

Final thoughts

Successful onboarding looks different for every team. Some start with experiments only, while others adopt a broader set of tools. What matters is having a clear path that gets you from setup to insights, with teams who know how to ship, measure, and make decisions confidently.

By working through these three steps, you put the right foundations in place: reliable tooling, enabled teams, and clearly defined success criteria. From there, onboarding shifts into a steady, repeatable way of working rather than a one-time project.

If you’d like support along the way, you can access our courses and guides anytime at learn.statsig.com, or connect with our team in the Statsig Community Slack.

We look forward to seeing what your team builds with Statsig!


