This lets you use your existing events and metrics with Statsig’s experimentation engine. We’re launching with the ability to sync data from Snowflake, BigQuery, Redshift, and Databricks, and we’re excited to add more as needed.
As a Statsig user, you will be able to use our powerful stats engine and console experience on top of your existing data, giving you experiment results, feature gate measurement, and diagnostics on the events and metrics your team is already using.
Statsig is a full-stack platform for product experimentation and observability. Alongside experimentation and feature gate tooling, our SDK provides logging that lets customers track product performance without needing any other tools.
While our SDK provides this powerful suite of tools for logging and analyzing events and metrics, many companies already have a well-established data organization and rely on internal datasets to track and measure their products. Our users have expressed that recreating and validating complex or critical metrics can be tedious.
We have existing tools to import metrics and events from data warehouses, but they put the onus on customers to create datasets in specific formats and to manage the scheduling of imports. The process was manual, created many points of failure, and offered no easy way for customers to fix or backfill data once it had been imported to Statsig.
Based on these pain points, we built the new approach with the following goals in mind:
Quick and easy setup: Once you have your connection details on hand, it takes less than five minutes to get started
Set it and forget it: We’ll take care of keeping import data in sync, and proactively look for and report any issues with your data in Statsig
Keep things consistent: We’ll treat your imported data just the same as SDK data, materializing into experiment results, creating tracking datasets, and eventually allowing you to explore it in tools like Events Explorer.
Here’s what you can do:
In your Statsig metrics page, you’ll be able to find the new “Ingestions” tab
Here, you can give us connection information for one of the supported data warehouses
You’ll give us a SQL snippet that provides a view of your base metric or event data. This can be as simple as a SELECT * from your existing table!
In the console, you’ll be able to map your existing fields into Statsig fields
Once that’s done you can preview the data we’ll pull, set an ingestion schedule, and optionally load some recent historical data to get started.
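As a sketch, the SQL snippet you provide could be as simple as the following. The table and column names here are purely illustrative, not a required schema — you map your own fields to Statsig fields in the console:

```sql
-- Illustrative only: your table and column names will differ.
-- Statsig runs this query against your warehouse on each scheduled ingestion.
SELECT
  user_id,          -- the unit ID used to join against experiment exposures
  event_timestamp,  -- when the event occurred
  event_name,       -- e.g. 'purchase'
  event_value       -- optional numeric value, e.g. purchase amount
FROM analytics.purchase_events
```

Because it’s just a query, you can filter, rename, or derive columns in the snippet itself rather than reshaping the underlying table.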
We’ll do the work to make sure your data is synced and reflects your source of truth. Some of this work includes:
Pulling and processing your data on your chosen schedule
Re-syncing data everywhere in the console when we notice a change from what we previously loaded
Providing notifications and alerts if there are issues with your ingestion so you can quickly address any problems with the connection setup
(Fast follow) supporting self-service backfills, so you can fix broken source data or retroactively add metrics to your experiment results
Here at Statsig, we are on a mission to empower your experimentation culture by making data more accessible.
We’re really excited about this new phase in how you can use Statsig. There’s always more work to do, and we are always happy to hear — and act on — your feedback to help you grow with Statsig.
The docs are here. Give it a try on Statsig, and let us know what you think!