Following up from the Statsig project-level compute summary, we've also added an experiment-level compute summary, available in Experiment Diagnostics. Out of the box, it lets you look at compute utilization by job type or metric source. This is helpful for isolating situations where a low-value metric consumes a disproportionate share of compute. When you find this, see our guide on optimizing costs.
You can now connect multiple Snowflake warehouses to your account, enabling better query performance by automatically distributing query jobs across all available warehouses. To set it up, head over to Settings > Project > Data Connection and select Set up additional Warehouses.
When you schedule multiple experiments to be loaded at the same time, Statsig will distribute these queries across the provided warehouses to reduce contention. Spreading queries across compute clusters can often be faster and cheaper(!) when contention causes queries to be backed up.
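To illustrate the scheduling idea (the warehouse names and the round-robin policy below are hypothetical; Statsig's actual distribution logic may differ), here is a sketch of spreading queued experiment loads across connected warehouses:

```python
from itertools import cycle

# Hypothetical pool of connected warehouses (names are illustrative).
warehouses = ["wh_small_1", "wh_small_2", "wh_small_3"]

# Seven experiment loads scheduled at the same time.
jobs = [f"experiment_{i}" for i in range(7)]

# Round-robin assignment: zip stops when the job list is exhausted,
# so each warehouse receives roughly an equal share of the queue.
assignments = list(zip(jobs, cycle(warehouses)))
for job, wh in assignments:
    print(job, "->", wh)
```

Round-robin is the simplest contention-reducing policy; the benefit comes from no single warehouse holding the entire queue while the others sit idle.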
We have a beta of intelligent Autoscaling in the works. Reach out in Slack if you'd like to try it!
Use Case
When you need a quick, at-a-glance summary of a key metric, having a single, prominent value can provide immediate insight. Whether you’re monitoring yesterday’s user sign-ups or the total revenue over the past month, a headline figure helps you stay informed without diving into detailed charts.
Why It’s Important
Single Value views allow you to focus on the most critical data points instantly. This feature is especially useful on dashboards, where quick visibility into key metrics supports faster decision-making and keeps your team aligned on important performance indicators.
The Feature: What It Does
You can now directly select Single Value as a widget type when adding items to your Dashboards, making it easier to showcase key metrics prominently without additional configuration.
In addition, within Metric Drilldown, you can choose the Single Value view to display your metric as a headline figure. This feature offers:
Latest Full Data Point: View the most recent complete data point (e.g., yesterday’s total sales or user activities).
Overall Value for Time Range: See the cumulative or average value over the entire selected time range, providing a broader perspective on your metric.
Comparison Options: Select a comparison period to see absolute and percentage changes over time, helping you understand trends and growth.
By incorporating Single Value views into your dashboards and analyses, you can highlight essential metrics at a glance, enabling you and your team to stay updated with minimal effort.
Use Case
When analyzing event data, you often need to understand the cumulative impact of your metrics over time. For example:
“How many times has this feature ever been used?”
“How many distinct people have ever used this feature?”
“What is the total revenue generated up to this point?”
Why It’s Important
Viewing metrics as a cumulative sum provides valuable insights into long-term trends and overall growth. It helps you track feature adoption, user engagement, and total impact over time, enabling more informed decision-making.
The Feature: What It Does
In Metric Drilldown, after selecting an event and choosing an aggregation method—such as Event Count, Uniques, Average of Property Value, etc.—you can now apply the Cumulative Sum option to your results. This feature accumulates your selected metric over time, providing a running total in your charts.
When the metric aggregation is set to Uniques, you have two options for calculating the cumulative sum:
Distinct Uniques
What it does: Counts each unique user or unit only once in the cumulative total, regardless of how many times they appear in subsequent time periods.
Use Case: Answers “How many distinct people have ever used this feature?” by providing a deduplicated cumulative count.
Total Uniques
What it does: Counts each occurrence of a user or unit every time they appear, allowing them to be counted multiple times in the cumulative total.
Use Case: Helps you understand “What is the total number of unique user engagements over time, including repeat users?” This provides insight into recurring user activity across different periods.
For other aggregation types:
Event Count: The cumulative sum shows the total number of events over time, helping you track overall engagement.
Average of Property Value: Accumulates average values over time, useful for metrics like cumulative revenue or total session duration.
Sum of Property Value: Accumulates the sum of a chosen property value from your events, useful for questions like "What is the total revenue generated up to this point?" or “What is the cumulative sum of this property over time?” by providing the total accumulated value.
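The Distinct vs. Total Uniques distinction can be sketched in a few lines of Python (the daily user sets below are made-up illustrative data, not a Statsig API):

```python
from itertools import accumulate

# Hypothetical daily active-user sets for a feature.
daily_users = [
    {"a", "b"},        # day 1
    {"b", "c"},        # day 2
    {"a", "c", "d"},   # day 3
]

# Total Uniques: each day's unique count is added to the running total,
# so repeat users are counted again in later periods.
total_uniques = list(accumulate(len(day) for day in daily_users))

# Distinct Uniques: each user contributes only once, ever.
seen = set()
distinct_uniques = []
for day in daily_users:
    seen |= day
    distinct_uniques.append(len(seen))

print(total_uniques)     # [2, 4, 7]
print(distinct_uniques)  # [2, 3, 4]
```

Note how user "b" appears on days 1 and 2: Total Uniques counts them twice, while Distinct Uniques counts them once.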
By enabling the Cumulative Sum option, you can transform your metric analyses to capture total impact over time, providing a comprehensive view that supports deeper insights into your product’s performance.
Statsig Cloud launched with user accounting metrics - including retention. We’re now matching this capability with highly flexible Retention Metrics in Warehouse Native. For insight into why we think this matters, check out our blog post!
Retention metrics allow you to calculate rolling daily retention from one event/user-day status to itself, or to another if desired. The time window retention is measured in is fully customizable: for example, you can measure the % of users who retain into the last 3 days of the next week, exactly 14 days out, or any time in the next two weeks.
This allows you to directly track if features designed to make your product more interesting, enjoyable, or stickier over time are working, instead of trying to divine this from some combination of “DAU” and “users active at 7/14/28 days from exposure”.
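As a sketch of the calculation (the user IDs, activity days, and helper function below are hypothetical; Statsig computes this in your warehouse), windowed retention over a cohort active on day 0 looks like:

```python
# Days (relative to day 0) on which each user was active.
activity_days = {
    "u1": {0, 3, 13},
    "u2": {0, 14},
    "u3": {0, 2},
    "u4": {0, 12, 20},
}

def retention(users, start, end):
    """Share of day-0 users with any activity in days [start, end]."""
    cohort = [days for days in users.values() if 0 in days]
    retained = sum(1 for days in cohort
                   if any(start <= d <= end for d in days))
    return retained / len(cohort)

# Days 12-14 approximate "the last 3 days of the next week".
print(retention(activity_days, 12, 14))  # 0.75
```

Changing `start` and `end` is all it takes to express "exactly 14 days out" (14, 14) or "any time in the next two weeks" (1, 14).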
This class of metrics is critical for growth teams focused on growing their userbase; Lenny’s Newsletter published a fantastic piece on how Duolingo used retention metrics to measure and drive long-term install and revenue growth.
Check out the docs, and try it out in Warehouse Native today!
We're excited to announce the beta launch of SCIM (System for Cross-domain Identity Management) support on Statsig! This initial release focuses on seamless integration with Okta for efficient user provisioning and role assignment in your Statsig projects. For more information, visit the Statsig Docs.
This is an Enterprise-only feature. If you would like to enroll in the open beta and enable SCIM for your organization, please reach out to us!
User Provisioning: Automatically create and manage user accounts in Statsig based on Okta identities.
Role Assignment: Easily assign and manage user roles through Okta, ensuring consistent access controls.
Streamlined User Management: Simplify the onboarding process with automated account creation and updates.
Enhanced Security: Centralized identity governance reduces the risk of unauthorized access by ensuring accurate role assignments.
Improved Efficiency: Save time and reduce errors with automated workflows, allowing your team to focus on higher-priority tasks.
Scalability: Easily manage user identities as your organization grows, without the hassle of manual interventions.
This enhancement streamlines user management and improves security by centralizing identity governance. Stay tuned for more updates as we expand SCIM support in the future!
Over the last couple of months, we have seen an influx in the usage of our Dynamic Configs product. We heard from our customers that they would like to create templates for Dynamic Configs that can be re-used across their team or organization. Templates have always existed on Statsig for Feature Gates and Experiments, and now we have extended this feature to Dynamic Configs as well!
Dynamic Configs is a tool for changing application settings in real time without requiring a restart or redeployment of the application. This allows developers to control operational settings such as performance tuning, resource scaling, and other configurations on the fly.
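A minimal sketch of the pattern (all names and values here are illustrative, not Statsig's API): application code merges remotely fetched settings over local defaults, so a config change takes effect without a redeploy:

```python
# Hard-coded fallbacks the app ships with.
DEFAULTS = {"max_connections": 50, "cache_ttl_seconds": 60}

def apply_config(fetched: dict) -> dict:
    """Merge remotely fetched settings over local defaults."""
    return {**DEFAULTS, **fetched}

# Pretend this payload just arrived from the config service.
settings = apply_config({"max_connections": 200})
print(settings["max_connections"])    # 200 (remote override)
print(settings["cache_ttl_seconds"])  # 60 (local default)
```

Because every read goes through one accessor with safe defaults, operators can tune behavior in production while the app keeps running even if a key is missing.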
Templates enable you to create a blueprint for Dynamic Configs to enable standardization and reusability across your project. Templates can help enforce a standard practice, or make it easy for new configs to get up & running. Templates can be enforced at the Org (via Organization Settings and Role-based Access Controls) or at the Team-level.
Statsig Warehouse Native now gives you a bird's-eye view of the compute time experiment analysis incurs in your warehouse. Break this down by experiment, metric source, or type of query to find what to optimize.
Common questions we've designed the dashboard to address include:
Which Metric Sources take the most compute time? (useful for focusing optimization effort; see our optimization tips)
What is the split of compute time between full loads, incremental loads, and custom queries?
How is compute time distributed across experiments? (useful for checking that the value realized and the compute cost incurred are roughly aligned)
You can find this dashboard in the left nav under Analytics -> Dashboards -> Pipeline Overview.
This is built using Statsig Product Analytics - you can customize any of these charts, or build new ones yourself. A favorite is to add in your average compute cost, so you can turn slot time per experiment into $ cost per experiment.
Power Analysis is a critical input into experiment duration when you care about trustworthy experiments. When you perform Power Analysis for an experiment, the analysis is now automatically attached to the experiment and available to other reviewers. When you start Power Analysis for an experiment, we'll prepopulate any Primary Metrics you've already configured on the experiment.
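As a reminder of what a power analysis computes, here is a minimal sketch of the standard two-sided z-test sample-size formula (this is generic statistics, not Statsig's exact implementation, and the example numbers are illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_group(sigma: float, mde: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per group to detect an absolute difference `mde`
    in a metric with standard deviation `sigma` (two-sided z-test)."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / mde) ** 2
    return math.ceil(n)

# e.g. detecting a 0.1 absolute lift in a metric with sigma = 1.0:
print(sample_size_per_group(sigma=1.0, mde=0.1))  # 1570
```

Dividing the required sample size by expected daily exposures is what turns a power analysis into an experiment duration.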
This feature is rolling out to Statsig Cloud and Warehouse Native customers over the next week.
Experiment Setup Screen
Starting Power Analysis from an Experiment
Alongside latest-value metrics, we’re also announcing First-Value metrics. These allow you to see the value from the first record a user logged while exposed to an experiment. Imagine being able to track first purchase value, first subscription plan price, or time-to-load on a user's first visit to a new page.
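Conceptually (the event data and field names below are made up for illustration; Statsig computes this from your exposure and event logs), a first-value metric keeps only the earliest record each user logged after their exposure:

```python
# (user, timestamp, value) tuples, e.g. purchase events.
events = [
    ("u1", 3, 9.99),
    ("u1", 7, 49.99),
    ("u2", 5, 19.99),
]
# When each user was first exposed to the experiment.
exposure_time = {"u1": 1, "u2": 4}

# Scan events in time order, keeping only each user's first
# post-exposure value.
first_value = {}
for user, ts, value in sorted(events, key=lambda e: e[1]):
    if ts >= exposure_time[user] and user not in first_value:
        first_value[user] = value

print(first_value)  # {'u1': 9.99, 'u2': 19.99}
```

Note that u1's later $49.99 purchase is ignored: unlike a latest-value metric, only the first post-exposure record counts.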
Learn more in our documentation