Statsig Warehouse Native now gives you a bird's-eye view of the compute time that experiment analysis incurs in your warehouse. Break this down by experiment, metric source, or type of query to find what to optimize.
Common questions we've designed the dashboard to address include:
Which Metric Sources take the most compute time? (useful for focusing optimization effort; see tips here)
What is the split of compute time between full loads, incremental loads, and custom queries?
How is compute time distributed across experiments? (useful for making sure the value realized and the compute costs incurred are roughly aligned)
You can find this dashboard in the Left Nav under Analytics -> Dashboards -> Pipeline Overview.

This dashboard is built using Statsig Product Analytics, so you can customize any of these charts or build new ones yourself. A favorite is to add your average compute cost so you can turn slot time per experiment into dollar cost per experiment.
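If your warehouse bills at a flat rate per slot-hour, that conversion is a one-liner. Here's a minimal sketch with illustrative numbers (the rate and per-experiment slot hours below are placeholders, not real data):

```python
# Rough sketch: turning slot time per experiment into dollar cost.
# Assumes a flat price per slot-hour; adjust to your warehouse's pricing model.
PRICE_PER_SLOT_HOUR = 0.04  # hypothetical USD rate

slot_hours_by_experiment = {
    "checkout_redesign": 120.0,
    "onboarding_email_test": 35.5,
}

cost_by_experiment = {
    name: round(hours * PRICE_PER_SLOT_HOUR, 2)
    for name, hours in slot_hours_by_experiment.items()
}
print(cost_by_experiment)  # {'checkout_redesign': 4.8, 'onboarding_email_test': 1.42}
```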
Use Case
Imagine you’re analyzing user behavior across segments like browser types, referral sources, or search terms. Applying a group-by might return a long list of groups, many with minimal impact on your metrics. This volume of data can make it difficult to focus on the most significant segments.
Why It’s Important
By setting a limit on the number of groups displayed, you can reduce clutter and concentrate on the segments that matter most. This helps you avoid distractions from less impactful data points and enables you to focus on meaningful insights that can inform your decisions.
What It Does
When applying a group-by in your charts, you can now specify a limit on the number of groups returned. The groups are sorted by the highest value of the metric you're analyzing, so you'll see only the top-performing segments. To use the feature, click the "..." in the group-by section and select "Add Group-By Limit".
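Conceptually, the limit behaves like a top-N over the grouped metric. Here's a small sketch with hypothetical data, just for intuition (not Statsig's internals):

```python
import pandas as pd

# Hypothetical events: revenue broken down by referral source.
events = pd.DataFrame({
    "referral_source": ["google", "bing", "newsletter", "twitter", "partner_x", "partner_y"],
    "revenue":         [12000,    450,    3800,         2100,      90,          60],
})

# A group-by limit of 3 keeps only the top 3 groups by the metric being analyzed.
top_groups = (
    events.groupby("referral_source")["revenue"]
    .sum()
    .nlargest(3)
)
print(top_groups)  # google, newsletter, twitter
```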

Power Analysis is a critical input into experiment duration when you care about trustworthy experiments. When you perform Power Analysis for an experiment, the analysis is now automatically attached to the experiment and available to other reviewers. When you start Power Analysis for an experiment, we'll prepopulate any Primary Metrics you've already configured on the experiment.
This feature is rolling out to Statsig Cloud and Warehouse Native customers over the next week.
Experiment Setup Screen

Starting Power Analysis from an Experiment

Alongside latest-value metrics, we're also announcing First-Value metrics. These allow you to see the value from the first record a user logged while exposed to an experiment. Imagine being able to track first purchase value, first subscription plan price, or a user's first time-to-load on a new page.
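For intuition, a first-value metric is roughly "the value of the first qualifying event at or after exposure, per user." A minimal sketch with made-up data (not Statsig's implementation):

```python
import pandas as pd

# Hypothetical exposures and purchases; all names and values are illustrative.
exposures = pd.DataFrame({
    "user_id": ["a", "b"],
    "exposure_time": pd.to_datetime(["2024-06-01 10:00", "2024-06-02 09:00"]),
})
purchases = pd.DataFrame({
    "user_id": ["a", "a", "b"],
    "time": pd.to_datetime(["2024-06-01 12:00", "2024-06-03 08:00", "2024-06-02 18:00"]),
    "value": [19.99, 4.99, 49.00],
})

# First purchase value per user, counting only purchases made after exposure.
first_values = (
    purchases.merge(exposures, on="user_id")
    .query("time >= exposure_time")
    .sort_values("time")
    .groupby("user_id")["value"]
    .first()
)
print(first_values)  # a -> 19.99, b -> 49.00
```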
Learn more in our documentation.
We’re adding the ability to log-transform sum and count metrics, and measure the average change to unit-level logged values.
Log Transforms are useful when you want to understand whether user behavior has generally changed. If a metric is very head-driven, even with winsorization and CUPED the metric movement will generally be driven by power users.
Logs measure multiplicative change, so a user going from spending $1.00 to spending $1.10 shows the same "metric lift" as another going from $100 to $110. This means that what log metrics measure is closer to shifts in relative distribution than to topline value.
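A quick worked example of that equivalence, using the natural log:

```python
import math

# A user going from $1.00 to $1.10 ...
lift_small = math.log(1.10) - math.log(1.00)   # ~0.0953

# ... shows the same logged lift as a user going from $100 to $110.
lift_large = math.log(110) - math.log(100)     # ~0.0953

# Log differences depend only on the ratio: log(b) - log(a) = log(b / a).
assert math.isclose(lift_small, lift_large)
```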
Because of this divorce from “business value,” log metrics are usually not good evaluation criteria for ship decisions, but alongside evaluation metrics, they can easily provide rich context on the change in the distribution of your population.
By default, the transform is the Natural Log, but you can specify a custom base if desired.
Learn more in our documentation.
We launched latest value metrics for user Statuses in March this year, and just extended support to numerical metrics. This will be useful for teams that want to track how experiments impact the “state” of their userbase.
You could already track subscription status, but now you can track users’ current balance, lifetime spend, or LTV - without duplicating the data across multiple different days. Each day in the pulse time-series will reflect the latest value as of that day.
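For intuition, a "latest value as of each day" series simply carries the most recent logged value forward. A minimal sketch with made-up data (not Statsig's internal representation):

```python
import pandas as pd

# A user updates their balance on two days; the daily series carries the
# most recent value forward so every day reflects the latest known state.
updates = pd.Series(
    [50.0, 72.5],
    index=pd.to_datetime(["2024-06-01", "2024-06-04"]),
    name="balance",
)
daily_latest = updates.reindex(pd.date_range("2024-06-01", "2024-06-07")).ffill()
print(daily_latest)
# 2024-06-01    50.0
# 2024-06-02    50.0
# 2024-06-03    50.0
# 2024-06-04    72.5
# ... and 72.5 on every day through 2024-06-07
```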
Learn more in our documentation.
We’ve added group-by functionality to retention charts, enabling you to break down your retention analysis by various properties and gain deeper insights into user behavior. This feature allows you to segment your retention data across event properties, user properties, feature gate groups, and experiment variants.
Group-By in retention charts is available for:
Event and User Properties: Break down retention by event and user properties such as location, company, or other context about an event or feature.
Feature Gate Groups: Understand retention among different user groups gated by feature flags.
Experiment Variants: Compare retention across experiment groups to see how different variants impact user retention.
Expanded support for group-by in retention charts is rolling out today.
Cohort analysis is now supported across all chart types in Metrics Explorer. Previously available only in drilldown charts, this feature allows you to filter your analysis to specific user cohorts or compare how different groups perform against various metrics.
Filtering to a cohort of interest is supported across all chart types; add a single cohort to your analysis. Cohort comparison is available in metric drilldown, funnel, and retention charts; add multiple cohorts to your analysis to compare them.
What’s New
Expanded Support: Cohort filtering is now integrated into funnels, retention charts, user journeys, and distribution charts.
Detailed Comparisons: You can compare how different cohorts, such as casual users and power users, navigate through funnels like the add-to-cart flow.
Focused Analysis: Easily scope your analysis to understand how specific user groups perform, helping you identify patterns and behaviors unique to each cohort.
Expanded support for cohort analysis will begin rolling out today.
We’ve updated how Statsig processes events received from Segment to help you gain deeper insights without additional effort on your part. Now, when you send events from Segment into Statsig, we automatically extract and include extra properties such as UTM parameters, referrer information, and page details like URL, title, search parameters, and path.
By leveraging data you’re already collecting with Segment, you can:
Gain More Value Without Extra Work: Utilize the enriched data immediately, increasing the context available for your analysis without any additional implementation.
Analyze Marketing Campaigns More Effectively: Filter events by specific UTM parameters to assess which marketing campaigns drive the most engagement or conversions.
Understand User Acquisition Channels: Use referrer information to see where your users are coming from, helping you optimize outreach and partnerships.
Dive Deeper into User Behavior: Examine page-level details to understand how users interact with different parts of your site or app, allowing you to identify areas that perform well or need improvement.
These improvements make it easier to perform detailed analyses in Metrics Explorer, enabling you to make informed decisions based on comprehensive event data—all from the data you’re already sending through Segment.
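For reference, these fields come from the context block Segment already attaches to its events; a simplified payload might look like the sketch below (values are illustrative):

```python
# Simplified sketch of a Segment event payload (illustrative values).
# Segment's standard context block carries UTM parameters (context.campaign)
# and page details (context.page); Statsig now pulls these into event
# properties automatically.
segment_event = {
    "type": "track",
    "event": "Signup Completed",
    "userId": "user_123",
    "context": {
        "campaign": {  # UTM parameters
            "source": "newsletter",
            "medium": "email",
            "name": "spring_launch",
        },
        "page": {  # page details
            "url": "https://example.com/pricing?plan=pro",
            "title": "Pricing",
            "path": "/pricing",
            "search": "?plan=pro",
            "referrer": "https://news.ycombinator.com/",
        },
    },
}
```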
Funnels in Metrics Explorer now complete in half the time, so you spend less time waiting and more time analyzing your data. With faster results, you can iterate more quickly, explore user behaviors efficiently, and make timely, data-driven decisions.