We're excited to announce the beta launch of SCIM (System for Cross-domain Identity Management) support on Statsig! This initial release focuses on seamless integration with Okta for efficient user provisioning and role assignment in your Statsig projects. For more information, visit the Statsig Docs.
This is an Enterprise-only feature. If you would like to enroll in the open beta and enable SCIM for your organization, please reach out to us!
User Provisioning: Automatically create and manage user accounts in Statsig based on Okta identities.
Role Assignment: Easily assign and manage user roles through Okta, ensuring consistent access controls.
Streamlined User Management: Simplify the onboarding process with automated account creation and updates.
Enhanced Security: Centralized identity governance reduces the risk of unauthorized access by ensuring accurate role assignments.
Improved Efficiency: Save time and reduce errors with automated workflows, allowing your team to focus on higher-priority tasks.
Scalability: Easily manage user identities as your organization grows, without the hassle of manual interventions.
This enhancement streamlines user management and improves security by centralizing identity governance. Stay tuned for more updates as we expand SCIM support in the future!
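Once the integration is configured, Okta drives provisioning for you behind the scenes. For context, a SCIM 2.0 user-provisioning request generally looks like the rough sketch below; the base URL, bearer token, and attribute values are placeholders for illustration, not Statsig's actual SCIM endpoint.

```python
import requests

# Placeholder endpoint and token for illustration only; the real values come
# from your organization's SCIM configuration in Statsig and Okta.
SCIM_BASE_URL = "https://example.com/scim/v2"
TOKEN = "your-scim-bearer-token"

# A minimal SCIM 2.0 User resource, the kind of payload an identity provider
# like Okta sends when it provisions a new user.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada@example.com",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "emails": [{"value": "ada@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE_URL}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["id"])  # SCIM id assigned to the newly provisioned user
```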
Over the last couple of months, we have seen a surge in usage of our Dynamic Configs product. We heard from customers that they would like to create templates for Dynamic Configs that can be re-used across their team or organization. Templates have always existed on Statsig for Feature Gates and Experiments, and now we have extended this feature to Dynamic Configs as well!
Dynamic Configs let you change application settings in real time without restarting or redeploying your application. This allows developers to adjust operational settings, such as performance tuning or resource scaling, on the fly.
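As a rough illustration of what this looks like in application code, here is a minimal sketch using the Statsig Python server SDK; the config name and parameter names are hypothetical, chosen just for this example.

```python
from statsig import statsig, StatsigUser

# Initialize once at application startup with your server secret key.
statsig.initialize("server-secret-key")

# "pipeline_tuning", "batch_size", and "timeout_ms" are made-up names.
user = StatsigUser(user_id="user-123")
config = statsig.get_config(user, "pipeline_tuning")

# Read parameters with sensible defaults; updating them in the Statsig
# console changes behavior without a restart or redeploy.
batch_size = config.get("batch_size", 100)
timeout_ms = config.get("timeout_ms", 5000)

statsig.shutdown()
```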
Templates let you create a blueprint for Dynamic Configs, bringing standardization and reusability to your project. Templates can help enforce a standard practice or make it easy for new configs to get up and running. They can be enforced at the Org level (via Organization Settings and Role-based Access Controls) or at the Team level.
Statsig Warehouse Native now gives you a bird's-eye view of the compute time that experiment analysis incurs in your warehouse. Break this down by experiment, metric source, or query type to find what to optimize.
Common questions we've designed the dashboard to address include:
Which Metric Sources take the most compute time? (a useful place to focus optimization effort; see our tips)
What is the split of compute time between full loads, incremental loads, and custom queries?
How is compute time distributed across experiments? (useful for making sure the value realized and the compute costs incurred are roughly aligned)
You can find this dashboard in the Left Nav under Analytics -> Dashboards -> Pipeline Overview
This is built using Statsig Product Analytics, so you can customize any of these charts or build new ones yourself. A favorite customization is to add your average compute cost, so you can turn slot time per experiment into dollar cost per experiment.
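As a rough sketch of that conversion, with placeholder experiment names, slot-hours, and rate (pull the real figures from the dashboard and your warehouse billing):

```python
# Placeholder slot-time per experiment (hours) and price per slot-hour.
slot_hours_by_experiment = {
    "checkout_redesign": 42.0,
    "new_onboarding_flow": 18.5,
}
cost_per_slot_hour = 0.06  # hypothetical rate in dollars

# Convert slot time per experiment into dollar cost per experiment.
for experiment, slot_hours in slot_hours_by_experiment.items():
    print(f"{experiment}: ${slot_hours * cost_per_slot_hour:,.2f}")
```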
Power Analysis is a critical input into experiment duration when you care about trustworthy experiments. When you perform Power Analysis for an experiment, the analysis is now automatically attached to the experiment and available to other reviewers. When you start Power Analysis from an experiment, we'll prepopulate any Primary Metrics you've already configured on the experiment.
This feature is rolling out to Statsig Cloud and Warehouse Native customers over the next week.
Experiment Setup Screen
Starting Power Analysis from an Experiment
Use Case
Imagine you’re analyzing user behavior across segments like browser types, referral sources, or search terms. Applying a group-by might return a long list of groups, many with minimal impact on your metrics. This volume of data can make it difficult to focus on the most significant segments.
Why It’s Important
By setting a limit on the number of groups displayed, you can reduce clutter and concentrate on the segments that matter most. This helps you avoid distractions from less impactful data points and enables you to focus on meaningful insights that can inform your decisions.
What It Does
When applying a group-by in your charts, you can now specify a limit on the number of groups returned. The groups are sorted by the highest value of the metric you're analyzing, so you'll see only the top-performing segments. To use the feature, click the "..." in the group-by section and select "Add Group-By Limit".
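Conceptually, the limit works like aggregating a metric by a property, sorting the groups by value, and keeping only the top N. Here is a rough pandas sketch of the same idea, with made-up column names and data:

```python
import pandas as pd

# Toy event-level data; "browser" and "revenue" are hypothetical columns.
events = pd.DataFrame({
    "browser": ["Chrome", "Safari", "Firefox", "Chrome", "Edge", "Safari"],
    "revenue": [12.0, 7.5, 3.0, 9.0, 1.0, 4.5],
})

# Aggregate by the group-by property and keep only the top 3 groups by
# metric value, mirroring what "Add Group-By Limit" does in the chart.
top_groups = events.groupby("browser")["revenue"].sum().nlargest(3)
print(top_groups)
```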
Alongside latest value metrics, we’re also announcing First-Value metrics. These let you see the value from the first record a user logged while exposed to an experiment. Imagine being able to track first purchase value, first subscription plan price, or time-to-load on a user’s first visit to a new page.
Learn more in our documentation.
We’re adding the ability to log-transform sum and count metrics, and measure the average change to unit-level logged values.
Log Transforms are useful when you want to understand whether user behavior has generally changed. If a metric is very head-driven, even with winsorization and CUPED, the metric movement will generally be driven by power users.
Logs are multiplicative, so a user going from spending $1.00 to spending $1.10 contributes the same “metric lift” as another going from $100 to $110. This means that what log metrics measure is closer to shifts in the relative distribution than to topline value.
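Concretely, ln(1.10) - ln(1.00) = ln(1.10/1.00) = ln(1.1) ≈ 0.095, and ln(110) - ln(100) = ln(110/100) = ln(1.1) ≈ 0.095, so both users contribute exactly the same lift on the log scale.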
Because of this divorce from “business value,” log metrics are usually not good evaluation criteria for ship decisions, but alongside evaluation metrics, they can easily provide rich context on the change in the distribution of your population.
By default, the transform is the Natural Log, but you can specify a custom base if desired.
Learn more in our documentation.
We launched latest value metrics for user Statuses in March of this year, and we’ve just extended support to numerical metrics. This will be useful for teams that want to track how experiments impact the “state” of their userbase.
You could already track subscription status; now you can also track users’ current balance, lifetime spend, or LTV, without duplicating the data across multiple days. Each day in the pulse time-series will reflect the latest value as of that day.
Learn more in our documentation.
We’ve added group-by functionality to retention charts, enabling you to break down your retention analysis by various properties and gain deeper insights into user behavior. This feature allows you to segment your retention data across event properties, user properties, feature gate groups, and experiment variants.
Group-By in retention charts is available for:
Event and User Properties: Break down retention by event and user properties such as location, company, or other context about an event or feature.
Feature Gate Groups: Understand retention among different user groups gated by feature flags.
Experiment Variants: Compare retention across experiment groups to see how different variants impact user retention.
Expanded support for group-by in retention charts is rolling out today.
Cohort analysis is now supported across all chart types in Metrics Explorer. Previously available only in drilldown charts, this feature allows you to filter your analysis to specific user cohorts or compare how different groups perform against various metrics.
Filtering to an interesting cohort is supported across all chart types and can be accomplished by adding a single cohort to your analysis. Cohort comparison is available in metric drilldown, funnel, and retention charts and can be accomplished by adding multiple cohorts to your analysis.
What’s New
Expanded Support: Cohort filtering is now integrated into funnels, retention charts, user journeys, and distribution charts.
Detailed Comparisons: You can compare how different cohorts, such as casual users and power users, navigate through funnels like the add-to-cart flow.
Focused Analysis: Easily scope your analysis to understand how specific user groups perform, helping you identify patterns and behaviors unique to each cohort.
Expanded support for cohort analysis will begin rolling out today.