Product Updates

We help you ship faster. And we walk the walk.

⏰ Time Period Comparison in Funnels

You can now compare conversion funnels across different time periods. Select a comparison range (1, 7, or 28 days ago) and view a side-by-side comparison of each funnel step against the selected period.

This feature allows you to observe how product changes impact user behavior over time. By comparing different periods, you can easily identify trends, assess the effectiveness of recent changes, and make data-driven decisions to improve your funnel strategy.

Time period comparisons are available in all funnel views including Conversion Rates, Time to Convert, and Conversion Rate over Time.
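Under the hood, a period-over-period comparison amounts to computing the same per-step conversion rates over a window shifted back by the chosen offset. A minimal Python sketch of the idea (an illustration, not Statsig's implementation; the event shape and the unordered-step simplification are assumptions):

```python
from datetime import date, timedelta

def funnel_conversion(events, steps, start, end):
    """Per-step conversion vs. step 1 for events inside [start, end).
    events: (user_id, step_name, event_date) tuples. Steps are treated
    as unordered here for brevity; a real funnel enforces step order."""
    users = {s: set() for s in steps}
    for user, step, day in events:
        if step in users and start <= day < end:
            users[step].add(user)
    base = len(users[steps[0]]) or 1
    return [len(users[s]) / base for s in steps]

def compare_periods(events, steps, start, end, offset_days):
    """The same funnel for the current window and the window shifted back."""
    delta = timedelta(days=offset_days)
    current = funnel_conversion(events, steps, start, end)
    previous = funnel_conversion(events, steps, start - delta, end - delta)
    return current, previous
```

Comparing the two returned lists step by step is exactly the side-by-side view described above.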

[Image: Funnel Time Comparison]
Akin Olugbade
Product Manager, Statsig
9/19/2024

📊 Distribution Analysis of Event Property Values

You can now analyze distributions for any numerical property on an event. This removes the limitation of only allowing distribution analysis on the default “Value” property, giving you the flexibility to explore and visualize distributions across diverse numerical properties such as session length, purchase amounts, or any other numerical property associated with specific events.

This refinement allows for a comprehensive view of the distribution’s shape, going beyond specific percentiles like p90. This broader perspective is useful for identifying significant points within the distribution, helping you detect trends, pinpoint anomalies, and address potential issues more effectively.
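The mechanics are simple to picture: pull the chosen numeric property out of each matching event, bucket the values into a histogram, and read off any percentile you care about. A minimal sketch under assumed event shapes (the dict layout and nearest-rank percentile are illustrative choices, not Statsig's internals):

```python
import math
from collections import Counter

def percentile(values, p):
    """Nearest-rank percentile (0 < p <= 100) of a non-empty list."""
    s = sorted(values)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

def property_distribution(events, event_name, prop, bucket_size):
    """Histogram of a numeric event property, bucketed by bucket_size."""
    values = [e["properties"][prop] for e in events
              if e["name"] == event_name and prop in e["properties"]]
    hist = Counter((v // bucket_size) * bucket_size for v in values)
    return dict(sorted(hist.items())), values
```

The full histogram is what lets you see the distribution's shape rather than a single summary point like p90.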

[Image: Distribution of Property Values]
Akin Olugbade
Product Manager, Statsig
9/19/2024

% Percentage-Based Metric Grouping

We’re excited to announce a new feature that makes it easier to understand metrics in context. You can now view metrics grouped by an event property and expressed as a percentage of the total metric value, available in both bar charts and time series line charts.

This update allows you to quickly gauge the proportionate impact of different segments or categories within your overall metrics. For instance, you can now see what percentage of total sales each product category represents over time, or what portion of total user sessions specific events constitute.

By presenting data in percentages, this feature simplifies comparative analysis and helps you focus on the relative significance of different data segments.
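The computation behind this view is each group's share of the per-period total. A small sketch of both the bar-chart and time-series cases (the data shapes are assumptions for illustration):

```python
def percent_of_total(metric_by_group):
    """Each group's share of the total, in percent (bar-chart case)."""
    total = sum(metric_by_group.values())
    return {g: (100.0 * v / total if total else 0.0)
            for g, v in metric_by_group.items()}

def percent_series(series_by_group):
    """Per-timestamp percentage share for a grouped time series.
    series_by_group: {group: {timestamp: value}}"""
    timestamps = sorted({t for s in series_by_group.values() for t in s})
    out = {g: {} for g in series_by_group}
    for t in timestamps:
        total = sum(s.get(t, 0) for s in series_by_group.values())
        for g, s in series_by_group.items():
            out[g][t] = 100.0 * s.get(t, 0) / total if total else 0.0
    return out
```

Normalizing per timestamp is what makes shifts in relative share visible even when the absolute totals grow or shrink.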

Akin Olugbade
Product Manager, Statsig
9/19/2024

⏳ Funnels - Time to Convert Improvements

We’ve made some quality of life improvements to the Time to Convert view in Funnel charts.

  • We now indicate where the median time to convert falls on the distribution chart.

  • We support custom configuration of the conversion time window to examine. You can now adjust our automatically configured distribution chart by bounding it with a minimum and maximum conversion time, and set the granularity of your analysis by selecting an interval size.

Together, these quality-of-life improvements make it easier to understand the distribution of conversion times through your funnels, and to zoom in on specific areas of that distribution for a more granular view.
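Concretely, bounding the window and choosing an interval size is a matter of filtering conversion durations and re-bucketing them. A minimal sketch (durations in seconds and the upper-median shortcut are illustrative assumptions):

```python
from collections import Counter

def time_to_convert_hist(durations_s, t_min, t_max, interval):
    """Bucket conversion times (seconds) inside [t_min, t_max] into
    fixed-size intervals, and report the median of the kept values."""
    kept = sorted(d for d in durations_s if t_min <= d <= t_max)
    buckets = Counter(t_min + ((d - t_min) // interval) * interval for d in kept)
    median = kept[len(kept) // 2] if kept else None  # upper median, for brevity
    return dict(sorted(buckets.items())), median
```

Tightening `t_min`/`t_max` and shrinking `interval` is the "zoom in" described above: outliers drop out of the histogram and the remaining buckets get finer.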

[Image: Improved Time to Convert Config]
Shubham Singhal
Product Manager, Statsig
9/10/2024

Introducing Pause Assignment for Experiments

We're excited to launch Pause Assignment, a new decision type for experiments in Statsig. This feature allows you to halt new user enrollments in an experiment while continuing to analyze the results for previously exposed users.

Example Use-case

Pause Assignment offers flexibility in various scenarios. For instance, consider an e-commerce company running an experiment to measure the impact of discounts on repeat purchases. With a limited budget, you may need to cap the number of discount offers. However, measuring long-term effects on repeat purchases requires ongoing analysis for an extended period. Pause Assignment addresses this challenge by allowing you to stop user enrollment once you've reached your budget limit while maintaining result analysis to assess the impact on repeat purchases.

Implementation in Statsig

To implement Pause Assignment, simply select it from the Make Decision dropdown in your experiment interface. Note: this feature requires Persistent Assignment to be configured.
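The semantics are easy to model: users with a persisted assignment keep their variant, while new users stop being enrolled once the experiment is paused. A toy Python model of that behavior (a sketch of the concept only, not the Statsig SDK; the class and hashing scheme are hypothetical):

```python
import hashlib

class PausableExperiment:
    """Toy model of pause-assignment semantics: previously exposed users
    keep their persisted variant; once paused, new users are not enrolled,
    while analysis of existing users can continue."""

    def __init__(self, variants):
        self.variants = variants
        self.persisted = {}   # user_id -> variant ("Persistent Assignment")
        self.paused = False

    def get_variant(self, user_id):
        if user_id in self.persisted:
            return self.persisted[user_id]     # existing users keep their group
        if self.paused:
            return None                        # no new enrollments after the decision
        digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        variant = self.variants[digest % len(self.variants)]
        self.persisted[user_id] = variant
        return variant
```

This also shows why Persistent Assignment is a prerequisite: without a durable record of who was already exposed, there is nothing to keep analyzing after enrollment stops.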

[Image: pause-xp-assign]

For detailed information on Pause Assignment, please consult our Statsig Docs. We value your input and encourage you to share your feedback and suggestions for future enhancements.

Shubham Singhal
Product Manager, Statsig

Update email notification preferences in Statsig

Statsig users can now turn their email notifications on or off in the Statsig console settings. Simply go to the My Account page in Project Settings and update your preferences under the Notifications tab.

This is especially useful for teams who use Statsig very frequently and might want to turn off specific categories of emails to manage their inbox.

[Image: email-notifs]

We hope this helps you reduce clutter in your inbox while still allowing you to stay on top of the important aspects of your projects in Statsig. As always, we welcome your feedback and suggestions for further improvements.

Vineeth Madhusudanan
Product Manager, Statsig

ID Resolution++

A common problem in experimentation is trying to connect different user identifiers before or after some event boundary - most frequently signups. This often involves running an experiment where the unit of analysis is a logged-out identifier, but the evaluation criteria for the experiment is a logged-in metric (e.g. subscription rate, or estimated lifetime value).

Statsig has upgraded our ID resolution solution to handle multiple IDs attached to one user. In addition to the strict 1-1 mapping we already support, we now offer first-touch mapping to handle 1-many, many-1, and many-many mappings between different ID types. This is extremely flexible and enables use cases like:

  • handling logins across multiple devices

  • mapping users to metrics from different profiles or owned entities
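First-touch mapping has a simple core: for each source identifier, the first target identifier it was ever observed with wins, which collapses messy many-many raw data into a stable map. A minimal sketch of that rule (the pair-list input shape is an assumption for illustration, not Statsig's warehouse implementation):

```python
def first_touch_resolve(observations):
    """First-touch mapping from one ID type to another.
    observations: chronologically ordered (source_id, target_id) pairs;
    the first target seen for each source wins, so 1-many, many-1, and
    many-many raw data collapses to a stable source -> target map."""
    resolved = {}
    for source_id, target_id in observations:
        resolved.setdefault(source_id, target_id)
    return resolved
```

In the multi-device login case, several anonymous IDs simply resolve to the same logged-in user, so their metrics roll up to one unit of analysis.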

For more information about this feature, check out the documentation. This option is available to all Statsig Warehouse Native experimenters!

Akin Olugbade
Product Manager, Statsig
8/31/2024

🎛️ Global Dashboard Filters

We’ve introduced Global Dashboard Filters, a new feature that allows you to apply property filters across all charts on your dashboard at once. This makes it easier to scope your analysis to specific properties and other criteria, with the results reflected across every chart.

With Global Dashboard Filters, you can efficiently narrow down your analysis to explore insights from different angles, ensuring that each chart on your dashboard provides a consistent and relevant view of the data. This feature simplifies your workflow and helps you focus on the most important aspects of your analysis.
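Conceptually, a global filter is merged into every chart's own query before it runs. A small sketch of that merge (the chart/filter dict shapes and the chart-wins conflict rule are assumptions; Statsig's actual precedence may differ):

```python
def apply_global_filters(charts, global_filters):
    """Merge dashboard-level property filters into every chart's query.
    Chart-level filters win on conflict in this sketch."""
    return [{**chart, "filters": {**global_filters, **chart.get("filters", {})}}
            for chart in charts]
```

Because every chart receives the same dashboard-level filters, all of them stay scoped to a consistent slice of the data.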

[Image: Core metrics with a global dashboard filter applied]

Vineeth Madhusudanan
Product Manager, Statsig
8/30/2024

Knowledge Base

The KB acts as a searchable repository of experiment learnings across teams. It helps you find shipped, healthy experiments, gain context on past efforts, and generate ideas for new things to try.

It makes it easy for new teammates to explore and find experiments a team ran, or to see where a topic was mentioned. Our meta-analysis tools offer more structured ways to discover and look across your experiment corpus, but when you do want free-text search, this is the place.

[Image: Knowledge Bank]

Vineeth Madhusudanan
Product Manager, Statsig
8/29/2024

Meta-analysis/Metric Batting Average

(also updates to Experiment Timeline view)

The "batting average" view lets you look at how easy or hard a metric is to move. You can filter to a set of shipped experiments and see how many experiments moved a metric by 1% vs 10%. Like with other meta-analysis views, you can filter down to a team, a tag, or even whether results were statistically significant.

Common ways to use this include:

  • Sanity-checking the claim that the next experiment will move this metric by 15%.

  • Establishing reasonable goals based on your past ability to move this metric.
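The metric itself is just the fraction of shipped experiments whose absolute lift cleared each threshold. A minimal sketch of the computation (the fractional-lift input format is an assumption for illustration):

```python
def batting_average(lifts, thresholds=(0.01, 0.10)):
    """Share of experiments whose absolute metric lift met each threshold.
    lifts: fractional lifts from shipped experiments (0.02 == +2%)."""
    n = len(lifts)
    if n == 0:
        return {t: 0.0 for t in thresholds}
    return {t: sum(abs(l) >= t for l in lifts) / n for t in thresholds}
```

If only a small share of past experiments ever cleared 10%, a forecast of 15% for the next one deserves scrutiny.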


Experiment Timeline View

This view now features summary stats (e.g. how many experiments shipped control) so you don't have to manually tally them.

Experiment Timeline View
