We are happy to announce that you can now do math with your metrics! Metrics Explorer now features new formula capabilities, empowering you to delve deeper into your data with ease.
With this new functionality, you can instantly create and visualize dynamic combinations and transformations of one or more metrics. Whether you're calculating event frequency per user, exploring the relationship between different metrics, or looking for a logarithmic perspective on your data, these insights are now just a few clicks away.
Our enhanced formulas include basic mathematical operations, logarithmic functions, square roots, and more (see our full range here). Moreover, you can now effortlessly add trendlines to your analysis, and any metric in your query can seamlessly integrate as a variable in your formula.
Ready to transform your data analysis? Simply hover over the “+” sign in the Metrics Explorer to start adding and experimenting with formulas.
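To illustrate the kinds of combinations formulas can express, here is a minimal Python sketch of two such transformations: a per-user ratio and a log transform. The metric names and values are purely hypothetical, and this is not the Metrics Explorer formula syntax itself.

```python
import math

# Hypothetical daily metric series (illustrative values only)
daily_events = [1200, 1500, 900]  # e.g. count of a "checkout" event per day
daily_users = [300, 500, 300]     # e.g. daily active users

# Event frequency per user: the formula A / B, applied pointwise
events_per_user = [e / u for e, u in zip(daily_events, daily_users)]

# A logarithmic transform, log(A), for a log-scale perspective
log_events = [math.log(e) for e in daily_events]

print(events_per_user)  # [4.0, 3.0, 3.0]
```

In Metrics Explorer, each metric in your query becomes a variable you can reference in the formula, so the same ratio or transform is computed for you over the queried time range.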
We’re thrilled to announce the launch of a new, interactive “Summary” tab for Experiments. With Experiment Summary, you can collect all implementation details and the final metric lift results in one place, note down team discussion and action items, and create an enduring artifact of all the learnings your team is taking away from your recently-run Experiment.
You can add to a draft of your Experiment Summary at any point while the experiment is running; once a decision has been made, the Experiment Summary becomes the default tab. You can also export your Experiment Summary to a PDF to share with the broader team.
Today, we’re starting to roll out a set of improvements to the Power Analysis Calculator. Here's what's changing:
The new Power Analysis Calculator is a full-blown hub for creating, storing, and looking up previous power analysis calculations.
Qualifying event audience generation:
Now you can use an event as a qualifying threshold to define the audience you would like to run a power analysis on. For example, if you’re an ecommerce company planning a checkout experiment, you could use a “tap_checkout” event to define the audience you want to calculate power over.
We’ve introduced a new “Past Analyses” tab in the Power Analysis Calculator, where all previous calculations will live. You can rename these analyses for easy lookup and collaboration, and view the results inline (as well as tweak parameters like MDE and target allocation inline without submitting a new calculation). Each past analysis has a Share Link for easy sharing with your team!
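For context on what a power analysis computes: given a baseline conversion rate, a minimum detectable effect (MDE), and significance/power targets, it estimates how many users each group needs. Here is a minimal sketch using the standard two-proportion normal-approximation formula; the function name and parameters are our own illustration, not Statsig's actual implementation.

```python
import math
from statistics import NormalDist

def sample_size_per_group(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate users needed per group for a two-proportion z-test.

    baseline_rate: control conversion rate, e.g. 0.10
    mde_relative:  smallest relative lift you want to detect, e.g. 0.05 (5%)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 5% relative lift on a 10% baseline takes tens of
# thousands of users per group at 80% power:
print(sample_size_per_group(0.10, 0.05))
```

Smaller MDEs drive the required sample size up quadratically, which is why being able to store and revisit past calculations is handy when iterating on experiment designs.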
Today we’re rolling out changes that will make it easier to discover and consume product insights from Dashboards. Now you can take advantage of all the power of our main analytics feature, Metrics Explorer, in Dashboards as well.
Sometimes, after drilling into your metrics in Metrics Explorer you may want to save and share the results of those enlightening moments, and consolidate them in one view. Now, you have the ability to save charts from Metrics Explorer onto an existing or new dashboard. From any chart in Metrics Explorer, click the “…” and select the option to Export to Dashboard.
Insight, curiosity, and inspiration don't stop once a Dashboard has been created. Starting today, you can continue analyzing the data from any of your newly saved charts, straight from a Dashboard. Charts saved to a dashboard from Metrics Explorer offer the same power and flexibility as those in Metrics Explorer. You can modify queries to examine things from a different perspective and, if desired, update the existing chart or create a new one.
Today, we’re excited to start rolling out an easy way to export a shareable summary of your experiment via PDF.
To export a PDF of your experiment summary, go to the Pulse tab in your finished experiment, tap Export, and select Experiment Summary PDF. Your PDF summary will contain:
Key Setup Information, such as hypothesis, actual vs. target duration, primary/secondary metrics, experiment variants (with group descriptions and images), etc.
Results Overview, such as a snapshot of your experiment’s Pulse results, experiment settings (CUPED enabled, etc.), and granular metric-by-metric raw stats.
In the future, we’ll also be adding a surface for experiment decision-makers to add more free-form recap text, to provide future viewers of this experiment with additional helpful context.
Stay tuned for continued updates on this surface! And in the meantime, let us know if you have any feedback or feature requests.
We’re starting to roll out a new way to visualize your Pulse metric lifts inline within your Scorecard.
You can now choose to visualize your Pulse results in “Cumulative” view (the default), “Daily” view, or “Days Since Exposure” view, and switch between them via a new toggle inline within your Pulse view controls.
Check it out and let us know what you think, or read more deeply about Pulse in our docs here.
To help keep your Metrics Catalog streamlined and current, we’re launching automated metric archival. Any metric that has been inactive for the last 60 days will automatically be scheduled for archival, with the option for metric owners to extend the deadline or mark a metric as permanent.
We've just started rolling out Experiment Policy controls to customers with Enterprise contracts. Configure sensible defaults for experiment settings like Bayesian vs. Frequentist analysis and confidence intervals, or optionally even enforce them. Find it under Organization Settings ➜ Settings ➜ Experiment Settings.
Experimentation best practice dictates that an experiment should have a highly targeted set of metrics that you’re actively trying to move, along with a broader swath of metrics you’re monitoring to ensure you don’t regress.
Today, we’re adapting our Scorecard to reflect this best practice, and putting in place some smart limits on the Scorecard—max 10 Primary Metrics and 40 Secondary Metrics. Coming soon will be the ability for Enterprise customers to specify an even tighter limit on Scorecard metrics via their Org Settings if desired.
One bonus implication of these limits is that we’re auto-expanding tagged metric groups, making it even easier to see (and manage) all the individual metrics being added to your Scorecard when you add a metric tag.
Let us know if you have any feedback or questions on this change!
Quickly see where a feature gate or experiment is referenced in your source code to get context for how it is being used. Simply enable GitHub Code References to see this light up!