Late last week, we launched an additional logstream on the “Metrics Catalog” tab within “Metrics” to provide more visibility and easier debugging for pre-computed metrics being ingested via our API or one of our integrations (Snowflake, Redshift, BigQuery, etc.). Note: this additional logstream will only show up if you're ingesting pre-computed metrics.
This is the first part of a multi-step project to improve our pre-computed metrics ingestion experience, from setup through to ongoing usage and debugging. Stay tuned for a slew of improvements in the coming weeks… (and if you have feedback on this process or specific pain points, don't hesitate to ping me directly!)
As usage of the Statsig platform grows within teams, we’re seeing more and more first-time experiment creators. To support them, we’ve improved the “Setup Checklist” in the “Setup” tab of each experiment. The new checklist lets you test your experiment variants inline using ID-based overrides, and preview your experiment allocations as they will appear in Production before you even start your experiment.
Note that the new checklist is entirely optional and can be collapsed by pro experimenters who have been around the block a few times.