Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products in general.
Statsig FAQs
GENERAL

Does Statsig double count events with the same UserId and eventName across experiments?

Date of Slack thread: 4/26/24

Anonymous: Hi team, I have two experiments that use the same metric (built from the same source event). We send the conversion event from the same page, and the UserId is the same in both cases, so I’m not sure whether the Statsig report double counts when I send the same event twice for the same UserId (the only difference is a user attribute). For example: {UserId: test_1, VisitPageA: 1} -> passes the gate and enters Experiment A; {UserId: test_1, VisitPageB: 1} -> passes the gate and enters Experiment B.

On the conversion page, if UserId test_1 has entered both Experiment A and Experiment B, I plan to send the event twice: for Experiment A, {eventName: conversion, UserId: test_1, VisitPageA: 1}; for Experiment B, {eventName: conversion, UserId: test_1, VisitPageB: 1}. My concern is that the reports for Experiments A and B will double count because the eventName and UserId are the same.
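To make the setup concrete, here is a minimal sketch of how the two conversion events described above might be logged, assuming the statsig-node server SDK and its logEvent(user, eventName, value, metadata) signature; the secret key and metadata values are illustrative, not taken from the thread.

```typescript
import Statsig from "statsig-node";

// Illustrative server secret key; replace with your own.
await Statsig.initialize("secret-YOUR_SERVER_KEY");

// Same UserId and same event name in both cases; only the metadata differs.
const user = { userID: "test_1" };

// Conversion logged on Experiment A's page
Statsig.logEvent(user, "conversion", null, { VisitPageA: "1" });

// Conversion logged on Experiment B's page
Statsig.logEvent(user, "conversion", null, { VisitPageB: "1" });

// Flush queued events before the process exits.
Statsig.shutdown();
```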

Statbot: When a user sends multiple instances of the same event, Statsig aggregates them together, even if their metadata differs. This means that if you send the conversion event twice for the same UserId, Statsig will count both events. However, you can use metadata and values to define custom metrics in the console, which can help differentiate the events in your reporting. For example, you could include VisitPageA or VisitPageB in the metadata to distinguish the conversions for Experiment A from those for Experiment B. To avoid double counting in your reports, make sure the events are logged with the appropriate metadata so they can be correctly attributed to each experiment. If you’re still concerned about potential double counting, review how you’re using metadata and values to differentiate the events in your experiments.
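Alongside metadata, the event’s value field is another hook that console-defined custom metrics can filter or group on. A short sketch under the same statsig-node assumption as above; the value strings are hypothetical.

```typescript
import Statsig from "statsig-node";

const user = { userID: "test_1" };

// Same event name, tagged with a value string that a custom metric defined
// in the console could filter or group on (value names are hypothetical).
// Assumes Statsig.initialize(...) has already been called, as in the sketch above.
Statsig.logEvent(user, "conversion", "visit_page_a");
Statsig.logEvent(user, "conversion", "visit_page_b");
```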

Anonymous: Because I also set up the gates based on the custom attribute, my expectation is that when I send the event with {UserId: test_1, VisitPageA: 1}, it should only pass gate A and count toward the report for Experiment A.

Vijaye (Statsig): If the same user enters Experiment A and Experiment B, and you send two events, both events will be counted toward Experiment A and Experiment B. Events are completely agnostic of the experiments you run; we just count the number of events per day and use that for the calculation. If you want to dedupe by user, use the Event DAU metric instead of the Event Count metric.

Anonymous: I see, thanks Vijaye. We’ll find another way to handle it, e.g. using a different event for each experiment.
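For completeness, here is a sketch of the workaround the poster settles on: a distinct event name per experiment, so a metric sourced from one event cannot pick up the other experiment’s conversions. Same statsig-node assumption as above; the event names are illustrative.

```typescript
import Statsig from "statsig-node";

const user = { userID: "test_1" };

// One event name per experiment (names are hypothetical), so each experiment's
// metric only ever sees its own conversions.
// Assumes Statsig.initialize(...) has already been called, as in the first sketch.
Statsig.logEvent(user, "conversion_exp_a", null, { VisitPageA: "1" });
Statsig.logEvent(user, "conversion_exp_b", null, { VisitPageB: "1" });
```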

Statbot: This thread has been marked as resolved.
