Frequently Asked Questions

A curated summary of the top questions asked on our Slack community, often relating to implementation, functionality, and building better products generally.
GENERAL

Can you extend the scheduled run time window for queries beyond 28 days?

Date of Slack thread: 7/7/24

Anonymous: Hi! When creating a scheduled run for a query, the available options for the Time Window (Rolling) range from 1 to 28 days. This limitation poses a challenge as our experiments are expected to extend beyond 28 days. Could you consider adding more options for the Time Window? I suggest including an indefinite option that encompasses the entire duration of the experiment from initiation, which is typically what we need. By the way, in our case, using a query is essential because we cannot rely on the overall result. The overall result includes different groups, each with its own statistics, making the outcome dependent on the random proportion of each group in each experiment. Therefore, we can’t rely on it. Thanks!

Jiakan Wang (Statsig): Even if the experiment runs for longer than 28 days, we typically recommend only looking at the results for the last 7 or 14 days, because that reflects the actual impact of the experiment minus any kind of novelty effect. Note that even if you choose the last 7 days as the window, we still include ALL users exposed to the experiment over its entire duration; the window only controls the time period over which we compare metrics between users in the different groups. Also, the overall result is essentially the same as if we extended the explore query window to cover the experiment’s entire duration.
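
To make the windowing behavior described above concrete, here is a minimal Python sketch of the idea, assuming hypothetical `exposures` and `metric_events` tables (the column names and aggregation are illustrative assumptions, not Statsig's actual pipeline): every exposed user stays in the analysis, but metric values are aggregated only over the trailing window.

```python
from datetime import datetime, timedelta

import pandas as pd

def windowed_comparison(exposures: pd.DataFrame,
                        metric_events: pd.DataFrame,
                        window_days: int = 7) -> pd.DataFrame:
    """Illustrative only: keep every exposed user, but aggregate metric
    values over the trailing window before comparing groups."""
    window_start = datetime.utcnow() - timedelta(days=window_days)

    # Every user exposed at any point during the experiment stays in the analysis.
    users = exposures[["user_id", "group"]].drop_duplicates()

    # Metric events are restricted to the rolling window.
    recent = metric_events[metric_events["timestamp"] >= window_start]
    per_user = (recent.groupby("user_id")["value"].sum()
                      .rename("metric").reset_index())

    # Left join so users with no events inside the window count as zero.
    joined = users.merge(per_user, on="user_id", how="left").fillna({"metric": 0})
    return joined.groupby("group")["metric"].agg(["mean", "count"])
```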

Anonymous: I understand your recommendation and actually, I agree the shorter the duration is the better. However:

  1. The groups are the essential data in our case. The total experiment results are not relevant as I explained.
  2. The duration should be determined by statistically significant results. And that can be more time than the arbitrary duration that you picked.
  3. I believe that this feature request is very simple to add from your side. For us, it might determine the relevancy of your product to our needs. Thanks

Vijaye (Statsig): Could you elaborate on #1? Are you not keeping the targeting or variants and their sizes constant over the entire duration of the experiment?

Anonymous: Hi, I’m not sure I completely understand your question, but I’ll try answering: we have several groups, each of which gets a pre-set percentage of the participating users, and this percentage is constant over the duration of the experiment. In any case, our need is for an option for a larger window - I don’t see the downside of adding it - and I assume it should be relatively simple. I assume we aren’t the only ones with this issue. Your tool can be very good, and this is just an arbitrary duration limitation on an essential feature. Thanks

Vijaye (Statsig): Thanks Yoav. We want to help you with the experiment decision you are trying to make, hence the questions to understand your setup. Supporting an arbitrarily large window is not that straightforward: we have to recompute the variance from scratch (non-incrementally) for each duration window. Most experimentation experts also agree that we should ignore the novelty effects that come with experiments, which is why we chose these standard windows in consultation with customers. For example, if the last 28-day window gives you a different result than the total experiment duration, it’s prudent to use the 28-day window result over the full-duration result. When you say “groups”, do you mean variants? Could you share the link to your experiment so we can take a look?
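
On the variance point above: changing the window changes which metric observations count for every user, so the group means and variances (and therefore the confidence intervals) have to be rebuilt for each window rather than updated incrementally. Below is a minimal sketch of that per-window recomputation, using a plain Welch's t-test on hypothetical per-user metric arrays; it illustrates the general idea only and is not Statsig's actual stats engine.

```python
import numpy as np
from scipy import stats

def per_window_result(control: np.ndarray, treatment: np.ndarray):
    """One analysis window: each array holds one aggregated metric value per
    exposed user, computed over that window only, so a new window means the
    means, variances, and p-value are all rebuilt from scratch."""
    lift = treatment.mean() - control.mean()
    # Welch's t-test (unequal variances) on the per-user values for this window.
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    return lift, p_value

# Hypothetical usage: re-aggregate per-user metrics for each candidate window,
# then recompute the result for that window.
# for name, (control_vals, treatment_vals) in windows.items():
#     print(name, per_window_result(control_vals, treatment_vals))
```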
