KNOWLEDGE BASE

A treasure trove of useful knowledge

Eric Zimanyi
Friday, November 11, 2022 at 5:16 PM
Does anyone have advice on setting up experiments with pre-selected control/experiment groups? For this particular experiment I'm setting up, I have a list of IDs that I know I want in the control/experiment sides and don't want any random assignment beyond that. Would I just enter "force variant overrides" for those IDs, then set the actual experiment allocation to 0% (so no other targeting happens)? Or is there a better way of doing this?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 6:09 PM
Hi Eric Zimanyi - do you mind elaborating a little on why you’d want to pre-select which user goes into control and test? We don’t support this directly from the UI, and in fact, any user that’s overridden into a specific group will be excluded from the results, because they are usually biased somewhat (think employees, dogfooders, etc.). If this is completely intentional and you have a good reason to hand-pick users into groups while avoiding bias, you can achieve this by manually exposing the users to the correct groups by calling our log exposure HTTP API (https://docs.statsig.com/http-api#log-exposure-event) for all of your users with the correct group. Once they have an exposure logged, their analytical events afterwards will be attributed to their logged group as long as they have the matching IDs. The caveat here is that you can’t use our SDK’s `getExperiment` API to get their group assignment in code - you will have to decide which group a user should be in by their ID yourself.
Eric Zimanyi
Friday, November 11, 2022 at 6:36 PM
Some background on why we'd like to pre-assign ids to experiment groups:
• We're targeting a level above users (by client id, where there are many users in a client, all of which should see the same experiment values)
• Different clients can have very different usage patterns (across a few dimensions); we want to ensure we end up with a reasonably similar distribution of client types in control/test.
• In principle maybe we could hope that with a big enough experiment things would average out, but we're not confident that will happen given the size of the experiment we want to run.
Eric Zimanyi
Friday, November 11, 2022 at 6:36 PM
Based on that background, do you have any advice on the recommended way to use Statsig for this?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 6:58 PM
got it, so you are testing using client id, not user id, and you have a small # of client ids, right?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 6:58 PM
do you know roughly how many client ids there are?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 6:59 PM
or I guess, for how many of them do you want to control that they're evenly split among the groups? Like do you have 10 big clients you want to be evenly split, or is the number more like 100 or more?
Eric Zimanyi
Friday, November 11, 2022 at 7:04 PM
For this experiment, we've got 20 client ids that we want to split 10/10 (and only measure those 20 for experiment success)
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:04 PM
If the # you want to be randomly split is a small number, what you can do is try start -> reallocate -> start -> reallocate the experiment a few times until you’ve got a “randomization” where your important few clients are evenly split between groups. Each time you “reallocate”, the client ids’ groups will be reshuffled, and you can use the “check group for a user” tool to check them
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:05 PM
Once you’ve found a good randomization, you can run the experiment as usual
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:06 PM
this will be a little tedious with 20, but doable. The benefit is that everything else will be just like a regular experiment
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:07 PM
you probably don’t need a perfect 10/10 split either, so maybe aim for 7/13 or better
Eric Zimanyi
Friday, November 11, 2022 at 7:11 PM
One additional point that I probably didn't make clear above is that we have many more than 20 clients, but we only want these 20 to be in the experiment results (as they are picked specifically for being representative for this experiment).
Eric Zimanyi
Friday, November 11, 2022 at 7:11 PM
I assume that if I start the experiment and reallocate it will allocate *all* the client ids, and there's not a good way to filter down to just these 20?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:25 PM
do you want to run the experiment on the rest of the clients?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:25 PM
if you don’t want their result, why still put them in the experiment?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:37 PM
If you don’t want them to be in the result, you can simply override them into the control group, so that they all get the control experience and aren’t included in the result. Let me know if there is a reason you still want to randomize their groups
Eric Zimanyi
Friday, November 11, 2022 at 7:38 PM
No, we only want to run the experiment and collect data on these. To clarify:
• We have X (>>20) clients
• We have selected 20 we want to use for the experiment, 10 in control and 10 in experiment
• We'd like to manually set up those 20 in the experiment
Eric Zimanyi
Friday, November 11, 2022 at 7:38 PM
Sorry if that was not clear above!
Eric Zimanyi
Friday, November 11, 2022 at 7:38 PM
(And definitely just interested in the best way of doing this, if my approach is not best!)
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:39 PM
Yeah then you can do what I said just now - override everyone else into control so they aren’t in the experiment (you can also write code to check if they are in the 20, and if not, don’t expose them to the experiment).
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:40 PM
Then for the 20, do what I suggested earlier to reset the randomization a few times until they are split evenly
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:43 PM
If you want full control of who is in which group without repeatedly resetting the experiment, then the only way is to use the http api to manually log their exposure, and make sure that in your code you do
```
let controlIDs = [...]; // hard code your control client ids here
let testIDs = [...];    // hard code your test client ids here

if (controlIDs.includes(clientID)) {
  // call the http api to expose the client to the control group, then show the control group experience
} else if (testIDs.includes(clientID)) {
  // call the http api to expose the client to the test group, then show the test group experience
} else {
  // do whatever is right for the rest
}
```
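For reference, here is a rough sketch of what the "call the http api to expose the client" step above might look like. The endpoint path, header name, and payload fields below are assumptions made for illustration only - the actual contract is in the log exposure HTTP API docs linked earlier (https://docs.statsig.com/http-api#log-exposure-event), so verify against those before relying on anything here.
```
// Illustrative sketch only: the URL, header, and body shape are assumptions,
// not the documented API. Check https://docs.statsig.com/http-api#log-exposure-event.
async function logExperimentExposure(clientID, groupName) {
  const response = await fetch("https://api.statsig.com/v1/log_exposure", { // assumed endpoint
    method: "POST",
    headers: {
      "statsig-api-key": process.env.STATSIG_SERVER_SECRET, // assumed header/env var names
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      user: { customIDs: { clientID } },      // keyed on client id, matching the experiment's ID type
      experimentName: "my_client_experiment", // hypothetical experiment name
      group: groupName,                       // e.g. "Control" or "Test"
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to log exposure for ${clientID}: ${response.status}`);
  }
}
```
With a helper like this, the `controlIDs` branch above would call `logExperimentExposure(clientID, "Control")` before rendering the control experience, and the `testIDs` branch would do the same with "Test".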
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:44 PM
These 2 options give you the same result: one is a little more tedious and needs some luck to get the right randomization, the other is more coding but gives you precise control of who falls into which group, so it's up to you which one to go with
Eric Zimanyi
Friday, November 11, 2022 at 7:45 PM
For the first solution (randomizing) would it also be reasonable to set up a gate before the experiment to filter to the client ids we're interested in?
Eric Zimanyi
Friday, November 11, 2022 at 7:46 PM
The reason is that we are signing up new clients, so having an exclusion list in the experiment would not be super maintainable
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:47 PM
what you should do is create a gate that passes if the client id is not any of the 20, and then override that gate to control
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:47 PM
then you won’t worry about new clients falling into the experiment
Eric Zimanyi
Friday, November 11, 2022 at 7:48 PM
ok, thanks! yeah, I think that for the short-term one of these will be a reasonable workaround
Eric Zimanyi
Friday, November 11, 2022 at 7:48 PM
longer-term are there any plans to improve support for this use case in the product?
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:54 PM
no problem. The reason we don’t have an easy way to do this is that it is very easily misused and will lead to the wrong result, but we do know there are legit use cases, and that’s one of the reasons why we have the API for you to log your own user’s group right now. I can see us implementing something that will help you find the right/balanced randomization based on some criteria at some point, or maybe a way to allow you to use overrides and include them in the results too. It’s not on our short term roadmap yet, but I think longer term we will likely support this better
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 7:54 PM
cc: MA (Statsig)
Eric Zimanyi
Friday, November 11, 2022 at 9:02 PM
Thanks! Yeah, I think some simpler workflow for doing this would be helpful. While the workaround of checking the ids in code might work, it also eliminates a lot of the benefits of using a platform like Statsig if we need to implement our own flag checking on top.
Eric Zimanyi
Friday, November 11, 2022 at 9:03 PM
Maybe even some way for us to include customer segments as attributes, and then tell Statsig to randomize (but evenly distribute by these attributes) would be helpful.
Jiakan Wang (Statsig)
Friday, November 11, 2022 at 9:49 PM
oh yeah, we do have plans for stratified sampling (sometime next year). It won’t necessarily allow you to specify which exact ID should be in which group, but it will try to balance based on some attributes
Eric Zimanyi
Friday, November 11, 2022 at 9:54 PM
I think that would probably solve our use case (as we don't actually *need* exact specification, it was just how we were achieving the stratified sampling). Would definitely be interested in info on this as it gets built.
tore (statsig)
Friday, November 11, 2022 at 1:47 AM
Great! We’re publishing a prod version of that soon
Andy Moon [Design at Statsig]
Friday, November 11, 2022 at 1:04 AM
:wave: Hey everyone! I was wondering if any one of you have used our “Insights” feature before?
Isaac Elmore
Friday, November 11, 2022 at 11:04 AM
Not yet--still waiting to push our first experiment!
David Sepulveda
Friday, November 11, 2022 at 12:16 AM
Just to clarify, I mean “from certain date onwards”.
Ken Cheng
Thursday, November 10, 2022 at 11:56 PM
Emma Dahl (Statsig) sry for the late response, I can also help. https://calendly.com/ken-30min-officehour/30min
Khoi Bui
Thursday, November 10, 2022 at 10:23 PM
:wave: When trying to create a new project, I’m getting `failed to create new project` errors. Our company uses Okta to provide auth for accessing statsig. Looking at the network console, I’m seeing a lot of the same request being blocked. What’s the best way to triage this issue?
Alex Coleman (Statsig)
Thursday, November 10, 2022 at 10:35 PM
Which company are you from? It's possible project creation is disabled for non-admins in your organization. And where are you trying to create the project from? Those errors should be unrelated, likely just an adblocker
Khoi Bui
Thursday, November 10, 2022 at 10:36 PM
with Cruise LLC. Yeah looks like it’s because I don’t have admin access. I’ll get in touch with our org admins. Thanks!
Sahil Ahuja
Thursday, November 10, 2022 at 9:47 PM
Is it possible or are there plans to add overrides for a react or react native client? For example if I wanted to override the value of a gate locally on my device/client? Thanks!!
Daniel (Statsig)
Thursday, November 10, 2022 at 10:57 PM
We do have plans to make the override experience better, but it is still a while out. If you are just looking for something to use for testing, you can call the overrideGate or overrideConfig methods: https://github.com/statsig-io/react-sdk/blob/2b8d188a1ae79290ba7d26ecd8c7983855263e7a/src/Statsig.ts#L180 Let me know if this doesn’t help
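For anyone looking for a concrete starting point, here is a minimal sketch of what those local overrides could look like in a React app. It assumes the `Statsig` singleton exported by statsig-react exposes `overrideGate` and `overrideConfig` as in the source file linked above; the gate and config names are made up, and the exact signatures should be checked against that file.
```
import { Statsig } from "statsig-react";

// Force a gate on locally for this device/client only (assumed signature).
Statsig.overrideGate("my_new_feature_gate", true);

// Force a dynamic config value locally (assumed signature).
Statsig.overrideConfig("my_dynamic_config", { buttonColor: "blue" });

// Gate checks and config reads made after this point should reflect the local overrides.
```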
Sahil Ahuja
Thursday, November 10, 2022 at 9:46 PM
It worked beautifully! Thanks so much tore (statsig) for the quick response and help here!!
Christoph Meier
Thursday, November 10, 2022 at 7:36 PM
Quick question: If a user attribute changes after the first exposure to an experiment, would that user still be exposed to the experiment if that attribute is being filtered on in a user segment? For example, we have an experiment that excludes users from location B.
1. Day 0: the user hits the experiment with `location='A'` and passes the rule.
2. Day 1: the user comes back, this time with `location='B'`.
What happens?
Vineeth
Thursday, November 10, 2022 at 7:40 PM
No. Targeting is deterministic based on the current set of user properties being evaluated. So if a user starts out with status tier = "Free" and switches to "Paid", feature gates will evaluate them based on the current status of this property. If they were exposed to an experiment when Free, for analytics purposes they'll be tracked as a Free user, even though they're now getting Paid features, since the point of the experiment may be to measure how many people you can move from Free -> Paid.
Vineeth
Thursday, November 10, 2022 at 7:42 PM
With your new example, assuming excluding users from location B happens via a targeting gate on the experiment, they'd no longer get the experience within the experiment. They'd still be tracked for analytics purposes within the experiment, since they have already been exposed.
Christoph Meier
Thursday, November 10, 2022 at 7:43 PM
That makes sense. Thanks!
Ritvik (Statsig)
Thursday, November 10, 2022 at 8:57 PM
Note that custom queries also use the attributes of the very first exposure, so they would be in the location = A bucket for those purposes too
Isaac Elmore
Thursday, November 10, 2022 at 4:23 PM
Thanks for the helpful responses from the Statsig team! :raised_hands: I'm having trouble figuring out some behavior I'm experiencing. This is what my diagnostics page looks like for an experiment (the experiment is not yet started, and in dogfood mode). On the client side, I'm calling `Statsig.initializeAsync()` in my main activity's `onCreate()` with a callback that then performs my calls to `Statsig.getExperiment()`. There are no checks to `getExperiment()` before the initialization callback. ~Any ideas why I'm getting "Uninitialized" as a result in diagnostics on the first open?~ Also, why aren't my devices being allocated since I'm in dogfood mode?
> ~Any ideas why I'm getting "Uninitialized" as a result in diagnostics on the first open?~
Found the answer to my own question for this part. I was inadvertently calling `updateUserAsync()` with my own custom ID before `Statsig.initializeAsync()`!
Vijaye (Statsig)
Thursday, November 10, 2022 at 5:10 PM
Not allocated usually means that the experiment is not rolled out 100% and the user you are checking happens to be not part of the experiment.
Jiakan Wang (Statsig)
Thursday, November 10, 2022 at 5:12 PM
Yep, in dogfood mode, the allocation % is still being respected, so if you want to test the experiences before starting the experiment, either use an override to force yourself into a group, or allocate more % to your experiment so you naturally fall into one of the groups.
Isaac Elmore
Thursday, November 10, 2022 at 5:19 PM
:man-facepalming:
Isaac Elmore
Thursday, November 10, 2022 at 5:19 PM
Thank you, that makes so much sense!
tore (statsig)
Thursday, November 10, 2022 at 3:56 AM
Yeah been looking into it, sorry for not saying something sooner. I just published a beta version of the SDK, which I think should help. Would you mind giving that a try? https://www.npmjs.com/package/statsig-react-native/v/4.7.2-beta.0
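For anyone else following along who wants to try that build, installing the beta by its exact version would look something like this (package name and version taken directly from the npm link above):
```
npm install statsig-react-native@4.7.2-beta.0
```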
[Statsig] GB Lee
Thursday, November 10, 2022 at 12:07 AM
To MA (Statsig)’s point, this improvement is coming soon. And the footer is going to be sticky rather than scrolling off the view. This will extend across all entities, so it’ll be a consistent experience throughout the platform
Abhishek S
Wednesday, November 9, 2022 at 11:55 PM
Thank you folks and my apologies — blocking call followed by statsig warning led me to believe I had incorrect config for caching.
Sahil Ahuja
Wednesday, November 9, 2022 at 11:55 PM
tore (statsig) have you had a chance to take a look at this? :slightly_smiling_face:
MA (Statsig)
Wednesday, November 9, 2022 at 11:53 PM
Hi Jacob! We actually just launched a new UX treatment for Save on Experiments that roots the Save Button to the bottom of the footer as you suggest above! We're in the process of building this out for Feature Gates as we speak. Definitely know this is a pain point... hang in there, fix coming soon! cc/ Tyler (Engineer - Statsig)
