How Statsig Designs SDKs for Different Application Environments

Jiakan Wang
Fri Oct 22 2021
EXPERIMENT SOFTWARE-DEVELOPMENT A-B-TESTING ANALYTICS FEATURE-FLAGS

At Statsig, we want to enable our customers to ship and test new features faster, and with confidence. An important part of this is making sure our SDKs not only provide the necessary APIs, but do so in a way that works seamlessly with the environments their applications run in. Specifically, we design every SDK for either client-side or server-side applications. Curious what the differences are? Read on!

Client-Side

Typical client-side applications include mobile apps and websites. They share a few common characteristics, and we designed our client SDKs around each of them:

1. Serves a single user at a time

The user object is provided to Statsig at initialization time, at which point the SDK makes a single network request to the Statsig server to fetch all feature gate and experiment values for that user. Checking a feature gate or experiment then uses local values rather than issuing more network requests, and every logged event is implicitly associated with the same user.

await Statsig.initialize("client-api-key", user);
var feature_on = Statsig.checkGate("new_landing_page_design");
if (feature_on) {
  // show new landing page
} else {
  // show old landing page
}

We also take advantage of this characteristic to offer experiments at the device level, so that users get a consistent experience before and after they sign up or sign in, and events generated after sign-up are attributed to the correct experiment group. Many of our customers find this useful when running experiments to improve their apps’ sign-up flow.

2. Not in a secure environment, i.e. assume everything is public

We provide client API keys for use in client SDKs, and the feature and experiment names our server returns are all one-way hashed. This way, your upcoming secret feature will not leak just because you set up a feature gate for it on Statsig.

3. The device is not always connected to the Internet

For a mobile/desktop app, we cannot assume the user has a network connection at any given time, so we take extra steps to ensure features and experiments work properly and events are not lost while offline.

For feature/experiment values, we always store the latest values retrieved from the Statsig server in the device’s local storage, so that when we cannot fetch fresh values we can fall back to the cached ones. What if there is no cache either? Our APIs require you to provide a reasonable default value for exactly this case, so your code always has something sensible to work with.

var experiment = Statsig.getExperiment("new_design_copies");
var banner_text = experiment.get<string>("banner_text", "default banner text");
// Feature gates always have a default value of false
var feature_on = Statsig.checkGate("new_landing_page_design");

For events that were logged during an offline session, we also store them in the local storage and send them next time the user comes online.
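A minimal sketch of this pattern (illustrative, not the SDK’s actual code): events go into a queue persisted to local storage, and the queue is only cleared once a flush succeeds:

```javascript
// Offline-tolerant event logger: events are appended to a persisted
// queue, and a flush clears the queue only after the send succeeds.
class EventQueue {
  constructor(storage, send) {
    this.storage = storage; // localStorage-like key/value store
    this.send = send;       // async network call
  }
  load() {
    return JSON.parse(this.storage.getItem("events") || "[]");
  }
  log(event) {
    const events = this.load();
    events.push(event);
    this.storage.setItem("events", JSON.stringify(events));
  }
  async flush() {
    const events = this.load();
    if (events.length === 0) return;
    try {
      await this.send(events);
      this.storage.setItem("events", "[]"); // clear only on success
    } catch {
      // offline or failed: keep events for the next flush attempt
    }
  }
}
```

The key design choice is that the send and the clear are ordered: a failed or interrupted flush leaves the events in storage, so the worst case is a duplicate send, never a lost event.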

4. Sensitive to binary size, data usage and latency

We use as few external dependencies as possible and, when one is needed, pick the smallest package that gets the job done, so we don’t bloat app sizes. For example, our JavaScript client SDK is only 12 KB minified and gzipped.

We only make one light network request to fetch all values for the user at initialization, and the server latency for this request is less than 5ms. The request also has a built-in, configurable timeout so that a slow response never blocks app start. Events generated by the SDK are batched and flushed periodically to save data usage.
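The batching behavior can be sketched like this (the class name and default values are hypothetical, not the SDK’s actual API):

```javascript
// Event batching: logs accumulate in memory and are flushed on a
// fixed interval or when the batch fills up, trading a little
// latency for far fewer network requests.
class BatchedLogger {
  constructor(send, { maxBatchSize = 50, flushIntervalMs = 10000 } = {}) {
    this.send = send;
    this.maxBatchSize = maxBatchSize;
    this.buffer = [];
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
  }
  log(event) {
    this.buffer.push(event);
    if (this.buffer.length >= this.maxBatchSize) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch); // one request for the whole batch
  }
  stop() {
    clearInterval(this.timer);
    this.flush(); // drain anything left before shutdown
  }
}
```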

Server-Side

Server applications, on the other hand, are quite different. They are generally long-running and serve requests from many clients and users at any given time. Our server SDKs therefore have a few features that cater to the needs of this type of environment:

1. Serves many users from one machine

A server can be receiving requests for many users at any given time, so it’s important for Statsig server SDKs to be highly performant and to avoid making extra network requests when asked to evaluate something for a given user.

We decided it would be best to represent every feature gate and experiment as a set of rules, each consisting essentially of one operator and multiple operands, and to give each SDK a built-in evaluator that understands and evaluates these rules locally. As a result, the SDK downloads all the rules for feature gates and experiments (a.k.a. rule sets) during initialization with a single network request to Statsig’s server, and handles all evaluation by itself afterwards.
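A highly simplified sketch of what rule-set evaluation looks like (the real rule format is much richer; the fields and operators below are made up for illustration):

```javascript
// Each operator maps a user value and a list of operands to a boolean.
const operators = {
  any: (value, operands) => operands.includes(value),
  gt: (value, operands) => value > operands[0],
};

// Evaluate a gate entirely in-process: the first rule whose
// conditions all pass determines the result; otherwise default off.
function evaluateGate(rules, user) {
  for (const rule of rules) {
    const passes = rule.conditions.every((c) =>
      operators[c.operator](user[c.field], c.operands)
    );
    if (passes) return rule.value;
  }
  return false;
}

// Hypothetical rule set: pass for US/CA users on app version > 2
const rules = [
  {
    conditions: [
      { field: "country", operator: "any", operands: ["US", "CA"] },
      { field: "appVersion", operator: "gt", operands: [2] },
    ],
    value: true,
  },
];

evaluateGate(rules, { country: "US", appVersion: 3 }); // true
evaluateGate(rules, { country: "FR", appVersion: 3 }); // false
```

Because evaluation is a pure function over the downloaded rule set and the user object, each check costs microseconds of CPU rather than a network round trip.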

Check out our previous blog about feature gates evaluation if you’d like to learn more!

2. Each server runs for a long time

New feature gates can be added and experiments can be shipped while a server is running, so we need to keep the rule sets current in order to serve up-to-date values. To achieve this, our SDKs poll the backend every few seconds for changes and save any updates locally.
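The polling loop can be sketched as follows (the function names, the `hasUpdates` response shape, and the interval are all illustrative, not the SDK’s actual API):

```javascript
// Background sync: poll for rule-set changes on an interval and swap
// in the new rules when something changed. Returns a stop function.
function startRuleSync(fetchRules, onUpdate, intervalMs = 10000) {
  let lastSyncTime = 0;
  const timer = setInterval(async () => {
    try {
      const result = await fetchRules(lastSyncTime);
      if (result.hasUpdates) {
        lastSyncTime = result.time;
        onUpdate(result.rules); // swap in new rules, cache locally
      }
    } catch {
      // network failure: keep serving the cached rule sets
    }
  }, intervalMs);
  return () => clearInterval(timer);
}
```

Passing the last sync time lets the server answer cheaply with “no updates” when nothing changed, and the catch block is what makes the SDK resilient to transient connectivity loss.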

Because the rule sets are cached locally, the server SDKs are highly reliable and will keep working even if the connection to Statsig’s server is lost for a while. In the rare case that your server cannot reach Statsig’s server during initialization, we also provide a way to bootstrap the rule sets with a cached version exported from the SDK in a previous session.

Interested in taking a look at our SDKs? All of them are open source and you can find the GitHub links and documentation here.

