The culture in News Feed wasn’t just to use experiments to verify our intuitions about which changes might lead to improvements. We also used experiments much as any good scientist would, to understand more about our field: we started with a question, formed a hypothesis, and then constructed an experiment that would let us test that hypothesis and, hopefully, answer the question.
The questions usually centered on understanding more about the people who used our products, such as “Is watching a video or liking a video a stronger signal of someone’s interest in it?” or “Does receiving more likes on the content you create make you more interested in creating even more?”
Asking these more open-ended questions, and then answering them with experiments, helped us decide which products to build in the first place. Instead of relying on intuition about what to build or which features had the greatest potential, we could prioritize the features we believed would have the biggest impact, run experiments once they were implemented, and then decide which to ship. Experimentation guided the full process: from brainstorming, to prioritization, to launching.
Here’s an example of this method in action.
The situation: we want to boost content creation on the platform, because we know more content is good for the ecosystem. We have engineers ready to build new tools and entry points for creation, but we don’t know which kind of content creation benefits the ecosystem most: is it photo content, video content, posts that link to external blogs, or does it not matter?
We don’t want to spend time building things for all of these different types; we’d rather identify the most impactful areas to build and focus on those.
The solution: the first step is to figure out which of these types of creation is most beneficial to the ecosystem. To do this, we can construct a set of experiment conditions, each of which uses a minor adjustment to the ranking algorithm to increase the prevalence of one type of content in people’s News Feeds by the same amount, simulating what would happen if creation of that type increased (one condition increases photo content by 5%, another increases video content by 5%, and so on).
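To make the idea concrete, here is a minimal sketch of what one such condition could look like. This is purely illustrative, not News Feed’s actual ranking code: the data shapes and the `rank_feed` function are hypothetical, and in practice the boost multiplier would be tuned until the measured prevalence of the targeted type actually rose by the desired amount (say, 5%).

```python
# Illustrative sketch of a per-type ranking boost. All names and
# structures are hypothetical, not real News Feed internals.

def rank_feed(posts, boosted_type=None, boost=1.05):
    """Return posts sorted by score, optionally boosting one content type.

    posts: list of dicts like {"id": 1, "type": "video", "score": 0.88}
    boosted_type: content type whose scores get multiplied by `boost`
    """
    def adjusted(post):
        if post["type"] == boosted_type:
            return post["score"] * boost
        return post["score"]

    return sorted(posts, key=adjusted, reverse=True)


posts = [
    {"id": 1, "type": "photo", "score": 0.90},
    {"id": 2, "type": "video", "score": 0.88},
    {"id": 3, "type": "link",  "score": 0.70},
]

# Control condition: no boost. Treatment condition: videos boosted.
control = rank_feed(posts)
treatment = rank_feed(posts, boosted_type="video", boost=1.05)
```

In the control ranking the photo leads; in the treatment ranking the boosted video score (0.88 × 1.05 = 0.924) edges out the photo, so video content surfaces higher and its prevalence in feeds goes up.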
We then choose a metric that, for us, defines ecosystem value; in this case it could be time spent on the platform. Next we let the experiment run and observe which condition increases time spent the most. If the video boost wins, we conclude it’s most efficient to dedicate our time to video creation tools; if the photo boost wins, we dedicate our time to photo creation tools instead.
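Picking the winning condition then reduces to comparing the chosen metric across conditions. The sketch below uses made-up numbers for illustration; a real analysis would of course also check statistical significance before committing engineering time to the winner.

```python
# Hypothetical per-condition results: mean time spent (minutes/user/day).
results = {
    "control":     31.9,
    "boost_photo": 32.1,
    "boost_video": 33.4,
    "boost_link":  31.8,
}

# The winning condition is the one with the highest time spent.
winner = max(results, key=results.get)
lift = results[winner] - results["control"]

print(winner, round(lift, 1))  # prints: boost_video 1.5
```

Here the video boost shows the largest lift over control, which (assuming the difference is statistically significant) is the signal to invest in video creation tools.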
The end result is that we were able to use experimentation to quickly identify where to focus our efforts based on real data, instead of relying on our intuition.
We also applied experimentation in other novel ways — from automating the process of running experiments, reading results, and iterating on configurations, to training person-level models on the results of experiments in order to personalize configurations to each individual.
Experimentation can bring a lot of value to any organization working to improve its products. Whether you are working on a very early-stage product and want to verify that your changes are truly good for metrics before launching them fully, or you have a mature product and are trying to streamline the tuning of the many parameters that drive its behavior, Statsig can help you understand and grow your product.