Customers are often eager to leave their legacy platform behind and make the move over to Statsig. This can, however, feel like a daunting task with a lot of uncertainty. At Statsig, customer success is paramount and we aim to ensure that the migration process is well-understood.
In this section, we’ll help build the mental model of what a ‘platform migration’ means and define some of the activities therein.
What a migration is not:
A purely technical exercise
An automatable process that requires little planning and minimal human oversight (any vendor promising this has little experience or is lying 😅)
A lift-and-shift of everything in your old platform over to your new platform. This typically isn’t necessary.
A doomsday event that necessarily requires a hard “cut-off date” of the old platform (although this can simplify the process).
What a migration actually is:
An exercise of change management providing an opportunity to:
cleanse tech debt
define clear ownership
promote democratization of testing
educate teams to accelerate adoption
A process of taking inventory of the existing metrics and data sources available for measuring your tests and new features
A decision-making process to determine what entities (if any) need to be ported to your new platform. See the section “What actually gets moved over” below for more.
An engineering project to technically enable product owners and engineers to use the platform for various apps, and integrate necessary data sources
The decision-making process doesn’t need to be overcomplicated! Here is how our customers typically determine what needs to be migrated.
Experiments are short-lived, so the idea of migrating them doesn't necessarily make sense. Any historical experiments and their results should be captured/documented internally for later reference. Any new experiments should be created in Statsig.
Feature-flag migration involves taking inventory of what is in place and determining which flags need to persist, or if it's best to simply scrub unneeded flags from the codebase to reduce tech debt and start fresh with Statsig gates.
There are generally two categories of flags customers have in their codebase:
Temporary flags that were used for a rollout but are no longer needed. These can simply be scrubbed from the codebase when switching to Statsig.
Permanent flags that are in place as a kill switch or as a means of delivering targeted functionality based on user entitlement. These flags should be migrated to Statsig gates.
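The triage above can be sketched as a small script. This is a hypothetical example: the flag names and the `permanent` field are illustrative stand-ins for however your current platform exports its flag inventory.

```python
# Hypothetical sketch: split a flag inventory into "scrub" (delete from the
# codebase) and "migrate" (recreate as Statsig gates) buckets. Adapt the
# input shape to your own platform's flag export.

def triage_flags(flags):
    """Separate temporary flags (to scrub) from permanent flags (to migrate)."""
    scrub = [f["name"] for f in flags if not f["permanent"]]
    migrate = [f["name"] for f in flags if f["permanent"]]
    return scrub, migrate

inventory = [
    {"name": "new_checkout_rollout", "permanent": False},  # finished rollout
    {"name": "payments_kill_switch", "permanent": True},   # ops kill switch
    {"name": "enterprise_sso", "permanent": True},         # entitlement flag
]

scrub, migrate = triage_flags(inventory)
print(scrub)    # flags to remove from the codebase
print(migrate)  # flags to recreate as Statsig gates
```

Even a rough pass like this gives you a concrete worklist and a sense of whether the migration is an afternoon of cleanup or a project that warrants automation.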
Questions to ask on this topic include:
Is there a wrapper or abstraction sitting on top of your existing flagging solution? If so, your life is easier: there are fewer reference points to your old tools, giving your engineers a more centralized path to implementing Statsig.
What is the volume of flags? Would it necessitate some automation, or is it manageable on an ad-hoc basis? Here is where we can script some automations and leverage Statsig’s Console API to create gates.
Coming over from LaunchDarkly? We have a migration tool for automating the copy of your flags to Statsig!
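For higher flag volumes, a bulk-creation script is a common pattern. The sketch below builds gate-creation payloads and posts them to Statsig’s Console API; the endpoint and `STATSIG-API-KEY` header follow the public Console API conventions, but the flag list, environment variable name, and payload fields are assumptions — verify them against the current API reference before running for real.

```python
# Sketch of bulk gate creation via Statsig's Console API. Payload fields and
# the STATSIG_CONSOLE_KEY env var are illustrative assumptions; check the
# Console API docs for the authoritative request shape.
import json
import os
import urllib.request

CONSOLE_API = "https://statsigapi.net/console/v1/gates"

def gate_payload(flag_name, description=""):
    """Map a legacy flag name to a Console API gate-creation payload."""
    return {"name": flag_name, "description": description}

def create_gate(payload, api_key):
    """POST one gate-creation payload to the Console API."""
    req = urllib.request.Request(
        CONSOLE_API,
        data=json.dumps(payload).encode(),
        headers={"STATSIG-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

flags_to_migrate = ["payments_kill_switch", "enterprise_sso"]
payloads = [gate_payload(f, "migrated from legacy platform") for f in flags_to_migrate]

api_key = os.environ.get("STATSIG_CONSOLE_KEY")  # hypothetical env var name
if api_key:
    for p in payloads:
        create_gate(p, api_key)
else:
    print(f"dry run: would create {len(payloads)} gates")
```

Keeping payload construction separate from the HTTP call makes it easy to dry-run the script against your inventory before touching the API.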
Metric definitions and event tracking should be ported to Statsig. Statsig can readily support any existing data and analytics systems you may be using via our integrations with data warehouses, CDPs, and analytics providers, and via our SDKs and HTTP APIs.
How are you currently measuring success signals for your tests and features? Which metrics (quantitative measure of user behavior) and events are needed immediately? (Think core metrics and upcoming testing roadmap).
Do you have vendor-specific SDK tracking calls throughout your app code? All Statsig SDKs support direct event logging.
Are you using a data collection or product analytics service to collect your events and conduct analysis? Statsig supports ingest integrations with the major analytics platforms, CDPs and ETL tools.
Do you have events and computed metrics in your warehouse that you’d like to integrate into Statsig to measure your tests and gates? Statsig has data ingest integrations with the major data warehouse providers to ingest raw data, and also supports ingesting your custom pre-computed metrics directly from the warehouse (docs).
✔️ Determine if there is a hard cutoff requirement for the incumbent platform. Some teams may be more dependent on it and will need more time to ramp off. Coordinate the switch-off/switch-on plan across testing teams.
✔️ Determine a suitable Statsig org/project structure based on needs to partition your test efforts by use-case or business unit. Typically, projects represent a shared set of testing objectives/surfaces/metrics. There is no one-size-fits-all solution for this and we’re happy to workshop org design with you.
✔️ Determine who will serve as admin and take responsibility for administrative tasks such as access governance and user roles.
✔️ Determine and port the necessary entities to Statsig based on the principles outlined in “What actually gets moved over” above.
✔️ Determine and document the typical targeting groups to whom you ship tests/features and map these to Statsig segments.
✔️ Determine how to best use your project management software to empower testing teams to collaborate with engineering teams (what is the ideal workflow for socializing test specs?).
✔️ Turn off your legacy platform (eventually) … happy testing in Statsig 🧪 📈 💸
Did I miss something? Let me know and I’ll incorporate it here. 👏🏼
Statsig's experts are on standby to answer any questions about experimentation at your organization.
💡 Also, reach out to our Enterprise Engineering team to learn more about how we’ve successfully migrated some of our largest customers and set them up for success.