Split is a software company that provides a platform for feature flagging and release management, allowing users to release features securely and measure their impact.
Split offers a suite of feature management tools to help teams manage rollouts, run experiments, and analyze their impact on a company's metrics.
Split.io feature flags allow users to toggle features on and off with the click of a button, which is useful when faulty features are identified. By using Split's feature flags, any feature can be launched behind a gate with parameters that determine which audience sees it, and which one does not.
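To make the gating idea concrete, here is a minimal conceptual sketch in Python. This is not Split's actual SDK API; the feature shape, attribute names, and `is_feature_enabled` helper are all hypothetical, shown only to illustrate how a flag with audience parameters decides who sees a feature.

```python
# Conceptual sketch (not Split's SDK): a feature gate that decides which
# audience sees a feature based on simple attribute parameters.

def is_feature_enabled(feature: dict, user: dict) -> bool:
    """Return True if the feature is on and the user matches its audience."""
    if not feature.get("enabled", False):
        return False  # kill switch: toggling off hides the feature instantly
    audience = feature.get("audience", {})
    # Every audience parameter present must match the user's attributes.
    return all(user.get(key) == value for key, value in audience.items())

# Hypothetical flag: only US users on the beta plan see the feature.
dark_mode = {"enabled": True, "audience": {"plan": "beta", "country": "US"}}

is_feature_enabled(dark_mode, {"plan": "beta", "country": "US"})  # True
is_feature_enabled(dark_mode, {"plan": "free", "country": "US"})  # False
```

Flipping `"enabled"` to `False` is the "click of a button" rollback described above: every user immediately falls back to the off path without a redeploy.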
With Split.io's experimentation platform, users can make better product decisions through controlled product tests.
Split.io's "attribution engine" merges feature flag data with selected event data to help its users make decisions more quickly, while also providing feature-level observability.
Experiments can also ingest data from multiple sources, including SDKs, APIs, and integrations with other platforms like Segment and mParticle.
To provide more accurate analysis, Split processes events through its Intelligent Results Engine, which ingests data from multiple sources and joins it to features, automatically flagging each feature's impact on important metrics.
This stats engine supports:
A/B/n testing with automatic, randomized user grouping
Data ingestion from multiple sources that can process different types of metrics (like Sum or Count)
Automatic metric attribution that can map metrics to features
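Automatic, randomized user grouping is commonly implemented with deterministic hashing, so each user always lands in the same variant without any stored assignment state. The sketch below illustrates that general technique; it is not Split's internal implementation, and the function and experiment names are hypothetical.

```python
# Illustrative sketch of deterministic A/B/n assignment via hashing
# (a common industry technique, not Split's internal implementation).
import hashlib

def assign_variant(user_key: str, experiment: str, variants: list[str]) -> str:
    """Deterministically map a user to one of n variants."""
    # Salting with the experiment name decorrelates assignments
    # across different experiments.
    digest = hashlib.sha256(f"{experiment}:{user_key}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

variant = assign_variant("user-42", "checkout-test", ["control", "A", "B"])
# Same inputs always produce the same variant, so no lookup table is needed.
```

Because the hash is uniform, large populations split roughly evenly across the n variants, which is what makes downstream metric comparisons valid.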
Despite its potential upside, Split has its own limitations in key areas. Potential Split users should determine how impactful these shortcomings are for their own use cases.
Split.io has a few key technical limitations that may decrease its viability as a feature flagging and experimentation platform, including:
Lack of feature flags with automatic analytics, meaning it doesn't automatically quantify the impact of every single feature that teams ship
Pricing complications, including per-seat charges (no unlimited seats) and no unlimited MAU
No real-time analytics: results take time to appear after launching an experiment or feature
No out-of-the-box advanced targeting, so Split users cannot base experiments and rollouts on granular conditions (like app version, mobile/desktop, and country) without configuring it themselves
Lack of collaboration features like discussions, tagging, commenting, and sharing
While some feature management platforms offer unlimited seats, Split's cheapest plan charges a base of $33 per seat per month and caps at 50,000 "monthly traffic keys," which roughly map to unique users.
Split charges extra for these keys, which makes scaling difficult—and pricing for extra keys requires a custom quote. This gets very costly compared to platforms that provide one million free metered events per month.
Split currently does not offer scheduled rollouts that automatically advance based on custom metrics. There is also no easy way to review past changes and quickly revert to a known healthy state.
Users should also be aware that Split.io does not provide the ability to establish guardrails for your core critical metrics and set up alerts when an experiment breaches those guardrails.
Both are good options overall, each with its own pros and cons. If you're considering a feature management platform, be sure to do in-depth comparisons and vet each solution against your organization's specific needs.
Features that are dealbreakers for some teams are unimportant to others. In the end, it's all about finding the platform that's right for you and your team.
See also: Statsig versus Split.io feature comparison.
* Based on feature research that was conducted in September 2023.