Split is a software company that provides a platform for feature flagging and release management, allowing users to release features securely and measure their impact.
Split offers a range of feature management tools to help teams manage rollouts, run tests, and analyze their impact on a company's metrics.
Split.io feature flags allow users to toggle features on and off with the click of a button, which is useful when faulty features are identified. By using Split's feature flags, any feature can be launched behind a gate with parameters that determine which audience sees it, and which one does not.
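The gating pattern described above can be sketched in a few lines. This is a simplified illustration of the concept, not Split's actual SDK; the function name, flag fields, and user IDs are all hypothetical.

```python
# Simplified sketch of feature-flag gating (hypothetical; not Split's SDK).
# A flag is a gate with parameters deciding which audience sees the feature.

def is_feature_on(flag: dict, user_id: str) -> bool:
    """Return True if the feature should be shown to this user."""
    if not flag["enabled"]:        # kill switch: flip off to hide the feature instantly
        return False
    if flag.get("everyone", False):
        return True
    return user_id in flag.get("beta_users", set())

new_checkout = {"enabled": True, "everyone": False, "beta_users": {"u42", "u7"}}

print(is_feature_on(new_checkout, "u42"))  # True
print(is_feature_on(new_checkout, "u99"))  # False
```

The key property is that the gate is data, not code: toggling `enabled` or widening the audience changes behavior without a redeploy.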
With Split.io's experimentation platform, users can make better product decisions through controlled product tests.
Split.io's "attribution engine" merges feature flag data with selected event data to help its users make decisions more quickly, while also providing feature-level observability.
Experiments can also ingest data from multiple sources, including SDKs, APIs, and integrations with other platforms like Segment and mParticle.
To provide more accurate analysis, Split processes events through its Intelligent Results Engine, which ingests data from multiple sources and joins it to features, automatically flagging features' impact on important metrics.
This stats engine supports:
A/B/n testing with automatic randomized user grouping
Data ingestion from multiple sources that can process different types of metrics (like Sum or Count)
Automatic metric attribution that can map metrics to features
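Automatic randomized grouping for A/B/n tests is commonly implemented with deterministic hashing, so the same user always lands in the same variant without storing an assignment table. A minimal sketch of that general technique (not Split's actual implementation):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one of n variants.

    Hashing (experiment, user) gives a stable pseudo-random bucket:
    assignments look uniform across users but never change for a
    given user within a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user in the same experiment always gets the same variant:
v1 = assign_variant("u42", "checkout_test", ["control", "A", "B"])
v2 = assign_variant("u42", "checkout_test", ["control", "A", "B"])
assert v1 == v2
```

Keying the hash on the experiment name as well as the user ID keeps assignments independent across experiments.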
Despite its potential upside, Split has its own limitations in key areas. Potential Split users should determine how impactful these shortcomings are for their own use cases.
Split.io has a few key technical limitations that may decrease its viability as a feature flagging and experimentation platform, including:
Lack of feature flags with automatic analytics, meaning it doesn't automatically quantify the impact of every single feature that teams ship
Pricing complications, including a lack of seat-based pricing, and no unlimited MAU
No real-time analytics: results take time to appear after launching an experiment or feature
No out-of-the-box advanced targeting, so Split users cannot base experiments and rollouts on granular conditions (like app version, mobile/desktop, and country) without configuring it themselves
Lack of collaboration features like discussions, tagging, commenting, and sharing
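To make the "advanced targeting" point above concrete, granular targeting means evaluating rollout rules against user attributes like app version, platform, and country. The sketch below is a hypothetical rule format for illustration, not Split's actual configuration syntax:

```python
# Hypothetical sketch of granular targeting rules (not Split's syntax).

def matches_rule(user: dict, rule: dict) -> bool:
    """Check a user against granular targeting conditions."""
    if "min_app_version" in rule:
        # Compare versions numerically, e.g. "2.10.0" > "2.9.0"
        user_v = tuple(map(int, user["app_version"].split(".")))
        min_v = tuple(map(int, rule["min_app_version"].split(".")))
        if user_v < min_v:
            return False
    if "platforms" in rule and user["platform"] not in rule["platforms"]:
        return False
    if "countries" in rule and user["country"] not in rule["countries"]:
        return False
    return True

rule = {"min_app_version": "2.1.0", "platforms": ["mobile"], "countries": ["US", "CA"]}
user = {"app_version": "2.3.4", "platform": "mobile", "country": "US"}
print(matches_rule(user, rule))  # True
```

On platforms without this out of the box, each of these conditions has to be wired up as custom attributes and segment definitions by hand.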
While some feature management platforms offer unlimited seats, Split charges a base of $33 per seat per month for its cheapest plan, which is capped at 50,000 "monthly traffic keys" (roughly, unique users).
Split charges extra for these keys, which makes scaling difficult—and pricing for extra keys requires a custom quote. This gets very costly compared to platforms that provide one million free metered events per month.
Split currently does not offer scheduled rollouts that automatically progress a feature based on custom metrics. There is also no easy way to review past changes and quickly revert to a known healthy state.
Users should also be aware that Split.io does not provide a way to establish guardrails for core metrics and receive alerts when an experiment breaches them.
Both are good options overall, each with its own pros and cons. If you're considering a feature management platform, be sure to do in-depth comparisons and vet each solution for your organization's specific needs.
Some features that are dealbreakers for some are unimportant for others. In the end, it's all about finding a deal that's right for you and your team.
See also: Statsig versus Split.io feature comparison.
* Based on feature research that was conducted in September 2023.