RUIDs (Rodrigo’s Unique Identifiers) are 64-bit ids mathematically guaranteed to be unique when generated within the same RUID root. Check it out on GitHub.
An RUID root is a set of RUID generators where each generator can be uniquely identified through shared configuration. For example, a root can be implemented as a set of VMs on the same subnet, each identified by the last n bits of its internal IP address.
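As a minimal sketch of the subnet approach in Rust, a generator could derive its id from the low bits of its IPv4 address (the 9-bit width matches the canonical root id described below; the function name and mask are illustrative assumptions, not RUID's actual code):

```rust
// Illustrative only: derive a 9-bit root id from the last bits of a
// VM's internal IPv4 address. The name and mask width are assumptions.
fn root_id_from_octets(octets: [u8; 4]) -> u16 {
    let addr = u32::from_be_bytes(octets); // e.g. [10, 0, 1, 5] -> 0x0A000105
    (addr & 0x1FF) as u16                  // keep the last 9 bits
}

fn main() {
    assert_eq!(root_id_from_octets([10, 0, 1, 5]), 261); // 0b1_0000_0101
    assert_eq!(root_id_from_octets([192, 168, 0, 255]), 255);
}
```

In practice the octets would come from something like `std::net::Ipv4Addr::octets()`; since DHCP hands out distinct addresses on the subnet, the derived ids are distinct without any extra coordination service.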
The canonical version of RUIDs (this repo) uses 41 bits for timestamp, 14 bits for a monotonically increasing sequence, and 9 bits for the root id.
- 41 bits is enough to cover Rodrigo’s projected lifespan in milliseconds.
- 14 bits is roughly the number of RUIDs that can be generated per millisecond, single threaded, on Rodrigo’s personal computer (~20M ids per second).
- 9 bits is what remains after the calculations above, and is used for the root id. The root id is further split into 5 bits for a cluster id and 4 bits for a node id.
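Packed together, the three fields fill exactly 64 bits. Here is a minimal Rust sketch of that layout; the field order (timestamp, then sequence, then root) and the names are assumptions for illustration, not taken from the RUID source:

```rust
// Sketch of the 41/14/9 bit layout described above. Field order and
// names are assumptions, not the actual RUID implementation.
const SEQUENCE_BITS: u32 = 14;
const ROOT_BITS: u32 = 9; // 5 cluster bits + 4 node bits

fn pack_ruid(timestamp_ms: u64, sequence: u64, cluster: u64, node: u64) -> u64 {
    let root = (cluster << 4) | node; // cluster in the high 5 bits of the root id
    (timestamp_ms << (SEQUENCE_BITS + ROOT_BITS)) | (sequence << ROOT_BITS) | root
}

fn main() {
    let id = pack_ruid(1_234_567, 42, 3, 9);
    // Each field can be recovered by shifting and masking.
    assert_eq!(id >> 23, 1_234_567);      // 41-bit timestamp
    assert_eq!((id >> 9) & 0x3FFF, 42);   // 14-bit sequence
    assert_eq!(id & 0x1FF, (3 << 4) | 9); // 9-bit root id
}
```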
RUIDs are designed with time travel as a requirement. Whereas other unique id implementations fail (sometimes silently) if the system generating ids goes back in time, RUIDs will still output valid, unique ids.
In v0.1, this is achieved by:
Defining a millisecond maximum time travel threshold MMTTT (sometimes shortened as M2T3).
Comparing the current generation timestamp Ct with the previous generation timestamp Pt. When Ct < Pt < Ct + MMTTT (i.e., the clock has jumped backward by less than MMTTT), RUIDs are generated with Pt as the timestamp.
Sleeping for MMTTT when the server starts, and validating that the system clock has indeed increased by at least MMTTT at the end.
Note that timestamps for RUIDs generated after time travel and before MMTTT has elapsed will not match the system’s clock, which is both a feature and a bug (unsurprisingly, time travel incurs bug/feature duality).
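The v0.1 timestamp rule above can be sketched as follows in Rust; the threshold value and function name are illustrative assumptions, not RUID's actual API:

```rust
// Hedged sketch of the v0.1 generation rule. The threshold value and
// the function name are assumptions for illustration.
const MMTTT_MS: u64 = 5_000; // maximum time travel threshold, in ms

/// Choose the timestamp for the next id, given the current clock
/// reading (ct) and the previous generation timestamp (pt).
fn generation_timestamp(ct: u64, pt: u64) -> u64 {
    if ct < pt && pt < ct + MMTTT_MS {
        // The clock jumped backward by less than MMTTT: stick with pt
        // so ids stay monotonic, at the cost of drifting from the
        // system clock until MMTTT elapses (the bug/feature above).
        pt
    } else {
        // Normal forward clock. (A backward jump larger than MMTTT is
        // outside the design's guarantees.)
        ct
    }
}

fn main() {
    assert_eq!(generation_timestamp(1_000, 900), 1_000); // forward clock
    assert_eq!(generation_timestamp(900, 1_000), 1_000); // small backward jump
}
```

The startup sleep then guarantees that a restarted generator's clock has already moved past any Pt it could have handed out before a backward jump of at most MMTTT.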
Unfortunately, this design is not mathematically correct if time travel happens while the RUID generator is not running; a fix for this bug (technically a Higgs-bugson) is planned for the RUID v2 release.
Being coded in Rust and statically linked against musl, the RUID generator is exceptionally performant. v0.1 serves RUIDs via an actix HTTP server for ease of integration and testing. The resulting standalone Docker container is less than 15MB uncompressed. Further optimizations, such as moving to a more performant RPC framework, are planned for the RUID v1 release.
Rodrigo needed unique 64 bit ids to run benchmarks against 128 bit UUIDs in various distributed, database-intensive scenarios. Rodrigo was unsatisfied with existing implementations for various reasons, including questionable programming language choices and flaky project names.
You probably don’t need distributed 64 bit ids, so no, you shouldn’t use RUID.
However, if you do need distributed 64-bit ids, give it a shot. Setting up RUID is easier than the alternatives, since there is no external dependency at all (the single dependency, on IP assignment, is handled implicitly by DHCP). If Rust is not your thing, you can port RUID over to your favorite environment: RUID has < 100 SLOC, so porting it is still easier than reusing (and configuring) any alternative that depends on an external service.
RUIDs were inspired by the great efforts other engineers have gone through to generate 64-bit application-unique identifiers. In particular, inspiration was drawn from Instagram’s IDs, Twitter’s Snowflake, and Sony’s Sonyflake.
To see how RUID is currently implemented, head over to GitHub: https://github.com/statsig-io/ruid
Thanks to Jason Leung on Unsplash for the cover photo!