A Rust implementation of The One Billion Row Challenge
This is currently the fastest implementation of the challenge. It runs in about 7-8 seconds on an MBP 2019 (2.6 GHz 6-Core Intel Core i7).
For comparison, the current top 3 Java implementations take more than 15 seconds on my machine. Other implementations (C, C++, Rust) shared in the Discussions take more than 12 seconds.
Summary of the approach:
- Uses a general-purpose float parser (not an input-specific one)
- Uses a general-purpose hash function (not an input-specific one)
- Uses hashbrown's `HashMap`, which is exactly the same as the stdlib's `HashMap`, except that the former has the `entry_ref` method. It turns out that using `entry_ref` has practically the same performance as using the stdlib's `HashMap` with the `get_mut`-then-`insert` approach. That's because the input data has very few unique city names compared to the total number of input lines.
- Uses `crossbeam_channel` for sending data from the main thread (which only reads the input file) to the worker threads. Tried using `rayon` before, but the lock contention became a bottleneck.
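The `get_mut`-then-`insert` pattern mentioned above can be sketched with the stdlib's `HashMap` alone. This is an illustrative sketch, not the repo's actual code: the `Stats` struct and `record` function are hypothetical names, but they show why the pattern is cheap when most lines hit an already-seen city (the lookup takes `&str` and allocates a `String` key only on first occurrence).

```rust
use std::collections::HashMap;

// Hypothetical per-city aggregate; field names are illustrative.
struct Stats {
    min: f64,
    max: f64,
    sum: f64,
    count: u64,
}

fn record(map: &mut HashMap<String, Stats>, city: &str, temp: f64) {
    // Fast path: look up by &str, no allocation when the city already exists.
    if let Some(s) = map.get_mut(city) {
        s.min = s.min.min(temp);
        s.max = s.max.max(temp);
        s.sum += temp;
        s.count += 1;
    } else {
        // Slow path: allocate the String key only for a city's first occurrence.
        map.insert(
            city.to_owned(),
            Stats { min: temp, max: temp, sum: temp, count: 1 },
        );
    }
}

fn main() {
    let mut map = HashMap::new();
    record(&mut map, "Lima", 12.3);
    record(&mut map, "Lima", 8.1);
    let s = &map["Lima"];
    println!("count={} min={} max={}", s.count, s.min, s.max);
}
```

hashbrown's `entry_ref` collapses the two paths into one lookup, but as noted above the measured difference is negligible for this input distribution.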
The current implementation reads the input file sequentially. This can be further improved by reading it in parallel.
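The reader/worker split described above can be sketched with the stdlib's `mpsc` channel standing in for `crossbeam_channel` (one notable difference: crossbeam's `Receiver` is `Clone`, which is what allows many workers to share one channel; `mpsc`'s is not, so this sketch uses a single worker). The function name and the newline-counting "work" are illustrative, not from the repo:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative sketch: the main thread sends raw byte chunks over a channel
// and a worker thread consumes them. Counting newlines stands in for the
// real parse-and-aggregate work.
fn count_newlines_in_chunks(chunks: Vec<Vec<u8>>) -> usize {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    let worker = thread::spawn(move || {
        let mut newlines = 0usize;
        for chunk in rx {
            newlines += chunk.iter().filter(|&&b| b == b'\n').count();
        }
        newlines
    });
    for chunk in chunks {
        tx.send(chunk).unwrap();
    }
    drop(tx); // closing the sender ends the worker's receive loop
    worker.join().unwrap()
}

fn main() {
    let n = count_newlines_in_chunks(vec![
        b"Lima;12.3\n".to_vec(),
        b"Oslo;-3.1\nLima;8.0\n".to_vec(),
    ]);
    println!("{n}");
}
```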
To run:

```sh
cargo run --release -- ./path/to/measurements.txt
```