`lockfree` is a collection of lock-free data structures written in standard C++11 and suitable for all platforms - from deeply embedded to HPC.

Lock-free data structures are data structures that are thread and interrupt safe without having to use mutual exclusion mechanisms. They are most useful for inter-process communication, but due to the efficiency of `lockfree`, it can safely be used in single-threaded scenarios as well, making it good for general purpose use.
- Written in standard C++11, compatible with all platforms supporting it
- All data structures are thread and multicore safe in single producer single consumer scenarios
- No dynamic allocation
- Optimized for high performance
- MIT Licensed
- Additional APIs for newer C++ versions
At the moment the following data structures are available:
- Queue - Best for single element operations, extremely fast, simple API consisting of only 2 methods (usage example below).
- Ring Buffer - A more general data structure with the ability to handle multiple elements at a time, uses standard library copies, making it very fast for bulk operations.
- Bipartite Buffer - A variation of the ring buffer with the ability to always provide linear space in the buffer, enabling in-buffer processing.
All of these are available as separate libraries, and the ring buffer and bipartite buffer also have C library variants.
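As a quick illustration of the queue's two-method API, here is a minimal single producer, single consumer sketch. The `lockfree.hpp` header name and the `lockfree::Queue` type with its `Push`/`Pop` methods are assumptions based on the library's documentation and may differ slightly between versions:

```cpp
#include <cstdint>

#include "lockfree.hpp" // umbrella header, name assumed

// Type- and size-templated instantiation, no dynamic allocation
static lockfree::Queue<uint32_t, 128U> queue;

// Producer side, e.g. an interrupt handler or a producer thread
void OnSampleReady(uint32_t sample) {
    bool const pushed = queue.Push(sample); // returns false when the queue is full
    (void)pushed;
}

// Consumer side, e.g. the main loop or a consumer thread
bool GetSample(uint32_t &out) {
    return queue.Pop(out); // returns false when the queue is empty
}
```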
There are three main ways to get the library:
- Using CMake FetchContent() (example after this list)
- As a git submodule
- By downloading a release from GitHub
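For the FetchContent route, a minimal `CMakeLists.txt` snippet could look like the following; the repository URL and the `lockfree` link target name are assumptions based on the project's GitHub page, and `my_app` is a placeholder for your own executable target:

```cmake
include(FetchContent)

FetchContent_Declare(
    lockfree
    GIT_REPOSITORY https://github.com/DNedic/lockfree.git
    GIT_TAG        main # pin to a release tag instead for reproducible builds
)
FetchContent_MakeAvailable(lockfree)

# Link the header-only library target to your executable
target_link_libraries(my_app PRIVATE lockfree)
```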
For cache coherent systems, the False Sharing phenomenon can reduce performance to some extent, which is why passing `LOCKFREE_CACHE_COHERENT` as `true` is advisable. This aligns the indexes to `LOCKFREE_CACHELINE_LENGTH`, `64` by default.

On embedded systems, `LOCKFREE_CACHE_COHERENT` should almost always be left as `false`.

Some systems have a non-typical cacheline length (for instance the Apple M1/M2 CPUs have a cacheline length of 128 bytes), and `LOCKFREE_CACHELINE_LENGTH` should be set accordingly in those cases.
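For example, on an Apple M1/M2 class machine both settings could be passed like this; whether your version of the library consumes them as compile definitions or as CMake options is an assumption here, so adjust to match how your build forwards them:

```cmake
# Cache coherent host with 128-byte cachelines (e.g. Apple M1/M2);
# shown as plain compile definitions - use the corresponding CMake
# options instead if the library version you use exposes them
target_compile_definitions(my_app PRIVATE
    LOCKFREE_CACHE_COHERENT=true
    LOCKFREE_CACHELINE_LENGTH=128
)
```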
The biggest reason you would want to use a lock-free data structure in such a scenario is performance. Locking has a non-negligible runtime cost on hosted systems, as acquiring a contended lock requires a syscall.
Additional benefits include better cache locality, since lock-free data structures are array-based, and code portability to non-POSIX environments.
While locking usually isn't expensive on embedded systems such as microcontrollers, there is a wide variety of RTOSes and no standardized API for locking. The fact that multiple architectures are in use, from 8051 to RISC-V, means that architecture-specific locking methods are not standardized either.
`lockfree` provides a way to build portable embedded code with a negligible performance cost compared to locking. Code using `lockfree` can be compiled to run on any embedded platform supporting C++11. Additionally, the code can easily be tested on a host machine without the need for mocking.
- Type safety, as data structures are type and size templated
- Much simpler and less error-prone instantiation
- Higher performance due to compile-time known size and header-only implementation
- Encapsulation, the data buffer is a class member instead of being passed through a pointer (illustrated below)
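As a sketch of what this looks like in practice (again assuming the `lockfree::Queue` naming from the example above), the buffer lives inside the owning class and both the element type and the capacity are checked at compile time:

```cpp
#include <cstdint>

#include "lockfree.hpp" // umbrella header, name assumed

class TelemetryChannel {
  public:
    // Producer side
    bool AddSample(int32_t sample) { return samples_.Push(sample); }

    // Consumer side
    bool GetSample(int32_t &out) { return samples_.Pop(out); }

  private:
    // The data buffer is a class member - no separately allocated storage
    // or runtime size/element-size arguments are needed
    lockfree::Queue<int32_t, 64U> samples_;
};
```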
All structures in `lockfree` are bounded, array-based, lock-free and wait-free for single producer single consumer scenarios. For more insight into lock-free programming, take a look at this brilliant talk series from Herb Sutter.