Ensure our mutexes handle recursive locking properly
We use shared-memory pthread mutexes to handle mutual exclusion in Libpod. It turns out that these have configurable options for how to handle a recursive lock (i.e., a thread trying to lock a lock that the same thread had previously locked). The mutex can either deadlock, or allow the duplicate lock without deadlocking. The default behavior is, helpfully, unspecified, so if the option is not explicitly set there is no clear indication of which of these behaviors will be seen. Unfortunately, today is the first I learned of this, so our initial implementation did *not* explicitly set our preferred behavior.

This turns out to be a major problem with a language like Golang, where multiple goroutines can (and often do) run on the same OS thread. So we can have two goroutines trying to stop the same container, and if the no-deadlock mutex behavior is in use, both goroutines will successfully acquire the lock, because the C library, not knowing about Go's lightweight threads, sees the same thread trying to lock the mutex twice and allows it without question.

It appears that, at least on Fedora/RHEL/Debian libc, the default (unspecified) behavior of the locks is the non-deadlocking version - so, effectively, our locks have been of questionable utility within the same Podman process for the last four years. This is somewhat concerning. What's even more concerning is that the Golang-native sync.Mutex that was also in use did nothing to prevent the duplicate locking (I don't know if I like the implications of this).

Anyway, this resolves the major issue of our locks not working correctly by explicitly setting the correct pthread mutex behavior.

Signed-off-by: Matthew Heon <[email protected]>
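For reference, a minimal sketch of the kind of explicit initialization described above is shown below. The helper name init_shared_mutex and the choice of PTHREAD_MUTEX_ERRORCHECK are illustrative assumptions, not necessarily the exact type this commit selects; the point is that the mutex type is set explicitly instead of relying on PTHREAD_MUTEX_DEFAULT.

    #include <pthread.h>

    /* Hypothetical helper: initialize a pthread mutex that lives in shared
     * memory and has an explicitly chosen recursive-lock behavior.
     * PTHREAD_MUTEX_ERRORCHECK makes a second lock attempt by the owning
     * thread fail with EDEADLK instead of being silently granted, which is
     * what the unspecified default was doing here. */
    static int init_shared_mutex(pthread_mutex_t *mutex) {
        pthread_mutexattr_t attr;
        int ret;

        ret = pthread_mutexattr_init(&attr);
        if (ret != 0)
            return ret;

        /* The mutex is placed in shared memory mapped by multiple processes. */
        ret = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        if (ret != 0)
            goto out;

        /* Explicitly pick the recursive-lock behavior rather than relying on
         * PTHREAD_MUTEX_DEFAULT, whose relocking behavior is unspecified. */
        ret = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        if (ret != 0)
            goto out;

        ret = pthread_mutex_init(mutex, &attr);

    out:
        pthread_mutexattr_destroy(&attr);
        return ret;
    }

With an error-checking type, a duplicate lock attempt from the same OS thread (for example, two goroutines scheduled on one thread) returns an error that the caller can surface, rather than quietly handing out the lock twice.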