sync: reduce contention between Map operations with new-but-disjoint keys #21035
Comments
I'm interested, but I can't commit right now. The one idea that comes to mind would require hacking runtime/hashmap*; otherwise we'd have to double-hash the keys. |
Very interesting task; I'd love to take this one. I think it should definitely be doable for the Go 1.10 milestone. I'll think about the design some more before I can 100% commit to doing this one in due time. |
@orcaman What I wanted to do is something like https://github.com/OneOfOne/cmap/tree/master/v2 (I use a slightly modified version of this in my own code), but applied to sync.Map in a way that uses runtime/map's hasher directly for the sharding rather than double-hashing the keys; that would require a lot more runtime knowledge. Sadly I have neither the knowledge nor the time to learn the internals of runtime/map right now. By all means, if you can do that it'd be great, or we can discuss it. |
I think you can just use a slice of locks and dirty maps with the low order bits of the hash performing the selection - but this requires access to the internal hash code. |
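For concreteness, here's a minimal sketch of that sharding idea (all names illustrative, not a proposed API). Since user code can't reach the runtime's internal hash, this version double-hashes string keys with `hash/maphash`, which is exactly the cost the comments above would like to avoid:

```go
package shardedmap

import (
	"hash/maphash"
	"sync"
)

const numShards = 16 // power of two, so low-order hash bits select a shard

// Map spreads keys across independently locked shards, so Stores of
// disjoint new keys only contend when they land in the same shard.
type Map struct {
	seed   maphash.Seed
	shards [numShards]shard
}

type shard struct {
	mu sync.Mutex
	m  map[string]any
}

func New() *Map {
	sm := &Map{seed: maphash.MakeSeed()}
	for i := range sm.shards {
		sm.shards[i].m = make(map[string]any)
	}
	return sm
}

func (sm *Map) shardFor(key string) *shard {
	h := maphash.String(sm.seed, key) // the extra hash a runtime hook would avoid
	return &sm.shards[h&(numShards-1)]
}

func (sm *Map) Store(key string, value any) {
	s := sm.shardFor(key)
	s.mu.Lock()
	s.m[key] = value
	s.mu.Unlock()
}

func (sm *Map) Load(key string) (any, bool) {
	s := sm.shardFor(key)
	s.mu.Lock()
	v, ok := s.m[key]
	s.mu.Unlock()
	return v, ok
}
```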
@bcmills I am willing to give this a try. Is there a document that describes using internal facilities when an implementation is part of the stdlib? |
I don't know of any good starting docs, but perhaps @randall77 or @aclements can point you in the right direction. |
really a good idea |
Someone on Reddit pointed me to this discussion. I was dealing with an issue related to sync.Map contention and worked on it in the context of FrankenPHP's cgo handles. Here's my CL for cgo handles: https://go-review.googlesource.com/c/go/+/600875 I also wrote a blog post that shows the difference in contention (with benchmarks): https://withinboredom.info/2024/08/12/optimizing-cgo-handles/ I ended up going with a growable slice and freelist approach. It's pretty darn fast. |
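The linked CL and blog post have the real implementation; the following is only a minimal sketch of the growable-slice-plus-freelist idea as described, with illustrative names:

```go
package handles

import "sync"

// Table hands out small integer handles backed by a growable slice.
// Freed slots go onto a freelist and are reused before the slice
// grows, so the hot path is a lock, an index, and at worst an append.
type Table struct {
	mu    sync.Mutex
	slots []any
	free  []int // indices of vacated slots, reused LIFO
}

// New stores v and returns its handle.
func (t *Table) New(v any) int {
	t.mu.Lock()
	defer t.mu.Unlock()
	if n := len(t.free); n > 0 {
		idx := t.free[n-1]
		t.free = t.free[:n-1]
		t.slots[idx] = v
		return idx
	}
	t.slots = append(t.slots, v)
	return len(t.slots) - 1
}

// Get returns the value for a live handle.
func (t *Table) Get(h int) any {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.slots[h]
}

// Delete vacates the slot and queues it for reuse.
func (t *Table) Delete(h int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.slots[h] = nil
	t.free = append(t.free, h)
}
```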
The new implementation of `sync.Map` should address this. |
@mknyszek do you know of a specific issue/proposal (or at least a branch/gerrit link) where I could track the state (and perf/design characteristics) of this "new implementation of `sync.Map`"? |
@akavel I think Michael is talking about the internal/sync implementation, which is used inside the unique package. |
Hi @akavel, new APIs need to go through the proposal process, but improving the internal implementation of existing APIs does not, so my guess is there probably was not a specific proposal for updating the implementation of `sync.Map`.

As I understand the history, this new implementation was originally merged in CL 573956 as an internal `HashTrieMap`. There is a large stack of more recent CLs that add more functionality, I think with the goal of supporting the full `sync.Map` API.

There are probably other interesting bits of commentary if you poke around the history some more. If people are curious about the performance improvements, it could be helpful to test them out via benchmarks for their own use cases, which is relatively easy via `gotip`. |
Thanks @thepudds. Yeah, there wasn't even a tracking issue for this, since the fact that it came to be at all was somewhat coincidental. It turned out the new design was faster than `sync.Map`. The commit messages didn't include the benchmark results on `sync.Map`'s own benchmarks, unfortunately. |
@mknyszek I think it would be super cool if you could post them here; I suppose other/future readers might also be interested. I got here during some quick research trying to evaluate the feasibility of sync.Map for my use case, which led me to look at third-party replacements. If others happen to trek a similar path in the future, this could be useful to them. |
Hi @akavel and anyone else interested -- @mknyszek posted some micro-benchmark results for the new sync.Map implementation in #70683, along with some brief discussion. I would definitely encourage anyone curious to try out the new implementation (e.g., via `gotip`). Also note that he mentioned you can currently disable it via a `GOEXPERIMENT` setting. |
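For anyone who wants to run that comparison themselves, a benchmark along these lines (illustrative, not taken from the thread) exercises exactly the disjoint-new-keys pattern this issue is about; run it under both toolchains with `go test -bench=DisjointNewKeys` and compare:

```go
package syncmapbench

import (
	"strconv"
	"sync"
	"sync/atomic"
	"testing"
)

// BenchmarkDisjointNewKeys stores previously unseen keys from many
// goroutines in parallel. Under the old implementation these Stores
// all contend on one mutex; compare results across toolchains to see
// the effect of the new implementation.
func BenchmarkDisjointNewKeys(b *testing.B) {
	var m sync.Map
	var ctr atomic.Int64
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			k := strconv.FormatInt(ctr.Add(1), 10) // globally unique key
			m.LoadOrStore(k, k)
		}
	})
}
```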
I was curious, and I ran some of our internal benchmarks against Gotip. We're testing a single sync.Map "graph" with a set of "elements". The test is effectively a "LoadOrStore" test where the new "element" creation is 200x more expensive, in allocs, than a lookup.
The results are good in the pathological case: we see a ~20% win when creating a lot of new elements and iterating at the same time. Comparing gotip with the experiment enabled vs. disabled:
More interesting were the results on 1.23.4 vs. gotip (so unrelated to this diff/PR):
Tangentially, we're using Bazel, so it took me some time to figure out whether I had wired the experiment through correctly. Was printing the enabled experiments as part of the benchmark preamble ever considered? |
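On the preamble question: one possible approach, assuming non-default experiments are recorded in the binary's build info (they appear in `go version -m` output), is to read them back via `runtime/debug` when the benchmark binary starts:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// Print any GOEXPERIMENT recorded in the build info, so benchmark
// logs capture how the binary was built. Assumes experiments that
// differ from the default are recorded as a "GOEXPERIMENT" setting.
func main() {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no build info available")
		return
	}
	for _, s := range info.Settings {
		if s.Key == "GOEXPERIMENT" {
			fmt.Printf("built with GOEXPERIMENT=%s\n", s.Value)
		}
	}
}
```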
The performance degradation is interesting. Where are the allocations coming from? What's the diff in the memory profile? I'm wondering if it's from the callsite or something else. Also, if you're up for checking it, what does Go 1.23.4 vs. gotip-with-experiment-off look like? |
The Go 1.9 implementation of `sync.Map` uses a single `Mutex` to guard the read-write map containing new keys. That makes `Store` calls with different new keys always contend with each other, and also contend with `Load` calls with different new keys, even if the `Load`s and `Store`s for each key are confined to a single thread.

That doesn't really matter for the `sync.Map` use-cases in the standard library, because they do not operate on new keys in the steady state, but it limits the utility of `sync.Map` for use-cases involving a high rate of churn. Such use-cases may include:

We should explore ways to address new-key contention, such as sharding the read-write maps and associated locks (as suggested in #20360), journaling writes (and using a Bloom or HyperLogLog filter to avoid reading the journal, along the lines of #21032), or storing the read-write map in an atomic tree data structure instead of a built-in `map`.
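To make the failure mode concrete, here is a stripped-down model of the Go 1.9 design described above (a sketch only; the real code is in src/sync/map.go): misses on new keys all funnel through one mutex-guarded dirty map.

```go
package syncmapmodel

import (
	"sync"
	"sync/atomic"
)

// model sketches the shape of the Go 1.9 sync.Map design.
type model struct {
	read  atomic.Value   // holds a map[string]any; lock-free for existing keys
	mu    sync.Mutex     // guards dirty
	dirty map[string]any // every brand-new key lands here
}

func (m *model) Store(key string, value any) {
	// Fast path (elided): a key already present in the read map can
	// be updated atomically without taking the lock.

	// Slow path: every Store of a new key, however disjoint from all
	// other keys, serializes on this single mutex -- the contention
	// this issue asks to reduce.
	m.mu.Lock()
	if m.dirty == nil {
		m.dirty = make(map[string]any)
	}
	m.dirty[key] = value
	m.mu.Unlock()
}
```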