dmu_object_alloc() is single-threaded #4703
Labels: Type: Performance (performance improvement or performance problem)
Comments
Code for the prototype mentioned in the issue description is available in this commit: ahrens@240b227
ahrens added a commit to ahrens/zfs that referenced this issue on Jun 9, 2017:
dmu_object_alloc() is single-threaded, so when multiple threads are creating files in a single filesystem, they spend a lot of time waiting for the os_obj_lock.

To improve performance of multi-threaded file creation, we must make dmu_object_alloc() typically not grab any filesystem-wide locks.

The solution is to have a "next object to allocate" for each CPU. Each of these "next object"s is in a different block of the dnode object, so that concurrent allocation holds dnodes in different dbufs. When a thread's "next object" reaches the end of a chunk of objects (by default 4 blocks worth -- 128 dnodes), it will be reset to the per-objset os_obj_next, which will be increased by a chunk of objects (128). Only when manipulating the os_obj_next will we need to grab the os_obj_lock. This decreases lock contention dramatically, because each thread only needs to grab the os_obj_lock briefly, once per 128 allocations.

This results in a 70% performance improvement to multi-threaded object creation (where each thread is creating objects in its own directory), from 67,000/sec to 115,000/sec, with 8 CPUs.

Work sponsored by Intel Corp.

Reviewed-by: Ned Bass <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes openzfs#4703
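For readers unfamiliar with the change, here is a minimal user-space sketch of the allocation scheme the commit message describes. It is not the OpenZFS implementation: the names percpu_next and object_alloc(), the fixed NCPUS, and the pthread mutex standing in for the os_obj_lock are illustrative assumptions. It shows only the cursor/chunk logic: each CPU walks a private 128-object chunk and takes the objset-wide lock only when that chunk is exhausted.

```c
/*
 * Sketch of per-CPU object-number allocation (illustration only, not the
 * actual OpenZFS code). Each CPU slot owns a 128-object chunk; the
 * objset-wide lock is taken only when a slot needs a new chunk.
 */
#include <pthread.h>
#include <stdint.h>

#define	DNODES_PER_BLOCK	32	/* 16 KiB dnode block / 512 B dnode */
#define	ALLOC_CHUNK		(4 * DNODES_PER_BLOCK)	/* 128 objects */
#define	NCPUS			8	/* assumed CPU count for the sketch */

static pthread_mutex_t os_obj_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t os_obj_next;		/* objset-wide: start of next free chunk */
static uint64_t percpu_next[NCPUS];	/* private cursor per CPU slot */

/*
 * Return the next object number for the calling CPU slot.  Assumes one
 * thread per slot; the real code also has to hold the dnode
 * (dnode_hold_impl()) and retry if the object turns out to be in use,
 * which is omitted here.
 */
static uint64_t
object_alloc(int cpu)
{
	uint64_t obj = percpu_next[cpu];

	if ((obj % ALLOC_CHUNK) == 0) {
		/* Cursor is at a chunk boundary: claim a fresh chunk. */
		pthread_mutex_lock(&os_obj_lock);
		obj = os_obj_next;
		os_obj_next += ALLOC_CHUNK;
		pthread_mutex_unlock(&os_obj_lock);
	}
	percpu_next[cpu] = obj + 1;
	return (obj);
}

int
main(void)
{
	/* CPU 0 and CPU 1 allocate from disjoint 128-object chunks. */
	uint64_t a = object_alloc(0);	/* 0   (chunk [0, 128))   */
	uint64_t b = object_alloc(1);	/* 128 (chunk [128, 256)) */
	return (a == 0 && b == 128) ? 0 : 1;
}
```

With this structure, the hot path is lock-free from the objset's point of view; the os_obj_lock is taken once per 128 allocations per CPU, which is what drives the contention reduction reported above.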
Using a benchmark which has 32 threads creating 2 million files in the same directory, on a machine with 16 CPU cores, and workarounds for several other issues, I noticed that dmu_object_alloc() was using about 55% of all CPU, most of the time waiting to acquire the os_obj_lock.

In order to increase parallelism of object allocation, we must solve two problems:

1. Decrease the amount of time spent holding the os_obj_lock (or not grab it at all). The os_obj_lock protects os_obj_next, but we may not need to hold it during the rest of dmu_object_alloc(), especially during the call to dnode_hold_impl().
2. Several threads will then be in dmu_object_alloc(), calling dnode_hold_impl() concurrently. Since we allocate adjacent objects with an i++-style allocator, the several threads will be holding adjacent dnodes, which all come from the same dbuf. As a result, the threads will contend on the dbuf's locks.

A relatively simple way to address this problem would be to have a "next object to allocate" for each CPU. Each of these "next object"s would be in a different block of the dnode object, so that concurrent allocation would be holding dnodes in different dbufs. When a thread's "next object" reaches the end of the block, it will be reset to the per-objset os_obj_next, which will be increased by a block's worth of objects (32). Only when manipulating the os_obj_next will we need to grab the os_obj_lock. This should decrease lock contention dramatically, because each thread only needs to grab the os_obj_lock briefly, once per 32 allocations.
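To make the dbuf contention in problem 2 (and the choice of 32 objects per block) concrete, here is a small illustrative calculation. It is not ZFS code; it only assumes the standard on-disk layout of 512-byte dnodes packed into 16 KiB blocks of the dnode object. Consecutive object numbers from an i++-style allocator map to the same dnode-object block, so every allocating thread ends up holding the same dbuf, whereas per-CPU cursors spaced a block apart each land in their own dbuf.

```c
/*
 * Illustration only: why adjacent object numbers contend on one dbuf.
 * Constants mirror the standard dnode layout (512-byte dnodes packed
 * into 16 KiB blocks of the dnode object), i.e. 32 dnodes per block.
 */
#include <stdio.h>
#include <stdint.h>

#define	DNODE_SHIFT		9	/* 512-byte dnode */
#define	DNODE_BLOCK_SHIFT	14	/* 16 KiB dnode-object block */
#define	DNODES_PER_BLOCK	(1 << (DNODE_BLOCK_SHIFT - DNODE_SHIFT))

/* The dnode-object block (and hence dbuf) that holds object 'obj'. */
static uint64_t
dnode_block_of(uint64_t obj)
{
	return (obj / DNODES_PER_BLOCK);
}

int
main(void)
{
	/*
	 * With an i++-style allocator, 8 threads allocating objects
	 * 1000..1007 all land in dnode block 31 and serialize on its
	 * dbuf.  Spacing each thread's cursor a block (32 objects)
	 * apart gives each thread its own dbuf.
	 */
	for (uint64_t obj = 1000; obj < 1008; obj++)
		printf("object %llu -> dnode block %llu\n",
		    (unsigned long long)obj,
		    (unsigned long long)dnode_block_of(obj));
	return (0);
}
```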
A prototype of the above showed that a ~20% performance improvement on the benchmark is possible.
Once other bugs are fixed (to the point that we see the large lock contention on os_obj_lock), the reward for fixing this issue is medium-high, and the cost is low-medium. The code changes are localized to dmu_object_alloc(). Because this will change which object IDs are allocated, there is potential for object numbers to become more scattered, hurting locality when reading them in. We will need to spend some time evaluating this impact.