DAOS-16896 common: Spill Over Evictable Buckets Implementation #15646
Conversation
Ticket title is 'Spill Over Evictable Buckets (SOEMB) Implementation'
Force-pushed from c8ee33f to 7ffe011, and then to cb10687.
@@ -559,6 +559,7 @@ dav_tx_begin_v2(dav_obj_t *pop, jmp_buf env, ...)
 		       sizeof(struct tx_range_def));
 	tx->first_snapshot = 1;
 	tx->pop = pop;
+	heap_soemb_reserve(pop->do_heap);
This might need to be done before the umem_cache_reserve() call in the future?
When we support turning an SOE bucket back to evictable (once a bucket no longer qualifies as an SOE), the allocator might need to pass the SOE set to umem_cache_reserve(), so that we can ensure all SOE buckets are loaded by umem_cache_reserve().
Yes, this will be moved when SOEMB starts using the evictable MB pool. For this release, SOEMB will use the cache pages from umem_cache_reserve() to create a new non-evictable SOEMB. Hence this code was moved to the end of tx_begin() so that the page can be initialized in the same way as a non-evictable memory bucket.
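For illustration only, a minimal sketch of the ordering being discussed — the signatures and the store/heap parameters below are assumptions, not the actual DAOS code:

int  umem_cache_reserve(void *store); /* assumed signature */
void heap_soemb_reserve(void *heap);  /* assumed signature */

/* Sketch of the tx_begin() ordering: reserve cache pages, then SOEMBs. */
static int
tx_begin_sketch(void *store, void *heap)
{
	int rc;

	/* Pin the required non-evictable cache pages first. */
	rc = umem_cache_reserve(store);
	if (rc != 0)
		return rc;

	/* ... regular transaction setup (ranges, snapshot state, ...) ... */

	/*
	 * Runs last, so that a brand-new non-evictable SOEMB can be
	 * initialized from the cache pages just pinned above.
	 */
	heap_soemb_reserve(heap);
	return 0;
}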
	smbrt->svec[SOEMB_ACTIVE_CNT - 1] = NULL;
	smbrt->fur_idx = 0;
}
I don't quite follow the logic here (and the SOE selection in heap_soemb_active_get()): are we trying to use the buckets in the SOE set in a round-robin manner?
The allocator maintains SOEMB_ACTIVE_CNT == 3 active SOEMBs. It first attempts to spill over to svec[0]; if that fails, it tries svec[1] and finally svec[2]. If the allocation still fails, it spills over to the global non-evictable memory buckets.
fur_idx records the furthest index in the active SOEMB list that was used for a spill-over within a TX. If its value is greater than 1, then in the next tx_begin(), heap_soemb_reserve() will mark svec[0] as passive and left-shift the other active SOEMBs (see the sketch below).
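A minimal sketch of that selection order, under assumed names — struct soemb_rt, the opaque struct mbrt, and the mbrt_has_space() helper are hypothetical stand-ins, not the actual allocator code:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SOEMB_ACTIVE_CNT 3

struct mbrt;                                     /* opaque memory-bucket runtime */
bool mbrt_has_space(struct mbrt *mb, size_t sz); /* hypothetical helper */

struct soemb_rt {
	struct mbrt *svec[SOEMB_ACTIVE_CNT]; /* active SOEMBs, tried in order */
	uint32_t     fur_idx;                /* furthest slot used within this TX */
};

/*
 * Try svec[0], then svec[1], then svec[2], recording how deep we had to go.
 * A NULL return means every active SOEMB is exhausted and the allocation
 * spills over to the global non-evictable memory buckets instead.
 */
static struct mbrt *
soemb_pick(struct soemb_rt *smbrt, size_t size)
{
	uint32_t i;

	for (i = 0; i < SOEMB_ACTIVE_CNT; i++) {
		if (smbrt->svec[i] == NULL || !mbrt_has_space(smbrt->svec[i], size))
			continue;
		if (i > smbrt->fur_idx)
			smbrt->fur_idx = i;
		return smbrt->svec[i];
	}
	return NULL;
}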
			break;
		}
		break;
	}
Is the above loop meant to ensure there is always 1 available SOE bucket, or SOEMB_ACTIVE_CNT buckets? (The loop breaks if any bucket is set up successfully.)
I think the goal here is to ensure enough free space is pinned in memory for any potential spill-over in the next transaction, right? So we'd replace any unqualified bucket in 'svec' with a qualified one to satisfy the space requirement. It looks to me like it's too late to remove the unqualified bucket from 'svec' in heap_recycle_soembs().
Outside of the early boot phase, this condition primarily occurs when fur_idx is greater than 1, which triggers a left shift of the active SOEMBs by one position. This implies that only svec[2] has to be populated.
The left shift happens when at least one allocation within a TX failed to spill over to svec[0] and svec[1], so the allocator ended up using svec[2]; svec[2] will therefore be very sparsely populated. After the left shift, svec[0] will be almost full, svec[1] sparsely populated, and svec[2] unpopulated (or least populated, if obtained from the passive list). The assumption made here is that the free space in svec[1] and svec[2] is sufficient to satisfy the next TX.
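As a rough sketch of that shift, reusing the hypothetical struct soemb_rt from the earlier sketch (soemb_retire() stands in for whatever moves a bucket to the passive list):

/*
 * Called from heap_soemb_reserve() at tx_begin() time in this sketch.
 * fur_idx > 1 means the previous TX had to reach svec[2], so retire the
 * nearly full svec[0], shift the rest down one slot, and leave only the
 * top slot to be refilled with a new (or least-populated passive) bucket.
 */
static void
soemb_shift(struct soemb_rt *smbrt)
{
	int i;

	if (smbrt->fur_idx > 1) {
		soemb_retire(smbrt->svec[0]); /* hypothetical helper */
		for (i = 0; i < SOEMB_ACTIVE_CNT - 1; i++)
			smbrt->svec[i] = smbrt->svec[i + 1];
		smbrt->svec[SOEMB_ACTIVE_CNT - 1] = NULL; /* refilled later */
	}
	smbrt->fur_idx = 0;
}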
The DAV_v2 allocator now includes support for Spill Over Evictable Buckets (SOEMB). All global allocations will continue to utilize the standard non-evictable memory buckets, while spillover allocations from evictable memory buckets will be directed to SOEMB. In the current implementation, SOEMB remains locked in the memory cache, similar to the behavior of non-evictable memory buckets.
Before requesting gatekeeper:
Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
Gatekeeper: