
address feedback
Signed-off-by: Paul Dagnelie <[email protected]>
pcd1193182 committed Jul 29, 2019
1 parent d229838 commit 87e3760
Showing 2 changed files with 26 additions and 4 deletions.
man/man5/zfs-module-parameters.5: 16 additions & 0 deletions
@@ -370,6 +370,22 @@ larger).
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_max_size_cache_sec\fR (ulong)
.ad
.RS 12n
When we unload a metaslab, we cache the size of the largest free chunk. We use
that cached size to determine whether or not to load a metaslab for a given
allocation. As more frees accumulate in that metaslab while it's unloaded, the
cached max size becomes less and less accurate. After a number of seconds
controlled by this tunable, we stop considering the cached max size and start
considering only the histogram instead.
.sp
Default value: \fB3600 seconds\fR (one hour)
.RE

.sp
.ne 2
.na
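The man-page text above describes the mechanism in prose; condensed into a standalone C sketch it looks roughly like the following. This is illustrative only, not code from this commit: the struct, the helper name, and the local SEC2NSEC definition are simplified stand-ins, while the field names (ms_loaded, ms_max_size, ms_unload_time) mirror the metaslab.c hunk further down.

/*
 * Standalone model of the expiry rule described above; not ZFS code.
 * The types are plain C so this compiles on its own.
 */
#include <stdbool.h>
#include <stdint.h>

#define	SEC2NSEC(s)	((uint64_t)(s) * 1000000000ULL)	/* seconds to ns */

static unsigned long zfs_metaslab_max_size_cache_sec = 3600;	/* 1 hour */

struct metaslab_model {
	bool		ms_loaded;	/* is the metaslab's range tree in memory? */
	uint64_t	ms_max_size;	/* cached largest free chunk, in bytes */
	uint64_t	ms_unload_time;	/* hrtime (ns) when it was unloaded */
};

/*
 * Trust the cached max size while the metaslab is loaded, or for
 * zfs_metaslab_max_size_cache_sec seconds after it was unloaded.
 */
static bool
cached_max_size_usable(const struct metaslab_model *ms, uint64_t now_ns)
{
	if (ms->ms_loaded)
		return (true);
	return (ms->ms_max_size != 0 &&
	    now_ns < ms->ms_unload_time +
	    SEC2NSEC(zfs_metaslab_max_size_cache_sec));
}

The real check in metaslab_should_allocate() below additionally skips this fast path when try_hard is set.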
module/zfs/metaslab.c: 10 additions & 4 deletions
Expand Up @@ -276,7 +276,7 @@ int max_disabled_ms = 3;
* Time (in seconds) to respect ms_max_size when the metaslab is not loaded.
* To avoid 64-bit overflow, don't set above UINT32_MAX.
*/
-uint64_t max_size_cache_sec = 3600; /* 1 hour */
+unsigned long zfs_metaslab_max_size_cache_sec = 3600; /* 1 hour */

static uint64_t metaslab_weight(metaslab_t *);
static void metaslab_set_fragmentation(metaslab_t *);
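The UINT32_MAX guidance in the comment above is an overflow bound: SEC2NSEC multiplies by 10^9, so 2^32 - 1 seconds is roughly 4.3 x 10^18 ns, comfortably below UINT64_MAX (about 1.8 x 10^19), and adding a plausible ms_unload_time cannot wrap either; much larger settings could overflow the multiplication. A small standalone check of that arithmetic (the NANOSEC define here is an illustrative local stand-in, not taken from this diff):

/* Sanity-check the overflow reasoning behind the UINT32_MAX guidance. */
#include <assert.h>
#include <stdint.h>

#define	NANOSEC	1000000000ULL	/* ns per second, what SEC2NSEC multiplies by */

int
main(void)
{
	uint64_t max_sec = UINT32_MAX;		/* largest recommended setting */
	uint64_t max_ns = max_sec * NANOSEC;	/* ~4.29e18, no wrap */

	/* The product round-trips, so the multiplication did not overflow. */
	assert(max_ns / NANOSEC == max_sec);
	/*
	 * Roughly 1.4e19 ns (~450 years) of headroom remain before
	 * UINT64_MAX, so adding a realistic ms_unload_time cannot wrap.
	 */
	assert(UINT64_MAX - max_ns > max_ns);
	return (0);
}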
@@ -2599,11 +2599,13 @@ metaslab_should_allocate(metaslab_t *msp, uint64_t asize, boolean_t try_hard)
/*
* If the metaslab is loaded, ms_max_size is definitive and we can use
* the fast check. If it's not, the ms_max_size is a lower bound (once
-* set), and we should use the fast check unless we're in try_hard.
+* set), and we should use the fast check as long as we're not in
+* try_hard and it's been less than zfs_metaslab_max_size_cache_sec
+* seconds since the metaslab was unloaded.
*/
if (msp->ms_loaded ||
-(msp->ms_max_size != 0 && !try_hard &&
-gethrtime() < msp->ms_unload_time + SEC2NSEC(max_size_cache_sec)))
+(msp->ms_max_size != 0 && !try_hard && gethrtime() <
+msp->ms_unload_time + SEC2NSEC(zfs_metaslab_max_size_cache_sec)))
return (msp->ms_max_size >= asize);

boolean_t should_allocate;
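To make the window concrete, a toy example with made-up timestamps (none of these numbers come from the commit): a metaslab unloaded 100 seconds after boot keeps its cached 1 MiB max size decisive until 3700 seconds after boot with the default 3600-second setting; after that the allocator consults the histogram instead.

/* Toy numbers only: when does the cached max size stop being trusted? */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define	SEC2NSEC(s)	((uint64_t)(s) * 1000000000ULL)

int
main(void)
{
	uint64_t cache_sec = 3600;		/* default window */
	uint64_t unload_time = SEC2NSEC(100);	/* unloaded 100 s after boot */
	uint64_t max_size = 1 << 20;		/* cached largest free chunk */
	uint64_t asize = 512 * 1024;		/* allocation being considered */
	uint64_t times[] = { SEC2NSEC(1000), SEC2NSEC(4000) };

	for (int i = 0; i < 2; i++) {
		bool trusted = max_size != 0 &&
		    times[i] < unload_time + SEC2NSEC(cache_sec);
		printf("t=%llus: fast check %s",
		    (unsigned long long)(times[i] / SEC2NSEC(1)),
		    trusted ? "used" : "skipped (histogram instead)");
		if (trusted)
			printf(", allocate: %s", asize <= max_size ? "yes" : "no");
		printf("\n");
	}
	return (0);
}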
@@ -5696,6 +5698,10 @@ MODULE_PARM_DESC(metaslab_df_max_search,
module_param(metaslab_df_use_largest_segment, int, 0644);
MODULE_PARM_DESC(metaslab_df_use_largest_segment,
"when looking in size tree, use largest segment instead of exact fit");

module_param(zfs_metaslab_max_size_cache_sec, ulong, 0644);
MODULE_PARM_DESC(zfs_metaslab_max_size_cache_sec,
"how long to trust the cached max chunk size of a metaslab");
/* END CSTYLED */

#endif
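Because the parameter is registered with mode 0644, it can also be changed on a running system. A hedged userspace sketch, assuming the conventional /sys/module/zfs/parameters path that module_param() exposes; the path and the 30-minute value are examples, not part of this commit.

/* Sketch: adjust the new tunable at runtime from userspace (needs root). */
#include <stdio.h>

int
main(void)
{
	const char *path =
	    "/sys/module/zfs/parameters/zfs_metaslab_max_size_cache_sec";
	FILE *fp = fopen(path, "w");

	if (fp == NULL) {
		perror("fopen");
		return (1);
	}
	/* Trust cached max sizes for 30 minutes instead of the 1-hour default. */
	fprintf(fp, "%lu\n", 1800UL);
	return (fclose(fp) == 0 ? 0 : 1);
}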
