Update skc_obj_alloc for spl kmem caches that are backed by Linux
Currently, for certain allocation sizes and classes we use SPL caches
that are backed by caches in the Linux slab allocator, to reduce
fragmentation and increase memory utilization. As implemented today,
however, we keep no statistics for the allocations made from these
caches.

This patch tracks allocated objects in those SPL caches, at the cost
of grabbing the cache lock on every object allocation and free to
update the respective counter.
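
The trade-off described above can be sketched in userspace as follows. This is a minimal illustration with hypothetical names (`struct cache`, `cache_alloc`, `cache_free`), using a pthread mutex as a stand-in for the kernel spin lock taken on `skc->skc_lock`; it is not the actual SPL code.

```c
#include <pthread.h>
#include <stdlib.h>

struct cache {
	pthread_mutex_t lock;		/* stands in for skc->skc_lock */
	unsigned long obj_alloc;	/* stands in for skc->skc_obj_alloc */
};

static void *
cache_alloc(struct cache *c, size_t size)
{
	void *obj = malloc(size);	/* stands in for kmem_cache_alloc() */
	if (obj != NULL) {
		/* The trade-off: one lock/unlock pair per allocation. */
		pthread_mutex_lock(&c->lock);
		c->obj_alloc++;
		pthread_mutex_unlock(&c->lock);
	}
	return (obj);
}

static void
cache_free(struct cache *c, void *obj)
{
	free(obj);			/* stands in for kmem_cache_free() */
	pthread_mutex_lock(&c->lock);
	c->obj_alloc--;
	pthread_mutex_unlock(&c->lock);
}
```

The counter always reflects the number of live objects, which is exactly what makes the cache reportable alongside the fully SPL-managed ones.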

Additionally, this patch makes those caches visible in the
/proc/spl/kmem/slab special file.

As a side note, maintaining this counter for those caches allows SDB
to present a more user-friendly view than /proc/spl/kmem/slab, one
that can also cross-reference data from slabinfo. For example, here
is the SDB output for one of those caches, showing the name of the
underlying Linux cache, the memory consumed by SPL objects allocated
from it, and those objects as a percentage of all objects in it:
```
> spl_kmem_caches | filter obj.skc_name == "zio_buf_512" | pp
name        ...            source total_memory util
----------- ... ----------------- ------------ ----
zio_buf_512 ... kmalloc-512[SLUB]       16.9MB    8
```
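
The total_memory figure above is consistent with what the new /proc code prints: the product of the per-object size and the live-object counter. A minimal sketch of that arithmetic (the helper name `cache_total_memory` is hypothetical, not part of the patch):

```c
/*
 * Mirrors the arithmetic slab_seq_show() uses for Linux-backed caches:
 * the memory attributed to the SPL cache is skc_obj_size * skc_obj_alloc.
 */
static unsigned long
cache_total_memory(unsigned long obj_size, unsigned long obj_alloc)
{
	return (obj_size * obj_alloc);
}
```

For zio_buf_512 above, roughly 34,600 live 512-byte objects work out to about 16.9MB.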

Signed-off-by: Serapheim Dimitropoulos <[email protected]>
sdimitro committed Oct 16, 2019
1 parent 177c79d commit 3a7308f
Showing 2 changed files with 33 additions and 6 deletions.
module/os/linux/spl/spl-kmem-cache.c (11 additions, 0 deletions)
```diff
@@ -1467,6 +1467,14 @@ spl_kmem_cache_alloc(spl_kmem_cache_t *skc, int flags)
 			obj = kmem_cache_alloc(slc, kmem_flags_convert(flags));
 		} while ((obj == NULL) && !(flags & KM_NOSLEEP));
 
+		/*
+		 * Even though we leave everything up to the underlying cache
+		 * we still keep track of how many objects we've allocated in
+		 * it for better debuggability.
+		 */
+		spin_lock(&skc->skc_lock);
+		skc->skc_obj_alloc++;
+		spin_unlock(&skc->skc_lock);
 		goto ret;
 	}
```
```diff
@@ -1540,6 +1548,9 @@ spl_kmem_cache_free(spl_kmem_cache_t *skc, void *obj)
 	 */
 	if (skc->skc_flags & KMC_SLAB) {
 		kmem_cache_free(skc->skc_linux_cache, obj);
+		spin_lock(&skc->skc_lock);
+		skc->skc_obj_alloc--;
+		spin_unlock(&skc->skc_lock);
 		return;
 	}
```
module/os/linux/spl/spl-proc.c (22 additions, 6 deletions)
```diff
@@ -437,11 +437,29 @@ slab_seq_show(struct seq_file *f, void *p)
 
 	ASSERT(skc->skc_magic == SKC_MAGIC);
 
-	/*
-	 * Backed by Linux slab see /proc/slabinfo.
-	 */
-	if (skc->skc_flags & KMC_SLAB)
+	if (skc->skc_flags & KMC_SLAB) {
+		/*
+		 * This cache is backed by a generic Linux kmem cache which
+		 * has its own accounting. For these caches we only track
+		 * the number of active allocated objects that exist within
+		 * the underlying Linux slabs. For the overall statistics of
+		 * the underlying Linux cache please refer to /proc/slabinfo.
+		 */
+		spin_lock(&skc->skc_lock);
+		seq_printf(f, "%-36s ", skc->skc_name);
+		seq_printf(f, "0x%05lx %9s %9lu %8s %8u "
+		    "%5s %5s %5s %5s %5lu %5s %5s %5s %5s\n",
+		    (long unsigned)skc->skc_flags,
+		    "-",
+		    (long unsigned)(skc->skc_obj_size * skc->skc_obj_alloc),
+		    "-",
+		    (unsigned)skc->skc_obj_size,
+		    "-", "-", "-", "-",
+		    (long unsigned)skc->skc_obj_alloc,
+		    "-", "-", "-", "-");
+		spin_unlock(&skc->skc_lock);
 		return (0);
+	}
 
 	spin_lock(&skc->skc_lock);
 	seq_printf(f, "%-36s ", skc->skc_name);
```
```diff
@@ -461,9 +479,7 @@ slab_seq_show(struct seq_file *f, void *p)
 	    (long unsigned)skc->skc_obj_deadlock,
 	    (long unsigned)skc->skc_obj_emergency,
 	    (long unsigned)skc->skc_obj_emergency_max);
-
 	spin_unlock(&skc->skc_lock);
-
 	return (0);
 }
```

