mm/gup: accelerate thp gup even for "pages != NULL"
The acceleration of THP was done with ctx.page_mask; however, the mask
is ignored whenever **pages is non-NULL.
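
For reference, a minimal userspace sketch of the page_mask arithmetic
(illustration only, not kernel code: PAGE_SHIFT=12 and HPAGE_PMD_NR=512
assume x86-64 with 4K base pages and 2M THPs, and pages_to_step() is a
hypothetical name for the expression __get_user_pages() uses to compute
page_increm):

  #include <stdio.h>

  #define PAGE_SHIFT   12    /* assumed: 4K base pages */
  #define HPAGE_PMD_NR 512   /* assumed: 2M THP / 4K base pages */

  /* For a PMD-mapped THP, ctx.page_mask is HPAGE_PMD_NR - 1; for a
   * normal page it stays 0, so this returns 1. */
  static unsigned long pages_to_step(unsigned long start,
                                     unsigned long page_mask)
  {
          return 1 + (~(start >> PAGE_SHIFT) & page_mask);
  }

  int main(void)
  {
          /* THP-aligned start: one iteration covers all 512 subpages */
          printf("%lu\n", pages_to_step(0x200000UL, HPAGE_PMD_NR - 1));
          /* start 16 subpages into the THP: covers the remaining 496 */
          printf("%lu\n", pages_to_step(0x210000UL, HPAGE_PMD_NR - 1));
          return 0;
  }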

The old optimization was introduced in 2013 in commit 240aade ("mm:
accelerate mm_populate() treatment of THP pages"), which did not explain
why the **pages non-NULL case was left unoptimized.  Possibly the major
goal at the time was mm_populate(), for which the partial optimization
was enough.

Optimize THP for all cases by properly looping over each subpage, doing
the cache flushes, and bumping the refcounts / pincounts where needed in
one go.
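
As an illustration of the caller-visible effect, a hypothetical
kernel-side sketch (not part of this patch; 'uaddr' and its 2M-aligned,
THP-backed mapping are assumptions, and locking/setup are elided): a
single pin_user_pages() call over a PMD-mapped THP now fills all the
pages[] slots in one __get_user_pages() iteration instead of one page
per iteration:

  /* Sketch only: pin a 2M THP-backed user buffer in one go. */
  struct page *pages[512];
  long pinned;

  pinned = pin_user_pages(uaddr, 512, FOLL_WRITE | FOLL_LONGTERM, pages);
  if (pinned > 0)
          unpin_user_pages(pages, pinned);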

This can be verified using gup_test below:

  # chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10

Before:    13992.50  (+-  8.75%)
After:       378.50  (+- 69.62%)

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Lorenzo Stoakes <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: James Houghton <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
xzpeter authored and akpm00 committed Aug 18, 2023
1 parent ffe1e78 commit 57edfcf
Showing 1 changed file with 44 additions and 7 deletions.
mm/gup.c
@@ -1282,16 +1282,53 @@ static long __get_user_pages(struct mm_struct *mm,
                         goto out;
                 }
 next_page:
-                if (pages) {
-                        pages[i] = page;
-                        flush_anon_page(vma, page, start);
-                        flush_dcache_page(page);
-                        ctx.page_mask = 0;
-                }
-
                 page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
                 if (page_increm > nr_pages)
                         page_increm = nr_pages;
+
+                if (pages) {
+                        struct page *subpage;
+                        unsigned int j;
+
+                        /*
+                         * This must be a large folio (and doesn't need to
+                         * be the whole folio; it can be part of it), do
+                         * the refcount work for all the subpages too.
+                         *
+                         * NOTE: here the page may not be the head page
+                         * e.g. when start addr is not thp-size aligned.
+                         * try_grab_folio() should have taken care of tail
+                         * pages.
+                         */
+                        if (page_increm > 1) {
+                                struct folio *folio;
+
+                                /*
+                                 * Since we already hold refcount on the
+                                 * large folio, this should never fail.
+                                 */
+                                folio = try_grab_folio(page, page_increm - 1,
+                                                       foll_flags);
+                                if (WARN_ON_ONCE(!folio)) {
+                                        /*
+                                         * Release the 1st page ref if the
+                                         * folio is problematic, fail hard.
+                                         */
+                                        gup_put_folio(page_folio(page), 1,
+                                                      foll_flags);
+                                        ret = -EFAULT;
+                                        goto out;
+                                }
+                        }
+
+                        for (j = 0; j < page_increm; j++) {
+                                subpage = nth_page(page, j);
+                                pages[i + j] = subpage;
+                                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
+                                flush_dcache_page(subpage);
+                        }
+                }
+
                 i += page_increm;
                 start += page_increm * PAGE_SIZE;
                 nr_pages -= page_increm;
