mm: memmap_init: iterate over memblock regions rather than check each PFN

When called during boot, the memmap_init_zone() function checks whether each
PFN is valid and actually belongs to the node being initialized, using
early_pfn_valid() and early_pfn_in_nid().

Each such check may cost up to O(log(n)), where n is the number of memory
banks, so for a large amount of memory the overall time spent in early_pfn*()
becomes substantial.

Since the information is already present in memblock, we can iterate over
memblock memory regions in memmap_init() and call memmap_init_zone() only
for PFN ranges that are known to be valid and in the appropriate node.
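
To illustrate the idea, here is a minimal userspace sketch (not the kernel
code itself; the region table and the clamp_pfn()/init_range() helpers are
invented for this example): only the intersection of each memory bank with
the zone's PFN range is initialized, so holes are never visited.

	/* Simplified model of iterating memory regions instead of every PFN. */
	#include <stdio.h>

	struct region { unsigned long start_pfn, end_pfn; };	/* [start, end) */

	static unsigned long clamp_pfn(unsigned long v, unsigned long lo, unsigned long hi)
	{
		return v < lo ? lo : (v > hi ? hi : v);
	}

	static void init_range(unsigned long start_pfn, unsigned long nr_pages)
	{
		/* Stand-in for memmap_init_zone(nr_pages, nid, zone, start_pfn, ...). */
		printf("init PFNs [%lu, %lu)\n", start_pfn, start_pfn + nr_pages);
	}

	int main(void)
	{
		/* Zone covers PFNs [0x1000, 0x9000); two memory banks with a hole. */
		const unsigned long range_start = 0x1000, range_end = 0x9000;
		const struct region mem[] = {
			{ 0x0000, 0x4000 },
			{ 0x6000, 0xa000 },
		};

		for (size_t i = 0; i < sizeof(mem) / sizeof(mem[0]); i++) {
			unsigned long start = clamp_pfn(mem[i].start_pfn, range_start, range_end);
			unsigned long end = clamp_pfn(mem[i].end_pfn, range_start, range_end);

			if (end > start)	/* skip parts of the region outside the zone */
				init_range(start, end - start);
		}
		return 0;
	}

The actual patch below does the same with for_each_mem_pfn_range() and
clamp(), which is what lets the per-PFN early_pfn_valid()/early_pfn_in_nid()
checks in memmap_init_zone() go away.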

[[email protected]: fix a compilation warning from Clang]
  Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: fix the incorrect hole in fast_isolate_freepages()]
  Link: http://lkml.kernel.org/r/[email protected]
  Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Baoquan He <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Hoan Tran <[email protected]>	[arm64]
Cc: Brian Cain <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Ungerer <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Qian Cai <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Baoquan He authored and torvalds committed Jun 4, 2020
1 parent da50c57 commit 73a6e47
Showing 2 changed files with 19 additions and 28 deletions.
4 changes: 3 additions & 1 deletion mm/compaction.c
@@ -1409,7 +1409,9 @@ fast_isolate_freepages(struct compact_control *cc)
 				cc->free_pfn = highest;
 			} else {
 				if (cc->direct_compaction && pfn_valid(min_pfn)) {
-					page = pfn_to_page(min_pfn);
+					page = pageblock_pfn_to_page(min_pfn,
+						pageblock_end_pfn(min_pfn),
+						cc->zone);
 					cc->free_pfn = min_pfn;
 				}
 			}
43 changes: 16 additions & 27 deletions mm/page_alloc.c
@@ -5951,23 +5951,6 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
 	return false;
 }
 
-#ifdef CONFIG_SPARSEMEM
-/* Skip PFNs that belong to non-present sections */
-static inline __meminit unsigned long next_pfn(unsigned long pfn)
-{
-	const unsigned long section_nr = pfn_to_section_nr(++pfn);
-
-	if (present_section_nr(section_nr))
-		return pfn;
-	return section_nr_to_pfn(next_present_section_nr(section_nr));
-}
-#else
-static inline __meminit unsigned long next_pfn(unsigned long pfn)
-{
-	return pfn++;
-}
-#endif
-
 /*
  * Initially all pages are reserved - free ones are freed
  * up by memblock_free_all() once the early boot process is
@@ -6007,14 +5990,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		 * function.  They do not exist on hotplugged memory.
 		 */
 		if (context == MEMMAP_EARLY) {
-			if (!early_pfn_valid(pfn)) {
-				pfn = next_pfn(pfn);
-				continue;
-			}
-			if (!early_pfn_in_nid(pfn, nid)) {
-				pfn++;
-				continue;
-			}
 			if (overlap_memmap_init(zone, &pfn))
 				continue;
 			if (defer_init(nid, pfn, end_pfn))
@@ -6130,9 +6105,23 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 }
 
 void __meminit __weak memmap_init(unsigned long size, int nid,
-				  unsigned long zone, unsigned long start_pfn)
+				  unsigned long zone,
+				  unsigned long range_start_pfn)
 {
-	memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY, NULL);
+	unsigned long start_pfn, end_pfn;
+	unsigned long range_end_pfn = range_start_pfn + size;
+	int i;
+
+	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
+		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
+
+		if (end_pfn > start_pfn) {
+			size = end_pfn - start_pfn;
+			memmap_init_zone(size, nid, zone, start_pfn,
+					 MEMMAP_EARLY, NULL);
+		}
+	}
 }
 
 static int zone_batchsize(struct zone *zone)
