filemap: Cache the value of vm_flags
After we have unlocked the mmap_lock for I/O, the file is pinned, but
the VMA is not.  Checking this flag after that can be a use-after-free.
It's not a terribly interesting use-after-free as it can only read one
bit, and it's used to decide whether to read 2MB or 4MB.  But it
upsets the automated tools and it's generally bad practice anyway,
so let's fix it.

Reported-by: [email protected]
Fixes: 4687fdb ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Cc: [email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
commit dcfa24b (1 parent: 6bf74cd)
Matthew Wilcox (Oracle) committed Jun 9, 2022
 mm/filemap.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2991,19 +2991,20 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	struct address_space *mapping = file->f_mapping;
 	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
 	struct file *fpin = NULL;
+	unsigned long vm_flags = vmf->vma->vm_flags;
 	unsigned int mmap_miss;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/* Use the readahead code, even if readahead is disabled */
-	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
+	if (vm_flags & VM_HUGEPAGE) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
 		ra->size = HPAGE_PMD_NR;
 		/*
 		 * Fetch two PMD folios, so we get the chance to actually
 		 * readahead, unless we've been told not to.
 		 */
-		if (!(vmf->vma->vm_flags & VM_RAND_READ))
+		if (!(vm_flags & VM_RAND_READ))
 			ra->size *= 2;
 		ra->async_size = HPAGE_PMD_NR;
 		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
@@ -3012,12 +3013,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 #endif
 
 	/* If we don't want any read-ahead, don't bother */
-	if (vmf->vma->vm_flags & VM_RAND_READ)
+	if (vm_flags & VM_RAND_READ)
 		return fpin;
 	if (!ra->ra_pages)
 		return fpin;
 
-	if (vmf->vma->vm_flags & VM_SEQ_READ) {
+	if (vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		page_cache_sync_ra(&ractl, ra->ra_pages);
 		return fpin;
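The fix itself is a one-line snapshot, but the underlying pattern is general: copy what you need out of a lock-protected object before dropping the lock, and consult only the copy afterwards. Below is a minimal userspace sketch of that pattern, not kernel code; struct vma_like, map_lock, and the FLAG_* constants are hypothetical stand-ins for the VMA, the mmap_lock, and the VM_* flags.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a VMA; "flags" is protected by map_lock. */
struct vma_like {
	unsigned long flags;
};

#define FLAG_HUGEPAGE	0x1UL
#define FLAG_RAND_READ	0x2UL

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

static void do_readahead(struct vma_like *vma)
{
	unsigned long flags;

	pthread_mutex_lock(&map_lock);
	/* Snapshot while the lock guarantees the object is alive. */
	flags = vma->flags;
	pthread_mutex_unlock(&map_lock);

	/*
	 * After the unlock, another thread may free "vma"; from here on
	 * we may only look at the cached copy, never vma->flags.
	 */
	if (flags & FLAG_HUGEPAGE)
		printf("readahead %s\n",
		       (flags & FLAG_RAND_READ) ? "2MB" : "4MB");
}

int main(void)
{
	struct vma_like *vma = malloc(sizeof(*vma));

	if (!vma)
		return 1;
	vma->flags = FLAG_HUGEPAGE;
	do_readahead(vma);
	free(vma);
	return 0;
}

The kernel fix works the same way: vm_flags is read once, while vmf->vma is known to be alive, and every check after maybe_unlock_mmap_for_io() uses the local copy.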
