mm/migrate: make isolate_movable_page() skip slab pages
In the next commit we want to rearrange struct slab fields to allow a larger
rcu_head. Afterwards, the page->mapping field will overlap with SLUB's "struct
list_head slab_list", where the value of the prev pointer can become LIST_POISON2,
which is 0x122 + POISON_POINTER_DELTA.  Unfortunately, because bit 1 is then set,
PageMovable() can report a false positive and cause a GPF, as reported by lkp
[1].

To fix this, make isolate_movable_page() skip pages with the PageSlab flag set.
This is a bit tricky, as we need to add memory barriers to SLAB's and SLUB's page
allocation and freeing paths, and their counterparts to isolate_movable_page().

Based on my RFC from [2]. Added a comment update from Matthew's variant in [3]
and, as done there, moved the PageSlab checks to happen before trying to take
the page lock.

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lore.kernel.org/all/[email protected]/

Reported-by: kernel test robot <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Hyeonggon Yoo <[email protected]>
tehcaster committed Nov 21, 2022
1 parent bc29d5b commit 8b88176
Showing 3 changed files with 22 additions and 5 deletions.
15 changes: 12 additions & 3 deletions mm/migrate.c

@@ -74,13 +74,22 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	if (unlikely(!get_page_unless_zero(page)))
 		goto out;
 
+	if (unlikely(PageSlab(page)))
+		goto out_putpage;
+	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
+	smp_rmb();
 	/*
-	 * Check PageMovable before holding a PG_lock because page's owner
-	 * assumes anybody doesn't touch PG_lock of newly allocated page
-	 * so unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the page lock because
+	 * we use non-atomic bitops on newly allocated page flags so
+	 * unconditionally grabbing the lock ruins page's owner side.
 	 */
 	if (unlikely(!__PageMovable(page)))
 		goto out_putpage;
+	/* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
+	smp_rmb();
+	if (unlikely(PageSlab(page)))
+		goto out_putpage;
 
 	/*
 	 * As movable pages are not isolated from LRU lists, concurrent
 	 * compaction threads can race against page migration functions
6 changes: 5 additions & 1 deletion mm/slab.c

@@ -1370,6 +1370,8 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 
 	account_slab(slab, cachep->gfporder, cachep, flags);
 	__folio_set_slab(folio);
+	/* Make the flag visible before any changes to folio->mapping */
+	smp_wmb();
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
 	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
 		slab_set_pfmemalloc(slab);
@@ -1387,9 +1389,11 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 
 	BUG_ON(!folio_test_slab(folio));
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
 	page_mapcount_reset(folio_page(folio, 0));
 	folio->mapping = NULL;
+	/* Make the mapping reset visible before clearing the flag */
+	smp_wmb();
+	__folio_clear_slab(folio);
 
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += 1 << order;
6 changes: 5 additions & 1 deletion mm/slub.c

@@ -1800,6 +1800,8 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 
 	slab = folio_slab(folio);
 	__folio_set_slab(folio);
+	/* Make the flag visible before any changes to folio->mapping */
+	smp_wmb();
 	if (page_is_pfmemalloc(folio_page(folio, 0)))
 		slab_set_pfmemalloc(slab);
 
@@ -2000,8 +2002,10 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	int pages = 1 << order;
 
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
 	folio->mapping = NULL;
+	/* Make the mapping reset visible before clearing the flag */
+	smp_wmb();
+	__folio_clear_slab(folio);
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	unaccount_slab(slab, order, s);
