On Fri, May 08, 2020 at 02:30:48PM -0400, Johannes Weiner wrote:
> When replacing one page with another one in the cache, we have to
> decrease the file count of the old page's NUMA node and increase the
> one of the new NUMA node, otherwise the old node leaks the count and
> the new node eventually underflows its counter.
> 
> Fixes: 74d609585d8b ("page cache: Add and replace pages using the XArray")
> Signed-off-by: Johannes Weiner <[email protected]>
> Reviewed-by: Alex Shi <[email protected]>
> Reviewed-by: Shakeel Butt <[email protected]>
> Reviewed-by: Joonsoo Kim <[email protected]>
> ---
>  mm/filemap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index af1c6adad5bd..2b057b0aa882 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -808,11 +808,11 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
>       old->mapping = NULL;
>       /* hugetlb pages do not participate in page cache accounting. */
>       if (!PageHuge(old))
> -             __dec_node_page_state(new, NR_FILE_PAGES);
> +             __dec_node_page_state(old, NR_FILE_PAGES);
>       if (!PageHuge(new))
>               __inc_node_page_state(new, NR_FILE_PAGES);
>       if (PageSwapBacked(old))
> -             __dec_node_page_state(new, NR_SHMEM);
> +             __dec_node_page_state(old, NR_SHMEM);
>       if (PageSwapBacked(new))
>               __inc_node_page_state(new, NR_SHMEM);
>       xas_unlock_irqrestore(&xas, flags);
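
The change looks correct to me. For anyone reading along, here is a minimal
user-space sketch of the accounting error (plain C, not kernel code; the
node_file_pages[] array, the struct page, and the helper names are invented
purely for illustration):

/*
 * Stand-alone model of the per-node NR_FILE_PAGES accounting in
 * replace_page_cache_page(). Not kernel code.
 */
#include <stdio.h>

#define NR_NODES 2

static long node_file_pages[NR_NODES];

struct page {
	int nid;	/* NUMA node the page's memory lives on */
};

static void dec_node_file_pages(const struct page *page)
{
	node_file_pages[page->nid]--;
}

static void inc_node_file_pages(const struct page *page)
{
	node_file_pages[page->nid]++;
}

int main(void)
{
	struct page old = { .nid = 0 };	/* old cache page on node 0 */
	struct page new = { .nid = 1 };	/* replacement page on node 1 */

	/* Adding the old page to the cache charged node 0. */
	inc_node_file_pages(&old);

	/* Buggy replacement: both operations act on the new page's node. */
	dec_node_file_pages(&new);	/* node 1 dips below zero ... */
	inc_node_file_pages(&new);	/* ... and comes back to zero */

	printf("buggy: node0=%ld node1=%ld\n",
	       node_file_pages[0], node_file_pages[1]);

	/* Reset and replay with the corrected accounting. */
	node_file_pages[0] = 0;
	node_file_pages[1] = 0;
	inc_node_file_pages(&old);

	dec_node_file_pages(&old);	/* drop the old page's node */
	inc_node_file_pages(&new);	/* charge the new page's node */

	printf("fixed: node0=%ld node1=%ld\n",
	       node_file_pages[0], node_file_pages[1]);
	return 0;
}

With the buggy version, both the decrement and the increment land on the new
page's node, so the old page's node keeps a file-page count for a page it no
longer caches; the fixed version drops the old page's node and charges the
new one, exactly as the hunk above does.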


Reviewed-by: Balbir Singh <[email protected]>
