functions may also have been affected.
Clamp the length of the last entry in the sg list to be the expected
length.
Signed-off-by: Matthew Wilcox (Oracle)
Fixes: 0b62af28f249 ("i915: convert shmem_sg_free_table() to use a folio_batch")
Cc: sta...@vger.kernel.org # 6.5.x
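A minimal sketch of the clamping idea, illustrative only: it uses a stand-in struct and made-up names (clamp_last_sg, mapped_len, expected_len), whereas the real fix operates on the i915 shmem scatterlist.

/* Stand-in type: only the field that matters here. */
struct scatterlist {
	unsigned int length;
};

/* Trim the final entry so the table describes exactly the expected
 * number of bytes, no more. */
static void clamp_last_sg(struct scatterlist *last_sg,
			  unsigned long mapped_len,
			  unsigned long expected_len)
{
	if (mapped_len > expected_len)
		last_sg->length -= mapped_len - expected_len;
}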
All users are now converted to use the folio_batch so we can get rid of
this data structure.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 63 +++--
mm/swap.c | 18 ++--
2 files changed, 13 insertions(+), 68
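For readers following the conversion, the folio_batch side keeps the familiar fill-then-release shape. A rough illustrative fragment, not part of this patch (put_folios() is a made-up helper):

static void put_folios(struct folio **folios, unsigned int nr)
{
	struct folio_batch fbatch;
	unsigned int i;

	folio_batch_init(&fbatch);
	for (i = 0; i < nr; i++) {
		/* folio_batch_add() returns the space left; 0 means full. */
		if (!folio_batch_add(&fbatch, folios[i]))
			folio_batch_release(&fbatch);
	}
	folio_batch_release(&fbatch);
}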
These files no longer need pagevec.h, mostly due to function declarations
being moved out of it.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/fadvise.c| 1 -
mm/memory_hotplug.c | 1 -
mm/migrate.c| 1 -
mm/readahead.c | 1 -
mm/swap_state.c | 1 -
5 files changed, 5
We don't use pagevecs for the LRU cache any more, and we don't know
that the failed invalidations were due to the folio being in an
LRU cache. So rename it to be more accurate.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/fadvise.c | 16 +++-
mm/internal.h |
Remove one of the last remaining users of pagevec.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/gpu/drm/i915/i915_gpu_error.c | 50 +--
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c
b/drivers/gpu/drm/i915
Most of these should just refer to the LRU cache rather than the
data structure used to implement the LRU cache.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/huge_memory.c| 2 +-
mm/khugepaged.c | 6 +++---
mm/ksm.c| 6 +++---
mm/memory.c | 6 +++---
mm
Remove the last usage of pagevecs. There is a slight change here; we
now free the folio_batch as soon as it fills up instead of freeing the
folio_batch when we try to add a page to a full batch. This should have
no effect in practice.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux
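To make that behavioural note concrete, an illustrative comparison of the two idioms (not lifted from the patch):

/* Before: drain only when the next add would find the batch full. */
if (!pagevec_space(&pvec))
	pagevec_release(&pvec);
pagevec_add(&pvec, page);

/* After: drain immediately after the add that used the last slot. */
if (!folio_batch_add(&fbatch, folio))
	folio_batch_release(&fbatch);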
All callers have now been converted to call check_move_unevictable_folios().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 1 -
mm/vmscan.c | 17 -
2 files changed, 18 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 55 +--
1 file changed, 31 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 33d5d5178103..8f1633c3fb93 100644
--- a
This performs the same role as __pagevec_release(), i.e. it skips the
check for a batch length of 0.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index
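In other words, callers that already know the batch is non-empty can call the double-underscore variant directly; the relationship is roughly as follows (sketch, not the exact diff):

void __folio_batch_release(struct folio_batch *fbatch);

static inline void folio_batch_release(struct folio_batch *fbatch)
{
	if (folio_batch_count(fbatch))
		__folio_batch_release(fbatch);
}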
We're almost done with the pagevec -> folio_batch conversion. Finish
the job.
Matthew Wilcox (Oracle) (13):
afs: Convert pagevec to folio_batch in afs_extend_writeback()
mm: Add __folio_batch_release()
scatterlist: Add sg_set_folio()
i915: Convert shmem_sg_free_table()
Remove a few hidden compound_head() calls by converting the returned
page to a folio once and using the folio APIs.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/gpu/drm/drm_gem.c | 68 ++-
1 file changed, 39 insertions(+), 29 deletions(-)
diff --git a
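The pattern, shown as an illustrative fragment rather than the actual drm_gem.c hunks (put_dirty_page() is a made-up name): convert once with page_folio() and stay in folio space, so the folio_*() helpers never re-derive the head page.

static void put_dirty_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	folio_mark_dirty(folio);
	folio_mark_accessed(folio);
	folio_put(folio);
}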
Removes a folio->page->folio conversion for each folio that's involved.
More importantly, removes one of the last few uses of a pagevec.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/write.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/
This should always have been called folio_batch_count().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 42aad53e382e..3a9d29dd28a3 100644
--- a
olios.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/scatterlist.h | 24
1 file changed, 24 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index ec46d8e8e49d..77df3d7b18a6 100644
--- a/include/linux/scatterlist.h
+++ b/include/
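A hypothetical usage sketch, assuming sg_set_folio() mirrors sg_set_page() with (sg, folio, length, offset); fill_sg_from_folios() is a made-up helper:

static void fill_sg_from_folios(struct scatterlist *sgl,
				struct folio **folios, unsigned int nr)
{
	struct scatterlist *sg = sgl;
	unsigned int i;

	sg_init_table(sgl, nr);
	for (i = 0; i < nr; i++) {
		/* One entry per folio, covering the whole folio. */
		sg_set_folio(sg, folios[i], folio_size(folios[i]), 0);
		sg = sg_next(sg);
	}
}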
The current code does not protect against swapoff of the underlying
swap device, so this is a bug fix as well as a worthwhile reduction in
code complexity.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/memcontrol.c | 24 ++--
1 file changed, 2 insertions(+), 22 deletions(-)
Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Johannes Weiner
---
mm/madvise.c | 21 -
1 file
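The shape of the iteration, as an illustrative sketch rather than the actual madvise hunks (count_swap_entries() is a made-up helper):

static unsigned long count_swap_entries(struct address_space *mapping,
					pgoff_t start, pgoff_t end)
{
	XA_STATE(xas, &mapping->i_pages, start);
	unsigned long nr = 0;
	void *entry;

	rcu_read_lock();
	xas_for_each(&xas, entry, end) {
		/* Present pages are skipped without taking a reference. */
		if (!xa_is_value(entry))
			continue;
		nr++;	/* this index holds a swap (value) entry */
	}
	rcu_read_unlock();

	return nr;
}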
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c| 13 +++--
mm/swap_state.c | 2 +-
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index d64f6f76bc0b..2f134383b0ae 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1567,19 +1567,19 @@ EXPORT_S
Avoid bumping the refcount on pages when we're only interested in the
swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Johannes Weiner
---
fs/proc/task_mmu.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
Convert shmem_getpage_gfp() (the only remaining caller of
find_lock_entry()) to cope with a head page being returned instead of
the subpage for the index.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 9 +
mm/filemap.c| 25
i915 does not want to see value entries. Switch it to use
find_lock_page() instead, and remove the export of find_lock_entry().
Move find_lock_entry() and find_get_entry() to mm/internal.h to discourage
any future use.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Johannes Weiner
Provide this functionality from the swap cache. It's useful for
more than just mincore().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 7 +++
mm/mincore.c | 28 ++--
mm/swap_state.c | 32
3
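A hedged sketch of what such a helper looks like (assumed shape, not the exact patch): consult the page cache first, and fall back to the swap cache only when the entry turns out to be a swap (value) entry. The names below are made up (incore_page_sketch).

static struct page *incore_page_sketch(struct address_space *mapping,
				       pgoff_t index)
{
	struct page *page = find_get_entry(mapping, index);
	swp_entry_t swp;

	if (!xa_is_value(page))
		return page;		/* NULL, or a real pagecache page */
	if (!shmem_mapping(mapping))
		return NULL;		/* only shmem keeps swap entries here */

	swp = radix_to_swp_entry(page);
	/* The real helper is also expected to take get_swap_device() here
	 * to protect against a concurrent swapoff, as the memcontrol
	 * conversion in this series notes. */
	return find_get_page(swap_address_space(swp), swp_offset(swp));
}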
- Rename thp_valid_index() to thp_contains()
- Fix thp_contains() for hugetlbfs and swapcache
- Add find_lock_head() wrapper around pagecache_get_page()
Matthew Wilcox (Oracle) (8):
mm: Factor find_get_incore_page out of mincore_page
mm: Use find_get_incore_page in memcontrol
mm: Optimi
Add a new FGP_HEAD flag which avoids calling find_subpage() and add a
convenience wrapper for it.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 32 ++--
mm/filemap.c| 9 ++---
2 files changed, 32 insertions(+), 9 deletions(-)
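The convenience wrapper is presumably along these lines (sketch only; a gfp of 0 because the caller does not want allocation):

static inline struct page *find_lock_head(struct address_space *mapping,
					  pgoff_t index)
{
	/* FGP_HEAD: return the head page rather than the subpage. */
	return pagecache_get_page(mapping, index, FGP_LOCK | FGP_HEAD, 0);
}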
This avoids a call to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 6594baae7cd2..8c354277108d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1667,7 +1667,6
The current code does not protect against swapoff of the underlying
swap device, so this is a bug fix as well as a worthwhile reduction in
code complexity.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/memcontrol.c | 25 ++---
1 file changed, 2 insertions(+), 23 deletions(-)
Avoid bumping the refcount on pages when we're only interested in the
swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/proc/task_mmu.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5066b0251ed8..e42d9e5
i915 does not want to see value entries. Switch it to use
find_lock_page() instead, and remove the export of find_lock_entry().
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 ++--
mm/filemap.c | 1 -
2 files changed, 2
callers do
is find the head page, which we just threw away. As part of auditing
all the callers, I found some misuses of the APIs and some plain
inefficiencies that I've fixed.
The diffstat is unflattering, but I added more kernel-doc.
Matthew Wilcox (Oracle) (8):
mm: Factor find_get_swap_pag
There are only three callers remaining of find_get_entry().
find_get_swap_page() is happy to get the head page instead of the subpage.
Add find_subpage() calls to find_lock_entry() and pagecache_get_page()
to avoid auditing all their callers.
Signed-off-by: Matthew Wilcox (Oracle)
---
include
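For context, the adjustment inside find_lock_entry() and pagecache_get_page() amounts to something like this, shown here wrapped in a made-up function (lookup_subpage()) for illustration:

static struct page *lookup_subpage(struct address_space *mapping,
				   pgoff_t index)
{
	struct page *page = find_get_entry(mapping, index);

	/* Translate the head page back to the precise subpage for @index. */
	if (page && !xa_is_value(page))
		page = find_subpage(page, index);

	return page;
}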
Convert the one caller of find_lock_entry() to cope with a head page
being returned instead of the subpage for the index.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 12
mm/filemap.c| 25 +++--
mm/shmem.c | 15
Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/madvise.c | 21 -
1 file changed, 12 insertions
Provide this functionality from the swap cache. It's useful for
more than just mincore().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 7 +++
mm/mincore.c | 28 ++--
mm/swap_state.c | 31 +++
3