Displaying two registers per line takes 15 lines. That improves to just
10 lines if we display three registers per line, which reduces the amount
of information lost when oopses are cut off. It stays within 80 columns
and matches x86-64.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/arm64
From: Kent Overstreet
This converts from seq_buf to printbuf. Here we're using printbuf with
an external buffer, meaning it's a direct conversion.
Signed-off-by: Kent Overstreet
Cc: Dan Williams
Cc: Dave Hansen
Cc: nvd...@lists.linux.dev
---
tools/testing/nvdimm/test/ndtest.c | 22 ++
he current cpu_relax()
implementation intact for now.
The API change breaks all users except for the two which have been
converted. This is an RFC, and I'm willing to fix all the rest.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/dcache.c | 25 ++--
pendencies.
This split_lock would also give us somewhere to queue waiters, should we
choose to do that. Or a centralised place to handle PREEMPT_RT mutexes.
But I'll leave that for someone who knows what they're doing; for now
this keeps the same implementation.
Matthew Wilcox (Oracle)
I want to use split_lock_init() for a global symbol, so rename this
local one.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/x86/kernel/cpu/intel.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index
Bitlocks do not currently participate in lockdep. Conceptually, a
bit_spinlock is a split lock, eg across each bucket in a hash table.
The struct split_lock gives us somewhere to record the lockdep_map.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/split_lock.h | 37
Make bit_spin_lock() and variants variadic to help with the transition.
The split_lock parameter will become mandatory at the end of the series.
Also add bit_spin_lock_nested() and bit_spin_unlock_assign() which will
both be used by the rhashtable code later.
Signed-off-by: Matthew Wilcox (Oracle
Make hlist_bl_lock() and hlist_bl_unlock() variadic to help with the
transition. Also add hlist_bl_lock_nested().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/list_bl.h | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/linux/list_bl.h b/include
Allow lockdep to track the dm-snap bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/md/dm-snap.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 8f3ad87e6117..4c2a01e433de 100644
--- a/drivers/md
Allow lockdep to track the d_hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/dcache.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 7d24ff7eb206..a3861d330001 100644
--- a/fs/dcache.c
+++ b/fs
Allow lockdep to track the fscache cookie hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/fscache/cookie.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 751bc5b1cddf..65d514d12592 100644
--- a
Allow lockdep to track the hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/gfs2/quota.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 9b1aca7e1264..a933eb441ee9 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2
Allow lockdep to track the mbcache hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/mbcache.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 97c54d3a2227..4ce03ea348dd 100644
--- a/fs/mbcache.c
Now that all users have been converted, require the split_lock parameter
be passed to hlist_bl_lock() and hlist_bl_unlock().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/list_bl.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/list_bl.h b
Allow lockdep to track the airq bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/s390/include/asm/airq.h | 5 +++--
drivers/s390/cio/airq.c | 3 +++
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/airq.h b/arch/s390/include/asm/airq.h
Allow lockdep to track the zram bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/block/zram/zram_drv.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index cf8deecc39ef..8b678cc6ed21
Allow lockdep to track the journal bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/jbd2/journal.c | 18 ++
include/linux/jbd2.h | 10 ++
2 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index
Allow lockdep to track slub's page bit spin lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/slub.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 9c0e26ddf300..2ed2abe080ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -346,19 +3
Allow lockdep to track zsmalloc's pin bit spin lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/zsmalloc.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9a7c91c14b84..9d89a1857901 100644
--- a/mm/zsmalloc.c
+++
NeilBrown noticed the same problem with bit spinlocks that I did,
but chose to solve it locally in the rhashtable implementation rather
than lift it all the way to the bit spin lock implementation. Convert
rhashtables to use split_locks.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: NeilBrown
Now that all users have been converted, require the split_lock parameter
be passed to bit_spin_lock(), bit_spin_unlock() and variants. Use it
to track the lockdep state of each lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/bit_spinlock.h | 26 ++
1 file
It's often inconvenient to use BIO_MAX_PAGES because min() requires both
arguments to have the same signedness. Introduce bio_max_segs() and change
BIO_MAX_PAGES to be unsigned to make it easier for the users.
Signed-off-by: Matthew Wilcox (Oracle)
---
v2:
- Rename from bio_limit() to bio_max_segs()
- Reba
All callers of find_get_entries() use a pvec, so pass it directly
instead of manipulating it in the caller.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 3 +--
mm/filemap.c | 14 ++
mm/shmem.c | 11 +++
mm/swap.c
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/shmem.c | 11 +--
1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 271548ca20f3..a7bbc4ed9677 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -840,7 +840,6 @@ unsigned long shmem_swap_usage(struct vm_area_struct
Simplifies the callers and uses the existing functionality of
find_get_entries().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 5 ++---
mm/swap.c | 8
mm/truncate.c | 36
3 files changed, 14
All callers want to fetch the full size of the pvec.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 2 +-
mm/swap.c | 4 ++--
mm/truncate.c | 10 --
3 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/include/linux/pagevec.h b
Use the XArray directly instead of using the pagevec abstraction.
The code is simpler and more efficient.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/shmem.c | 61 +-
1 file changed, 24 insertions(+), 37 deletions(-)
diff --git a/mm/shmem.c
This simplifies the callers and leads to a more efficient implementation
since the XArray has this functionality already.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 4 ++--
mm/filemap.c | 9 +
mm/shmem.c | 10 --
mm/swap.c
pagevec_lookup_entries() is now just a wrapper around find_get_entries()
so remove it and convert all its callers.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagevec.h | 3 ---
mm/swap.c | 36 ++--
mm/truncate.c | 9
whether 'end'
is inclusive or exclusive and I didn't want to make extensive changes
to ensure they were consistent.
Matthew Wilcox (Oracle) (7):
mm: Use pagevec_lookup in shmem_unlock_mapping
mm: Rewrite shmem_seek_hole_data
mm: Add an 'end' parameter to find_get_en
Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/madvise.c | 21 -
1 file changed, 12 insertions
i915 does not want to see value entries. Switch it to use
find_lock_page() instead, and remove the export of find_lock_entry().
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 ++--
mm/filemap.c | 1 -
2 files changed, 2
Provide this functionality from the swap cache. It's useful for
more than just mincore().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 7 +++
mm/mincore.c | 28 ++--
mm/swap_state.c | 31 +++
3
There are only three callers remaining of find_get_entry().
find_get_swap_page() is happy to get the head page instead of the subpage.
Add find_subpage() calls to find_lock_entry() and pagecache_get_page()
to avoid auditing all their callers.
Signed-off-by: Matthew Wilcox (Oracle)
---
include
Convert the one caller of find_lock_entry() to cope with a head page
being returned instead of the subpage for the index.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 12
mm/filemap.c | 25 +++--
mm/shmem.c | 15
The current code does not protect against swapoff of the underlying
swap device, so this is a bug fix as well as a worthwhile reduction in
code complexity.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/memcontrol.c | 25 ++---
1 file changed, 2 insertions(+), 23 deletions
Avoid bumping the refcount on pages when we're only interested in the
swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/proc/task_mmu.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5066b0251ed8..e42d9e5
This avoids a call to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 6594baae7cd2..8c354277108d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1667,7 +1667,6
callers do
is find the head page, which we just threw away. As part of auditing
all the callers, I found some misuses of the APIs and some plain
inefficiencies that I've fixed.
The diffstat is unflattering, but I added more kernel-doc.
Matthew Wilcox (Oracle) (8):
mm: Factor find_get_swap_pag
Convert unlock_page() to call unlock_folio(). By using a folio we avoid
doing a repeated compound_head(). This shortens the function from 120
bytes to 76 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16 +++-
mm/filemap.c | 27
Pages being added to the page cache should already be folios, so
turn add_to_page_cache_lru() into a wrapper. Saves hundreds of
bytes of text.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 13 +++--
mm/filemap.c | 62
With my config, this function shrinks from 480 bytes to 240 bytes
due to elimination of repeated calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/filemap.c b/mm
Wilcox (Oracle)
---
mm/filemap.c | 45 -
1 file changed, 24 insertions(+), 21 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index f1b65f777539..56ff6aa24265 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1673,33 +1673,33 @@ EXPORT_SYMBOL
These new functions are the folio analogues of the PageFlags functions.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page
This is like lock_page() but for use by callers who know they have a folio.
Convert __lock_page() to be __lock_folio(). This saves one call to
compound_head() per contended call to lock_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 21 +++--
mm
t folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 5 +
include/linux/mm_types.h | 17 +
2 files changed, 22 insertions(+)
diff --git a/inc
If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head(). Also skips the
devmap checks.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff
Move the declaration into mm/internal.h and rename the function to
rotate_reclaimable_folio(). This eliminates all five of the calls to
compound_head() in this function.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 1 -
mm/filemap.c | 2 +-
mm/internal.h
ns to use
folios. Eventually, we'll be able to convert some of the PageFoo flags
to be only available as FolioFoo flags.
I have a Zoom call this Friday at 18:00 UTC (13:00 Eastern,
10:00 Pacific, 03:00 Tokyo, 05:00 Sydney, 19:00 Berlin).
Meeting ID: 960 8868 8749, passcode 2097152
Feel
If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b
This is like lock_page_killable() but for use by callers who
know they have a folio. Convert __lock_page_killable() to be
__lock_folio_killable(). This saves one call to compound_head() per
contended call to lock_page_killable().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux
in this patch series is to make
it clear that one cannot wait (for the page lock or writeback) on a
tail page. I don't believe there were any places which could miss a
wakeup due to this, but it's hard to prove that without struct folio.
Now the compiler proves it for us.
Matthew Wilc
This is just a convenience wrapper for callers with folios; pgdat can
be reached from tail pages as well as head pages.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index
Add compatibility wrappers for code which has not yet been converted
to use folios.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 12 ++--
mm/page-writeback.c | 27 +--
2 files changed, 23 insertions(+), 16 deletions(-)
diff --git a
hen adding private data to the head page / folio).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm_types.h | 16 ++
include/linux/pagemap.h | 48
2 files changed, 45 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm
There's already a hidden compound_head() call in trylock_page(), so
just make it explicit in the caller, which may later have a folio
for its own reasons. This saves a call to compound_head() inside
__lock_page_or_retry().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h
Cachefiles was relying on wait_page_key and wait_bit_key being the
same layout, which is fragile. Now that wait_page_key is exposed in
the pagemap.h header, we can remove that fragility. Also switch it
to use the folio directly instead of the page.
Signed-off-by: Matthew Wilcox (Oracle)
All callers have a folio, so use it directly.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 4ece44f694f6..a2d9ee6e78ae 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
Reinforce that if we're waiting for a bit in a struct page, that's
actually in the head page by changing the type from page to folio.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 6 +++---
mm/filemap.c | 30 --
2 files c
Turn wait_on_page_locked() and wait_on_page_locked_killable() into
wrappers.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index
The one caller of test_clear_page_writeback() already has a folio, so make
it clear that test_clear_page_writeback() operates on the entire folio.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 2 +-
mm/filemap.c | 2 +-
mm/page-writeback.c | 18
Add a wrapper function for users that are not yet converted to folios.
With a distro config, this function shrinks from 213 bytes to 105 bytes
due to elimination of repeated calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 6 +-
mm/filemap.c
When the caller already has a folio, this saves a call to compound_head().
If not, the call to compound_head() is merely moved.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/io_uring.c | 2 +-
include/linux/pagemap.h | 14 +++---
mm/filemap.c | 6 +++---
3 files
The callers will eventually all have a folio, but for now do the
conversion at the call sites.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 95015bc57bb7..648f78577ab7
We must deal with folios here otherwise we'll get the wrong waitqueue
and fail to receive wakeups.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/write.c | 2 +-
include/linux/pagemap.h | 14 ++-
mm/filemap.c | 54 ++--
This saves a few calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 648f78577ab7..e997f4424ed9 100644
--- a/mm/filemap.c
+++ b/mm
callers to operate on folios.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16 +++-
mm/filemap.c | 27 ++-
2 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index
mem_cgroup_page_lruvec() already expects a head page, so this will add some
typesafety once we can remove mem_cgroup_page_lruvec().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/memcontrol.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/include/linux/memcontrol.h b
.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/fscache.h | 6 +++
include/linux/page-flags.h | 104 ++---
2 files changed, 90 insertions(+), 20 deletions(-)
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index a1c928fe98e7..f1a5eddaa2c0
folio_index() is the equivalent of page_index() for folios. folio_page()
finds the page in a folio for a page cache index. folio_contains()
tells you whether a folio contains a particular page cache index.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 23
These are the folio equivalent of page_mapping() and page_file_mapping().
Adjust page_file_mapping() and page_mapping_file() to use folios
internally.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 23 +++
mm/swapfile.c | 6 +++---
mm/util.c
If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b
Allow page counters to be more readily modified by callers which have
a folio. Name these wrappers with 'stat' instead of 'state' as requested
by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847sudr-kt+46ft3+xffgiwpgthvm7djwgdi4cvr...@mail.gmail.com/
Signed-off-by: Matthew Wilcox (Oracle)
t folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 26 ++
include/linux/mm_types.h | 17 +
2 files changed, 43 insertions
These are the folio equivalents of VM_BUG_ON_PAGE and VM_WARN_ON_ONCE_PAGE.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mmdebug.h | 20
1 file changed, 20 insertions(+)
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5d0767cb424a
The memcontrol code already assumes that page_memcg() will be called
with a non-tail page, so make that more natural by wrapping it with a
folio API.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/memcontrol.h | 16
mm/memcontrol.c | 36
4 and powerpc32 builds
as well as x86, but I wouldn't be surprised if the buildbots tell me I
missed something.
Matthew Wilcox (Oracle) (4):
mm/vmalloc: Change the 'caller' type to unsigned long
mm/util: Add kvmalloc_node_caller
mm/vmalloc: Use kvmalloc to allocate the table of
explicit function name.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/arm/include/asm/io.h | 6 +--
arch/arm/include/asm/mach/map.h | 3 --
arch/arm/kernel/module.c | 4 +-
arch/arm/mach-imx/mm-imx3.c | 2 +-
arch/arm/mach-ixp4xx/common.c
Allow the caller of kvmalloc to specify who counts as the allocator
of the memory instead of assuming it's the immediate caller.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 4 +++-
include/linux/slab.h | 2 ++
mm/util.c
ation speed of vmalloc(4MB) by approximately
5% in our benchmark. It's still dominated by the 1024 calls to
alloc_pages_node(), which will be the subject of a later patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/vmalloc.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff
.
Signed-off-by: Matthew Wilcox (Oracle)
---
MAINTAINERS | 7 +++
arch/arm64/kernel/module.c| 3 +--
arch/arm64/net/bpf_jit_comp.c | 3 +--
arch/parisc/kernel/module.c | 5 ++---
arch/x86/hyperv/hv_init.c | 3 +--
5 files changed, 12 insertions(+), 9 deletions(-)
diff
Open-coding this function meant it missed out on the recent bugfix
for waiters being woken by a delayed wake event from a previous
instantiation of the page.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/write.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/afs
This is the killable version of wait_on_page_writeback.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 1 +
mm/page-writeback.c | 16
2 files changed, 17 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 139678f382ff
pages instead of units
of folios (Zi Yan)
v3:
- Rebase on next-20210127. Two major sources of conflict, the
generic_file_buffered_read refactoring (in akpm tree) and the
fscache work (in dhowells tree).
v2:
- Pare patch series back to just infrastructure and the page waiting
Some functions
grow a little while others shrink. I presume the compiler is making
different inlining decisions.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/in
If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 26 +-
1 file changed, 17 insertions(+), 9 deletions(-)
diff
page.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 78
include/linux/mm_types.h | 36 +++
2 files changed, 114 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cb1e191da319..9b7e3fa12fd3
These are the folio equivalents of VM_BUG_ON_PAGE and VM_WARN_ON_ONCE_PAGE.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mmdebug.h | 20
1 file changed, 20 insertions(+)
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index
Cachefiles was relying on wait_page_key and wait_bit_key being the
same layout, which is fragile. Now that wait_page_key is exposed in
the pagemap.h header, we can remove that fragility.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/cachefiles/rdwr.c | 7 +++
include/linux/pagemap.h | 1
These are just convenience wrappers for callers with folios; pgdat and
zone can be reached from tail pages as well as head pages.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/mm.h
d() in get_page()
& put_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm_types.h | 16 ++
include/linux/pagemap.h | 48
2 files changed, 45 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm_types.h b/include/lin
saves 1727 bytes of text with the distro-derived config that
I'm testing due to removing a double call to compound_head() in
PageSwapCache().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 120 ++---
1 file changed, 100 insertions(+
t any path that uses unlock_folio() will execute
4 fewer instructions.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 3 ++-
mm/filemap.c | 27 ++-
mm/folio-compat.c | 6 ++
3 files changed, 18 insertions(+), 18 deletions(-)
ntire sequence will disappear.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 14 --
include/linux/pagemap.h | 35 +--
include/linux/swap.h | 6 ++
mm/Makefile | 2 +-
mm/folio-compat.c | 13
es to 403 bytes, saving 111 bytes. The text
shrinks by 132 bytes in total.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/io_uring.c | 2 +-
include/linux/pagemap.h | 17 -
mm/filemap.c | 31 ---
3 files changed, 17 insertions(+
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 9 ++---
mm/filemap.c | 10 --
mm/memory.c | 8
3 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3cd1b5e28593..38f4ee28a3a5 100644