Hi all,
When page table operations require synchronization with software/lockless
walkers, they call tlb_remove_table_sync_{one,rcu}() after flushing the
TLB (tlb->freed_tables or tlb->unshared_tables).
On architectures where the TLB flush already sends IPIs to all target CPUs,
the subsequent sync IPI broadcast is redundant. This is not only costly on
large systems where it disrupts all CPUs even for single-process page table
operations, but has also been reported to hurt RT workloads[1].
This series introduces tlb_table_flush_implies_ipi_broadcast() to check if
the prior TLB flush already provided the necessary synchronization. When
true, the sync calls can early-return.
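
To make the shape concrete, here is a minimal sketch of the idea (not the
exact code in this series: the static-key name is an assumption, while
tlb_remove_table_smp_sync() is the existing sync callback in
mm/mmu_gather.c and the static_branch conversion is noted in the v6
changelog below):

```c
/* Enabled once at boot on x86 when the native, IPI-based
 * flush_tlb_multi() path is in use; left disabled for PV backends
 * and when INVLPGB is supported. */
static DEFINE_STATIC_KEY_FALSE(tlb_flush_implies_ipi_broadcast);

static inline bool tlb_table_flush_implies_ipi_broadcast(void)
{
	return static_branch_unlikely(&tlb_flush_implies_ipi_broadcast);
}

void tlb_remove_table_sync_one(void)
{
	/* The preceding TLB flush already IPI'd every CPU that could
	 * be walking these page tables, so the extra broadcast is
	 * redundant and can be skipped. */
	if (tlb_table_flush_implies_ipi_broadcast())
		return;

	/* Without that guarantee, an IPI broadcast is still needed to
	 * serialize against concurrent lockless walkers (GUP-fast). */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}
```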
A few cases rely on this synchronization:
1) hugetlb PMD unshare[2]: The problem is not the freeing itself, but the
   last remaining user reusing the PMD table for other purposes after
   unsharing.
2) khugepaged collapse[3]: Ensure no concurrent GUP-fast before collapsing
and (possibly) freeing the page table / re-depositing it.
Two-step plan as David suggested[4]:
Step 1 (this series): Skip the redundant sync when we are 100% certain the
TLB flush sent IPIs. INVLPGB is excluded: when it is supported, we cannot
guarantee IPIs were sent, and leaving it out keeps things clean and simple.
Step 2 (future work): Send targeted IPIs only to CPUs actually doing
software/lockless page table walks, benefiting all architectures.
Step 2 only applies to setups not covered by Step 1, such as x86 with
INVLPGB or arm64. Step 2 work is ongoing; early
attempts showed ~3% GUP-fast overhead. Reducing the overhead requires more
work and tuning; it will be submitted separately once ready.
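
For reference, the Step 2 direction could be sketched roughly as follows
(active_lockless_pt_walk_mm and tlb_remove_table_sync_mm() are the names
from the v4 changelog below; the begin/end helper names are hypothetical,
and this is an illustration of the idea, not code from this series):

```c
/* Which mm, if any, this CPU is currently walking locklessly. */
static DEFINE_PER_CPU(struct mm_struct *, active_lockless_pt_walk_mm);

/* GUP-fast would bracket its lockless walk with these helpers. */
static inline void lockless_pt_walk_begin(struct mm_struct *mm)
{
	this_cpu_write(active_lockless_pt_walk_mm, mm);
	smp_mb();	/* pairs with the scan in tlb_remove_table_sync_mm() */
}

static inline void lockless_pt_walk_end(void)
{
	smp_mb();
	this_cpu_write(active_lockless_pt_walk_mm, NULL);
}

/* IPI only the CPUs actually walking @mm's page tables. */
void tlb_remove_table_sync_mm(struct mm_struct *mm)
{
	int cpu;

	for_each_online_cpu(cpu) {
		if (per_cpu(active_lockless_pt_walk_mm, cpu) == mm)
			smp_call_function_single(cpu,
						 tlb_remove_table_smp_sync,
						 NULL, 1);
	}
}
```

The ~3% GUP-fast overhead mentioned above comes from the per-CPU
bookkeeping and barriers on the walker side, which is why this part was
deferred.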
David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!
[1]
https://lore.kernel.org/linux-mm/[email protected]/
[2]
https://lore.kernel.org/linux-mm/[email protected]/
[3]
https://lore.kernel.org/linux-mm/[email protected]/
[4]
https://lore.kernel.org/linux-mm/[email protected]/
v5 -> v6:
- Use static_branch to eliminate the branch overhead (per Peter)
- https://lore.kernel.org/linux-mm/[email protected]/
v4 -> v5:
- Drop per-CPU tracking (active_lockless_pt_walk_mm) from this series;
defer to Step 2 as it adds ~3% GUP-fast overhead
- Keep pv_ops property false for PV backends like KVM: preempted vCPUs
  cannot be assumed safe (per Sean)
https://lore.kernel.org/linux-mm/[email protected]/
- https://lore.kernel.org/linux-mm/[email protected]/
v3 -> v4:
- Rework based on David's two-step direction and per-CPU idea:
1) Targeted IPIs: per-CPU variable when entering/leaving lockless page
table walk; tlb_remove_table_sync_mm() IPIs only those CPUs.
2) On x86, pv_mmu_ops property set at init to skip the extra sync when
flush_tlb_multi() already sends IPIs.
https://lore.kernel.org/linux-mm/[email protected]/
- https://lore.kernel.org/linux-mm/[email protected]/
v2 -> v3:
- Complete rewrite: use dynamic IPI tracking instead of static checks
(per Dave Hansen, thanks!)
- Track IPIs via mmu_gather: native_flush_tlb_multi() sets flag when
actually sending IPIs
- Motivation for skipping redundant IPIs explained by David:
https://lore.kernel.org/linux-mm/[email protected]/
- https://lore.kernel.org/linux-mm/[email protected]/
v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
any email flood caused by the failed send attempts :(
RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
that don't enable this config.
https://lore.kernel.org/oe-kbuild-all/[email protected]/
- https://lore.kernel.org/linux-mm/[email protected]/
Lance Yang (2):
mm/mmu_gather: prepare to skip redundant sync IPIs
x86/tlb: skip redundant sync IPIs for native TLB flush
arch/x86/include/asm/paravirt_types.h | 5 +++++
arch/x86/include/asm/smp.h | 3 +++
arch/x86/include/asm/tlb.h | 16 +++++++++++++++-
arch/x86/include/asm/tlbflush.h | 3 +++
arch/x86/kernel/paravirt.c | 20 ++++++++++++++++++++
arch/x86/kernel/smpboot.c | 1 +
arch/x86/mm/tlb.c | 14 ++++++++++++++
include/asm-generic/tlb.h | 17 +++++++++++++++++
mm/mmu_gather.c | 15 +++++++++++++++
9 files changed, 93 insertions(+), 1 deletion(-)
--
2.49.0