Use per-vma locks when reading /proc/pid/smaps and /proc/pid/numa_maps,
similar to /proc/pid/maps, to reduce contention on the central mmap_lock.
One major difference between maps and smaps/numa_maps reading is that the
latter performs a page table walk, which can't be done under RCU because
it may sleep. Therefore we drop the RCU read lock before this walk while
keeping the VMA locked. After the walk we retake the RCU read lock, reset
the VMA iterator and proceed to the next VMA.
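The lock ordering described above looks roughly like the following. This
is an illustrative sketch only, not the actual diff; the helper names
(lock_next_vma(), smaps_walk_ops, etc.) and the surrounding iteration
code are simplified:

```c
/*
 * Sketch: per-VMA locked smaps read. The per-VMA read lock is held
 * across the (possibly sleeping) page table walk, while the RCU read
 * lock is dropped around it.
 */
rcu_read_lock();
vma = lock_next_vma(mm, &vmi, addr);	/* takes per-VMA read lock */

rcu_read_unlock();			/* the walk may sleep; can't stay in RCU */
walk_page_vma(vma, &smaps_walk_ops, mss);

rcu_read_lock();			/* back under RCU for iteration */
vma_iter_set(&vmi, vma->vm_end);	/* iterator may be stale after sleeping */
vma_end_read(vma);			/* release the per-VMA lock */
```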

The last two patches extend /proc/pid/maps test to cover /proc/pid/smaps
reading during concurrent address space modification.

Changes since v1 [1]:
- moved drop_rcu earlier in smap_gather_stats() to avoid sleeping under
  the RCU read lock in shmem_swap_usage(), per Sashiko
- skip page walks for the gate VMA in show_numa_map(), per Sashiko
- introduced parse_vma_line() and copy_line() helper functions to ensure
  the input string passed to sscanf() is always NUL-terminated, per Sashiko
- used FIXTURE_VARIANT to run both maps and smaps tests in a single
  test run, per Liam R. Howlett

Applies over mm-unstable.

[1] https://lore.kernel.org/all/[email protected]/

Suren Baghdasaryan (3):
  fs/proc/task_mmu: read proc/pid/{smaps|numa_maps} under per-vma lock
  selftests/proc: ensure the test is performed at the right page
    boundary
  selftests/proc: add /proc/pid/smaps tearing tests

 fs/proc/task_mmu.c                            | 195 +++++++++---
 tools/testing/selftests/proc/proc-maps-race.c | 293 ++++++++++++++----
 2 files changed, 387 insertions(+), 101 deletions(-)


base-commit: 761e9fad336afb6fe2cd488c7bd522e2783064fc
-- 
2.54.0.545.g6539524ca2-goog
