On Fri, Nov 28, 2025 at 3:43 AM Matthew Wilcox <[email protected]> wrote:
>
> [dropping individuals, leaving only mailing lists. please don't send
> this kind of thing to so many people in future]
>
> On Thu, Nov 27, 2025 at 12:22:16PM +0800, Barry Song wrote:
> > On Thu, Nov 27, 2025 at 12:09 PM Matthew Wilcox <[email protected]> wrote:
> > >
> > > On Thu, Nov 27, 2025 at 09:14:36AM +0800, Barry Song wrote:
> > > > There is no need to always fall back to mmap_lock if the per-VMA
> > > > lock was released only to wait for pagecache or swapcache to
> > > > become ready.
> > >
> > > Something I've been wondering about is removing all the "drop the MM
> > > locks while we wait for I/O" gunk. It's a nice amount of code removed:
> > I think the point is that page fault handlers should avoid holding the VMA
> > lock or mmap_lock for too long while waiting for I/O. Otherwise, those
> > writers and readers will be stuck for a while.
>
> There's a usecase some of us have been discussing off-list for a few
> weeks that our current strategy pessimises. It's a process with
> thousands (maybe tens of thousands) of threads. It has much more mapped
> files than it has memory that cgroups will allow it to use. So on a
> page fault, we drop the vma lock, allocate a page of ram, kick off the
> read, sleep waiting for the folio to come uptodate, once it is return,
> expecting the page to still be there when we reenter filemap_fault.
> But it's under so much memory pressure that it's already been reclaimed
> by the time we get back to it. So all the threads just batter the
> storage re-reading data.
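
For anyone following the thread who is less familiar with this path, my
rough understanding of the behaviour being described is the sketch below.
This is simplified pseudo-code of the idea only, not the actual
filemap_fault() code; handle_read_fault() is a made-up name and all error
handling is omitted:

/*
 * Rough sketch of the "drop the lock, wait for I/O, retry the fault"
 * pattern under discussion -- not the real filemap_fault() path, and
 * handle_read_fault() is a made-up name used only for illustration.
 */
static vm_fault_t handle_read_fault(struct vm_fault *vmf, struct folio *folio)
{
	/* Fast path: folio already uptodate, keep the fault lock held. */
	if (folio_test_uptodate(folio) && folio_trylock(folio))
		return 0;	/* caller goes on to map the folio */

	/*
	 * Slow path: we are about to sleep for I/O, so drop the per-VMA
	 * lock (or mmap_lock) and ask the caller to retry the fault.
	 * Under heavy memory pressure the folio may already have been
	 * reclaimed by the time the fault is retried.
	 */
	release_fault_lock(vmf);
	folio_lock(folio);	/* sleeps until the read completes */
	folio_unlock(folio);
	return VM_FAULT_RETRY;
}
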
Is this entirely the fault of re-entering the page fault? Under extreme
memory pressure, even if we do map the pages, can't they still be
reclaimed quickly afterwards?

>
> If we don't drop the vma lock, we can insert the pages in the page table
> and return, maybe getting some work done before this thread is
> descheduled.

If we need to protect the page from being reclaimed too early, the fix
should reside in LRU management, not in page fault handling. Also, I
gave an example where we may avoid dropping the VMA lock if the folio is
already up to date; that likely corresponds to the case of waiting for
the PTE mapping to complete.

>
> This use case also manages to get utterly hung-up trying to do reclaim
> today with the mmap_lock held. SO it manifests somewhat similarly to
> your problem (everybody ends up blocked on mmap_lock) but it has a
> rather different root cause.
>
> > I agree there’s room for improvement, but merely removing the "drop the MM
> > locks while waiting for I/O" code is unlikely to improve performance.
>
> I'm not sure it'd hurt performance. The "drop mmap locks for I/O" code
> was written before the VMA locking code was written. I don't know that
> it's actually helping these days.

I am concerned that other write paths may still need to modify the VMA,
for example during VMA splitting. Tail latency has long been a
significant issue for Android users, and we have observed it even with
folio_lock, which has much finer granularity than the VMA lock.

>
> > The change would be much more complex, so I’d prefer to land the current
> > patchset first. At least this way, we avoid falling back to mmap_lock and
> > causing contention or priority inversion, with minimal changes.
>
> Uh, this is an RFC patchset. I'm giving you my comment, which is that I
> don't think this is the right direction to go in. Any talk of "landing"
> these patches is extremely premature.

While I agree that there are other approaches worth exploring, I remain
entirely unconvinced that this patchset is the wrong direction. With the
current retry logic, it substantially reduces mmap_lock acquisitions and
is clear low-hanging fruit. Also, I am not referring to landing the RFC
itself, but to a subsequent formal patchset that retries under the
per-VMA lock.

Thanks
Barry
