At the end of netfs_unlock_read_folio(), in which folios are marked
appropriately for copying to the cache (either by being marked dirty and
having their private data set, or by having PG_private_2 set) and then
unlocked, the folio_queue struct has the entry pointing to the folio
cleared.  This presents a problem for netfs_pgpriv2_write_to_the_cache(),
which is used to write folios marked with PG_private_2 to the cache, as it
expects to be able to trawl the folio_queue list thereafter to find the
relevant folios, leading to a hang.

Fix this by not clearing the folio_queue entry if we're going to do the
deprecated copy-to-cache.  The clearance will be done instead as the folios
are written to the cache.
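
The intended ordering can be illustrated with a small user-space toy model
(a sketch only: struct folio, struct folio_queue and the helpers below are
simplified stand-ins for the kernel structures, not the real API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for the kernel's folio_queue: a fixed array of slots. */
#define NR_SLOTS 4

struct folio { int id; };

struct folio_queue {
	struct folio *slots[NR_SLOTS];
};

static void folioq_clear(struct folio_queue *q, int slot)
{
	q->slots[slot] = NULL;
}

/*
 * Modelled on the fixed netfs_unlock_read_folio(): clear the queue
 * entry now only if the folio is NOT headed for the deprecated
 * PG_private_2 copy-to-cache path.
 */
static void unlock_read_folio(struct folio_queue *q, int slot,
			      bool copy_to_cache)
{
	if (!copy_to_cache)
		folioq_clear(q, slot);
	/* else: leave the entry so the cache writer can find the folio */
}

/*
 * Modelled on netfs_pgpriv2_write_to_the_cache(): trawl the queue for
 * the remaining folios, "write" each one and only then clear its entry.
 */
static int write_to_the_cache(struct folio_queue *q)
{
	int written = 0;

	for (int slot = 0; slot < NR_SLOTS; slot++) {
		if (!q->slots[slot])
			continue;
		/* ... write q->slots[slot] to the cache here ... */
		written++;
		folioq_clear(q, slot);
	}
	return written;
}
```

With the old behaviour (unconditional clearing on unlock), the trawl in
write_to_the_cache() would find nothing to write and the request would
never complete.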

This can be reproduced by starting cachefiles, mounting a ceph filesystem
with "-o fsc" and writing to it.

Fixes: 796a4049640b ("netfs: In readahead, put the folio refs as soon extracted")
Reported-by: Max Kellermann <max.kellerm...@ionos.com>
Closes: https://lore.kernel.org/r/CAKPOu+_4m80thNy5_fvROoxBm689YtA0dZ-=gcmkzwysy4s...@mail.gmail.com/
Signed-off-by: David Howells <dhowe...@redhat.com>
cc: Jeff Layton <jlay...@kernel.org>
cc: Ilya Dryomov <idryo...@gmail.com>
cc: Xiubo Li <xiu...@redhat.com>
cc: ne...@lists.linux.dev
cc: ceph-de...@vger.kernel.org
cc: linux-fsde...@vger.kernel.org
---
 fs/netfs/read_collect.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 47ed3a5044e2..e8624f5c7fcc 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -62,10 +62,14 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
                } else {
                        trace_netfs_folio(folio, netfs_folio_trace_read_done);
                }
+
+               folioq_clear(folioq, slot);
        } else {
                // TODO: Use of PG_private_2 is deprecated.
                if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
                        netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot);
+               else
+                       folioq_clear(folioq, slot);
        }
 
        if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
@@ -77,8 +81,6 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
                        folio_unlock(folio);
                }
        }
-
-       folioq_clear(folioq, slot);
 }
 
 /*
