On 08-11-2024 at 18:32, David Howells wrote:
A rolling buffer is a series of folios held in a list of folio_queues.  New
folios and folio_queue structs may be inserted at the head simultaneously
with spent ones being removed from the tail without the need for locking.

The rolling buffer includes an iov_iter, which must be managed carefully as
the list of folio_queues is extended so that an oops is not incurred because
the iterator was left pointing to the end of a folio_queue segment that got
appended to and then removed.

We need to use the mechanism twice, once for read and once for write, and,
in future patches, we will use a second rolling buffer to handle bounce
buffering for content encryption.

Signed-off-by: David Howells <dhowe...@redhat.com>
cc: Jeff Layton <jlay...@kernel.org>
cc: ne...@lists.linux.dev
cc: linux-fsde...@vger.kernel.org
---
  fs/netfs/Makefile              |   1 +
  fs/netfs/buffered_read.c       | 119 ++++-------------
  fs/netfs/direct_read.c         |  14 +-
  fs/netfs/direct_write.c        |  10 +-
  fs/netfs/internal.h            |   4 -
  fs/netfs/misc.c                | 147 ---------------------
  fs/netfs/objects.c             |   2 +-
  fs/netfs/read_pgpriv2.c        |  32 ++---
  fs/netfs/read_retry.c          |   2 +-
  fs/netfs/rolling_buffer.c      | 225 +++++++++++++++++++++++++++++++++
  fs/netfs/write_collect.c       |  19 +--
  fs/netfs/write_issue.c         |  26 ++--
  include/linux/netfs.h          |  10 +-
  include/linux/rolling_buffer.h |  61 +++++++++
  include/trace/events/netfs.h   |   2 +
  15 files changed, 375 insertions(+), 299 deletions(-)
  create mode 100644 fs/netfs/rolling_buffer.c
  create mode 100644 include/linux/rolling_buffer.h
[...]
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index 88f2adfab75e..0722fb9919a3 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -68,19 +68,19 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
                 * request.
                 */
                if (async || user_backed_iter(iter)) {
-                       n = netfs_extract_user_iter(iter, len, &wreq->iter, 0);
+                       n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
                        if (n < 0) {
                                ret = n;
                                goto out;
                        }
-                       wreq->direct_bv = (struct bio_vec *)wreq->iter.bvec;
+                       wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
                        wreq->direct_bv_count = n;
                        wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
                } else {
-                       wreq->iter = *iter;
+                       wreq->buffer.iter = *iter;
                }
-               wreq->io_iter = wreq->iter;
+               wreq->buffer.iter = wreq->buffer.iter;
Is this correct? As written it's an assignment of wreq->buffer.iter to itself.
        }
                __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
[...]
