Really sorry, it looks like there is some issue with my mail filters. I was not aware this patch had already been reviewed; I only noticed the reviews by chance on the qemu-devel web archive. Sorry for missing it. I will re-post by early next week.

On 25/02/25 2:05 am, Peter Xu wrote:

On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
is accounted for in the OPTMEM limit. If there is any error with sending
zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
socket error queue. This error queue is freed when userspace reads it.

Usually, if there are continuous failures, we merge the metadata into a single
SKB and free another one. However, if there is any out-of-order processing or
intermittent zerocopy failures, this error chain can grow significantly,
exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
allocate any new SKB, leading to an ENOBUFS error.

To work around this, if we encounter an ENOBUFS error with a zerocopy sendmsg,
we flush the error queue and retry once more.
Could you add some more info on how this issue can be reproduced?  When
it happens, it should only report zerocopy as skipped, and that's the only
case where the change below would be helpful, am I right?  (otherwise it
should fail sendmsg() anyway in some other form)

We allocate some memory for zerocopy metadata; this is not accounted in the tcp_send_queue, but it is accounted against optmem_limit.

https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1607

Also, when the zerocopy data is sent and acked, we try to free this allocated skb, as seen in the code below.

https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1751

If we get __msg_zerocopy_callback() on contiguous ranges and skb_zerocopy_notify_extend() succeeds, we merge the ranges and free the current skb. If not, we insert that skb into the error queue, and it won't be freed until we flush the error queue from userspace. This can happen when zerocopy packets are skipped intermittently, or when they are always skipped but we get out-of-order acks on packets. As a result, this error chain keeps growing, exhausting optmem_limit, so when a new zerocopy sendmsg request comes in, it cannot allocate the metadata and fails with ENOBUFS.

I understand there is a separate bug behind why zerocopy packets are getting skipped, which could be specific to our deployment. But live migration should not fail either way; it is fine to mark zerocopy as skipped, but not to fail, right?



Please copy Fabiano when repost.

sure


Signed-off-by: Manish Mishra <manish.mis...@nutanix.com>
---
  include/io/channel-socket.h |  1 +
  io/channel-socket.c         | 52 ++++++++++++++++++++++++++++++++-----
  2 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index ab15577d38..6cfc66eb5b 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -49,6 +49,7 @@ struct QIOChannelSocket {
      socklen_t remoteAddrLen;
      ssize_t zero_copy_queued;
      ssize_t zero_copy_sent;
+    bool new_zero_copy_sent_success;
  };
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 608bcf066e..c7f576290f 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -37,6 +37,11 @@
#define SOCKET_MAX_FDS 16

+#ifdef QEMU_MSG_ZEROCOPY
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             Error **errp);
+#endif
+
  SocketAddress *
  qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
                                       Error **errp)
@@ -65,6 +70,7 @@ qio_channel_socket_new(void)
      sioc->fd = -1;
      sioc->zero_copy_queued = 0;
      sioc->zero_copy_sent = 0;
+    sioc->new_zero_copy_sent_success = false;

      ioc = QIO_CHANNEL(sioc);
      qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
      size_t fdsize = sizeof(int) * nfds;
      struct cmsghdr *cmsg;
      int sflags = 0;
+    bool zero_copy_flush_pending = true;
It'll be nice to add some comment above this variable explaining the issue.

sure


memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
@@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
              goto retry;
          case ENOBUFS:
              if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
-                error_setg_errno(errp, errno,
-                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
-                return -1;
+                if (zero_copy_flush_pending) {
+                    ret = qio_channel_socket_flush_internal(ioc, errp);
Calling a socket specific helper in generic qiochannel path may not be a
good idea.

Maybe we could still stick with qio_channel_flush(), but instead add a new
parameter to qio_channel_flush(..., bool *succeeded), only pass in
"succeeded == NULL" here, not collect the cached result. Then the other
multifd use case it can pass in a valid "succeeded" pointer.

sure


+                    if (ret < 0) {
+                        error_setg_errno(errp, errno,
+                                         "Zerocopy flush failed");
+                        return -1;
+                    }
+                    zero_copy_flush_pending = false;
+                    goto retry;
+                } else {
+                    error_setg_errno(errp, errno,
+                                     "Process can't lock enough memory for "
+                                     "using MSG_ZEROCOPY");
+                    return -1;
+                }
              }
              break;
          }
@@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
#ifdef QEMU_MSG_ZEROCOPY
-static int qio_channel_socket_flush(QIOChannel *ioc,
-                                    Error **errp)
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             Error **errp)
  {
      QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
      struct msghdr msg = {};
@@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
          /* No errors, count successfully finished sendmsg()*/
          sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;

-        /* If any sendmsg() succeeded using zero copy, return 0 at the end */
+        /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
          if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
-            ret = 0;
After this change, IIUC this function will return either -1 or 1, never 0.
Not sure whether it's intended.  It may be worth adding a comment above the function.

Yes, this function won't return 0, but it is anyway a new, internal function.

qio_channel_socket_flush() will still have the same behavior.

+            sioc->new_zero_copy_sent_success = true;
          }
      }

      return ret;
  }
+static int qio_channel_socket_flush(QIOChannel *ioc,
+                                    Error **errp)
+{
+    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    int ret;
+
+    ret = qio_channel_socket_flush_internal(ioc, errp);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (sioc->new_zero_copy_sent_success) {
+        sioc->new_zero_copy_sent_success = FALSE;
+        ret = 0;
+    }
+
+    return ret;
+}
+
  #endif /* QEMU_MSG_ZEROCOPY */
static int
--
2.43.0

