On Mon, Apr 28, 2025 at 01:46:43PM -0500, Eric Blake wrote:
> This is a rebase of patches that Rich Jones first posted in 2023:
> https://lists.gnu.org/archive/html/qemu-devel/2023-03/msg03320.html
> 
> I'm still leaving the series in an RFC state while I'm trying to
> improve the code to better play with all of the multi-queue changes in
> the block layer in the meantime.  In my own testing, the patches as
> presented here are not making any noticeable difference in qemu-img
> convert to a local destination file (multi-conn=1 was not much
> different than multi-conn=4, although I did validate that multiple
> sockets were in use and the round robin code was working).  Other
> benchmarks did show improvements, such as qemu-img convert targeting an
> nbdkit server with a curl backend getting a 20% boost when I ramped
> multi-conn from 1 to 4.
>
> I have more patches on top of these to post to the list once I can get
> benchmark numbers that make more sense.

I'm curious whether you are benchmarking UNIX or TCP sockets?

If UNIX sockets, then that recent patch about increasing the socket buffer
size for UNIX sockets rings alarm bells. i.e., I wonder if the artificially
low default UNIX socket buffer size on Linux is preventing you from seeing
any significant multi-conn benefit?

IOW, it may be worth applying that socket buffer size patch on top of this
series before benchmarking again?
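For reference, the effect is easy to observe outside of qemu. A minimal
Python sketch (not from the patch in question; the 2 MiB request size is an
arbitrary illustration) of inspecting and raising SO_SNDBUF on a UNIX
socket -- on Linux the default comes from net.core.wmem_default and any
requested increase is clamped to net.core.wmem_max:

```python
import socket

# Create a UNIX stream socket and read back its default send buffer size.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
default_buf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default SO_SNDBUF:", default_buf)

# Request a larger buffer; the kernel doubles the requested value for
# bookkeeping overhead and clamps the result to net.core.wmem_max.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 2 * 1024 * 1024)
print("after setsockopt:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```

If the printed values barely move, the wmem_max sysctl is the cap that
would need raising before multiple connections can keep more data in
flight.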

> 
> Richard W.M. Jones (4):
>   nbd: Add multi-conn option
>   nbd: Split out block device state from underlying NBD connections
>   nbd: Open multiple NBD connections if multi-conn is set
>   nbd: Enable multi-conn using round-robin
> 
>  qapi/block-core.json |   8 +-
>  block/coroutines.h   |   5 +-
>  block/nbd.c          | 796 +++++++++++++++++++++++++------------------
>  3 files changed, 479 insertions(+), 330 deletions(-)

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

