This is a rebase of patches that Rich Jones first posted in 2023:
> https://lists.gnu.org/archive/html/qemu-devel/2023-03/msg03320.html
I'm still leaving the series in an RFC state while I try to get the code to play better with all of the multi-queue changes that have landed in the block layer in the meantime.

In my own testing, the patches as presented here make no noticeable difference for qemu-img convert to a local destination file (multi-conn=4 was not much different from multi-conn=1, although I did verify that multiple sockets were in use and that the round-robin code was working). Other benchmarks did show improvements: qemu-img convert reading from an nbdkit server with a curl backend got a 20% boost when I raised multi-conn from 1 to 4 (an example invocation is sketched after the diffstat below). I have more patches on top of these to post to the list once I can get benchmark numbers that make more sense.

Richard W.M. Jones (4):
  nbd: Add multi-conn option
  nbd: Split out block device state from underlying NBD connections
  nbd: Open multiple NBD connections if multi-conn is set
  nbd: Enable multi-conn using round-robin

 qapi/block-core.json |   8 +-
 block/coroutines.h   |   5 +-
 block/nbd.c          | 796 +++++++++++++++++++++++++------------------
 3 files changed, 479 insertions(+), 330 deletions(-)
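For reference, the curl-backed benchmark can be driven roughly along these lines. This is only a sketch: the URL, host, port and file names are placeholders, and the multi-conn value is passed via --image-opts on the assumption that patch 1 of this series exposes it as an NBD block driver option.

  # Serve a remote disk image read-only over NBD using nbdkit's curl
  # plugin (hypothetical URL; nbdkit listens on the default port 10809):
  $ nbdkit -r curl https://example.com/disk.img

  # Convert from the NBD export to a local raw file; --image-opts is
  # used so the proposed multi-conn driver option can be specified:
  $ qemu-img convert -p -O raw \
      --image-opts driver=nbd,server.type=inet,server.host=localhost,server.port=10809,multi-conn=4 \
      /var/tmp/out.raw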
This series makes my day. I have a request from our partners to implement exactly this feature to improve backup restoration performance in their scenario. The case is the following:

+---------+  +----------+              +----------+  +------+
| Storage |->|IPsec gate|...Internet...|IPsec gate|->|Target|
+---------+  +----------+              +----------+  +------+

In this case the performance may be limited not by the Internet or anything else, but by the raw CPU performance of the VMs doing the encryption/decryption.

I was specifically asked to implement multi-conn in the QEMU client, which is used to run the system from the backup while the restoration is performed. This makes a lot of sense to me even if there is no direct benefit without a bottleneck in the middle.

Thank you in advance,
    Den