On Mon, Apr 28, 2025 at 08:27:54PM +0100, Richard W.M. Jones wrote:
> On Mon, Apr 28, 2025 at 01:46:47PM -0500, Eric Blake wrote:
> [...]
> 
> This all looks similar to when I posted it before.  However I noted a
> couple of problems ...
> 
> > (XXX) The strategy here is very naive.  Firstly if you were going to 
> > open them, you'd probably want to open them in parallel.  Secondly it
> > would make sense to delay opening until multiple parallel requests are
> > seen (perhaps above some threshold), so that simple or shortlived NBD
> > operations do not require multiple connections to be made.
> 
> > (XXX) This uses a naive round-robin approach which could be improved.
> > For example we could look at how many requests are in flight and
> > assign operations to the connections with fewest.  Or we could try to
> > estimate (based on size of requests outstanding) the load on each
> > connection.  But this implementation doesn't do any of that.
> 
> Plus there was a third rather more fundamental problem that apparently
> I didn't write about.  That is that connections were serialised on a
> single thread (called from many coroutines).  This bottleneck meant
> that there wasn't very much advantage to multi-conn, compared to what
> we get in libnbd / nbdcopy.
> 
> Are these fixed / planned to be fixed, especially the third?

That is indeed what I hope to address: follow-up patches on top of
this rebase that make it possible to specify a pool of more than one
thread, using the recent multiqueue work in the block layer.  It's
taking me longer than I'd like to get something I'm happy with
posting.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org

