On Mon, Dec 18, 2017 at 07:35:48PM +0000, Felipe Franciosi wrote:
> >> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> >
> > SPDK vhost-user targets only expect max 128 segments. They also
> > pre-allocate I/O task structures when QEMU connects to the vhost-user
> > device.
> >
> > Supporting up to 1022 segments would result in significantly higher memory
> > usage, reduction in I/O queue depth processed by the vhost-user target, or
> > having to dynamically allocate I/O task structures - none of which are
> > ideal.
> >
> > What if this was just bumped from 126 to 128? I guess I’m trying to
> > understand the level of guest and host I/O performance that is gained with
> > this patch. One I/O per 512KB vs. one I/O per 4MB - we are still only
> > talking about a few hundred IO/s difference.
>
> SeaBIOS also makes the assumption that the queue size is not bigger than 128
> elements.
> https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
And what happens if it's bigger? Looks like a bug to me.

> Perhaps a better approach is to make the value configurable (i.e. add the
> "max_segments" property), but set the default to 128-2. In addition to what
> Jim pointed out, I think there may be other legacy front-end drivers which
> assume the ring will be at most 128 entries in size.
>
> With that, hypervisors can choose to bump the value higher if it's known to
> be safe for their host+guest configuration.
>
> Cheers,
> Felipe

For 1.0, guests can just downgrade to 128 if they want to save memory.
So it might make sense to gate this change on 1.0 being enabled by the
guest.
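For reference, Felipe's suggestion might look roughly like this in
virtio-blk (an untested sketch, not a real patch; "max_segments" and the
conf.max_segments field are assumed names):

    static Property virtio_blk_properties[] = {
        /* ... existing virtio-blk properties ... */
        /* Default to 128 - 2: two descriptors are reserved for the
         * request header and the status byte. */
        DEFINE_PROP_UINT32("max_segments", VirtIOBlock,
                           conf.max_segments, 126),
        DEFINE_PROP_END_OF_LIST(),
    };

and virtio_blk_update_config() would then advertise it to the guest in
place of the current hardcoded 128 - 2:

        virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);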
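Gating the larger value on 1.0 could be done where seg_max is filled in
(again only a sketch, reusing the hypothetical max_segments from above):

    /* Legacy guests (and legacy BIOSes like SeaBIOS) may assume a ring
     * of at most 128 entries, so only hand out the configured, possibly
     * larger, limit once the guest has acked VIRTIO_F_VERSION_1. */
    uint32_t seg_max = 128 - 2;

    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
        seg_max = s->conf.max_segments;
    }
    virtio_stl_p(vdev, &blkcfg.seg_max, seg_max);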