On Tue, 10 Nov 2020 14:18:39 +0100 Christian Borntraeger <borntrae...@de.ibm.com> wrote:
> On 10.11.20 11:40, Halil Pasic wrote:
> > On Tue, 10 Nov 2020 09:47:51 +0100
> > Christian Borntraeger <borntrae...@de.ibm.com> wrote:
> > 
> >>
> >>
> >> On 09.11.20 19:53, Halil Pasic wrote:
> >>> On Mon, 9 Nov 2020 17:06:16 +0100
> >>> Cornelia Huck <coh...@redhat.com> wrote:
> >>>
> >>>>> @@ -20,6 +21,11 @@ static void virtio_ccw_blk_realize(VirtioCcwDevice *ccw_dev, Error **errp)
> >>>>>  {
> >>>>>      VirtIOBlkCcw *dev = VIRTIO_BLK_CCW(ccw_dev);
> >>>>>      DeviceState *vdev = DEVICE(&dev->vdev);
> >>>>> +    VirtIOBlkConf *conf = &dev->vdev.conf;
> >>>>> +
> >>>>> +    if (conf->num_queues == VIRTIO_BLK_AUTO_NUM_QUEUES) {
> >>>>> +        conf->num_queues = MIN(4, current_machine->smp.cpus);
> >>>>> +    }
> >>>>
> >>>> I would like to have a comment explaining the numbers here, however.
> >>>>
> >>>> virtio-pci has a pretty good explanation (use 1:1 for vqs:vcpus if
> >>>> possible, apply some other capping). 4 seems to be a bit arbitrary
> >>>> without explanation, although I'm sure you did some measurements :)
> >>>
> >>> Frankly, I don't have any measurements yet. For the secure case,
> >>> I think Mimu has assessed the impact of multiqueue, hence adding Mimu
> >>> to the cc list. @Mimu can you help us out?
> >>>
> >>> Regarding the normal non-protected VMs, I'm in the middle of producing
> >>> some measurement data. This was admittedly a bit rushed because of
> >>> where we are in the cycle. Sorry to disappoint you.
> >>>
> >>> The number 4 was suggested by Christian; maybe Christian does have some
> >>> readily available measurement data for the normal VM case. @Christian:
> >>> can you help me out?
> >>
> >> My point was to find a balance between performance gain and memory
> >> usage. As a matter of fact, virtqueues do consume memory. So 4 looked
> >> like a reasonable default to me for large guests, as long as we do not
> >> have directed interrupts.
> >>
> >> Now, thinking about this again: if we want to change the default to
> >> something else in the future (e.g. to num vcpus), then the compat
> >> handling will get really complicated.
> > 
> > Regarding compat handling, I believe we would need a new property for
> > virtio-blk-ccw: something like def_num_queues_max. The logic would then
> > morph to MIN(def_num_queues_max, current_machine->smp.cpus), and we
> > could relatively freely do compat stuff on def_num_queues_max.
> > 
> > IMHO not pretty, but certainly doable.
> > 
> >>
> >> So we can
> >> - go with num queues = num cpus. But this will consume memory
> >>   for guests with lots of CPUs.
> > 
> > In the absence of data showing the benefit outweighing the obvious
> > detriment, I lean towards finding this option the least favorable.
> > 
> >> - go with the proposed logic of min(4, vcpus) and accept that future
> >>   compat handling is harder
> > 
> > IMHO not a bad option, but I think I would still feel better about a
> > more informed decision. In the end, the end user can already specify
> > num_queues explicitly, so I don't think this is urgent.
> > 
> >> - defer this change
> > 
> > So I tend to lean towards deferring.
> 
> Yes, I was pushing this for 5.2 to avoid compat handling. But maybe it is
> better to wait and do it later. We should certainly continue the
> discussion, though, to have something for the next release.

<going through older mails>

Do we have a better idea now about which values would make sense here?
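
For illustration only, here is a minimal sketch of how the def_num_queues_max
idea mentioned above could be wired up, assuming a QEMU-style uint32 device
property plus a machine-type compat entry. The property name
"def-num-queues-max", the struct field, the default of 4 and the choice of
hw_compat_5_1 are all hypothetical, not taken from the posted patch:

/* virtio-ccw-blk (sketch): expose the cap as a device property */
static Property virtio_ccw_blk_properties[] = {
    /* hypothetical knob; 4 matches the default discussed in this thread */
    DEFINE_PROP_UINT32("def-num-queues-max", VirtIOBlkCcw,
                       def_num_queues_max, 4),
    DEFINE_PROP_END_OF_LIST(),
};

static void virtio_ccw_blk_realize(VirtioCcwDevice *ccw_dev, Error **errp)
{
    VirtIOBlkCcw *dev = VIRTIO_BLK_CCW(ccw_dev);
    VirtIOBlkConf *conf = &dev->vdev.conf;

    if (conf->num_queues == VIRTIO_BLK_AUTO_NUM_QUEUES) {
        /* cap at the compat-controllable maximum instead of a literal 4 */
        conf->num_queues = MIN(dev->def_num_queues_max,
                               current_machine->smp.cpus);
    }
    /* ... rest of realize unchanged ... */
}

/* machine compat (sketch): older machine types keep the old default of 1,
 * since MIN(1, smp.cpus) == 1 */
GlobalProperty hw_compat_5_1[] = {
    /* ... existing entries ... */
    { "virtio-blk-ccw", "def-num-queues-max", "1" },
};

With something like this, changing the default later would only mean bumping
the property's default value and adding one more compat entry, rather than
special-casing the num_queues logic per machine type.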
> > 
> > Another thought is, provided the load is about evenly spread on the
> > different virtqueues: if the game is about vCPU locality, one could
> > think of decreasing the size of each individual virtqueue while
> > increasing their number, with the idea of not paying much more in
> > terms of memory. The queue size, however, needs to be a power of 2,
> > so there is a limit on the granularity.
> > 
> > Regarding secure VMs, currently we have to cram at least the swiotlb
> > and the virtqueues into ZONE_DMA. So increasing the number of
> > virtqueues heavily may get us into new trouble with exotic configs.
> > 
> > Regards,
> > Halil
> 
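
To put rough numbers on the "more but smaller queues" idea: for a split
virtqueue of size N, the rings take about 16*N bytes (descriptor table)
plus 6 + 2*N bytes (available ring) plus 6 + 8*N bytes (used ring), per the
virtio 1.1 layout. So halving N while doubling the queue count keeps the
total nearly constant, apart from the small per-queue constants. A toy
sketch (the 256 vs. 4x64 split is just an example; alignment padding and
per-queue notifier overhead are ignored):

#include <stdio.h>

/* Approximate split-virtqueue footprint for queue size n
 * (virtio 1.1 ring layout, ignoring alignment padding). */
static unsigned long vring_bytes(unsigned n)
{
    return 16UL * n        /* descriptor table: 16 bytes per descriptor */
         + (6 + 2UL * n)   /* available ring */
         + (6 + 8UL * n);  /* used ring */
}

int main(void)
{
    /* e.g. one queue of 256 entries vs. four queues of 64 entries */
    printf("1 x 256: %lu bytes\n", 1 * vring_bytes(256));
    printf("4 x  64: %lu bytes\n", 4 * vring_bytes(64));
    return 0;
}

The two totals come out within a few percent of each other, which is the
point being made above; the power-of-2 constraint on N is what limits how
finely the trade-off can be tuned.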