On Tue, May 17, 2022 at 05:12:28PM +0200, Igor Mammedov wrote:
> On Tue, 17 May 2022 14:38:58 +0200 dzej...@gmail.com wrote:
>
> > From: Jaroslav Jindrak <dzej...@gmail.com>
> >
> > Prior to the introduction of the prealloc-threads property, the
> > number of threads used to preallocate memory was derived from the
> > value of smp-cpus passed to qemu, the number of physical cpus of
> > the host and a hardcoded maximum value. When the prealloc-threads
> > property was introduced, it included a default of 1 in
> > backends/hostmem.c and a default of smp-cpus using the sugar API
> > for the property itself. The latter default is not used when the
> > property is not specified on qemu's command line, so guests that
> > were not adjusted for this change suddenly started to use the
> > default of 1 thread to preallocate memory, which resulted in
> > observable slowdowns in guest boots for guests with large memory
> > (e.g. when using libvirt <8.2.0 or managing guests manually).
>
> The current behaviour in QEMU is intentionally conservative. The
> thread count is subject to the host configuration and to whatever
> limitations the management layer puts on it, and it's not QEMU's job
> to conjure magic numbers that are host/workload dependent.
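(For concreteness, and only as an illustration: on current QEMU the old
behaviour has to be requested explicitly per memory backend, with
something along the lines of

    # id/size here are placeholders picked for the example
    -object memory-backend-ram,id=mem0,size=64G,prealloc=on,prealloc-threads=16

Newer libvirt sets prealloc-threads itself, which is why the slowdown
described above only shows up with libvirt <8.2.0 or hand-written
command lines.)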
I think that's missing the point. QEMU *did* historically set the
prealloc threads equal to num CPUs, so we have precedent here. The
referenced commit lost that behaviour because it only wired up the
defaults in one particular CLI scenario. That's a clear regression on
QEMU's side.

With regards,
Daniel

-- 
|: https://berrange.com       -o-  https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org        -o-          https://fstop138.berrange.com :|
|: https://entangle-photo.org -o-  https://www.instagram.com/dberrange :|