On Wed, Feb 26, 2025 at 10:53:01AM +0100, David Hildenbrand wrote:
> > > As commented offline, maybe one would want the option to enable the
> > > alternative mode, where such updates (in the SHM region) are not sent to
> > > vhost-user devices. In such a configuration, the MEM_READ / MEM_WRITE
> > > messages would be unavoidable.
> > 
> > At first, I remember we discussed two options: having update messages
> > sent to all devices (which was deemed potentially racy), or using
> > MEM_READ / MEM_WRITE messages. With this version of the patch there
> > is no option to avoid the mem_table update messages, which brings me
> > back to my point in the previous message: it may make sense to continue
> > with this patch without MEM_READ / MEM_WRITE support, and leave that,
> > along with the option to make mem_table updates optional, for a
> > follow-up patch?
> 
> IMHO that would work for me.

I'm happy with dropping MEM_READ/MEM_WRITE. If the memslot limit becomes
a problem, we will need to think about handling things differently, but
there are many possible uses of VIRTIO Shared Memory Regions that will
not hit the limit, and I don't see a need to hold them back.
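
For what it's worth, the limit is at least visible up front: when
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS has been negotiated, the
back-end reports its limit via VHOST_USER_GET_MAX_MEM_SLOTS, so the
front-end could check the budget before exposing another Shared Memory
Region as an extra mem-table entry. A rough sketch only, not existing
QEMU code (the function name and the nr_shm_mapped bookkeeping are made
up for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include "hw/virtio/vhost.h"    /* struct vhost_dev */

    /*
     * Sketch: only map a VIRTIO Shared Memory Region as an additional
     * memslot if the back-end's advertised limit leaves room for it.
     * nr_shm_mapped is however many SHM regions we already turned into
     * mem-table entries; max_mem_slots is what the back-end returned
     * for VHOST_USER_GET_MAX_MEM_SLOTS (509 in libvhost-user and
     * rust-vmm today).
     */
    static bool shm_region_fits_memslot_budget(const struct vhost_dev *dev,
                                               uint64_t nr_shm_mapped,
                                               uint64_t max_mem_slots)
    {
        uint64_t used = dev->mem->nregions + nr_shm_mapped;

        return used + 1 <= max_mem_slots;
    }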

Stefan

> 
> > 
> > > 
> > > What comes to mind are vhost-user devices with a limited number of
> > > supported memslots.
> > > 
> > > No idea how relevant that really is, and how many SHM regions we will
> > > see in practice.
> > 
> > In general, from what I have seen, they usually require 1 or 2 regions,
> > except for virtio-scmi, which requires >256.
> 
> 1 or 2 regions are not a problem. Once we're in the hundreds for a
> single device, it will likely start being a problem, especially when
> you have multiple such devices.
> 
> BUT, it would likely be a problem even with the alternative approach
> where we don't communicate these regions to vhost-user: IIRC, vhost-net
> in the kernel is likewise limited to a maximum of 509 memslots by
> default. Similarly, older KVM only supports a total of 509 memslots.
> 
> See https://virtio-mem.gitlab.io/user-guide/user-guide-qemu.html
> "Compatibility with vhost-net and vhost-user".
> 
> In libvhost-user and rust-vmm, we have a similar limit of ~509.
> 
> 
> Note that for memory devices (DIMMs, virtio-mem), we'll use up to 256
> memslots if all devices support at least 509 memslots.
> See MEMORY_DEVICES_SOFT_MEMSLOT_LIMIT:
> 
> /*
>  * Traditionally, KVM/vhost in many setups supported 509 memslots, whereby
>  * 253 memslots were "reserved" for boot memory and other devices (such
>  * as PCI BARs, which can get mapped dynamically) and 256 memslots were
>  * dedicated for DIMMs. These magic numbers worked reliably in the past.
>  *
>  * Further, using many memslots can negatively affect performance, so setting
>  * the soft-limit of memslots used by memory devices to the traditional
>  * DIMM limit of 256 sounds reasonable.
>  *
>  * If we have less than 509 memslots, we will instruct memory devices that
>  * support automatically deciding how many memslots to use to only use a
>  * single one.
>  *
>  * Hotplugging vhost devices with at least 509 memslots is not expected to
>  * cause problems, not even when devices automatically decided how many
>  * memslots to use.
>  */
> #define MEMORY_DEVICES_SOFT_MEMSLOT_LIMIT 256
> #define MEMORY_DEVICES_SAFE_MAX_MEMSLOTS 509
> 
> 
> That changes once you have some vhost-user devices that, combined with
> boot memory, consume more than 253 memslots.
> 
> -- 
> Cheers,
> 
> David / dhildenb
> 
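
Putting the numbers above together (rough arithmetic, assuming the
traditional 509-memslot limit and the 256-memslot soft limit quoted
above):

      509 memslots total
    - 256 reserved for memory devices (DIMMs, virtio-mem)
    -----
      253 left for boot memory, PCI BARs, and any SHM regions that end
          up as extra mem-table entries

So the typical 1 or 2 regions per device fit comfortably, but a single
device like virtio-scmi with >256 regions would exhaust that share on
its own.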
