On Tue, 16 Nov 2021 12:11:29 +0100 David Hildenbrand <da...@redhat.com> wrote:
> >>
> >> Examples include exposing HBM or PMEM to the VM. Just like on real HW,
> >> this memory is exposed via cpu-less, special nodes. In contrast to real
> >> HW, the memory is hotplugged later (I don't think HW supports hotplug
> >> like that yet, but it might just be a matter of time).
> >
> > I suppose some of that may be covered by GENERIC_AFFINITY entries in
> > SRAT, some by MEMORY entries. Or nodes created dynamically like with
> > normal hotplug memory.
>

The naming of the define is unhelpful.  GENERIC_AFFINITY here corresponds
to Generic Initiator Affinity, so it is no good for memory.  It is meant
for representing accelerators / network cards etc., so that you can get
the NUMA characteristics for them accessing memory in other nodes.

My understanding of 'traditional' memory hotplug is that typically the PA
into which memory is hotplugged is known at boot time, whether or not the
memory is physically present.  As such, you present that in SRAT and rely
on the EFI memory map / other information sources to know the memory isn't
there.  When it is hotplugged later, the address is looked up in SRAT to
identify the NUMA node.

That model is less useful for more flexible entities like virtio-mem, or
indeed physical hardware such as CXL type 3 memory devices, which
typically need their own nodes.

For the CXL type 3 option, the current proposal is to use the CXL table
entries representing Physical Address space regions to work out how many
NUMA nodes are needed, and just create extra ones at boot:
https://lore.kernel.org/linux-cxl/163553711933.2509508.2203471175679990.st...@dwillia2-desk3.amr.corp.intel.com

It's a heuristic, as we might need more nodes to represent things well on
the kernel side, but it's better than nothing and less effort than true
dynamic node creation.  If you chase through the earlier versions of
Alison's patch you will find some discussion of that.

I wonder if virtio-mem should just grow a CDAT instance via a DOE?

That would make all this stuff discoverable via PCI config space rather
than ACPI.  CDAT is at:
https://uefi.org/sites/default/files/resources/Coherent%20Device%20Attribute%20Table_1.01.pdf
but the table access protocol over PCI DOE is currently in the CXL 2.0
spec (nothing stops others using it though, AFAIK).

However, then we'd actually need either dynamic node creation in the OS,
or some sort of reserved pool of extra nodes.  Long term it may be the
most flexible option.

Jonathan

>
> I'm certainly no SRAT expert, but it seems like under VMWare something
> similar can happen:
>
> https://lkml.kernel.org/r/bae95f0c-faa7-40c6-a0d6-5049b1207...@vmware.com
>
> "VM was powered on with 4 vCPUs (4 NUMA nodes) and 4GB memory.
> ACPI SRAT reports 128 possible CPUs and 128 possible NUMA nodes."
>
> Note that that discussion is about hotplugging CPUs to memory-less,
> hotplugged nodes.
>
> But there seems to be some way to expose possible NUMA nodes. Maybe
> that's via GENERIC_AFFINITY.
>
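For reference, the SRAT structure that GENERIC_AFFINITY (Generic
Initiator Affinity, SRAT type 5) refers to looks like this in ACPICA's
headers; note it carries a device handle, not a memory range, which is
why it is no good for memory:

struct acpi_srat_generic_affinity {
	struct acpi_subtable_header header;
	u8 reserved;
	u8 device_handle_type;	/* 0: ACPI (_HID/_UID), 1: PCI (seg/BDF) */
	u32 proximity_domain;
	u8 device_handle[16];	/* identifies the initiator device */
	u32 flags;
	u32 reserved1;
};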
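To make the 'traditional' hotplug model above concrete, here is a minimal
sketch (the wrapper is hypothetical; memory_add_physaddr_to_nid() and
add_memory() are the real kernel interfaces it leans on):

#include <linux/memory_hotplug.h>

/* Hypothetical driver path for memory whose PA was described at boot. */
static int sketch_hotplug_range(u64 start, u64 size)
{
	/* Look the address up in the boot-time (SRAT-derived) info. */
	int nid = memory_add_physaddr_to_nid(start);

	/* Hot-add the range into the node SRAT already assigned to it. */
	return add_memory(nid, start, size, MHP_NONE);
}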
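And a simplified sketch of the CFMWS heuristic from Alison's patch linked
above (error reporting and the fake-PXM bookkeeping follow the real patch
only loosely): walk the CEDT CFMWS entries at boot and, for any host
physical address window SRAT did not already describe, map a fake
proximity domain to a fresh node and add a memblk covering the window.

#include <linux/acpi.h>
#include <linux/numa.h>

static int __init sketch_parse_cfmws(union acpi_subtable_headers *header,
				     void *arg, const unsigned long table_end)
{
	struct acpi_cedt_cfmws *cfmws = (struct acpi_cedt_cfmws *)header;
	u64 start = cfmws->base_hpa;
	u64 end = start + cfmws->window_size;
	int *fake_pxm = arg;
	int node;

	/* Skip windows SRAT already assigned to a NUMA node. */
	if (phys_to_target_node(start) != NUMA_NO_NODE)
		return 0;

	/* Otherwise create one of the "extra nodes at boot". */
	node = acpi_map_pxm_to_node((*fake_pxm)++);
	if (node == NUMA_NO_NODE)
		return -EINVAL;

	return numa_add_memblk(node, start, end);
}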
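Finally, for the CDAT idea: the table header and the common per-entry
header, transcribed from the CDAT 1.01 spec linked above (the DOE exchange
used to fetch them is omitted, since that protocol currently lives in the
CXL 2.0 spec):

#include <linux/types.h>

/* CDAT table header, per CDAT spec 1.01. */
struct cdat_table_header {
	__le32 length;		/* total table length in bytes */
	u8     revision;
	u8     checksum;
	u8     reserved[6];
	__le32 sequence;	/* bumped when the table contents change */
} __packed;

/* Common header at the start of every CDAT structure entry. */
struct cdat_entry_header {
	u8     type;		/* 0 DSMAS, 1 DSLBIS, 2 DSMSCIS, 3 DSIS, ... */
	u8     reserved;
	__le16 length;		/* length of this entry in bytes */
} __packed;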