On Wed, Jan 3, 2024 at 1:56 PM Gregory Price <gregory.pr...@memverge.com> wrote:
>
> On Sun, Dec 31, 2023 at 11:53:15PM -0800, Ho-Ren (Jack) Chuang wrote:
> > Introduce a new configuration option 'host-mem-type=' in the
> > '-object memory-backend-ram', allowing users to specify
> > from which type of memory to allocate.
> >
> > Users can specify 'cxlram' as an argument, and QEMU will then
> > automatically locate CXL RAM NUMA nodes and use them as the backend memory.
> > For example:
> >     -object memory-backend-ram,id=vmem0,size=19G,host-mem-type=cxlram \
>
> Stupid questions:
>
> Why not just use `host-nodes` and pass in the numa node you want to
> allocate from? Why should QEMU be made "CXL-aware" in the sense that
> QEMU is responsible for figuring out what host node has CXL memory?
>
> This feels like an "upper level software" operation (orchestration), rather
> than something qemu should internally understand.
>

I don't have the "big picture" yet and am still learning, so maybe what
we proposed is not useful :-) I replied to the same question on a fork
of this thread.
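If I understand your suggestion, on a host where the CXL DRAM shows up
as, say, node 2, the equivalent invocation would look roughly like the
sketch below (node 2 and the guest nodeid are just placeholders for
illustration):

        -object memory-backend-ram,id=vmem0,size=19G,host-nodes=2,policy=bind \  <-- example host node
        -numa node,nodeid=1,memdev=vmem0 \

Is that roughly what you have in mind?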
> > -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
> > -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
> > -device cxl-type3,bus=root_port13,volatile-memdev=vmem0,id=cxl-vmem0 \
>
> For a variety of performance reasons, this will not work the way you
> want it to. You are essentially telling QEMU to map the vmem0 into a
> virtual cxl device, and now any memory accesses to that memory region
> will end up going through the cxl-type3 device logic - which is an IO
> path from the perspective of QEMU.

I don't understand exactly how the virtual cxl-type3 device works. I
thought guest accesses would go through the usual "guest virtual
address -> guest physical address -> host physical address"
translation, done entirely by the CPU. But if accesses instead go
through an emulation path in the virtual cxl-type3 device, I agree the
performance would be bad. Do you know why memory on a virtual
cxl-type3 device can't be mapped through the nested page tables?

> You may want to double check that your tests aren't using swap space in
> the guest, because I found in my testing that linux would prefer to use
> swap rather than attempt to use virtual cxl-type3 device memory because
> of how god-awful slow it is (even if it is backed by DRAM).

We didn't enable swap in our current test setup. I think there was a
kernel change that makes the mm page reclamation path demote to CXL
memory instead of swapping when memory tiering is enabled. Did you try
that? Swap is backed by persistent storage, so I would be very
surprised if virtual CXL memory is actually slower than swap.

> Additionally, this configuration will not (should not) presently work
> with VMX enabled. Unless I missed some other update, QEMU w/ CXL memory
> presently crashes when VMX is enabled for not-yet-debugged reasons.

When we had a discussion with Intel, they told us not to use the KVM
option in QEMU together with a virtual cxl-type3 device. That's
probably related to the issue you describe here? We have KVM enabled,
though, and haven't seen the crash yet.

> Another possibility: You mapped this memory-backend into another numa
> node explicitly and never onlined the memory via cxl-cli. I've done
> this, and it works, but it's a "hidden feature" that probably should
> not exist / be supported.

I thought general-purpose memory nodes are onlined by default?

> If I understand the goal here, it's to pass CXL-hosted DRAM through to
> the guest in a way that the system can manage it according to its
> performance attributes.

Yes.

> You're better off just using the `host-nodes` field of host-memory
> and passing bandwidth/latency attributes through via `-numa hmat-lb`

We tried this, but it doesn't work end to end right now. I described
the issue in another fork of this thread.

> In that scenario, the guest software doesn't even need to know CXL
> exists at all, it can just read the attributes of the numa node
> that QEMU created for it.

We thought about this before, but the current kernel implementation
requires a devdax device to be probed and recognized as a slow tier (by
reading the memory attributes). I don't think that happens via the path
you described. Have you tried this before?
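For reference, a quick way to check in the guest whether a node was
actually classified as a slow tier (just a sketch; these sysfs paths
come from the kernel's memory tiering interface on recent kernels, and
the tier numbering will differ per setup):

    # allow reclaim to demote cold pages to a slower node instead of swapping
    echo true > /sys/kernel/mm/numa/demotion_enabled

    # see which NUMA nodes the kernel placed in which memory tier
    cat /sys/devices/virtual/memory_tiering/memory_tier*/nodelist

If the cpuless node only ever shows up in the top tier, the kernel
hasn't recognized it as slow memory.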
> In the future to deal with proper dynamic capacity, we may need to
> consider a different backend object altogether that allows sparse
> allocations, and a virtual cxl device which pre-allocates the CFMW
> can at least be registered to manage it. I'm not quite sure how that
> looks just yet.

Are we talking about CXL memory pooling?

> For example: 1-socket, 4 CPU QEMU instance w/ 4GB on a cpu-node and 4GB
> on a cpuless node.
>
> qemu-system-x86_64 \
> -nographic \
> -accel kvm \
> -machine type=q35,hmat=on \
> -drive file=./myvm.qcow2,format=qcow2,index=0,media=disk,id=hd \
> -m 8G,slots=4,maxmem=16G \
> -smp cpus=4 \
> -object memory-backend-ram,size=4G,id=ram-node0,numa=X \    <-- extend here
> -object memory-backend-ram,size=4G,id=ram-node1,numa=Y \    <-- extend here
> -numa node,nodeid=0,cpus=0-4,memdev=ram-node0 \             <-- cpu node
> -numa node,initiator=0,nodeid=1,memdev=ram-node1 \          <-- cpuless node
> -netdev bridge,id=hn0,br=virbr0 \
> -device virtio-net-pci,netdev=hn0,id=nic1,mac=52:54:00:12:34:77 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=20 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880
>
> [root@fedora ~]# numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3
> node 0 size: 3965 MB
> node 0 free: 3611 MB
> node 1 cpus:
> node 1 size: 3986 MB
> node 1 free: 3960 MB
> node distances:
> node   0   1
>   0:  10  20
>   1:  20  10
>
> [root@fedora initiators]# cd /sys/devices/system/node/node1/access0/initiators
> node0  read_bandwidth  read_latency  write_bandwidth  write_latency
> [root@fedora initiators]# cat read_bandwidth
> 5
> [root@fedora initiators]# cat read_latency
> 20
>
> ~Gregory