On Sun, Sep 8, 2024 at 9:47 PM Sahil <icegambi...@gmail.com> wrote:
>
> Hi,
>
> On Friday, August 30, 2024 4:18:31 PM GMT+5:30 Eugenio Perez Martin wrote:
> > On Fri, Aug 30, 2024 at 12:20 PM Sahil <icegambi...@gmail.com> wrote:
> > > Hi,
> > >
> > > On Tuesday, August 27, 2024 9:00:36 PM GMT+5:30 Eugenio Perez Martin 
> > > wrote:
> > > > On Wed, Aug 21, 2024 at 2:20 PM Sahil <icegambi...@gmail.com> wrote:
> > > > > [...]
> > > > > I have been trying to test my changes so far as well. I am not
> > > > > very clear on a few things.
> > > > >
> > > > > Q1.
> > > > > I built QEMU from source with my changes and followed the vdpa_sim +
> > > > > vhost_vdpa tutorial [1]. The VM seems to be running fine. How do I
> > > > > check if the packed format is being used instead of the split vq
> > > > > format for shadow virtqueues? I know the packed format is used when
> > > > > virtio_vdev has got the VIRTIO_F_RING_PACKED bit enabled. Is there a
> > > > > way of checking that this is the case?
> > > >
> > > > You can see the features that the driver acked from the guest by
> > > > checking sysfs. Once you know the PCI BFN from lspci:
> > > > # lspci -nn|grep '\[1af4:1041\]'
> > > > 01:00.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network device [1af4:1041] (rev 01)
> > > > # cut -c 35 /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio0/features
> > > > 0
> > > >
> > > > Also, you can check from QEMU by simply tracing if your functions are
> > > > being called.
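To expand on the check quoted above: the sysfs "features" file is printed as a flat string of '0'/'1' characters with bit 0 leftmost, so VIRTIO_F_RING_PACKED (bit 34) lands on character 35; that is where the "cut -c 35" comes from. A quick sketch, using a made-up bitmap that has only bit 34 set:

```shell
# The sysfs features file is one '0'/'1' character per feature bit,
# leftmost character = bit 0, so bit 34 is character 35.
# This bitmap is a made-up sample with only VIRTIO_F_RING_PACKED set.
features="0000000000000000000000000000000000100000"
bit34=$(printf '%s\n' "$features" | cut -c 35)
if [ "$bit34" = "1" ]; then
    echo "VIRTIO_F_RING_PACKED negotiated"
else
    echo "packed ring not negotiated"
fi
```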
> > > >
> > > > > Q2.
> > > > > What's the recommended way to see what's going on under the hood?
> > > > > I tried using the -D option so QEMU's logs are written to a file
> > > > > but the file was empty. Would using qemu with -monitor stdio or
> > > > > attaching gdb to the QEMU VM be worthwhile?
> > > >
> > > > You need to add --trace options with the regex of the tracepoints
> > > > you want to enable. For example, --trace 'vhost_vdpa_*' prints all
> > > > the trace_vhost_vdpa_* tracepoints.
> > > >
> > > > If you want to speed things up, you can just replace the interesting
> > > > trace_... functions with fprintf(stderr, ...). We can add the trace
> > > > ones afterwards.
> > >
> > > Understood. I am able to trace the functions that are being called with
> > > fprintf. I'll stick with fprintf for now.
> > >
> > > I realized that packed vqs are not being used in the test environment. I
> > > see that in "hw/virtio/vhost-shadow-virtqueue.c", svq->is_packed is set
> > > to 0 and that calls vhost_svq_add_split(). I am not sure how one enables
> > > the packed feature bit. I don't know if this is an environment issue.
> > >
> > > I built qemu from the latest source with my changes on top of it. I
> > > followed this article [1] to set up the environment.
> > >
> > > On the host machine:
> > >
> > > $ uname -a
> > > Linux fedora 6.10.5-100.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 14
> > > 15:49:25 UTC 2024 x86_64 GNU/Linux
> > >
> > > $ ./qemu/build/qemu-system-x86_64 --version
> > > QEMU emulator version 9.0.91
> > >
> > > $ vdpa -V
> > > vdpa utility, iproute2-6.4.0
> > >
> > > All the relevant vdpa modules have been loaded in accordance with [1].
> > >
> > > $ lsmod | grep -iE "(vdpa|virtio)"
> > > vdpa_sim_net           12288  0
> > > vdpa_sim               24576  1 vdpa_sim_net
> > > vringh                 32768  2 vdpa_sim,vdpa_sim_net
> > > vhost_vdpa             32768  2
> > > vhost                  65536  1 vhost_vdpa
> > > vhost_iotlb            16384  4 vdpa_sim,vringh,vhost_vdpa,vhost
> > > vdpa                   36864  3 vdpa_sim,vhost_vdpa,vdpa_sim_net
> > >
> > > $ ls -l /sys/bus/vdpa/devices/vdpa0/driver
> > > lrwxrwxrwx. 1 root root 0 Aug 30 11:25 /sys/bus/vdpa/devices/vdpa0/driver -> ../../bus/vdpa/drivers/vhost_vdpa
> > >
> > > In the output of the following command, I see ANY_LAYOUT is supported.
> > > According to virtio_config.h [2] in the Linux kernel, this represents the
> > > layout of descriptors. This refers to split and packed vqs, right?
> > >
> > > $ vdpa mgmtdev show
> > >
> > > vdpasim_net:
> > >   supported_classes net
> > >   max_supported_vqs 3
> > >   dev_features MTU MAC STATUS CTRL_VQ CTRL_MAC_ADDR ANY_LAYOUT VERSION_1 ACCESS_PLATFORM
> > > $ vdpa dev show -jp
> > > {
> > >     "dev": {
> > >         "vdpa0": {
> > >             "type": "network",
> > >             "mgmtdev": "vdpasim_net",
> > >             "vendor_id": 0,
> > >             "max_vqs": 3,
> > >             "max_vq_size": 256
> > >         }
> > >     }
> > > }
> > >
> > > I started the VM by running:
> > >
> > > $ sudo ./qemu/build/qemu-system-x86_64 \
> > > -enable-kvm \
> > > -drive file=//home/ig91/fedora_qemu_test_vm/L1.qcow2,media=disk,if=virtio \
> > > -net nic,model=virtio \
> > > -net user,hostfwd=tcp::2226-:22 \
> > > -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
> > > -device virtio-net-pci,netdev=vhost-vdpa0,bus=pci.0,addr=0x7,disable-legacy=on,disable-modern=off,page-per-vq=on,event_idx=off,packed=on \
> > > -nographic \
> > > -m 2G \
> > > -smp 2 \
> > > -cpu host \
> > > 2>&1 | tee vm.log
> > >
> > > I added the packed=on option to -device virtio-net-pci.
> > >
> > > In the VM:
> > >
> > > # uname -a
> > > Linux fedora 6.8.5-201.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 11
> > > 18:25:26 UTC 2024 x86_64 GNU/Linux
> > >
> > > # lspci -nn | grep -i -A15 "\[1af4:1041\]"
> > > 00:07.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network device [1af4:1041] (rev 01)
> > >
> > > # cut -c 35 /sys/devices/pci0000:00/0000:00:07.0/virtio1/features
> > > 0
> > >
> > > The packed vq feature bit hasn't been set. Am I missing something here?
> >
> > vdpa_sim does not support packed vq at the moment. You need to build
> > use case #3 of the second part of that blog [1]. It's good that you
> > built the vdpa_sim environment earlier, as it is a simpler setup.
> >
> > If you have problems with the vp_vdpa environment please let me know
> > so we can find alternative setups.
>
> Thank you for the clarification. I tried setting up the vp_vdpa
> environment (scenario 3) but I ended up running into a problem
> in the L1 VM.
>
> I verified that nesting is enabled in KVM (L0):
>
> $ grep -oE "(vmx|svm)" /proc/cpuinfo | sort | uniq
> vmx
>
> $ cat /sys/module/kvm_intel/parameters/nested
> Y
>
> There are no issues when booting L1. I start the VM by running:
>
> $ sudo ./qemu/build/qemu-system-x86_64 \
> -enable-kvm \
> -drive file=//home/ig91/fedora_qemu_test_vm/L1.qcow2,media=disk,if=virtio \
> -net nic,model=virtio \
> -net user,hostfwd=tcp::2222-:22 \
> -device intel-iommu,snoop-control=on \
> -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,event_idx=off,packed=on,bus=pcie.0,addr=0x4 \
> -netdev tap,id=net0,script=no,downscript=no \
> -nographic \
> -m 2G \
> -smp 2 \
> -M q35 \
> -cpu host \
> 2>&1 | tee vm.log
>
> Kernel version in L1:
>
> # uname -a
> Linux fedora 6.8.5-201.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 11 18:25:26 
> UTC 2024 x86_64 GNU/Linux
>

Did you run the kernels with the arguments "iommu=pt intel_iommu=on"?
You can print them with cat /proc/cmdline.
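In case it helps, this is the kind of check I mean (the sample cmdline string below is made up; on a real system you would substitute $(cat /proc/cmdline)):

```shell
# Sketch: verify both IOMMU arguments are on the kernel command line
# before binding vp_vdpa. The sample string is hypothetical.
cmdline="BOOT_IMAGE=/vmlinuz-6.8.5 root=/dev/vda2 ro iommu=pt intel_iommu=on"
ok=1
for arg in iommu=pt intel_iommu=on; do
    # Pad with spaces so we only match whole arguments.
    case " $cmdline " in
        *" $arg "*) ;;
        *) echo "missing kernel argument: $arg"; ok=0 ;;
    esac
done
[ "$ok" = "1" ] && echo "IOMMU arguments present"
```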

> The following variables are set in the kernel's config as
> described in the blog [1]:
>
> CONFIG_VIRTIO_VDPA=m
> CONFIG_VDPA=m
> CONFIG_VP_VDPA=m
> CONFIG_VHOST_VDPA=m
>
> The vDPA tool also satisfies the version criterion.
>
> # vdpa -V
> vdpa utility, iproute2-6.10.0
>
> I built QEMU from source with my changes on top of it.
>
> # ./qemu/build/qemu-system-x86_64 --version
> QEMU emulator version 9.0.91
>
> The relevant vdpa modules are loaded successfully as
> explained in the blog.
>
> # lsmod | grep -i vdpa
> vp_vdpa                20480  0
> vhost_vdpa             32768  0
> vhost                  65536  1 vhost_vdpa
> vhost_iotlb            16384  2 vhost_vdpa,vhost
> vdpa                   36864  2 vp_vdpa,vhost_vdpa
> irqbypass              12288  2 vhost_vdpa,kvm
>
> # lspci | grep -i virtio
> 00:03.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:04.0 Ethernet controller: Red Hat, Inc. Virtio 1.0 network device (rev 01)
>
> # lspci -n |grep 00:04.0
> 00:04.0 0200: 1af4:1041 (rev 01)
>
> I then unbind the virtio-pci device from the virtio-pci
> driver and bind it to the vp_vdpa driver.
>
> # echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
> # echo 1af4 1041 > /sys/bus/pci/drivers/vp-vdpa/new_id
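Those two sysfs writes look right to me. For reference, here they are wrapped in a small function so the BDF and vendor/device IDs (0000:00:04.0 / 1af4:1041, from your lspci -n output) live in one place; this is just a sketch, and DRYRUN=1 only prints the writes instead of performing them:

```shell
# Unbind a virtio-pci device and hand it to vp_vdpa.
# DRYRUN=1 prints the intended sysfs writes instead of doing them.
rebind_to_vp_vdpa() {
    bdf=$1; vendor=$2; device=$3
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "would write: $bdf -> /sys/bus/pci/drivers/virtio-pci/unbind"
        echo "would write: $vendor $device -> /sys/bus/pci/drivers/vp-vdpa/new_id"
    else
        echo "$bdf" > /sys/bus/pci/drivers/virtio-pci/unbind
        echo "$vendor $device" > /sys/bus/pci/drivers/vp-vdpa/new_id
    fi
}

DRYRUN=1
rebind_to_vp_vdpa 0000:00:04.0 1af4 1041
```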
>
> I then create the vDPA device without any issues.
>
> # vdpa mgmtdev show
> pci/0000:00:04.0:
>   supported_classes net
>   max_supported_vqs 3
>   dev_features CSUM GUEST_CSUM CTRL_GUEST_OFFLOADS MAC GUEST_TSO4 GUEST_TSO6 GUEST_ECN GUEST_UFO HOST_TSO4 HOST_TSO6 HOST_ECN HOST_UFO MRG_RXBUF STATUS CTRL_VQ CTRL_RX CTRL_VLAN CTRL_RX_EXTRA GUEST_ANNOUNCE CTRL_MAC_ADDR RING_INDIRECT_DE6
>
> # vdpa dev add name vdpa0 mgmtdev pci/0000:00:04.0
> # vdpa dev show -jp
> {
>     "dev": {
>         "vdpa0": {
>             "type": "network",
>             "mgmtdev": "pci/0000:00:04.0",
>             "vendor_id": 6900,
>             "max_vqs": 3,
>             "max_vq_size": 256
>         }
>     }
> }
>
> # ls -l /sys/bus/vdpa/devices/vdpa0/driver
> lrwxrwxrwx. 1 root root 0 Sep  8 18:58 /sys/bus/vdpa/devices/vdpa0/driver -> ../../../../bus/vdpa/drivers/vhost_vdpa
>
> # ls -l /dev/ |grep vdpa
> crw-------. 1 root root    239,   0 Sep  8 18:58 vhost-vdpa-0
>
> # driverctl -b vdpa list-devices
> vdpa0 vhost_vdpa
>
> I have a qcow2 image L2.qcow2 in L1. QEMU throws an error
> when trying to launch L2.
>
> # sudo ./qemu/build/qemu-system-x86_64 \
> -enable-kvm \
> -drive file=//root/L2.qcow2,media=disk,if=virtio \
> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
> -device virtio-net-pci,netdev=vhost-vdpa0,bus=pcie.0,addr=0x7,disable-legacy=on,disable-modern=off,event_idx=off,packed=on \
> -nographic \
> -m 2G \
> -smp 2 \
> -M q35 \
> -cpu host \
> 2>&1 | tee vm.log
>
> qemu-system-x86_64: -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0: Could not open '/dev/vhost-vdpa-0': Unknown error 524
>
> I get the same error when trying to launch L2 with qemu-kvm
> which I installed using "dnf install".
>
> # qemu-kvm --version
> QEMU emulator version 8.1.3 (qemu-8.1.3-5.fc39)
>
> The minimum version of QEMU required is v7.0.0-rc4.
>
> According to "include/linux/errno.h" [2], errno 524 is
> ENOTSUPP (operation is not supported). I am not sure
> where I am going wrong.
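On the "Unknown error 524" part: ENOTSUPP is defined in the kernel-internal include/linux/errno.h [2], in the >= 512 range that the header says should never be seen by user programs, so glibc's strerror() has no name for it and QEMU can only print the raw number. A small sketch naming a few of those in-kernel codes (the table is hand-copied from that header):

```shell
# Map a few kernel-internal errno values (include/linux/errno.h) to names.
# Codes >= 512 are not exported to userspace, hence "Unknown error 524".
kerrno_name() {
    case "$1" in
        512) echo ERESTARTSYS ;;
        516) echo ERESTART_RESTARTBLOCK ;;
        524) echo ENOTSUPP ;;
        *)   echo UNKNOWN ;;
    esac
}

kerrno_name 524
```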
>
> However, I managed to set up scenario 4 successfully
> and I see that packed vq is enabled in this case.
>
> # cut -c 35 /sys/devices/pci0000:00/0000:00:04.0/virtio1/features
> 1
>
> For the time being, shall I simply continue testing with
> scenario 4?
>
> > Thanks!
> >
> > [1] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2
> > > Thanks,
> > > Sahil
> > >
> > > [1] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-1
> > > [2] https://github.com/torvalds/linux/blob/master/include/uapi/linux/virtio_config.h#L63
>
> Thanks,
> Sahil
>
> [1] 
> https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2
> [2] https://github.com/torvalds/linux/blob/master/include/linux/errno.h#L27
>
>

