On Fri, Aug 30, 2024 at 12:20 PM Sahil <icegambi...@gmail.com> wrote:
>
> Hi,
>
> On Tuesday, August 27, 2024 9:00:36 PM GMT+5:30 Eugenio Perez Martin wrote:
> > On Wed, Aug 21, 2024 at 2:20 PM Sahil <icegambi...@gmail.com> wrote:
> > > [...]
> > > I have been trying to test my changes so far as well. I am not very clear
> > > on a few things.
> > >
> > > Q1.
> > > I built QEMU from source with my changes and followed the vdpa_sim +
> > > vhost_vdpa tutorial [1]. The VM seems to be running fine. How do I check
> > > if the packed format is being used instead of the split vq format for
> > > shadow virtqueues? I know the packed format is used when virtio_vdev has
> > > got the VIRTIO_F_RING_PACKED bit enabled. Is there a way of checking that
> > > this is the case?
> >
> > You can see the features that the driver acked from the guest by
> > checking sysfs. Once you know the PCI BDF from lspci:
> > # lspci -nn|grep '\[1af4:1041\]'
> > 01:00.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network
> > device [1af4:1041] (rev 01)
> > # cut -c 35 /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio0/features
> > 0
> >
> > Also, you can check from QEMU by simply tracing if your functions are
> > being called.
> >
> > > Q2.
> > > What's the recommended way to see what's going on under the hood? I tried
> > > using the -D option so QEMU's logs are written to a file but the file was
> > > empty. Would using qemu with -monitor stdio or attaching gdb to the QEMU
> > > VM be worthwhile?
> >
> > You need to add --trace options with a pattern matching the events you
> > want in order to enable any output. For example, --trace 'vhost_vdpa_*'
> > prints all the trace_vhost_vdpa_* events.
> >
> > If you want to speed things up, you can just replace the interesting
> > trace_... functions with fprintf(stderr, ...). We can add the trace
> > ones afterwards.
>
> Understood. I am able to trace the functions that are being called with
> fprintf. I'll stick with fprintf for now.
>
> I realized that packed vqs are not being used in the test environment. I
> see that in "hw/virtio/vhost-shadow-virtqueue.c", svq->is_packed is set
> to 0, so vhost_svq_add_split() is called. I am not sure how one enables
> the packed feature bit. I don't know if this is an environment issue.
>
> I built qemu from the latest source with my changes on top of it. I followed
> this article [1] to set up the environment.
>
> On the host machine:
>
> $ uname -a
> Linux fedora 6.10.5-100.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 14 
> 15:49:25 UTC 2024 x86_64 GNU/Linux
>
> $ ./qemu/build/qemu-system-x86_64 --version
> QEMU emulator version 9.0.91
>
> $ vdpa -V
> vdpa utility, iproute2-6.4.0
>
> All the relevant vdpa modules have been loaded in accordance with [1].
>
> $ lsmod | grep -iE "(vdpa|virtio)"
> vdpa_sim_net    12288  0
> vdpa_sim                24576  1 vdpa_sim_net
> vringh          32768  2 vdpa_sim,vdpa_sim_net
> vhost_vdpa              32768  2
> vhost           65536  1 vhost_vdpa
> vhost_iotlb             16384  4 vdpa_sim,vringh,vhost_vdpa,vhost
> vdpa            36864  3 vdpa_sim,vhost_vdpa,vdpa_sim_net
>
> $ ls -l /sys/bus/vdpa/devices/vdpa0/driver
> lrwxrwxrwx. 1 root root 0 Aug 30 11:25 /sys/bus/vdpa/devices/vdpa0/driver -> 
> ../../bus/vdpa/drivers/vhost_vdpa
>
> In the output of the following command, I see ANY_LAYOUT is supported.
> According to virtio_config.h [2] in the linux kernel, this represents the
> layout of descriptors. This refers to split and packed vqs, right?
>
> $ vdpa mgmtdev show
> vdpasim_net:
>   supported_classes net
>   max_supported_vqs 3
>   dev_features MTU MAC STATUS CTRL_VQ CTRL_MAC_ADDR ANY_LAYOUT VERSION_1 
> ACCESS_PLATFORM
>
> $ vdpa dev show -jp
> {
>     "dev": {
>         "vdpa0": {
>             "type": "network",
>             "mgmtdev": "vdpasim_net",
>             "vendor_id": 0,
>             "max_vqs": 3,
>             "max_vq_size": 256
>         }
>     }
> }
>
> I started the VM by running:
>
> $ sudo ./qemu/build/qemu-system-x86_64 \
> -enable-kvm \
> -drive file=//home/ig91/fedora_qemu_test_vm/L1.qcow2,media=disk,if=virtio \
> -net nic,model=virtio \
> -net user,hostfwd=tcp::2226-:22 \
> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
> -device virtio-net-pci,netdev=vhost-vdpa0,bus=pci.0,addr=0x7,disable-legacy=on,disable-modern=off,page-per-vq=on,event_idx=off,packed=on \
> -nographic \
> -m 2G \
> -smp 2 \
> -cpu host \
> 2>&1 | tee vm.log
>
> I added the packed=on option to -device virtio-net-pci.
>
> In the VM:
>
> # uname -a
> Linux fedora 6.8.5-201.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 11 18:25:26 
> UTC 2024 x86_64 GNU/Linux
>
> # lspci -nn | grep -i -A15 "\[1af4:1041\]"
> 00:07.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network device 
> [1af4:1041] (rev 01)
>
> # cut -c 35 /sys/devices/pci0000:00/0000:00:07.0/virtio1/features
> 0
>
> The packed vq feature bit hasn't been set. Am I missing something here?
>

vdpa_sim does not support packed vq at the moment. You need to build
use case #3 from the second part of that blog [1]. It's good that
you built the vdpa_sim environment earlier, as it is a simpler setup.
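
As an aside, the `cut -c 35` check works because that sysfs `features`
file is a string of '0'/'1' characters, one per feature bit with bit 0
first, so character 35 (1-indexed) is bit 34, VIRTIO_F_RING_PACKED. A
quick sketch to decode it (the helper name is mine, not a real API):

```python
# Decode a virtio sysfs "features" bit-string: the file holds one
# '0'/'1' character per feature bit, bit 0 first, so `cut -c 35`
# reads bit 34 (VIRTIO_F_RING_PACKED).

VIRTIO_F_RING_PACKED = 34  # from include/uapi/linux/virtio_config.h

def feature_acked(bit_string: str, bit: int) -> bool:
    """True if `bit` is set in a sysfs per-bit feature string."""
    s = bit_string.strip()
    return bit < len(s) and s[bit] == "1"

# Usage sketch (path taken from the thread):
#   bits = open("/sys/devices/pci0000:00/0000:00:07.0/virtio1/features").read()
#   print(feature_acked(bits, VIRTIO_F_RING_PACKED))
```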

If you have problems with the vp_vdpa environment please let me know
so we can find alternative setups.
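
On the feature-bit side, when a features value shows up as a 64-bit hex
mask instead (for example in a trace_ or fprintf debug line), the packed
bit can be tested directly; a minimal sketch, with the constant taken
from virtio_config.h:

```python
# Check VIRTIO_F_RING_PACKED (bit 34) in a 64-bit virtio feature mask,
# e.g. one printed in hex by a debug fprintf.

VIRTIO_F_RING_PACKED = 34

def has_packed(features: int) -> bool:
    """True if the packed virtqueue feature bit is set in `features`."""
    return bool((features >> VIRTIO_F_RING_PACKED) & 1)
```

For example, has_packed(0x400000000) is True, since 0x400000000 is 1 << 34.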

Thanks!

[1] 
https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2

> Thanks,
> Sahil
>
> [1] 
> https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-1
> [2] 
> https://github.com/torvalds/linux/blob/master/include/uapi/linux/virtio_config.h#L63
>
>
>

