Hi,

On Sun, Dec 14, 2025 at 04:25:54PM +0100, Francesco Valla wrote:
> While stress testing this, I noticed that flooding the virtio-can
> interface with packets leads to a hang of the interface itself.
> I can reproduce this by issuing, on the host side:
> 
>       while true; do cansend can0 123#00; done
> 
> with:
> 
>  - QEMU: the tip of the master branch plus [2]
>  - vhost-device: the tip of the main branch
> 
> and the following QEMU invocation:
> 
> qemu-system-x86_64 -serial mon:stdio \
>     -m 2G -smp 2 \
>     -kernel $(pwd)/BUILD.bin/arch/x86/boot/bzImage \
>     -initrd /home/francesco/SRC/LINUX_KERNEL/initramfs.gz \
>     -append "loglevel=7 console=ttyS0" \
>     -machine memory-backend=pc.ram \
>     -object memory-backend-file,id=pc.ram,size=2G,mem-path=/tmp/pc.ram,share=on \
>     -chardev socket,id=can0,path=/tmp/sock-can0 \
>     -device vhost-user-can-pci,chardev=can0
> 
> 
> Restarting the interface (i.e. ip link set down and then up) does not
> fix the situation.
> 
> I'll try to do some more testing during the next days.

After a deep dive, I _think_ the problem actually lies in vhost-device,
since the hang is not there (or at least, it seems not to be) with an
alternative implementation that uses the QEMU socketcan support [0] (an
implementation which builds on top of the work done by Harald and
Mikhail):

qemu-system-x86_64 -serial mon:stdio \
    -m 2G -smp 2 -enable-kvm \
    -kernel $(pwd)/BUILD.bin/arch/x86/boot/bzImage \
    -initrd /home/francesco/SRC/LINUX_KERNEL/initramfs.gz \
    -append "loglevel=7 console=ttyS0" \
    -object can-bus,id=canbus0 \
    -object can-host-socketcan,id=canhost0,if=vcan0,canbus=canbus0 \
    -device virtio-can-pci,canbus=canbus0
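
For anyone reproducing this configuration: the if=vcan0 interface has to
exist on the host beforehand; it can be created in the usual way with

    modprobe vcan
    ip link add dev vcan0 type vcan
    ip link set up vcan0

and then flooded with the same cansend loop as above, pointed at vcan0.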

Unfortunately, my Rust knowledge is not sufficient to fully understand
the vhost-device implementation [1]; the issue seems to be related to
the host->guest vring becoming empty and never being refilled.
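
To illustrate what I mean (without claiming this is the actual bug): a
virtio backend draining its host->guest vring typically has to re-enable
guest notifications and then re-check for descriptors before going back
to sleep; otherwise it can lose a race with the guest and end up waiting
forever for a kick that never comes, which would match the symptom above.
Below is a minimal Rust sketch of that pattern; every name in it is a
hypothetical stand-in, NOT the actual vhost-device-can API:

// Sketch of the "re-enable and re-check" pattern a virtio backend needs
// on its host->guest vring.  All names here are hypothetical stand-ins.

struct Desc(u16); // stand-in for a descriptor chain index

struct RxVring {
    avail: Vec<Desc>, // descriptors offered by the guest
}

impl RxVring {
    fn pop_avail(&mut self) -> Option<Desc> {
        self.avail.pop()
    }
    fn has_avail(&self) -> bool {
        !self.avail.is_empty()
    }
    fn enable_notification(&mut self) { /* e.g. update used_event */ }
    fn disable_notification(&mut self) { /* suppress further kicks */ }
    fn push_used(&mut self, _d: Desc) { /* copy a frame, add to used ring */ }
    fn signal_guest(&self) { /* inject the interrupt via eventfd */ }
}

// Called when the guest kicks the queue.  The final enable + re-check is
// the important part: a descriptor the guest adds right after the last
// pop_avail() (while notifications are still suppressed) would otherwise
// never be seen, and the backend would sleep waiting for a kick that
// never arrives; i.e. the vring "becomes empty and never refills".
fn handle_rx_kick(vring: &mut RxVring) {
    loop {
        vring.disable_notification();
        while let Some(desc) = vring.pop_avail() {
            vring.push_used(desc);
        }
        vring.signal_guest();
        vring.enable_notification();
        if !vring.has_avail() {
            break; // safe to sleep now: nothing was missed
        }
    }
}

fn main() {
    let mut vring = RxVring { avail: vec![Desc(0), Desc(1)] };
    handle_rx_kick(&mut vring);
}

If the RX path in vhost-device-can skips that final re-check, a flood
like the one above would be exactly the kind of traffic to hit the
window.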

Regards,
Francesco

[0] https://github.com/WallaceIT/qemu/tree/virtio-can
[1] https://github.com/rust-vmm/vhost-device/tree/main/vhost-device-can

