Since kernel 5.15, there is an issue with io_uring when used in combination
with CIFS [0]. Unfortunately, the kernel developers did not suggest any way
to resolve the issue and did not comment on my proposed one. So for now,
just disable io_uring when the storage is CIFS, as is already done for
other storage types with problematic interactions.
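
For illustration, here is a condensed sketch of the selection logic once the
patch below is applied. Variable names are taken from the hunk; the existing
$rbd_no_io_uring and $lvm_no_io_uring checks (defined earlier in
print_drive_commandline_full) are left out:

    # storage-level opt-out for CIFS, mirroring the existing rbd/lvm opt-outs
    my $cifs_no_io_uring = $scfg && $scfg->{type} eq 'cifs';

    if (!$drive->{aio}) {
        # only pick io_uring automatically when no storage type vetoes it;
        # an explicit per-drive aio setting is left untouched
        if ($io_uring && !$rbd_no_io_uring && !$lvm_no_io_uring && !$cifs_no_io_uring) {
            # io_uring supports all cache modes
            $opts .= ",aio=io_uring";
        } else {
            # fall back to a non-io_uring aio mode (branch elided in the hunk)
        }
    }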

It is rather easy to reproduce when writing large amounts of data within the
VM. I used dd if=/dev/urandom of=file bs=1M count=1000 to reproduce it
consistently, but your mileage may vary.

Some forum reports from users running into the issue: [1][2][3].

[0]: https://www.spinics.net/lists/linux-cifs/msg26734.html
[1]: https://forum.proxmox.com/threads/109848/
[2]: https://forum.proxmox.com/threads/110464/
[3]: https://forum.proxmox.com/threads/111382/

Signed-off-by: Fiona Ebner <f.eb...@proxmox.com>
---
 PVE/QemuServer.pm | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4e85dd02..513a248f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1665,8 +1665,12 @@ sub print_drive_commandline_full {
     # sometimes, just plain disable...
     my $lvm_no_io_uring = $scfg && $scfg->{type} eq 'lvm';
 
+    # io_uring causes problems when used with CIFS since kernel 5.15
+    # Some discussion: https://www.spinics.net/lists/linux-cifs/msg26734.html
+    my $cifs_no_io_uring = $scfg && $scfg->{type} eq 'cifs';
+
     if (!$drive->{aio}) {
-        if ($io_uring && !$rbd_no_io_uring && !$lvm_no_io_uring) {
+        if ($io_uring && !$rbd_no_io_uring && !$lvm_no_io_uring && !$cifs_no_io_uring) {
             # io_uring supports all cache modes
             $opts .= ",aio=io_uring";
         } else {
-- 
2.30.2