On Tuesday, 15 October 2019 11:20:39 CEST Greg Kurz wrote:
> On Tue, 08 Oct 2019 14:05:28 +0200
> Christian Schoenebeck <qemu_...@crudebyte.com> wrote:
> > I wonder though whether virtio-fs suffers from the same file ID collisions
> > problem when sharing multiple file systems.
>
> I gave a try and it seems that virtio-fs might expose the inode numbers from
> different devices in the host, unvirtualized AND with the same device in
> the guest:
>
> # mkdir -p /var/tmp/virtio-fs/proc
> # mount --bind /proc /var/tmp/virtio-fs/proc
> # virtiofsd -o vhost_user_socket=/tmp/vhostqemu -o source=/var/tmp/virtio-fs
>   -o cache=always
>
> and then started QEMU with:
>
> -chardev socket,id=char0,path=/tmp/vhostqemu \
> -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \
> -m 4G -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
> -numa node,memdev=mem
>
> In the host:
>
> $ stat /var/tmp/virtio-fs
>   File: /var/tmp/virtio-fs
>   Size: 4096         Blocks: 8          IO Block: 4096   directory
> Device: fd00h/64768d    Inode: 787796      Links: 4
> Access: (0775/drwxrwxr-x)  Uid: ( 1000/    greg)   Gid: ( 1000/    greg)
> Context: unconfined_u:object_r:user_tmp_t:s0
> Access: 2019-10-15 11:08:52.070080922 +0200
> Modify: 2019-10-15 11:02:09.887404446 +0200
> Change: 2019-10-15 11:02:09.887404446 +0200
>  Birth: 2019-10-13 19:13:04.009699354 +0200
>
> [greg@bahia ~]$ stat /var/tmp/virtio-fs/FOO
>   File: /var/tmp/virtio-fs/FOO
>   Size: 0            Blocks: 0          IO Block: 4096   regular empty file
> Device: fd00h/64768d    Inode: 790740      Links: 1
> Access: (0664/-rw-rw-r--)  Uid: ( 1000/    greg)   Gid: ( 1000/    greg)
> Context: unconfined_u:object_r:user_tmp_t:s0
> Access: 2019-10-15 11:02:09.888404448 +0200
> Modify: 2019-10-15 11:02:09.888404448 +0200
> Change: 2019-10-15 11:02:09.888404448 +0200
>  Birth: 2019-10-15 11:02:09.887404446 +0200
>
> [greg@bahia ~]$ stat /var/tmp/virtio-fs/proc/fs
>   File: /var/tmp/virtio-fs/proc/fs
>   Size: 0            Blocks: 0          IO Block: 1024   directory
> Device: 4h/4d           Inode: 4026531845  Links: 5
> Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
> Context: system_u:object_r:proc_t:s0
> Access: 2019-10-01 14:50:09.223233901 +0200
> Modify: 2019-10-01 14:50:09.223233901 +0200
> Change: 2019-10-01 14:50:09.223233901 +0200
>  Birth: -
>
> In the guest:
>
> [greg@localhost ~]$ stat /mnt
>   File: /mnt
>   Size: 4096         Blocks: 8          IO Block: 4096   directory
> Device: 2dh/45d         Inode: 787796      Links: 4
> Access: (0775/drwxrwxr-x)  Uid: ( 1000/    greg)   Gid: ( 1000/    greg)
> Context: system_u:object_r:unlabeled_t:s0
> Access: 2019-10-15 11:08:52.070080922 +0200
> Modify: 2019-10-15 11:02:09.887404446 +0200
> Change: 2019-10-15 11:02:09.887404446 +0200
>  Birth: -
>
> [greg@localhost ~]$ stat /mnt/FOO
>   File: /mnt/FOO
>   Size: 0            Blocks: 0          IO Block: 4096   regular empty file
> Device: 2dh/45d         Inode: 790740      Links: 1
> Access: (0664/-rw-rw-r--)  Uid: ( 1000/    greg)   Gid: ( 1000/    greg)
> Context: system_u:object_r:unlabeled_t:s0
> Access: 2019-10-15 11:02:09.888404448 +0200
> Modify: 2019-10-15 11:02:09.888404448 +0200
> Change: 2019-10-15 11:02:09.888404448 +0200
>  Birth: -
>
> [greg@localhost ~]$ stat /mnt/proc/fs
>   File: /mnt/proc/fs
>   Size: 0            Blocks: 0          IO Block: 1024   directory
> Device: 2dh/45d         Inode: 4026531845  Links: 5
> Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
> Context: system_u:object_r:unlabeled_t:s0
> Access: 2019-10-01 14:50:09.223233901 +0200
> Modify: 2019-10-01 14:50:09.223233901 +0200
> Change: 2019-10-01 14:50:09.223233901 +0200
>  Birth: -
>
> Unless I'm missing something, it seems that "virtio-fs" has the same
> issue we had on 9pfs before Christian's patches... :-\
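For reference, your guest stat output above shows both host devices (fd00h/64768d and 4h/4d) being collapsed into the single guest device 2dh/45d while the host inode numbers are passed through unchanged, so two files on different host file systems that happen to share an inode number would become indistinguishable in the guest. A minimal sketch of the remapping idea (simplified, with hypothetical helper names; not the actual 9pfs or virtiofsd code) would fold a per-device prefix into the guest-visible inode number:

/* Sketch only: remap (st_dev, st_ino) pairs to collision-free
 * guest inode numbers when several host file systems are exported
 * as a single guest device. */
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>

#define DEV_PREFIX_BITS 16   /* assumption: upper 16 bits identify the host device */
#define MAX_DEVICES     256  /* sketch only; a real table would grow dynamically */

static dev_t seen_devs[MAX_DEVICES];
static unsigned num_devs;

/* Return a small index for each distinct host st_dev. */
static int device_index(dev_t dev)
{
    for (unsigned i = 0; i < num_devs; i++) {
        if (seen_devs[i] == dev) {
            return (int)i;
        }
    }
    if (num_devs == MAX_DEVICES) {
        return -1; /* a real implementation would warn or refuse the export */
    }
    seen_devs[num_devs] = dev;
    return (int)num_devs++;
}

/* Fold the device index into the top bits of the guest-visible inode. */
static uint64_t remap_inode(const struct stat *st)
{
    int idx = device_index(st->st_dev);
    uint64_t ino_mask = (UINT64_C(1) << (64 - DEV_PREFIX_BITS)) - 1;

    if (idx < 0 || (st->st_ino & ~ino_mask)) {
        return st->st_ino; /* out of prefix space: pass through (sketch only) */
    }
    return ((uint64_t)idx << (64 - DEV_PREFIX_BITS)) | (st->st_ino & ino_mask);
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        struct stat st;

        if (stat(argv[i], &st) == 0) {
            printf("%s: host dev=0x%llx ino=%llu -> guest ino=0x%llx\n",
                   argv[i],
                   (unsigned long long)st.st_dev,
                   (unsigned long long)st.st_ino,
                   (unsigned long long)remap_inode(&st));
        }
    }
    return 0;
}

With such a scheme the two host devices above would get different prefixes, so even identical raw st_ino values could no longer collide in the guest; the price is that inode numbers too large for the remaining bits need special handling, which is what makes a general fix non-trivial.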
Is a fix for this desired for virtio-fs?

Greg, did you have to update the kernel version on either the host or the
guest side to get virtio-fs running? Or were the discussed kernel changes
only needed for optional acceleration features (i.e. DAX)?

Best regards,
Christian Schoenebeck