1) Host memory consumption is not the right measure for concluding that
a VM has a memory leak, especially because QEMU mmaps the VM memory: as
pages are touched inside the guest, the host allocates them, and this
shows up as growth in the QEMU process RSS. As long as we don't hit
OOM, this should not be considered a memory leak.
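For anyone who wants to verify this on their own setup, here is a
minimal sketch (assuming a single running QEMU instance; 1024 is the
-m value from this report) that samples the QEMU RSS on the host. With
demand paging it should level off near the guest RAM size rather than
grow without bound:

  QEMU_PID=$(pidof qemu-system-x86_64)   # assumes one running instance
  GUEST_MB=1024                          # guest RAM size given to -m
  while sleep 1; do
      rss_kb=$(awk '/^VmRSS/ {print $2}' /proc/$QEMU_PID/status)
      echo "RSS: $((rss_kb / 1024)) MB of $GUEST_MB MB guest RAM"
  done

Only RSS growing well beyond guest RAM plus QEMU's own overhead would
point to a real leak.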
2) Since the time this bug was reported, a number of memory leak fixes
have gone into the kernel's 9p client code (fs/9p) and into the QEMU 9p
server code.

3) In a "while true" style script, hitting Ctrl-C can cut off the
umount operation partway through, causing kmemleak to show false
positives.

4) We ran a modified version of the submitted script (a defined number
of iterations instead of "while true"; a sketch of such a run follows
this list), then invoked kmemleak, and nothing related to 9p was
reported. See the log below (both entries are ACPI/PCI allocations
from early boot, nothing 9p-related):

    [<ffffffff810fbada>] __kmalloc+0xf7/0x122
    [<ffffffff81668680>] pci_acpi_scan_root+0x10f/0x2c6
    [<ffffffff81658f35>] acpi_pci_root_add+0x1d5/0x41f
    [<ffffffff813cd649>] acpi_device_probe+0x49/0x117
    [<ffffffff8147d378>] driver_probe_device+0xa5/0x135
    [<ffffffff8147d461>] __driver_attach+0x59/0x7c
    [<ffffffff8147be21>] bus_for_each_dev+0x57/0x83
    [<ffffffff8147d070>] driver_attach+0x19/0x1b
    [<ffffffff8147ccf2>] bus_add_driver+0xab/0x201
    [<ffffffff8147d8db>] driver_register+0x93/0x100
    [<ffffffff813cdd81>] acpi_bus_register_driver+0x3e/0x40
    [<ffffffff81cde2f8>] acpi_pci_root_init+0x20/0x28
    [<ffffffff8100020a>] do_one_initcall+0x7a/0x130
    [<ffffffff81cbab44>] kernel_init+0x9a/0x114
    [<ffffffff81680b64>] kernel_thread_helper+0x4/0x10
  unreferenced object 0xffff88001faf06e0 (size 16):
    comm "swapper/0", pid 1, jiffies 4294667775 (age 135.226s)
    hex dump (first 16 bytes):
      50 43 49 20 42 75 73 20 30 30 30 30 3a 30 30 00  PCI Bus 0000:00.
    backtrace:
      [<ffffffff8165460a>] kmemleak_alloc+0x21/0x3e
      [<ffffffff810fbada>] __kmalloc+0xf7/0x122
      [<ffffffff8138fd5d>] kvasprintf+0x45/0x6e
      [<ffffffff8138fdbe>] kasprintf+0x38/0x3a
      [<ffffffff816686a6>] pci_acpi_scan_root+0x135/0x2c6
      [<ffffffff81658f35>] acpi_pci_root_add+0x1d5/0x41f
      [<ffffffff813cd649>] acpi_device_probe+0x49/0x117
      [<ffffffff8147d378>] driver_probe_device+0xa5/0x135
      [<ffffffff8147d461>] __driver_attach+0x59/0x7c
      [<ffffffff8147be21>] bus_for_each_dev+0x57/0x83
      [<ffffffff8147d070>] driver_attach+0x19/0x1b
      [<ffffffff8147ccf2>] bus_add_driver+0xab/0x201
      [<ffffffff8147d8db>] dri

5) Lastly, we also ran the exact same script ("while true ...") for a
long duration (a few days) against a 9p-exported path and did not hit
any OOM.
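For reference, here is a minimal sketch of the kind of bounded run
described in point 4 (our exact script was not posted; the iteration
count is illustrative, the mount commands are the reporter's), followed
by a kmemleak scan:

  # Bounded variant of the reporter's loop; assumes the guest kernel
  # was built with CONFIG_DEBUG_KMEMLEAK=y and that debugfs is mounted.
  mount -t 9p -o trans=virtio host /mnt
  touch /mnt/test
  i=0
  while [ $i -lt 100000 ]; do
      ls -l /mnt/test > /dev/null
      i=$((i + 1))
  done
  umount /mnt                        # let the umount complete, no Ctrl-C
  echo scan > /sys/kernel/debug/kmemleak
  cat /sys/kernel/debug/kmemleak     # look for anything 9p-related

Letting the umount run to completion before scanning avoids the false
positives mentioned in point 3.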
So this makes me believe that there aren't any memory leaks in the 9p
virtio mapped flow. This bug can be closed.

** Changed in: qemu
       Status: New => Invalid

--
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/648356

Title:
  VirtFS possible memory leak in 9p virtio mapped

Status in QEMU:
  Invalid

Bug description:
  I use as client Debian squeeze i386 with a custom kernel:
  Linux (none) 2.6.35.5 #3 Thu Sep 23 18:36:02 UTC 2010 i686 GNU/Linux

  and as host Debian squeeze amd64:
  Linux asd 2.6.32-5-amd64 #1 SMP Fri Sep 17 21:50:19 UTC 2010 x86_64 GNU/Linux

  kvm version is: kvm-88-5908-gdd67374

  Started the client using:

  sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 1024 \
    -kernel linux-2.6.35.5.qemu \
    -drive file=root.img,if=virtio \
    -net nic,macaddr=02:ca:ff:ee:ba:be,model=virtio,vlan=1 \
    -net tap,ifname=tap1,vlan=1,script=no \
    -virtfs local,path=/host,security_model=mapped,mount_tag=host \
    -nographic

  I've done the following inside the guest:

  $ mount -t 9p -o trans=virtio host /mnt
  $ rm -f /mnt/test
  $ touch /mnt/test
  $ ls -l /mnt/test
  $ while true ;do ls -l /mnt/test > /dev/null; done

  Now I can see on my host system that the memory consumption starts at
  90 MB and after a minute rises to 130 MB. The extra memory
  consumption stops when I stop the while loop.

  $ while true ;do ls -l /tmp > /dev/null; done

  does not show that behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/648356/+subscriptions