[Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives

2020-07-13 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1686980
Title: qemu is very slow when adding 16,384 virtio-scsi drives

[Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives

2020-05-13 Thread Thomas Huth
Is this faster nowadays if you use the new -blockdev parameter instead of -drive?

** Changed in: qemu
   Status: New => Incomplete
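For context on the suggestion above, the two configuration styles differ roughly as follows. This is an illustrative sketch only: the image filename, node names, and bus IDs are made up, and exact option spellings should be checked against the qemu-system man page for your QEMU version.

```shell
# Legacy style: each -drive goes through the DriveInfo/drive_get
# bookkeeping that is profiled later in this thread.
qemu-system-x86_64 \
    -device virtio-scsi-pci,id=scsi0 \
    -drive if=none,id=drive0,file=disk0.img,format=raw \
    -device scsi-hd,bus=scsi0.0,drive=drive0

# Newer style: -blockdev builds the block node graph directly;
# the device references the format node by its node-name.
qemu-system-x86_64 \
    -device virtio-scsi-pci,id=scsi0 \
    -blockdev driver=file,node-name=file0,filename=disk0.img \
    -blockdev driver=raw,node-name=fmt0,file=file0 \
    -device scsi-hd,bus=scsi0.0,drive=fmt0
```

The relevant difference for this bug is that -blockdev does not register a legacy DriveInfo entry, so the per-drive lookups discussed below would not apply in the same way.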

[Qemu-devel] [Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives

2017-04-28 Thread Daniel Berrange
I added further instrumentation and got this profile of where the remaining time goes:

1000x drive_new              18.347 secs
  -> 1000x blockdev_init     18.328 secs
    -> 1000x monitor_add_blk  4.515 secs
      -> 1000x blk_by_name    1.545 secs
      -> 1000x bdrv_find_node 2.968 secs
    -> 1000x blk_new_open    13.786 secs

[Qemu-devel] [Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives

2017-04-28 Thread Daniel Berrange
I instrumented drive_new to time how long 1000 creations took with current code:

1000 drive_new() in 0 secs
1000 drive_new() in 2 secs
1000 drive_new() in 18 secs
1000 drive_new() in 61 secs

As a quick hack you can just disable the drive_get() calls when if=none. They're mostly just used to fill

[Qemu-devel] [Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives

2017-04-28 Thread Daniel Berrange
The first place where it takes an insane amount of time is simply processing -drive options. The stack trace I see is this:

(gdb) bt
#0  0x5583b596719a in drive_get (type=type@entry=IF_NONE, bus=bus@entry=0, unit=unit@entry=2313) at blockdev.c:223
#1  0x5583b59679bd in drive_new (all_opts=0