Konstantin, thanks for the explanation. Unfortunately, upgrading qemu is
nearly impossible in my case.
So is there anything else I can do, or do I have to accept the fact that
write IOPS are 8x lower inside KVM than outside it? :|
Fri, 13 Jul 2018 at 04:22, Konstantin Shalygin wrote:
> I've seen some people using 'num_queues' but I don't have this parameter
> in my schemas (libvirt version = 1.3.1, qemu version = 2.5.0).

num-queues is available from qemu 2.7 [1]

[1] https://wiki.qemu.org/ChangeLog/2.7
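On the qemu command line it ends up as a property of the virtio-blk
device; a rough sketch (the image name, drive id and queue count below are
only examples, not from this thread):

    # rbd-backed drive handed to a multi-queue virtio-blk device
    qemu-system-x86_64 ... \
      -drive file=rbd:volumes-nvme/myimage,format=raw,if=none,id=drive0 \
      -device virtio-blk-pci,drive=drive0,num-queues=4

Newer libvirt exposes the same knob as a queues attribute on the disk
<driver> element, if I recall correctly, but that would also mean
upgrading from 1.3.1.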
k
Hello,
Steffen, thanks for your reply. Sorry, I was on holiday; now I'm back
and still digging into my problem. :(
I've read through countless Google results but can't find anything that
could help me.
- tried all qemu drive IO (io=) and cache (cache=) modes; nothing came
even close to the r
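In libvirt those knobs sit on the disk's <driver> element; a line like the
following is what was being varied (this exact combination of values is
just an example, not a recommendation):

    <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>

io= maps to qemu's AIO backend (native vs threads) and cache= to the host
page-cache mode (none, writeback, writethrough, directsync, unsafe).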
Hi Stefan, thanks for the reply.
Unfortunately it didn't work.
disk config:
    <driver ... discard='unmap'/>
    <source ... name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395'>
Controller config:
benchmark command: fio --randrepeat=1 --
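(The original command is cut off, so everything after --randrepeat=1 below
is illustrative; a typical random-write run along those lines looks
roughly like this, with /dev/vdb standing in for the disk inside the
guest:)

    fio --randrepeat=1 --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --size=1G \
        --name=randwrite --filename=/dev/vdb --group_reporting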
Hello,
When I mount an rbd image with -o queue_depth=1024 I see a big
improvement, mostly on writes (random write IOPS go from 3k at the default
queue_depth to 24k at queue_depth=1024).
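For reference, the mapping I mean is the kernel rbd client's map option,
e.g.:

    # map through krbd with a deeper request queue (image name is an example)
    rbd map volumes-nvme/myimage -o queue_depth=1024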
But is there any way to attach an rbd disk to a KVM instance with a custom
queue_depth? I can't find any