Re: Poor NVMe Performance with KVM

2019-05-18 Thread Wido den Hollander
You also might want to set the allocation mode to something other than shared. This causes the qcow2 metadata to be pre-allocated, and that will improve performance. Wido On 5/17/19 3:04 PM, Ivan Kudryavtsev wrote: > Well, just FYI, I changed cache_mode from NULL (none) to writethrough > directly
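A minimal sketch of the metadata preallocation Wido describes, done at image-creation time with qemu-img (the image path and size here are illustrative assumptions, not from the thread):

```shell
# Create a qcow2 image with its metadata pre-allocated, so the L1/L2
# tables don't have to be allocated lazily on first write.
qemu-img create -f qcow2 -o preallocation=metadata \
    /var/lib/libvirt/images/vm-disk.qcow2 100G

# Inspect the result; "disk size" stays small because only metadata,
# not data clusters, was allocated.
qemu-img info /var/lib/libvirt/images/vm-disk.qcow2
```

`preallocation=full` (or `falloc`) trades more up-front space for even fewer allocations at runtime; `metadata` is usually the middle ground.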

Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
> Sent from the Delta quadrant using Borg technology! > > Nux! > www.nux.ro > > - Original Message - > > From: "Ivan Kudryavtsev" > > To: "users" , "dev" < > dev@cloudstack.apache.org> > > Sent: Friday, 17 May, 2019 1

Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Well, just FYI: I changed cache_mode from NULL (none) to writethrough directly in the DB, and performance improved greatly. It may be an important feature for NVMe drives. Currently, on 4.11, the user can set the cache mode for disk offerings, but cannot for service offerings, which are translated to c
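For readers who would rather not edit the database directly, the same cache-mode change can be made at the libvirt level. This is a hedged sketch: the domain name `vm-instance` and target device `vda` are assumptions for illustration, not values from the thread.

```shell
# Show the current disk driver settings for the guest.
virsh dumpxml vm-instance | grep -A2 '<driver'

# Open the domain XML for editing and change the disk's driver element, e.g.:
#   <driver name='qemu' type='qcow2' cache='writethrough'/>
virsh edit vm-instance

# The new cache mode takes effect after the guest is restarted.
virsh shutdown vm-instance && virsh start vm-instance
```

Note that CloudStack may rewrite the domain XML on VM operations, which is presumably why the poster changed the offering's cache mode in the DB instead.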

Re: Poor NVMe Performance with KVM

2019-05-17 Thread Nux!
What happens when you set the deadline scheduler in both the HV and the guest? -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro - Original Message - > From: "Ivan Kudryavtsev" > To: "users" , "dev" > Sent: Friday, 17 May, 2019 14:16:
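A quick sketch of the scheduler change Nux! is suggesting; the device name `nvme0n1` is an assumption, and whether the kernel offers `deadline` or `mq-deadline` depends on whether it uses the legacy or blk-mq I/O path:

```shell
# List the schedulers available for the device; the active one is in brackets.
cat /sys/block/nvme0n1/queue/scheduler

# Switch to deadline (requires root). On blk-mq kernels (most NVMe setups)
# the name is "mq-deadline"; on older kernels it is "deadline".
echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
```

The same change would be made inside the guest on its virtual disk (e.g. `vda`) to test the "both HV and guest" combination.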

Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
BTW, you may think that the improvement is achieved by caching, but I clear the cache with sync && echo 3 > /proc/sys/vm/drop_caches So I can't claim it for sure and need other opinions, but it looks like for NVMe, writethrough must be used if you want a high IO rate. At least with the Intel P4500. Fri, 17 May 201
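The cache-clearing step above, spelled out (requires root; the follow-up fio run is a hypothetical illustration of re-measuring afterwards, with an assumed test file path):

```shell
# Flush dirty pages to disk first, then drop the pagecache,
# dentries and inodes so cached data cannot inflate the next benchmark.
sync
echo 3 > /proc/sys/vm/drop_caches

# Hypothetical re-check with fio; --direct=1 additionally bypasses the
# guest pagecache so results reflect the device and host cache mode.
fio --name=randwrite --filename=/root/fio-test.bin --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based
```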