Hi,
I have done the test again in a cleaner way.
Same pool, same VM, different hosts (qemu 2.4 + qemu 2.2) but same hardware.
But only one run!
The biggest difference is due to the cache settings:
qemu2.4 cache=writethrough iops=3823 bw=15294KB/s
qemu2.4 cache=writeback iops=8837 bw=35348KB/s
qemu2.2 c
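For reference, a rough sketch of how the cache mode could be set on the qemu command line; the rbd pool/image name is only a placeholder, not taken from the actual test setup:

  qemu-system-x86_64 \
    ... \
    -drive file=rbd:rbd/vm-100-disk-1,format=raw,if=virtio,cache=writeback

Swapping cache=writeback for cache=writethrough gives the other variant of the comparison above.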
Hi Zoltan,
you are right (but these were two running systems...).
I also see a big mistake: "--filename=/mnt/test.bin" (I simply used
copy/paste without too much thinking :-( )
The root filesystem is not on ceph (on either server).
So my measurements are not valid!!
I would do the measurements clean t
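A clean re-run would point fio at a file that actually lives on the RBD-backed disk inside the VM, for example (the mount point /mnt/ceph-disk and the job parameters here are only assumptions, not the exact options used in the original test):

  fio --name=rbd-test --filename=/mnt/ceph-disk/test.bin --size=4G \
      --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based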
There have been numerous reports on the mailing list of the Samsung EVO and
Pro drives failing far before their expected wear. This is most likely due
to the 'uncommon' workload of Ceph; the controllers of those drives
are not really designed to handle the cont
I just had 2 of the 3 SSD journals in my small 3-node cluster fail
within 24 hours of each other (not fun, although thanks to a replication
factor of 3x, at least I didn't lose any data). The journals were 128 GB
Samsung 850 Pros. However I have determined that it wasn't really their
fault...
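For anyone wanting to see how far along such journal SSDs really are, the SMART wear indicators can be read with smartctl; the device path /dev/sdb below is just a placeholder, and the exact attribute names vary by drive model:

  smartctl -a /dev/sdb | grep -i -e wear -e lbas_written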
>>One test with proxmox ve 4 (qemu 2.4, iothread for device, and
>>cache=writeback) gives 14856 iops
Please also note that qemu in proxmox ve 4 is compiled with jemalloc.
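For context, a rough sketch of what such a qemu invocation might look like with a dedicated iothread and cache=writeback; the drive path and ids are placeholders, not the actual proxmox-generated command line:

  qemu-system-x86_64 \
    ... \
    -object iothread,id=iothread0 \
    -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=drive0,cache=writeback \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0

Whether a given qemu binary is actually linked against jemalloc can be checked with e.g. ldd /usr/bin/qemu-system-x86_64 | grep jemalloc.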
----- Original Message -----
From: "Udo Lembke"
To: "Sean Redmond"
Cc: "ceph-users"
Sent: Sunday, 22 November 2015 04:29:29
It would have been more interesting if you had tweaked only one option, as now
we can’t be sure which change had what impact… :-)
> On 22 Nov 2015, at 04:29, Udo Lembke wrote:
>
> Hi Sean,
> Haomai is right that qemu can make a huge performance difference.
>
> I have done two tests to the sam