[ceph-users] Re: *****SPAM***** Direct disk/Ceph performance

2022-01-18 Thread Marc
> Run status group 1 (all jobs):
>    READ: bw=900KiB/s (921kB/s), 900KiB/s-900KiB/s (921kB/s-921kB/s), io=159MiB (167MB), run=180905-180905msec

So it is not 200 MB/s but 0.9 MB/s. Ceph (obviously) does not, and never will, come near native disk speeds: https://yourcmc.ru/wiki/Ceph_performance
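For context, a result like the one quoted could be produced by a fio invocation along these lines. The device path, block size, and queue depth are assumptions for illustration, not details given in the thread; only the ~180 s runtime is suggested by the quoted run= figure.

    # Hypothetical small-block random read test against an RBD-backed device.
    # /dev/rbd0, bs=4k and iodepth=1 are placeholders, not from the thread.
    fio --name=randread --filename=/dev/rbd0 --rw=randread --bs=4k \
        --direct=1 --iodepth=1 --runtime=180 --time_based --group_reporting

At queue depth 1, every read waits for a full network round trip to an OSD, which is why small-block results collapse to a fraction of native disk speed.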

[ceph-users] Re: *****SPAM***** Direct disk/Ceph performance

2022-01-16 Thread Kai Börnert
Hi, to have a fair test you need to replicate the power-loss scenarios Ceph covers, which you currently are not: no memory caches in the OS or on the disk are allowed to be used. Ceph has to ensure that an object written is actually written, even if a node of your cluster explodes right at…
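As a sketch of what "no caches allowed" means for a like-for-like benchmark: fio can bypass the page cache and force every write to stable storage. The flags below are standard fio options; the device path and job parameters are illustrative, not taken from the thread.

    # O_DIRECT plus O_SYNC and an fsync after every write approximates the
    # durability Ceph guarantees for each acknowledged write.
    # /dev/sdX is a placeholder for the raw disk under test.
    fio --name=durable-write --filename=/dev/sdX --rw=randwrite --bs=4k \
        --direct=1 --sync=1 --fsync=1 --iodepth=1 --runtime=60 --time_based

On most drives this drops throughput dramatically, which is exactly the gap a cached "direct disk" number hides.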

[ceph-users] Re: *****SPAM***** Direct disk/Ceph performance

2022-01-16 Thread Behzad Khoshbakhti
Hi Marc, Thanks for your prompt response. We have tested direct random write on the disk (without Ceph) and it is 200 MB/s. Wondering why we got 80 MB/s from Ceph. Your help is much appreciated. Regards, Behzad On Sun, Jan 16, 2022 at 11:56 AM Marc wrote: > > > > Detailed (somehow) problem de…

[ceph-users] Re: *****SPAM***** Direct disk/Ceph performance

2022-01-16 Thread Marc
> Detailed (somehow) problem description:
> Disk size: 1.2 TB
> Ceph version: Pacific
> Block size: 4 MB
> Operation: Sequential write
> Replication factor: 1
> Direct disk performance: 245 MB/s
> Ceph controlled disk performance: 80 MB/s

You are comparing sequential IO against random. You shou…
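To compare like with like, the same fio workload would have to be run against the raw disk and against a mapped RBD device. Both invocations below use standard fio options; the paths and runtimes are assumptions for illustration, not commands from the thread.

    # Sequential 4M writes, matching the "direct disk" setup quoted above:
    fio --name=seq --filename=/dev/sdX --rw=write --bs=4M --direct=1 \
        --runtime=60 --time_based
    # The same sequential 4M write, but against a mapped RBD device:
    fio --name=seq-rbd --filename=/dev/rbd0 --rw=write --bs=4M --direct=1 \
        --runtime=60 --time_based

Only when workload, block size, queue depth, and sync behaviour match on both sides is the remaining difference attributable to Ceph itself.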