The drive will actually be writing at around 500MB/s in this case, if the journal is 
on the same drive.
Every write goes to the journal first and is then written again to the filestore, so 
200-300MB/s of client throughput is actually a sane figure.
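
A quick way to confirm this is to watch the OSD device while the benchmark runs 
(a sketch; /dev/sdb is a placeholder for the journal/data device):

  # per-device throughput once per second; with journal and data on the same
  # disk, wMB/s should be roughly double the client-visible write rate
  iostat -xm 1 /dev/sdb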

Jan


> On 11 Dec 2015, at 13:55, Zoltan Arnold Nagy <zol...@linux.vnet.ibm.com> 
> wrote:
> 
> It’s very unfortunate that you guys are using the EVO drives. As we’ve 
> discussed numerous times on the ML, they are not very suitable for this task.
> I think that 200-300MB/s is actually not bad (without knowing anything about 
> the hardware setup, as you didn’t give details…) coming from those drives, 
> but expect to replace them soon.
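> 
> If you want to see why, a quick check is to measure the drive with synchronous 
> writes, which is the pattern the filestore journal generates (a sketch; 
> /dev/sdX is a placeholder and the run is destructive to that device):
> 
>   # 4k sequential writes with O_DIRECT + O_SYNC, as the OSD journal does;
>   # consumer EVO-class drives typically report only a few MB/s here
>   fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
>       --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60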
> 
>> On 11 Dec 2015, at 13:44, Florian Rommel <florian.rom...@datalounges.com> 
>> wrote:
>> 
>> Hi, we are just testing our new Ceph cluster, and to optimise our spinning 
>> disks we created an erasure-coded pool and an SSD cache pool.
>> 
>> We modified the CRUSH map to create an SSD pool, as each server contains 1 SSD 
>> drive and 5 spinning drives.
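>> 
>> The tiering itself was set up along these lines (a sketch of the standard 
>> cache-tier commands; 'ecpool' and 'cachepool' are placeholder pool names):
>> 
>>   ceph osd tier add ecpool cachepool
>>   ceph osd tier cache-mode cachepool writeback
>>   ceph osd tier set-overlay ecpool cachepool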
>> 
>> Stress testing the cluster shows very nice read performance, pushing a little 
>> over 1.2GB/s; however, the write speed is only 200-300MB/s.
>> 
>> All the SSDs are Samsung 500GB EVO 850 PROs and can push 500MB/s write speed, 
>> as tested with hdparm and dd.
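>> 
>> (A synchronous variant of the dd test, for comparison - a sketch, /mnt/ssd is 
>> a placeholder mount point:)
>> 
>>   # sequential 1M writes with O_DIRECT + O_DSYNC instead of buffered I/O,
>>   # closer to the journal's write pattern than a plain dd
>>   dd if=/dev/zero of=/mnt/ssd/ddtest bs=1M count=1024 oflag=direct,dsync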
>> 
>> What can we tweak so that the write speed over the network increases as well?
>> 
>> We run everything over 10GbE.
>> 
>> The cache mode is set to write-back.
>> 
>> Any help would be greatly appreciated.
>> 
>> Thank you and best regards
>> //Florian
>> 
>> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
