Hi Loic and community,

I have gathered the following data on the EC backend (all flash). I decided to 
use Jerasure since space saving is the top priority.
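
For context, the pool setup was along these lines. This is only a sketch: the 
k/m values, profile and pool names, and PG counts below are placeholders, not 
my actual settings, and on newer releases the failure-domain key is spelled 
crush-failure-domain.

    # Sketch only: k/m, names, and PG counts are placeholders.
    ceph osd erasure-code-profile set ecprofile \
        plugin=jerasure technique=reed_sol_van \
        k=4 m=2 ruleset-failure-domain=osd
    ceph osd pool create ecpool 2048 2048 erasure ecprofile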

Setup:
--------
5-node Ceph cluster with 41 OSDs (one OSD per 8 TB flash drive). Each node has 
a 48-core CPU (HT enabled) and 64 GB of RAM. Tested with rados bench clients.
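
The workload was along these lines (a sketch; the pool name, thread count, and 
object size here are placeholders, not the exact parameters used):

    # Write phase; keep the objects around so they can be read back.
    rados bench -p ecpool 60 write -t 32 -b 4194304 --no-cleanup
    # Sequential read phase against the objects written above.
    rados bench -p ecpool 60 seq -t 32
    # Remove the benchmark objects afterwards.
    rados -p ecpool cleanup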

Results:
---------

The detailed numbers are in the attached doc (EC_benchmark.docx).

Summary:
-------------

1. Reads are doing pretty well: 4 rados bench clients saturate the 40 GbE 
network. With more physical client servers, it scales almost linearly and 
saturates 40 GbE on both hosts.

2. As suspected with Ceph, the problem is again with writes. Throughput-wise it 
is beating replicated pools by a significant margin, but it does not scale with 
multiple clients and is not saturating anything.

So, my questions are the following.

1. This probably has nothing to do with the EC backend itself; we are likely 
suffering from filestore inefficiencies. Do you think a tunable like the EC 
stripe size (or anything else) will help here? (A sketch of the stripe-size 
knobs follows this list.)

2. I couldn't make the failure domain 'host' because of a HW limitation. Do you 
think that will play a role in performance for bigger k values? (See the 
profile sketch below.)

3. Even though writes are not saturating 40 GbE, do you think separating the 
public and cluster (private) networks will help performance? (A ceph.conf 
sketch follows below.)
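
For question 1, the stripe-size knobs I'm aware of are below. This is a hedged 
sketch: the exact option name depends on the release, and 4K is just an example 
value.

    # Older releases: default stripe width for new EC pools, in ceph.conf
    [global]
    osd pool erasure code stripe width = 4096

    # Newer releases (assumption: check your version) take it per profile:
    ceph osd erasure-code-profile set ecprofile-4k \
        plugin=jerasure technique=reed_sol_van k=4 m=2 stripe_unit=4K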
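
For question 2, the failure domain is fixed in the EC profile at pool-creation 
time, so comparing 'osd' vs 'host' means a new profile and a new pool. Note 
that with only 5 nodes, any profile with k+m > 5 cannot use 'host' as the 
failure domain at all. A sketch, assuming a pre-Luminous release (the key is 
spelled crush-failure-domain on newer ones):

    # Failure domain per OSD (what the HW limitation forces here).
    ceph osd erasure-code-profile set ec-osd-domain \
        plugin=jerasure k=4 m=2 ruleset-failure-domain=osd
    # Failure domain per host (needs at least k+m hosts in the cluster).
    ceph osd erasure-code-profile set ec-host-domain \
        plugin=jerasure k=4 m=2 ruleset-failure-domain=host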
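
For question 3, the split I have in mind looks like this in ceph.conf (the 
subnets are placeholders). Client traffic stays on the public network while 
OSD-to-OSD traffic moves to the cluster network:

    [global]
    # client <-> OSD/MON traffic
    public network = 192.168.1.0/24
    # OSD <-> OSD traffic (EC shard writes, recovery, backfill)
    cluster network = 192.168.2.0/24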

Any feedback on this is much appreciated.

Thanks & Regards
Somnath




Attachment: EC_benchmark.docx
