Hello Cephers.

I wonder if anyone has experience with a full-SSD cluster.
We're testing Ceph ("Firefly") with 4 nodes (Supermicro
SYS-F628R3-R72BPT), each node with 1 TB SSDs, for a total of 12 OSDs.
Our network is 10 GbE.
We used ceph-deploy for installation with all defaults (and followed the Ceph
documentation for OpenStack integration).
As far as we understand, there is no need to enable the RBD cache since we're
running on all-SSD storage.
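For context, RBD cache is a client-side writeback cache in librbd, so it can still help on an all-SSD cluster (it coalesces small guest writes before they reach the OSDs). A minimal sketch of the client-side section of ceph.conf, assuming Firefly-era option names and default-ish sizes:

```ini
[client]
# librbd client-side writeback cache; independent of backend media
rbd cache = true
# cache size per client in bytes (32 MB here; tune to workload)
rbd cache size = 33554432
# stay in writethrough mode until the guest issues its first flush,
# which is the safe default for VMs that may not send flushes
rbd cache writethrough until flush = true
```

This is a sketch, not a tuning recommendation; measure before and after enabling it.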
Benchmarking the cluster shows very poor write performance, and read
performance is even worse (clients are OpenStack and also VMware instances).
Any input is much appreciated; in particular, we'd like to know which
parameters are crucial for read performance in a full-SSD cluster.
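In case it helps to reproduce, here is a hedged sketch of how read performance could be isolated with rados bench, taking OpenStack/VMware out of the picture. The pool name `testpool` and the PG count are assumptions, not values from our setup:

```
# Throwaway pool for benchmarking (PG count is a rough guess for 12 OSDs)
ceph osd pool create testpool 512

# Write objects first; --no-cleanup keeps them for the read tests
rados bench -p testpool 60 write --no-cleanup

# Sequential and random read benchmarks against those objects
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand

# Clean up
rados -p testpool cleanup
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
```

If rados bench reads look healthy but RBD clients are still slow, that would point at the client path (librbd settings, virtio/iSCSI layers) rather than the OSDs.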

Thanks in advance.

Yair Magnezi
Storage & Data Protection TL // Kenshoo
Office +972 7 32862423 // Mobile +972 50 575-2955

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
