On 05/08/2013 17:15, Mike Dawson wrote:

Short answer: Ceph is generally used with multiple OSDs per node; one OSD per storage drive with no RAID is the most common setup. At 24 or 36 drives per chassis, there are several potential bottlenecks to consider.
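
For illustration, a minimal sketch of that layout using the ceph-disk tooling of that era (device names are placeholders, not a recommendation):

  # One OSD per data drive, no RAID: each drive gets its own filesystem,
  # journal and OSD daemon. Repeat for every drive in the chassis.
  ceph-disk prepare /dev/sdb      # partition the drive, create data + journal
  ceph-disk activate /dev/sdb1    # register and start the OSD for that drive
  ceph-disk prepare /dev/sdc
  ceph-disk activate /dev/sdc1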

Mark Nelson, the Ceph performance guy at Inktank, has published several articles you should consider reading. A few of interest are [0], [1], and [2]. The last link is a 5-part series.

Yep, I saw [0] and [1]. He tries a 6-disk RAID0 array and generally gets lower throughput than 6 x JBOD disks (although I think he's using the controller RAID0 functionality, rather than mdraid).
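
For comparison, a software-RAID (mdraid) stripe over the same six drives would look roughly like this; a sketch only, with device names and chunk size as assumptions rather than anything taken from the articles:

  # RAID0 across six drives via mdraid instead of the controller,
  # giving a single block device (and hence a single OSD) on top.
  mdadm --create /dev/md0 --level=0 --raid-devices=6 --chunk=256 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  mkfs.xfs /dev/md0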

AFAICS he has a 36-disk chassis but only runs tests with 6 disks, which is a shame, as it would be nice to know which bottleneck you would hit first with this type of setup.

Also, note that there is ongoing work to add erasure coding as an optional backend (as opposed to the current replication scheme). If you prioritize bulk storage over performance, you may be interested in following the progress at [3], [4], and [5].

[0]: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
[1]: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
[2]: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-1-introduction-and-rados-bench/
[3]: http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend
[4]: http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend
[5]: http://www.inktank.com/about-inktank/roadmap/

Thank you - erasure coding is very much of interest for the archival-type storage I'm looking at. However, your links [3] and [4] are identical; did you mean to link to a different one?

Cheers,

Brian.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com