On 9 January 2014 16:57, Mark Nelson <mark.nel...@inktank.com> wrote:
> On 01/09/2014 10:43 AM, Bradley Kite wrote:
>
>> On 9 January 2014 15:44, Christian Kauhaus <k...@gocept.com> wrote:
>>
>> On 09.01.2014 10:25, Bradley Kite wrote:
>> > 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
>> > plus a 160GB SSD.
>> > [...]
>> > By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4000 read IOPS and ~2000
>> > write IOPS (but with 15K RPM SAS disks).
>>
>> I think that comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm
>> SAS disks is not fair. The random access times of 15k SAS disks are hugely
>> better compared to 7.2k SATA disks. What would be far more interesting is to
>> compare Ceph against iSCSI with identical disks.
>>
>> Regards
>>
>> Christian
>>
>> Hi Christian,
>>
>> Yes, for a true comparison it would be better, but this is the only iSCSI
>> SAN that we have available for testing, so I really only compared against
>> it to get a "gut feel" for relative performance.
>>
>> I'm still looking for clues that might indicate why there is such a huge
>> difference between the read and write rates on the Ceph cluster, though.
>
> One thing you may want to look at is some comparisons we did with fio on
> different RBD volumes with varying IO depths and volume/guest counts:
>
> http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/
>
> You'll probably be most interested in the 4K random read/write results for
> XFS. It would be interesting to see if you saw any difference with more or
> fewer volumes at different IO depths. Also, sorry if I missed it, but is
> this QEMU/KVM? If so, did you enable RBD cache?

Hi Mark,

Thanks for your very detailed test results.

Your results are interesting, and suggest that there is a significant
performance difference between the kernel RBD mapping and QEMU/KVM (which
uses librbd directly) - shown particularly here, where krbd achieves
23 MB/sec versus 500 MB/sec for librbd:

http://ceph.com/wp-content/uploads/2014/07/cuttlefish-rbd_xfs-write-0004K.png

Our end goal is to use QEMU/KVM, so this is very promising.

Would you happen to have the raw IOPS figures from your tests? The graphs
only show throughput, which makes for a good comparison, but for us IOPS
is the most important factor.

Would you also happen to know whether stgt (iSCSI) uses the kernel module
or librbd? We also have some legacy Hyper-V hosts that we would like to
connect (to avoid rebuilding them). Is it generally recommended to avoid
the kernel module where possible?

Regards
--
Brad.
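PS - for anyone else following the thread, a rough way to read IOPS off the
4K throughput graphs (assuming the plotted figures really are MB/sec of
fixed-size 4 KB requests - my assumption, I haven't checked the fio job
files) is simply throughput divided by the request size. A quick sketch of
the arithmetic in Python:

    # Rough IOPS estimate from a 4K throughput figure. Assumes the graph
    # plots MB/sec (1 MB = 1024 KB) of fixed-size 4 KB requests, which is
    # an assumption about the test setup rather than a confirmed detail.
    def iops_from_throughput(mb_per_sec, request_size_kb=4):
        return mb_per_sec * 1024 / request_size_kb

    # The two data points quoted above:
    print(iops_from_throughput(23))   # krbd at 23 MB/sec    -> ~5,888 IOPS
    print(iops_from_throughput(500))  # librbd at 500 MB/sec -> ~128,000 IOPS

The raw fio output would still be more useful, of course, since real
workloads won't be uniform 4 KB requests.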
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com