On 9 January 2014 15:44, Christian Kauhaus <k...@gocept.com> wrote:
> On 09.01.2014 10:25, Bradley Kite wrote:
> > 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
> > plus a 160GB SSD.
> [...]
> > By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4000 read IOPS and
> > ~2000 write IOPS (but with 15K RPM SAS disks).
>
> I think that comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm
> SAS disks is not fair. The random access times of 15k SAS disks are hugely
> better compared to 7.2k SATA disks. What would be far more interesting is to
> compare Ceph against iSCSI with identical disks.
>
> Regards
>
> Christian
>
> --
> Dipl.-Inf. Christian Kauhaus <>< · k...@gocept.com · systems administration
> gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany
> http://gocept.com · tel +49 345 219401-11
> Python, Pyramid, Plone, Zope · consulting, development, hosting, operations
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Christian,

Yes, for a true comparison it would be better, but this is the only iSCSI SAN we have available for testing, so I really only compared against it to get a "gut feel" for relative performance.

I'm still looking for clues that might indicate why there is such a huge difference between the read and write rates on the Ceph cluster, though.

I've been doing some more testing, and the raw random read/write performance of the individual bcache OSDs is around 1500 IOPS, so I feel I should be getting significantly more out of Ceph than I currently am. Of course, as soon as bcache stops providing a benefit (i.e. once data is evicted from the SSD cache), the raw performance drops to that of a standard SATA drive, around 120 IOPS.

Regards
--
Brad.
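As a rough sanity check on what "significantly more" might mean, here is a minimal back-of-the-envelope sketch (not from the original thread): it assumes 12 OSDs (3 servers x 4 disks, as described above) at ~1500 random IOPS each while bcache is effective, and a hypothetical replication factor of 2. Real throughput will be lower due to journaling, network round-trips, and OSD CPU overhead.

```python
def cluster_iops_ceiling(num_osds, iops_per_osd, replicas):
    """Crude aggregate IOPS upper bounds for a replicated cluster.

    Reads are served from a single replica, so they scale with OSD count.
    Each client write is amplified by the replication factor.
    """
    read_ceiling = num_osds * iops_per_osd
    write_ceiling = (num_osds * iops_per_osd) // replicas
    return read_ceiling, write_ceiling

# 12 bcache-backed OSDs at ~1500 IOPS each, assumed 2x replication
reads, writes = cluster_iops_ceiling(num_osds=12, iops_per_osd=1500, replicas=2)
print(reads, writes)  # 18000 9000
```

Even allowing for substantial overhead, this suggests the theoretical ceiling is an order of magnitude above what the cluster delivers once the SSD cache is exhausted, which is consistent with suspecting a bottleneck elsewhere.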