Thanks,

Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250

On 11/7/2013 2:12 PM, Kyle Bader wrote:
Once I know a drive has had a head failure, do I trust that the rest of the 
drive isn't going to go at an inconvenient moment vs just fixing it right now 
when it's not 3AM on Christmas morning? (true story)  As good as Ceph is, do I 
trust that Ceph is smart enough to prevent spreading corrupt data all over the 
cluster if I leave bad disks in place and they start doing terrible things to 
the data?

I have a lot more disks than I have trust in disks. If a drive lost a
head then I want it gone.

I love the idea of using SMART data but can foresee some
implementation issues. We have seen some RAID configurations where
polling SMART will halt all RAID operations momentarily. Also, some
controllers require you to use their CLI tool to poll for SMART
instead of smartmontools.

It would be similarly awesome to embed something like an Apdex score
against each OSD, especially if it factored in the hierarchy to identify
poor-performing OSDs, nodes, racks, etc.

Kyle,

I think you are spot-on here. Apdex or similar scoring for gear performance is important for Ceph, IMO. Due to pseudo-random placement and replication, it can be quite difficult to identify 1) whether hardware, software, or configuration is the cause of slowness, and 2) which hardware (if any) is slow. I recently discovered a method that seems to address both points.
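
As a rough sketch of the kind of scoring I mean (the latency samples, the OSD-to-host/rack mapping, and the 50 ms target below are made up for illustration; real numbers would come from the OSD perf counters and the CRUSH map):

# Per-OSD Apdex with roll-up by host and rack.
from collections import defaultdict

def apdex(latencies_ms, target_ms=50.0):
    """Standard Apdex: satisfied <= T, tolerating <= 4T, else frustrated."""
    if not latencies_ms:
        return None
    satisfied = sum(1 for l in latencies_ms if l <= target_ms)
    tolerating = sum(1 for l in latencies_ms if target_ms < l <= 4 * target_ms)
    return (satisfied + tolerating / 2.0) / len(latencies_ms)

# Hypothetical osd -> (host, rack) mapping and recent op latencies.
hierarchy = {
    "osd.0": ("node1", "rack1"),
    "osd.1": ("node1", "rack1"),
    "osd.2": ("node2", "rack2"),
}
samples = {
    "osd.0": [12, 18, 25, 40],
    "osd.1": [15, 22, 300, 450],   # a slow disk stands out here
    "osd.2": [10, 14, 19, 21],
}

by_host, by_rack = defaultdict(list), defaultdict(list)
for osd, lats in samples.items():
    host, rack = hierarchy[osd]
    by_host[host].extend(lats)
    by_rack[rack].extend(lats)
    print(osd, apdex(lats))
for host, lats in by_host.items():
    print(host, apdex(lats))
for rack, lats in by_rack.items():
    print(rack, apdex(lats))

A low score on one OSD points at a disk; a low score shared by a whole node or rack points at something higher in the stack.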

Zackc, Loicd, and I have been the main participants in a weekly Teuthology call over the past few weeks. We've talked mostly about methods to extend Teuthology to capture performance metrics. Would you be willing to join us during the Teuthology and Ceph-Brag sessions at the Firefly Developer Summit?

Cheers,
Mike
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com