Hello Christian.

Our current setup has 4 OSDs per node. When a drive fails, the cluster is
almost unusable for data entry. I want to change our setup so that this never
happens under any circumstances. We used DRBD for 8 years, and our main
concern is high availability. A CLI that feels like it is running at 1200 bps
modem speed does not count as available.
Network: we use 2 IB switches with bonding in failover mode (rough bond config sketch below).
Systems are two Dell PowerEdge R720s and a Supermicro X8DT3.
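
For reference, the bond is configured along these lines (Debian-style
/etc/network/interfaces; interface names and the address are placeholders,
not our exact settings):

  auto bond0
  iface bond0 inet static
      address 10.0.0.11
      netmask 255.255.255.0
      bond-slaves ib0 ib1
      # active-backup = failover mode
      bond-mode active-backup
      bond-miimon 100
      bond-primary ib0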

So, looking at how to do things better, we will try #4, anti-cephalopod.

We'll switch to RAID-10 or RAID-6 with one OSD per node, using high-end RAID
controllers, hot spares, etc.

And we'll use one Intel 200 GB S3700 per node for the journal.
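
Roughly what I have in mind for the journal in ceph.conf; the size and device
path below are placeholders, not a tested config:

  [osd]
      # journal size in MB
      osd journal size = 10240

  [osd.0]
      # journal partition on the S3700 (placeholder path)
      osd journal = /dev/disk/by-partlabel/ceph-journal-0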

My questions:

Is there a minimum number of OSDs that should be used?

Should the number of OSDs per node be the same across nodes?

best regards, Rob