2016-06-09 9:16 GMT+02:00 Christian Balzer <ch...@gol.com>:
> Neither, a journal failure is lethal for the OSD involved and unless you
> have LOTS of money RAID1 SSDs are a waste.

OK, so if a journal failure is lethal for the OSDs behind it, Ceph automatically
marks the affected OSDs out and starts rebalancing, right?
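
To be concrete, is the recovery sequence roughly the one below, with the
rebalance starting on its own once the dead OSDs are marked out, and only the
cleanup being manual? (osd.3 here is a made-up ID for one of the OSDs that was
journaling to the dead SSD.)

    # the OSDs using the failed journal crash; see what the cluster thinks
    ceph -s
    ceph osd tree

    # mark the dead OSD out (if the monitors have not already done it after
    # "mon osd down out interval" expires) so data starts rebalancing
    ceph osd out osd.3

    # then permanently remove it from CRUSH, auth and the OSD map
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm osd.3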

> Additionally your cluster should (NEEDS to) be designed to handle the
> loss of a journal SSD and its associated OSDs, since that is less than a
> whole node, or a whole rack (whatever your failure domain may be).

What do you suggest here? In the (small) cluster I'm trying to plan, I would
like to be protected against the failure of every component, up to a whole rack.
I have two separate racks for the storage, so data should be spread across both
while still keeping a single OSD/journal failure as the smallest failure domain.
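
For example (just a sketch of what I have in mind, not tested; the rule name
and the "default" root are placeholders for whatever is in my CRUSH map): with
a pool of size 3 and a rule that first picks both racks and then hosts inside
them, I would expect two copies in one rack and one in the other, so losing a
journal SSD, a host, or even a whole rack never takes out all replicas:

    rule replicated_two_racks {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            # pick both racks first, then distinct hosts inside each rack
            step choose firstn 2 type rack
            step chooseleaf firstn 2 type host
            step emit
    }

Does this look like the right approach for a two-rack setup, or is there a
simpler way to express it?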

Yes, reading the docs should answer many questions (and I am reading them), but
having a mailing list where experienced people reply is much better.