On 18/09/2014 13:50, m.channappa.nega...@accenture.com wrote:
> Even after setting replication size 3, my data is not getting replicated on
> all 3 nodes.

> Example:
> root@Cephadmin:/home/oss# ceph osd map storage check1
> osdmap e122 pool 'storage' (9) object 'check1' -> pg 9.7c9c5619 (9.1) -> up
> ([0,2,1], p0) acting ([0,2,1], p0)

> pg 9.7c9c5619 (9.1) -> up ([0,2,1], p0) acting ([0,2,1], p0)

Right here it says your data is being replicated for that PG across osd.0, osd.2
and osd.1 ([0,2,1]), so yes, your data is being replicated across the three nodes.
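If you want to double-check that for yourself, a couple of commands are handy
(a quick sketch using the pool name 'storage' and object 'check1' from your
paste; the output will of course look different on your cluster):

# confirm the pool really carries 3 replicas
ceph osd pool get storage size

# show which host each OSD lives on, so you can verify the three
# copies of the PG land on three different nodes
ceph osd tree

# re-check the mapping for the object
ceph osd map storage check1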

> But here, if I shut down 2 of my nodes, I am unable to access the data. In
> that scenario I should still be able to access/write data, as my 3rd node is
> up (if my understanding is correct). Please let me know where I am wrong.

Where are your mons situated? If you have 3 mons across 3 nodes, then once two
nodes are shut down you'll only have 1 mon left; 1 out of 3 fails quorum, so
the cluster will stop taking data to prevent split-brain scenarios. For 2 nodes
to be down and the cluster to continue to operate, you'd need a minimum of 5
mons, or you'd need to move your mons away from your OSDs.
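You can watch the quorum state for yourself after taking a node down:

# list the mons and show which of them are currently in quorum
ceph quorum_status

# overall cluster health, including mon and PG state
ceph -s

Also worth noting, unless I'm mistaken: even with quorum intact, a replicated
pool won't serve I/O once fewer than min_size copies of a PG are available, and
min_size defaults to 2 on a size 3 pool, so with only one of your three OSD
hosts left the PGs would stay blocked anyway. You can check the pool's setting
with:

ceph osd pool get storage min_size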

-Michael

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
