> > When using a pool size of 3, I get the following behavior when one OSD
> > fails:
> > * the affected PGs get marked active+degraded
> >
> > * there is no data movement/backfill
> 
> Works as designed if you have the default CRUSH map in place (all replicas
> must be on DIFFERENT hosts). You need to tweak your CRUSH map in this case,
> but be aware that this can have serious effects (think of all your data
> residing on 3 disks on a single host).
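
(For reference, a minimal sketch of such a tweak, assuming the stock
replicated rule in a decompiled CRUSH map; the file names below are just
placeholders:

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # in crushmap.txt, change the rule's failure domain from host to osd:
    #   step chooseleaf firstn 0 type host  ->  step chooseleaf firstn 0 type osd

    # recompile and inject the modified map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

With 'type osd' the cluster can backfill onto the remaining disks of the
same host after an OSD failure, at the cost described above.)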

The old behavior was that the data was automatically redistributed to the
remaining 3 disks. So the question is: why is this different when we use
'ceph osd crush tunables optimal'?
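
(In case it helps to compare: on reasonably recent releases, the tunable
profile currently in effect can be dumped with

    ceph osd crush show-tunables

which prints the individual tunable values, so the settings before and after
switching to the optimal profile can be compared directly.)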

