On 01/14/2014 09:44 AM, Dietmar Maurer wrote:
>>> When using a pool size of 3, I get the following behavior when one OSD
>>> fails:
>>> * the affected PGs get marked active+degraded
>>>
>>> * there is no data movement/backfill
>>
>> Works as designed, if you have the default crush map in place (all
>> replicas must be on DIFFERENT hosts). You need to tweak your crush map
>> in this case, but be aware that this can have serious effects (think
>> of all your data residing on 3 disks on a single host).
> 
> The old behavior was that data was automatically distributed to the
> remaining 3 disks. So the question is: why is this different when we
> use 'ceph osd crush tunables optimal'?

Sorry, it seems I misread your question: only a single OSD fails, not
the whole server? Then backfilling should definitely take place. Check
whether the 'noout' flag is set in your cluster.
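
From memory, something along these lines should show it (double-check
against your ceph version before relying on it):

    # show cluster flags; look for "noout" in the output
    ceph osd dump | grep flags

    # check whether the failed OSD was actually marked down/out
    ceph osd tree

    # if noout is set and you want backfilling to proceed, clear it
    ceph osd unset noout

With noout cleared, the failed OSD should get marked out after the
usual timeout and the degraded PGs should start backfilling onto the
remaining OSDs.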

-- 
http://www.wogri.com
