On Thursday, July 10, 2014, Erik Logtenberg <e...@logtenberg.eu> wrote:

>
> > Yeah, Ceph will never voluntarily reduce the redundancy. I believe
> > splitting the "degraded" state into separate "wrongly placed" and
> > "degraded" (reduced redundancy) states is currently on the menu for
> > the Giant release, but it's not been done yet.
>
> That would greatly improve the accuracy of Ceph's status reports.
>
> Does Ceph currently know enough about the difference between these
> states to be smart about prioritizing? Specifically, if I add an OSD and
> Ceph starts moving data around, but during that time another OSD fails,
> is Ceph smart enough to prioritize re-replicating the lost copies
> before continuing to move around data that was still perfectly
> replicated?
>

I believe that when choosing the next PG to backfill, OSDs prefer PGs that
are undersized. But an OSD won't stop backfilling one PG just because
another goes undersized mid-process, and the preference isn't a guarantee
in any case: backfill work is distributed over the cluster, while the
decisions have to be made locally. (So a backfilling OSD that has no
undersized PGs might beat out an OSD with undersized PGs to get the
"reservation".)
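
Roughly, the race looks like this. Below is a simplified Python sketch; it
is not the actual Ceph code, and the class names, priority values, and
reservation API are invented for illustration (the real mechanism is the
backfill reservation system, with slots limited by osd_max_backfills):

# Hypothetical sketch of local backfill scheduling; not Ceph source code.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class BackfillCandidate:
    priority: int                      # lower value = backfilled first
    pg_id: str = field(compare=False)

class OSD:
    def __init__(self, name, max_backfills=1):
        self.name = name
        self.queue = []                # local queue of PGs waiting to backfill
        self.granted = 0               # reservation slots handed out (as target)
        self.max_backfills = max_backfills

    def add_pg(self, pg_id, undersized):
        # Undersized PGs get a better priority, so each OSD *chooses*
        # them first from its own local queue...
        priority = 0 if undersized else 10
        heapq.heappush(self.queue, BackfillCandidate(priority, pg_id))

    def grant_reservation(self):
        # ...but a target OSD grants its slots first-come, first-served;
        # it does not compare urgency across requesters.
        if self.granted < self.max_backfills:
            self.granted += 1
            return True
        return False

# Two source OSDs race for the single slot on a shared target.
target = OSD("osd.2", max_backfills=1)

a = OSD("osd.0")
a.add_pg("1.a", undersized=False)      # osd.0 only has a misplaced PG

b = OSD("osd.1")
b.add_pg("1.b", undersized=True)       # osd.1 has an undersized PG

for osd in (a, b):                     # osd.0 happens to ask first
    pg = heapq.heappop(osd.queue)
    print(osd.name, pg.pg_id, "reserved:", target.grant_reservation())

# osd.0 1.a reserved: True
# osd.1 1.b reserved: False   <- the undersized PG has to wait

So the preference is real, but it only orders each OSD's own queue; nothing
globally re-ranks the reservation requests, which is why an undersized PG
can still lose the race.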
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
