On Thursday, July 10, 2014, Erik Logtenberg wrote:
>> Yeah, Ceph will never voluntarily reduce the redundancy. I believe
>> splitting the "degraded" state into separate "wrongly placed" and
>> "degraded" (reduced redundancy) states is currently on the menu for
>> the Giant release, but it's not been done yet.
>
> That would greatly improve the accuracy of [...]
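
A minimal sketch of the distinction being proposed there, assuming the ceph CLI is on PATH and that ceph -s --format json reports per-state PG counts under pgmap / pgs_by_state (the exact JSON layout and state token names vary between Ceph releases, so treat them as assumptions):

#!/usr/bin/env python
# Sketch: split the reported PG states into "reduced redundancy" vs.
# "merely wrongly placed" buckets, roughly the distinction discussed above.
import json
import subprocess

# State tokens that imply fewer copies than requested (real redundancy loss).
REDUNDANCY_LOSS = ("degraded", "undersized", "incomplete")
# State tokens that only mean data is not yet where CRUSH now wants it.
MISPLACED_ONLY = ("remapped", "backfill", "backfilling", "backfill_wait")

def pg_state_counts():
    out = subprocess.check_output(["ceph", "-s", "--format", "json"])
    status = json.loads(out)
    # Assumed layout: {"pgmap": {"pgs_by_state": [{"state_name": ..., "count": ...}]}}
    return status["pgmap"]["pgs_by_state"]

def classify(entries):
    reduced, misplaced, other = 0, 0, 0
    for entry in entries:
        states = entry["state_name"].split("+")
        if any(s in REDUNDANCY_LOSS for s in states):
            reduced += entry["count"]
        elif any(s in MISPLACED_ONLY for s in states):
            misplaced += entry["count"]
        else:
            other += entry["count"]
    return reduced, misplaced, other

if __name__ == "__main__":
    reduced, misplaced, other = classify(pg_state_counts())
    print("PGs with reduced redundancy: %d" % reduced)
    print("PGs only wrongly placed:     %d" % misplaced)
    print("PGs in other states:         %d" % other)

Run while backfilling is in progress: PGs that are only remapped/backfilling should still hold a full set of replicas, while anything counted as degraded or undersized really is short on copies.
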
On Mon, Jul 7, 2014 at 7:03 AM, Erik Logtenberg wrote:
> Hi,
>
> If you add an OSD to an existing cluster, ceph will move some existing
> data around so the new OSD gets its respective share of usage right away.
>
> Now I noticed that during this moving around, ceph reports the relevant
> PG's as degraded. I can more or less understand the logic here: if a
> piece of [...]
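
A small sketch of how that data movement could be watched, assuming a release that already separates the two notions and exposes degraded_objects and misplaced_objects counters in the pgmap section of ceph -s --format json (both field names are assumptions; older releases lump everything under degraded, which is exactly the behaviour described in the question above):

#!/usr/bin/env python
# Sketch: poll ceph -s and print the degraded/misplaced object counters
# until recovery and backfill after adding an OSD have finished.
import json
import subprocess
import time

def pgmap():
    out = subprocess.check_output(["ceph", "-s", "--format", "json"])
    return json.loads(out)["pgmap"]

def watch(interval=10):
    while True:
        pm = pgmap()
        degraded = pm.get("degraded_objects", 0)
        misplaced = pm.get("misplaced_objects", 0)
        print("degraded objects: %s  misplaced objects: %s" % (degraded, misplaced))
        if not degraded and not misplaced:
            break
        time.sleep(interval)

if __name__ == "__main__":
    watch()
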