> ceph4
> -6      0 host soi-ceph5
>
> I find it strange that 1023 PGs are undersized when only one OSD failed.
>
> Bob
>
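As an aside (not from the thread): a hedged sketch of the standard Ceph commands for seeing why so many PGs are undersized — whether it is one OSD or a whole host that CRUSH can no longer place replicas on. Output format varies by release:

```shell
# Which OSDs/hosts are down or out? A whole host out (or reweighted to 0,
# as the tree fragment above suggests for soi-ceph5) explains mass undersizing.
ceph osd tree

# Per-PG detail for everything stuck undersized/degraded/inconsistent
ceph health detail

# Count undersized PGs directly from the brief PG dump
ceph pg dump pgs_brief 2>/dev/null | grep -c undersized
```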
> On Thu, Mar 31, 2016 at 9:27 AM, Calvin Morrow
> wrote:
On Wed, Mar 30, 2016 at 5:24 PM Christian Balzer wrote:
> On Wed, 30 Mar 2016 15:50:07 +0000 Calvin Morrow wrote:
> >
> > On Wed, Mar 30, 2016 at 1:27 AM Christian Balzer wrote:
> > >
> > > Hello,
> > >
> > > On Tue, 29 Mar 2016 18:10:33 +0000 Calvin Morrow wrote:
> > >
> > > > Ceph cluster with 60 OSDs, Giant 0.87.2. One of the OSDs failed due to a
> > > > hardware error, however after normal recovery it seems stuck with
> > > > one active+undersized+degraded+inconsistent pg.
> > > > I haven't been able to get repair to happen using "ceph pg repair 12.28a";
> > > > I can see the activity logged in t
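For reference, a hedged sketch (not from the thread) of the usual workflow for an inconsistent PG on a cluster of this era; 12.28a is the PG id from the message above:

```shell
# Find the up/acting OSDs for the PG; the primary drives scrub and repair
ceph pg 12.28a query

# Check the primary OSD's log for the scrub error that marked it inconsistent,
# then re-run a deep scrub and request a repair
ceph pg deep-scrub 12.28a
ceph pg repair 12.28a

# Watch the cluster log for the scrub/repair result ("repair ok" or errors)
ceph -w
```

Whether repair actually runs can depend on the primary being healthy and on scrub scheduling settings, so the commands above are a starting point rather than a guaranteed fix.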