Just to update this issue.
I stopped OSD.6, removed the PG from disk, and restarted it. Ceph rebuilt
the object and the cluster returned to HEALTH_OK.
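For reference, the procedure described above can be sketched roughly as below. The OSD id (6) and PG id (2.1f) are placeholder examples, the FileStore path assumes a default install, and the init commands vary by release (sysvinit/upstart on the Firefly/Giant-era clusters in this thread, systemd later):

```shell
# Stop the OSD holding the bad copy, remove only the affected PG's
# directory, and let Ceph backfill it from the healthy replicas.
ceph osd set noout                    # prevent rebalancing while the OSD is down
service ceph stop osd.6               # systemctl stop ceph-osd@6 on newer releases
rm -rf /var/lib/ceph/osd/ceph-6/current/2.1f_head   # only the one PG directory
service ceph start osd.6
ceph osd unset noout
ceph pg 2.1f query                    # watch the PG recover/backfill
ceph health                           # expect HEALTH_OK once recovery finishes
```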
During the weekend the disk behind OSD.6 started reporting SMART errors, so
it will be replaced.
Thanks for your help, Greg. I've opened a bug report in the tracker.
What version of Ceph are you running? Is this a replicated or
erasure-coded pool?
On Fri, Dec 12, 2014 at 1:11 AM, Luis Periquito wrote:
> Hi Greg,
>
> thanks for your help. It's always highly appreciated. :)
>
> On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum wrote:
>>
>> On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito wrote:
Hi Greg,
thanks for your help. It's always highly appreciated. :)
On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum wrote:
> On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito
> wrote:
> > Hi,
> >
> > I've stopped OSD.16, removed the PG from the local filesystem and started
> > the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
> > deep-scrub and the PG is still inconsistent.
Be very careful with running "ceph pg repair". Have a look at this
thread:
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/15185
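The danger the linked thread describes is that, on releases of that era, "ceph pg repair" could overwrite the replicas with the primary's copy even when the primary held the bad object. A rough sketch of a safer workflow, checking which copy is good first (PG id 2.1f and the object name are placeholders):

```shell
# Re-run a deep-scrub and find out which object and which OSD are bad
# before letting repair pick a "winner".
ceph pg deep-scrub 2.1f
grep ERR /var/log/ceph/ceph-osd.*.log      # scrub errors name the bad object
# Compare the object's copies by hand on each OSD host, e.g.:
md5sum /var/lib/ceph/osd/ceph-*/current/2.1f_head/<object>*
# Only run repair once you know the primary's copy is the good one:
ceph pg repair 2.1f
```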
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On Thu, Dec 11, 2014 at 10:57:22AM +, Luis Periquito wrote:
> Hi,
>
> I've stopped OSD.16, removed the PG from the local filesystem and started
> the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
> deep-scrub and the PG is still inconsistent.
On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito wrote:
> Hi,
>
> I've stopped OSD.16, removed the PG from the local filesystem and started
> the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
> deep-scrub and the PG is still inconsistent.
What led you to remove it from osd.16?
Hi,
I've stopped OSD.16, removed the PG from the local filesystem and started
the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
deep-scrub and the PG is still inconsistent.
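When a deep-scrub still reports the PG inconsistent, the primary OSD's log usually names the offending object. A rough sketch of how to track it down (PG id 2.1f is a placeholder; the log path assumes a default install):

```shell
# Identify the inconsistent PG, its acting set, and the flagged object.
ceph health detail | grep inconsistent      # lists the inconsistent PGs
ceph pg map 2.1f                            # shows the acting set / primary OSD
grep -i 'deep-scrub.*ERR' /var/log/ceph/ceph-osd.16.log
```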
I'm running out of ideas for solving this. Does this mean that all
copies of the object should also be considered corrupt?