Hello Sage and Brad,
Many thanks for the information.
> Incomplete PGs can be extracted from the drive if the bad sector(s) don't
> happen to affect those PGs. The ceph-objectstore-tool --op export command
> can be used for this (extract it from the affected drive and add it to
> some other osd).
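For reference, here is a rough sketch of that export/import flow as we understand it. osd.6 is the failed OSD in our case, but the PG shard id 1.28s0, the target osd.10 and the /tmp path below are only placeholders, not values from this cluster:

# both OSDs must be stopped while ceph-objectstore-tool runs
systemctl stop ceph-osd@6

# list the PG shards still readable on the affected OSD
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 --op list-pgs

# export one incomplete PG shard to a file
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 \
    --pgid 1.28s0 --op export --file /tmp/1.28s0.export

# import it into some other (stopped) OSD, then start that OSD again
systemctl stop ceph-osd@10
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-10 \
    --op import --file /tmp/1.28s0.export
systemctl start ceph-osd@10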
On Fri, Mar 31, 2017 at 5:19 AM, nokia ceph wrote:
> Hello Brad,
>
> Many thanks for the info :)
>
> ENV:- Kraken - bluestore - EC 4+1 - 5 node cluster : RHEL7
>
> What is the status of the down+out osd? Only one osd, osd.6, is down and
> out from the cluster.
> What role did/does it play? Most importantly, is it osd.6? Yes, due to an
> underlying I/O error issue we removed this osd.
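For the status question quoted above, this is roughly how we check it on our side (plain ceph CLI; nothing here is output from this cluster):

# up/down and in/out state of osd.6
ceph osd tree | grep osd.6
ceph osd dump | grep osd.6

# overall cluster view
ceph -s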
On Thu, Mar 30, 2017 at 4:53 AM, nokia ceph wrote:
> Hello,
>
> Env:-
> 5 node, EC 4+1 bluestore kraken v11.2.0, RHEL7.2
>
> As part of our resiliency testing with kraken bluestore, we found that
> more PGs were in the incomplete+remapped state. We tried to repair each
> PG using "ceph pg repair", still with no luck. Then we planned to remove
> the incomplete PGs using the below procedure.
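As a sketch of how the incomplete PGs can be listed and the repair we mention above can be issued (the PG id 1.28s0 is again just a placeholder):

# list PGs that are stuck or flagged incomplete
ceph health detail | grep incomplete
ceph pg dump_stuck inactive

# inspect and attempt repair of a single PG
ceph pg 1.28s0 query
ceph pg repair 1.28s0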