Hi,
We added some more osds to the cluster and it was fixed.
Karun Josy
On Tue, Jan 2, 2018 at 6:21 AM, 한승진 wrote:
Are all OSDs the same version?
I recently experienced a similar situation.
I upgraded all OSDs to the exact same version and reset the pool
configuration like below:
ceph osd pool set <pool-name> min_size 5
I have a 5+2 erasure code; I think the important thing is not the min_size
value itself but the re-configuration.
I hope this helps.
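For anyone following along, a minimal sketch of checking the profile and
min_size before resetting it; <pool-name> and <profile-name> are
placeholders for your actual pool and erasure-code profile:

$ ceph osd pool get <pool-name> erasure_code_profile   # which EC profile the pool uses
$ ceph osd erasure-code-profile get <profile-name>     # shows k and m (5+2 would be k=5 m=2)
$ ceph osd pool get <pool-name> min_size               # current min_size
$ ceph osd pool set <pool-name> min_size 5             # re-apply it, as suggested above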
I think what happened is this:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
Note
Sometimes, typically in a “small” cluster with few hosts (for instance with
a small testing cluster), the fact to take out the OSD can spawn a CRUSH
corner case where some PGs remain stuck in the active+remapped state.
Maybe try marking out the disk that should have a copy of the PG but
doesn't, then mark it back in. It might check that it has everything
properly and pull a copy of the data it's missing. I dunno.
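If you want to try that, a hedged sketch of the out/in cycle; <osd-id> is a
placeholder for the OSD that should hold the missing copy:

$ ceph osd out <osd-id>   # mark the OSD out; CRUSH starts remapping its PGs
$ ceph -s                 # wait for recovery to settle before the next step
$ ceph osd in <osd-id>    # mark it back in so it can backfill anything it is missing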
On Sun, Dec 17, 2017, 10:00 PM Karun Josy wrote:
Tried restarting all osds. Still no luck.
Will adding a new disk to any of the servers force a rebalance and fix it?
Karun Josy
On Sun, Dec 17, 2017 at 12:22 PM, Cary wrote:
Karun,
Could you paste in the output from "ceph health detail"? Which OSD
was just added?
Cary
-Dynamic
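For reference, ceph health detail is the command quoted above; ceph osd tree
is one common way to confirm which OSD was just added:

$ ceph health detail   # lists problem PGs and their current states
$ ceph osd tree        # shows each OSD, its host, and whether it is up/in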
On Sun, Dec 17, 2017 at 4:59 AM, Karun Josy wrote:
Any help would be appreciated!
Karun Josy
On Sat, Dec 16, 2017 at 11:04 PM, Karun Josy wrote:
Hi,
Repair didn't fix the issue.
In the pg dump details, I notice this NONE. It seems the PG is missing from
one of the OSDs:
[0,2,NONE,4,12,10,5,1]
[0,2,1,4,12,10,5,1]
Is there no way Ceph corrects this automatically? Do I have to
troubleshoot it manually?
Karun
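For what it's worth, a small sketch of comparing the up and acting sets for
the affected PG; 3.4 is the PG ID from the original post further down the
thread:

$ ceph pg map 3.4    # prints the up set and acting set for the PG
$ ceph pg 3.4 query  # detailed peering/recovery state for the PG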
On Sat, Dec 16, 2017 at 10:44 PM, Cary wrote:
Karun,
Running ceph pg repair should not cause any problems. It may not fix
the issue though. If that does not help, there is more information at
the link below.
http://ceph.com/geen-categorie/ceph-manually-repair-object/
I recommend not rebooting or restarting while Ceph is repairing or
recovering.
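For reference, a hedged sketch of commands that can help inspect the PG
before attempting anything manual; these are general Ceph tools rather than
the exact steps from the linked article, and rados list-inconsistent-obj
assumes a reasonably recent release:

$ ceph pg deep-scrub 3.4                                 # trigger a deep scrub of the PG
$ rados list-inconsistent-obj 3.4 --format=json-pretty   # list objects the scrub flagged, if any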
Hi Cary,
No, I didn't try to repair it.
I am comparatively new to Ceph. Is it okay to try to repair it?
Or should I take any precautions while doing it?
Karun Josy
On Sat, Dec 16, 2017 at 2:08 PM, Cary wrote:
Karun,
Did you attempt a "ceph pg repair <pg-id>"? Replace <pg-id> with the PG ID
that needs to be repaired, 3.4.
Cary
-D123
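A minimal sketch of that invocation and how to watch it, using the PG ID 3.4
from this thread:

$ ceph pg repair 3.4    # ask the primary OSD to repair the PG
$ ceph -w               # watch cluster events while the repair runs
$ ceph health detail    # re-check the PG state afterwards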
On Sat, Dec 16, 2017 at 8:24 AM, Karun Josy wrote:
Hello,
I added 1 disk to the cluster, and after rebalancing it shows 1 PG in a
remapped state. How can I correct it?
(I had to restart some OSDs during the rebalancing as there were some slow
requests.)
$ ceph pg dump | grep remapped
dumped all
3.4 981 00
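For a more targeted view than grepping the full dump, a couple of
alternatives worth noting (exact output varies a little between releases):

$ ceph pg ls remapped          # list only PGs currently in the remapped state
$ ceph pg dump_stuck unclean   # PGs that have not reached active+clean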