To: ceph-users@lists.ceph.com
Cc: Daniel Manzau
Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
Hello,
On Tue, 4 Aug 2015 20:33:58 +1000 Daniel Manzau wrote:
> Hi Christian,
>
> True it's not exactly out of the box. Here is the ceph.conf.
>
> admin socket = /var/run/ceph/rbd-client-$pid.asok
>
>
> Regards,
> Daniel
>
> -----Original Message-----
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Tuesday, 4 August 2015 3:47 PM
> To: Daniel Manzau
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
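(For reference, an admin socket configured as above can be queried while the
client is running; the pid in the path below is a placeholder:)
  # Inspect the running client's effective configuration and counters.
  ceph --admin-daemon /var/run/ceph/rbd-client-12345.asok config show
  ceph --admin-daemon /var/run/ceph/rbd-client-12345.asok perf dump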
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Tuesday, 4 August 2015 3:47 PM
To: Daniel Manzau
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
Hello,
There's a number of reasons I can think of why this would happen.
You say "default behavior" but looking at your map it's obvious that you
probably don't have a default cluster and crush map.
Your ceph.conf may help, too.
Regards,
Christian
On Tue, 4 Aug 2015 13:05:54 +1000 Daniel Manzau wrote:
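(The map Christian refers to can be dumped and decompiled for inspection; the
file names here are arbitrary:)
  ceph osd getcrushmap -o crushmap.bin       # fetch the compiled CRUSH map
  crushtool -d crushmap.bin -o crushmap.txt  # decompile to readable text
  ceph osd tree                              # quick view of the hierarchy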
To: <...ldesktop.com>, ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] pg's degraded
Thanks Michael. That was a good idea.
I did:
1. sudo service ceph stop mds
2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs
for metadata and data)
3. ce
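(A rough sketch of that sequence; the pool IDs come from ceph osd lspools, and
the restart in the last step is an assumption, since step 3 is cut off above:)
  sudo service ceph stop mds                 # stop the MDS first
  ceph osd lspools                           # confirm the pool IDs
  ceph mds newfs 1 0 --yes-i-really-mean-it  # 1 = metadata pool, 0 = data pool
  sudo service ceph start mds                # assumed step 3 (truncated above)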
> Cc: ceph-users
> Subject: Re: [ceph-users] pg's degraded
>
> Hi Craig,
>
> Recreating the missing PGs fixed it. Thanks for your help.
>
> But when I tried to mount the Filesystem, it gave me the "mount error 5". I
> tried to restart the MDS server but it won't work. It tells me that it's
> laggy/unresponsive.
Maybe delete the pool and start over?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of JIten
Shah
Sent: Thursday, November 20, 2014 5:46 PM
To: Craig Lewis
Cc: ceph-users
Subject: Re: [ceph-users] pg's degraded
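(For what it's worth, "delete the pool and start over" would look roughly like
this; the pool name and PG count are assumptions:)
  # Destructive: removes the pool and all data in it.
  ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
  ceph osd pool create metadata 64           # recreate with 64 PGs (assumed)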
Hi Craig,
Recreating the missing PGs fixed it. Thanks for your help.
But when I tried to mount the Filesystem, it gave me the “mount error 5”. I
tried to restart the MDS server but it won’t work. It tells me that it’s
laggy/unresponsive.
BTW, all these machines are VMs.
[jshah@Lab-cephmon0
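(Mount error 5 is EIO; with an MDS reported laggy/unresponsive, the usual first
checks are the status commands below:)
  ceph -s            # overall cluster state, including the mds line
  ceph mds stat      # MDS map summary (up/active/laggy)
  ceph health detail # lists the degraded/stuck PGs holding things up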
Ok. Thanks.
--Jiten
On Nov 20, 2014, at 2:14 PM, Craig Lewis wrote:
If there's no data to lose, tell Ceph to re-create all the missing PGs.
ceph pg force_create_pg 2.33
Repeat for each of the missing PGs. If that doesn't do anything, you might
need to tell Ceph that you lost the OSDs. For each OSD you moved, run
ceph osd lost <osd-id>, then try the force_create_pg command again.
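(A compact sketch of Craig's procedure; the PG and OSD IDs are placeholders:)
  ceph pg dump_stuck                         # find the PGs that never peered
  ceph pg force_create_pg 2.33               # repeat for each missing PG
  # If nothing happens, mark the rebuilt OSDs as lost first:
  ceph osd lost 3 --yes-i-really-mean-it     # 3 is a placeholder OSD id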
Thanks for your help.
I was using puppet to install the OSDs, where it chooses a path rather than a
device name. Hence it created the OSDs under a path on the root volume, since
the path specified was incorrect.
And all 3 of the OSDs were rebuilt at the same time because the cluster was
unused and we had no
So you have your crushmap set to choose osd instead of choose host?
Did you wait for the cluster to recover between each OSD rebuild? If you
rebuilt all 3 OSDs at the same time (or without waiting for a complete
recovery between them), that would cause this problem.
On Thu, Nov 20, 2014 at 11:
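(The distinction matters because replica placement follows the CRUSH rule. A
default rule of that era spreads replicas across hosts; illustrative excerpt
from a decompiled map:)
  rule replicated_ruleset {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host  # "type osd" would allow all
          step emit                           # replicas on a single host
  }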
Just to be clear, this is from a cluster that was healthy, had a disk
replaced, and hasn't returned to healthy? It's not a new cluster that has
never been healthy, right?
Assuming it's an existing cluster, how many OSDs did you replace? It
almost looks like you replaced multiple OSDs at the same time.
Yes, it was a healthy cluster and I had to rebuild because the OSDs got
accidentally created on the root disk. Out of 4 OSDs I had to rebuild 3 of
them.
[jshah@Lab-cephmon001 ~]$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0.5     root default
-2      0.0