sure that I have fixed it.
Darryl
From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] On
Behalf Of Darryl Bond [db...@nrggos.com.au]
Sent: Tuesday, April 16, 2013 9:43 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upg
On 04/16/13 08:50, Dan Mick wrote:
On 04/04/2013 02:27 PM, Darryl Bond wrote:
# ceph pg 3.8 query
pgid currently maps to no osd
That means your CRUSH rules are wrong. What's the crushmap look like,
and what's the rule for pool 3?
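For reference, both can be pulled from a live cluster with the standard tools (a minimal sketch, assuming a stock Bobtail install; the /tmp file names are just placeholders):

# ceph osd getcrushmap -o /tmp/crush.bin
# crushtool -d /tmp/crush.bin -o /tmp/crush.txt
# ceph osd dump | grep '^pool 3 '

The last command prints pool 3's crush_ruleset; match that number against the
"ruleset" lines in /tmp/crush.txt.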
# begin crush map
# devices
device 0 device0
device 1 device1
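Note: a "device N deviceN" entry is, as far as I know, the dummy name crushtool
writes when it decompiles a map whose device slot N has no OSD behind it; live
OSDs appear as "device N osd.N". So a devices section from a cluster with holes
in its OSD numbering would look something like this (hypothetical ids):

# devices
device 0 device0     <- placeholder, no osd.0
device 1 device1     <- placeholder, no osd.1
device 2 osd.2
device 3 osd.3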
Ping,
Any ideas? A week later and it is still the same, 300 pgs stuck stale.
I have seen a few references since then recommending that there be no gaps
in the OSD numbers. Mine has gaps. Might this be the cause of my problem?
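For anyone wanting to check their own cluster, the ids in use are easy to list
(standard CLI commands; missing numbers in the output are the gaps):

# ceph osd ls
# ceph osd tree

The gaps also show up as the "deviceN" placeholders in the decompiled
crushmap above.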
Darryl
On 04/05/13 07:27, Darryl Bond wrote:
I have a 3 node ceph cluster with 6 disks in each node.
I upgraded from Bobtail 0.56.3 to 0.56.4 last night.
Before I started the upgrade, ceph status reported HEALTH_OK.
After upgrading and restarting the first node, the status ended up at
HEALTH_WARN 133 pgs stale; 133 pgs stuck stale
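For the record, the affected PGs can be listed directly (standard Bobtail-era
commands, assuming nothing exotic about the install):

# ceph health detail
# ceph pg dump_stuck stale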
After check