Got it going.
This helped: http://tracker.ceph.com/issues/5205
My ceph.conf had cluster and public addresses defined in [global]. I commented them out and mon.c started successfully.
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
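The address lines I commented out sat in that same [global] section; roughly like this (the addresses here are made up, not my real ones):

    ;public addr = 192.168.6.103    ; hypothetical value
    ;cluster addr = 192.168.7.103   ; hypothetical value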
Nope, same outcome.
[root@ceph3 mon]# ceph mon remove c
removed mon.c at 192.168.6.103:6789/0, there are now 2 monitors
[root@ceph3 mon]# mkdir tmp
[root@ceph3 mon]# ceph auth get mon. -o tmp/keyring
exported keyring for mon.
[root@ceph3 mon]# ceph mon getmap -o tmp/monmap
2013-06-26 13:51:26.640
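The rest of the re-add was the standard add-a-monitor procedure, roughly (a sketch from memory; the IP comes from the removal output above):

    # rebuild mon.c's data dir from the exported keyring and current monmap
    ceph-mon -i c --mkfs --monmap tmp/monmap --keyring tmp/keyring
    # re-register mon.c in the monmap
    ceph mon add c 192.168.6.103:6789
    # start the rebuilt monitor (sysvinit)
    service ceph start mon.c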
I've typically moved it off to a non-conflicting path in lieu of
deleting it outright, but either way should work. IIRC, I used something
like:
sudo mv /var/lib/ceph/mon/ceph-c /var/lib/ceph/mon/ceph-c-bak && sudo mkdir /var/lib/ceph/mon/ceph-c
- Mike
On 6/25/2013 11:08 PM, Darryl Bond wrote:
Thanks for your prompt response.
Given that my mon.c /var/lib/ceph/mon/ceph-c is currently populated,
should I delete its contents after removing the monitor and before
re-adding it?
Darryl
On 06/26/13 12:50, Mike Dawson wrote:
Darryl,
I've seen this issue a few times recently. I believe Joao was looking
into it at one point, but I don't know if it has been resolved (Any news
Joao?). Others have run into it too. Look closely at:
http://tracker.ceph.com/issues/4999
http://irclogs.ceph.widodh.nl/index.php?date=2013-06
Upgrading a cluster from 0.61.3 to 0.61.4 with 3 monitors. The cluster had
been successfully upgraded from bobtail to cuttlefish and then from
0.61.2 to 0.61.3. There have been no changes to ceph.conf.
Node mon.a upgraded; monitors a, b, c OK after upgrade.
Node mon.b upgraded; monitors a, b OK after upgrade (