Hi!

I have two nodes with 8 OSDs each. The first node runs two monitors on
different virtual machines (mon.1 and mon.2), and the second node runs mon.3.
After several reboots (I have been testing power-failure scenarios), "ceph -w"
on node 2 always fails with the message:

root@bes-mon3:~# ceph --verbose -w
Error initializing cluster client: Error
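
I suppose the next step would be to raise the client-side debug levels to see
where the client gives up, something like the following (these are the
standard debug options, and I am assuming the default /etc/ceph/ceph.conf and
client.admin keyring paths on this node):

root@bes-mon3:~# ceph --debug-monc 20 --debug-ms 1 -w
root@bes-mon3:~# ceph -w -c /etc/ceph/ceph.conf \
    -k /etc/ceph/ceph.client.admin.keyring --id admin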

The log files do not show any errors:

2014-03-22 16:05:51.288526 osd.3 10.92.8.103:6800/7492 3510 : [INF] 0.262 
deep-scrub ok
2014-03-22 16:05:54.997444 osd.1 10.92.8.101:6800/7688 3288 : [INF] 1.22b 
deep-scrub ok
2014-03-22 16:06:09.350377 mon.0 10.92.8.80:6789/0 11104 : [INF] pgmap v28682: 
12288 pgs: 12288 active+clean; 246 MB data, 18131 MB used, 12928 GB / 12945 GB 
avail

2014-03-22 16:07:24.795144 7f7bf42b4700  1 mon.3@2(peon).paxos(paxos active c 
67771..68517) is_readable now=2014-03-22 16:07:24.795145 
lease_expire=2014-03-22 16:07:29.791889 has v0 lc 68517
2014-03-22 16:07:27.795042 7f7bf42b4700  1 mon.3@2(peon).paxos(paxos active c 
67771..68517) is_readable now=2014-03-22 16:07:27.795044 
lease_expire=2014-03-22 16:07:32.792003 has v0 lc 68517

On node 1 I got the same error right after the reboots, but now everything
seems to be OK:

root@bastet-mon2:/# ceph -w
    cluster fffeafa2-a664-48a7-979a-517e3ffa0da1
     health HEALTH_OK
     monmap e3: 3 mons at 
{1=10.92.8.80:6789/0,2=10.92.8.81:6789/0,3=10.92.8.82:6789/0}, election epoch 
62, quorum 0,1,2 1,2,3
     osdmap e680: 16 osds: 16 up, 16 in
      pgmap v28692: 12288 pgs, 6 pools, 246 MB data, 36 objects
            18131 MB used, 12928 GB / 12945 GB avail
               12288 active+clean


2014-03-22 16:08:10.611578 mon.0 [INF] pgmap v28692: 12288 pgs: 12288 
active+clean; 246 MB data, 18131 MB used, 12928 GB / 12945 GB avail

////////////////////////////////

How can I debug and fix the "Error initializing cluster client: Error" problem?
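
Should I start by checking that the client on node 2 can actually read its
config and keyring, and that it can reach its local monitor directly? For
example (paths are the defaults, and mon.3's address is taken from the monmap
above):

root@bes-mon3:~# ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
root@bes-mon3:~# ceph -s -m 10.92.8.82:6789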

With best regards,
  Pavel.

