Hi,
My ceph cluster includes 5 OSDs: 3 OSDs are installed on the host 'strony-tc' 
and 2 on the host 'strony-pc'. Recently, both hosts were rebooted due to 
power cycles. After all of the disks were mounted again, the ceph-osd daemons 
are in the 'down' status. I tried the command "sudo start ceph-osd id=x" to 
start the OSDs, but they fail to start, with the error below reported in the 
'dmesg' output. Any suggestions on how to get the OSDs started properly? Any 
comments are appreciated.

[6595400.895147] init: ceph-osd (ceph/1) main process ended, respawning
[6595400.969346] init: ceph-osd (ceph/1) main process (21990) terminated with status 1
[6595400.969352] init: ceph-osd (ceph/1) respawning too fast, stopped
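Running one of the failing daemons by hand might show more detail than upstart does; something like the following (a sketch, assuming osd.1 and the default cluster name "ceph"):

```shell
# Upstart only reports "terminated with status 1"; the real error is
# usually written to the OSD's own log file:
tail -n 50 /var/log/ceph/ceph-osd.1.log

# Alternatively, run the daemon in the foreground with debug output
# to stderr (-d = foreground + log to stderr):
sudo ceph-osd -d --cluster ceph --id 1
```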
:~$ ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.09477 root default
-2 0.61818     host strony-tc
 0 0.20000         osd.0         down        0          1.00000
 1 0.21819         osd.1         down        0          1.00000
 4 0.20000         osd.4           up  1.00000          1.00000
-3 0.47659     host strony-pc
 2 0.23830         osd.2         down        0          1.00000
 3 0.23830         osd.3         down        0          1.00000
:~$ cat /etc/ceph/ceph.conf
[global]
fsid = 60638bfd-1eea-46d5-900d-36224475d8aa
mon_initial_members = strony-tc
mon_host = 10.132.141.122
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
Thanks,
Strony
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
