Hello,

My environment is only one machine, only one hard disk. I cannot restart the 
cluster after machine reboots.

This is what I did before reboot:

Stop the osd and mon instances using:

$ sudo stop ceph-osd-all 
ceph-osd-all stop/waiting
$ sudo stop ceph-mon-all 
ceph-mon-all stop/waiting

I got this in the log:
2014-06-06 00:45:58.257575 mon.0 10.0.2.15:6789/0 235 : [INF] osd.0 marked 
itself down

2014-06-06 00:45:58.258450 mon.0 10.0.2.15:6789/0 236 : [INF] osd.1 marked 
itself down
2014-06-06 00:45:58.259301 mon.0 10.0.2.15:6789/0 237 : [INF] osd.2 marked 
itself down
2014-06-06 00:45:58.356269 mon.0 10.0.2.15:6789/0 238 : [INF] osdmap e23: 3 
osds: 0 up, 3 in
2014-06-06 00:45:58.402608 mon.0 10.0.2.15:6789/0 239 : [INF] pgmap v193: 192 
pgs: 104 stale+active+remapped, 88 stale+active+degraded; 0 bytes data, 15576 
MB used, 3357 MB / 19967 MB avail
2014-06-06 00:46:00.918163 mon.0 10.0.2.15:6789/0 240 : [INF] osdmap e24: 3 
osds: 0 up, 3 in 

After rebooting, I cannot restart Ceph:

When I execute sudo ceph osd lspools, I get:
2014-06-06 13:37:21.336572 7f0d44558700  0 -- :/1001897 >> 10.0.2.15:6789/0 
pipe(0x7f0d40024860 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f0d40024ad0).fault

When I execute sudo start ceph-all, I get:
start: Job is already running: ceph-all
 
BUT no ceph process appears in the process list.
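For reference, these are the quick checks I can run to confirm nothing is actually up (a minimal sketch; `initctl` is Upstart-specific to Ubuntu of this era, and `ss` availability varies by distro):

```shell
# List any running ceph processes (mon/osd daemons); fall back to a message
pgrep -a ceph || echo "no ceph processes running"

# See whether anything is listening on the monitor port (6789 in my setup)
ss -tln 2>/dev/null | grep 6789 || echo "nothing listening on 6789"

# Ask Upstart what state it thinks the ceph jobs are in
# (this is where a stale "Job is already running" state would show up)
initctl list 2>/dev/null | grep ceph || echo "no ceph jobs visible to upstart"
```

If Upstart reports the job as running while no matching process exists, running `sudo stop ceph-all` before `sudo start ceph-all` should clear the stale job state.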

I did a manual deployment (other deployment methods didn't work in my 
environment). What could be the problem? Any ideas?

BTW, I know it's not a good idea to run all the mons and all the OSDs on the 
same machine, on the same disk. On the other hand, it facilitates testing with 
small resources, and it would be great to be able to deploy such a small 
environment easily.

Best,
koleosfuscus
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
