I'm having a play with ceph-deploy after some time away from it (mainly
relying on the puppet modules).

With a test setup of only two debian testing servers, I do the following:

ceph-deploy new host1 host2
ceph-deploy install host1 host2 (installs emperor)
ceph-deploy mon create host1 host2
ceph-deploy osd prepare host1:/dev/sda4 host2:/dev/sda4
ceph-deploy osd activate host1:/dev/sda4 host2:/dev/sda4
ceph-deploy mds create host1 host2
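For reference, after those steps I'd normally sanity-check the cluster with the standard status commands (run on one of the hosts; the comments describe what I'd expect to see, not actual output from my cluster):

```shell
ceph -s          # overall cluster health -- expecting HEALTH_OK
ceph osd tree    # both osds should report "up"
ceph mds stat    # one active mds, one standby
```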

Everything is running fine -- I copy some files into CephFS, and everything
is looking great.

host1: /etc/init.d/ceph stop osd

Still fine.

host1: /etc/init.d/ceph stop mds

Fails over to the standby mds after a few seconds. A brief outage, but to be
expected. Everything is fine.

host1: /etc/init.d/ceph start osd
host1: /etc/init.d/ceph start mds

Everything recovers, everything is fine.

Now, let's do something drastic:

host1: reboot
host2: reboot

Both hosts come back up, but the mds never recovers -- it always says it is
replaying.

On closer inspection, host2's osd never came back into action. Running:

ceph-deploy osd activate host2:/dev/sda4

fixed the issue: the mds recovered, and the osd now reports both "up" and
"in".
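My working theory -- an assumption on my part, not something I've confirmed -- is that the osd data partition simply wasn't mounted/activated at boot on host2, so before re-activating it might be worth checking something like:

```shell
# On host2, after the reboot:
ceph-disk list                   # does /dev/sda4 show up with a ceph osd role?
mount | grep /var/lib/ceph/osd   # is the osd data directory actually mounted?
ceph osd tree                    # the osd will show "down" if it never started
```

(That assumes your ceph-disk version has the "list" subcommand; older releases may not.)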

Is there something obvious I'm missing? The ceph.conf seemed remarkably
empty -- do I have to re-deploy the configuration file to the monitors or
something similar? I've never noticed this problem with puppet-deployed
hosts, but there the ceph.conf is written out manually as part of the
puppet run.
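For comparison, the ceph.conf that "ceph-deploy new" writes out is minimal by design -- something along these lines (the fsid and addresses here are placeholders, not my actual values):

```
[global]
fsid = <generated-uuid>
mon_initial_members = host1, host2
mon_host = 10.0.0.1,10.0.0.2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```

Notably the osds aren't listed in it at all; as I understand it, with ceph-disk/udev-based activation they're meant to be discovered from the partition labels at boot rather than from ceph.conf, which is why I suspect the activation step rather than the config file.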

Many thanks in advance,

Matthew Walster
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
