Hi,

I replaced the code in the file /etc/init.d/ceph of v0.78 with the code from emperor, and everything is OK for now.
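For reference, the difference between the two versions: the v0.78 script asks the ceph-crush-location hook for the OSD's location, while the emperor script passes root=default host=$host on the command line and only appends whatever "osd crush location" is set in ceph.conf. With the emperor code in place, the behaviour can also be steered from ceph.conf instead of editing the script; a minimal untested sketch (the section names and values are only what would match my cluster):

[osd]
    ; either: stop the init script from moving OSDs in the CRUSH map on start
    osd crush update on start = false

; or: keep the update on start but pin each OSD's location explicitly
[osd.1]
    osd crush location = root=default host=cephtest19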
The replaced code:

# Code of v0.78
if [ "$type" = "osd" ]; then
    get_conf update_crush "" "osd crush update on start"
    if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = "true" ]; then
        # update location in crush
        get_conf osd_location_hook "$BINDIR/ceph-crush-location" "osd crush location hook"
        osd_location=`$osd_location_hook --cluster ceph --id $id --type osd`
        get_conf osd_weight "" "osd crush initial weight"
        defaultweight="$(df -P -k $osd_data/. | tail -1 | awk '{ d=$2/1073741824 ; r = sprintf("%.2f", d); print r }')"
        get_conf osd_keyring "$osd_data/keyring" "keyring"
        do_cmd "timeout 10 $BINDIR/ceph -c $conf --name=osd.$id --keyring=$osd_keyring osd crush create-or-move -- $id ${osd_weight:-${defaultweight:-1}} $osd_location"
    fi
fi

# Code of emperor
if [ "$type" = "osd" ]; then
    get_conf update_crush "" "osd crush update on start"
    if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = "true" ]; then
        # update location in crush; put in some suitable defaults on the
        # command line, ceph.conf can override what it wants
        get_conf osd_location "" "osd crush location"
        get_conf osd_weight "" "osd crush initial weight"
        defaultweight="$(do_cmd "df $osd_data/. | tail -1 | awk '{ d= \$2/1073741824 ; r = sprintf(\"%.2f\", d); print r }'")"
        get_conf osd_keyring "$osd_data/keyring" "keyring"
        do_cmd "timeout 10 $BINDIR/ceph \
            --name=osd.$id \
            --keyring=$osd_keyring \
            osd crush create-or-move \
            -- \
            $id \
            ${osd_weight:-${defaultweight:-1}} \
            root=default \
            host=$host \
            $osd_location"
    fi
fi
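If the OSDs have already been moved under the wrong hosts, the CRUSH entries can also be put back by hand with the same command the init script runs. A sketch only, based on the hosts from my earlier mail below; the weight 1 is just the script's fallback and should be replaced with the real weights shown by "ceph osd tree":

# untested: move each OSD back under its real host in the CRUSH map
ceph osd crush create-or-move osd.1 1 root=default host=cephtest19
ceph osd crush create-or-move osd.4 1 root=default host=cephtest19
ceph osd crush create-or-move osd.2 1 root=default host=cephtest20
ceph osd crush create-or-move osd.5 1 root=default host=cephtest20

Note that the init script will move them again on the next restart unless the script is fixed or "osd crush update on start" is disabled as above.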
Best regards,
Thanh Tran

On Mon, Apr 7, 2014 at 3:49 PM, Thanh Tran <thanht...@gmail.com> wrote:
> Hi,
>
> First, I installed Ceph emperor with mkcephfs, and everything was OK.
> My cluster has 3 servers. Please see http://pastebin.com/avTRfi5F for the
> config and additional information.
>
> Then I upgraded to v0.78 and restarted Ceph, and the OSD map changed (see
> "ceph osd tree" at http://pastebin.com/avTRfi5F).
> osd.1 and osd.4 should belong to host cephtest19, and osd.2 and osd.5
> should belong to host cephtest20.
>
> The osd.1, osd.4, osd.2, and osd.5 processes are still running on
> cephtest19 and cephtest20.
>
> Please help me investigate this issue.
>
> Best regards,
> Thanh Tran