I run Ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I run "service ceph start" I get:

# service ceph start

    ERROR:ceph-disk:Failed to activate
    ceph-disk: Does not look like a Ceph OSD, or incompatible version: /var/lib/ceph/tmp/mnt.I71N5T
    mount: /dev/hioa1 already mounted or /var/lib/ceph/tmp/mnt.02sVHj busy
    ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.6d726c93-41f9-453d-858e-ab4132b5c8fd', '/var/lib/ceph/tmp/mnt.02sVHj']' returned non-zero exit status 32
    ceph-disk: Error: One or more partitions failed to activate

Someone told me that "service ceph start" still calls ceph-disk, which will create a filestore-type OSD and a journal partition. Is that true?
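As far as I understand, mount's exit status 32 just means "already mounted or busy", so ceph-disk may simply be tripping over a partition that is already in use. This is only a sketch (device names taken from the output above), but something like the following should show what ceph-disk sees and whether hioa1 is the culprit:

    # How does ceph-disk classify the disks it can see? (ceph 0.80.x; assumes the
    # installed ceph-disk has the "list" subcommand)
    ceph-disk list
    # Is /dev/hioa1 (or the whole /dev/hioa) already mounted somewhere, e.g. at
    # /var/lib/ceph/osd/ceph-0 from the manual mount in my script?
    grep -E 'hioa|ceph-0' /proc/mounts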

# ls -l /dev/disk/by-parttypeuuid/

    lrwxrwxrwx. 1 root root 11 Sep 23 16:56 45b0969e-9b03-4f30-b4c6-b4b80ceff106.00dbee5e-fb68-47c4-aa58-924c904c4383 -> ../../hioa2
    lrwxrwxrwx. 1 root root 10 Sep 23 17:02 45b0969e-9b03-4f30-b4c6-b4b80ceff106.c30e5b97-b914-4eb8-8306-a9649e1c20ba -> ../../sdb2
    lrwxrwxrwx. 1 root root 11 Sep 23 16:56 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.6d726c93-41f9-453d-858e-ab4132b5c8fd -> ../../hioa1
    lrwxrwxrwx. 1 root root 10 Sep 23 17:02 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.b56ec699-e134-4b90-8f55-4952453e1b7e -> ../../sdb1
    lrwxrwxrwx. 1 root root 11 Sep 23 16:52 89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be.6d726c93-41f9-453d-858e-ab4132b5c8fd -> ../../hioa1

There seem to be two symlinks pointing to hioa1 there, maybe left over from the last time I created the OSD with ceph-deploy osd prepare?
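If those links really are leftovers from the earlier ceph-deploy osd prepare, I suppose the old GPT type GUIDs on /dev/hioa could be wiped so udev stops recreating the symlinks and ceph-disk stops trying to activate that partition. Just a rough sketch, and only if nothing on /dev/hioa is needed any more (the mkfs/mount steps from my script would have to be re-run afterwards):

    # WARNING: destroys the partition table (and the old Ceph type GUIDs) on /dev/hioa
    umount /dev/hioa1 2>/dev/null
    sgdisk --zap-all /dev/hioa
    partprobe /dev/hioa      # have the kernel reread the now-empty partition table
    udevadm trigger          # refresh the /dev/disk/by-parttypeuuid symlinks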



On 2014-09-24 10:19, Mark Kirkwood wrote:
On 24/09/14 14:07, Aegeaner wrote:
I turned on the debug option, and this is what I got:

# ./kv.sh

    removed osd.0
    removed item id 0 name 'osd.0' from crush map
    0
    umount: /var/lib/ceph/osd/ceph-0: not found
    updated
    add item id 0 name 'osd.0' weight 1 at location {host=CVM-0-11,root=default} to crush map
    meta-data=/dev/hioa              isize=256    agcount=4, agsize=24506368 blks
             =                       sectsz=512   attr=2, projid32bit=0
    data     =                       bsize=4096   blocks=98025472, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0
    log      =internal log           bsize=4096   blocks=47864, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    2014-09-24 10:02:21.049162 7fe4cf3aa7a0  0 ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6), process ceph-osd, pid 10252
    2014-09-24 10:02:21.055433 7fe4cf3aa7a0  1 mkfs in /var/lib/ceph/osd/ceph-0
    2014-09-24 10:02:21.056359 7fe4cf3aa7a0  1 mkfs generated fsid d613a61d-a1b4-4180-aea2-552944a2f0dc
    2014-09-24 10:02:21.061349 7fe4cf3aa7a0  1 keyvaluestore backend exists/created
    2014-09-24 10:02:21.061377 7fe4cf3aa7a0  1 mkfs done in /var/lib/ceph/osd/ceph-0
    2014-09-24 10:02:21.065679 7fe4cf3aa7a0 -1 created object store /var/lib/ceph/osd/ceph-0 journal /var/lib/ceph/osd/ceph-0/journal for osd.0 fsid d90272ca-d8cc-41eb-b525-2cffe734aec0
    2014-09-24 10:02:21.065776 7fe4cf3aa7a0 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
    2014-09-24 10:02:21.065889 7fe4cf3aa7a0 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
    added key for osd.0

# ceph osd tree

    # id    weight    type name    up/down    reweight
    -1    1    root default
    -2    1        host CVM-0-11
    0    1            osd.0    down    0
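Since osd.0 stays down after the script runs, I guess the next step is to check whether the daemon process is actually alive and what it last logged. Just a sketch, assuming the default log location:

    # Is a ceph-osd process running at all?
    ps aux | grep '[c]eph-osd'
    # What did osd.0 log last? (default log path assumed)
    tail -n 50 /var/log/ceph/ceph-osd.0.log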

Also, I updated my simple script that recreates the OSD:

    ceph osd rm 0                     # drop any previous osd.0 from the osdmap
    ceph osd crush rm osd.0           # ...and from the CRUSH map
    ceph osd create                   # re-allocate id 0
    umount /var/lib/ceph/osd/ceph-0
    rm -rf /var/lib/ceph/osd/ceph-0
    mkdir /var/lib/ceph/osd/ceph-0
    ceph auth del osd.0               # drop the old key for osd.0
    ceph osd crush add osd.0 1 root=default host=CVM-0-11
    mkfs -t xfs -f /dev/hioa
    mount /dev/hioa /var/lib/ceph/osd/ceph-0
    ceph-osd --id 0 -d --mkkey --mkfs --osd-data /var/lib/ceph/osd/ceph-0
    ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
    /etc/init.d/ceph start osd.0
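After that last step I check, roughly like this, whether the init script actually brought the daemon up (just a sketch):

    /etc/init.d/ceph status osd.0   # does the init script report anything for osd.0?
    ceph osd stat                   # number of OSDs up/in from the monitor's view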


From where your log stops, it would appear that your system start script is not even trying to bring osd.0 up at all.

Can we see an ls -l of /var/lib/ceph/osd/ceph-0?

Also, what OS are you on? You might need to invoke it via:

$ service ceph start

or similar.
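If ceph.conf on that node has no [osd.0] section, the sysvinit script may not even know it is supposed to manage osd.0. Just a sketch, assuming the script selects daemons whose host setting matches the node's short hostname (CVM-0-11, taken from your script):

    # Add a minimal osd.0 section to /etc/ceph/ceph.conf so the init script
    # knows about it (sketch; adjust as needed):
    #
    #   [osd.0]
    #       host = CVM-0-11
    #
    printf '[osd.0]\n    host = CVM-0-11\n' >> /etc/ceph/ceph.conf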

Cheers

Mark



