Now I use the following script to create a key/value-backed OSD, but the OSD is created down and never comes up.

   ceph osd create
   umount /var/lib/ceph/osd/ceph-0
   rm -rf /var/lib/ceph/osd/ceph-0
   mkdir /var/lib/ceph/osd/ceph-0
   ceph osd crush add osd.0 1 root=default host=CVM-0-11
   mkfs -t xfs -f /dev/hioa
   mount  /dev/hioa /var/lib/ceph/osd/ceph-0
   ceph-osd --id 0 --mkkey --mkfs --osd-data /var/lib/ceph/osd/ceph-0
   /etc/init.d/ceph start osd.0


Is anything wrong here?
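One step possibly missing: nothing in the script registers the new OSD's key with the monitors, and that alone could leave osd.0 down. Something along these lines (capabilities copied from the manual OSD deployment docs; the keyring path is assumed from the script above, and this is untested here):

```shell
# Register the keyring that --mkkey generated with the cluster,
# so the monitors will accept osd.0 when it tries to join.
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-0/keyring

# Then restart the daemon and watch the status.
/etc/init.d/ceph start osd.0
ceph osd tree
```

After that, `ceph osd tree` should show whether osd.0 actually comes up.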



On 2014-09-19 16:23, Mark Kirkwood wrote:
Hi there,

Well suppose you have /dev/hioa1 (assuming a device partition):

$ mkfs.xfs /dev/hioa1
$ mkdir /ceph1
$ mount /dev/hioa1 /ceph1

Then set OSD_DATAPATH=/ceph1 in the fine script :-)

Yes, more work is required compared to using ceph-deploy!
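For what it's worth, the matching ceph.conf fragment would presumably be along these lines (the section name, hostname, and mountpoint are assumptions taken from the steps in this thread, not from the script itself):

```ini
[osd.0]
    host = CVM-0-11
    osd data = /ceph1
```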

Cheers

Mark

On 19/09/14 20:11, Aegeaner wrote:
I'm not sure what I should put in the $OSD_DATAPATH variable. I have an
SSD disk /dev/hioa, and in the filestore case I just use ceph-deploy to
prepare the OSD on this disk. Should I format this disk and create a
filesystem on it?

I have tried passing /dev/hioa or /dev/hioa1 as $OSD_DATAPATH, but got
this error:

  ceph-osd --id 0 --mkkey --mkfs --osd-data /dev/hioa
2014-09-19 15:59:41.592725 7f0bc73a47a0 -1 unable to create object store

And this is the disk information:

Disk /dev/hioa: 401.5 GB, 401512333312 bytes
256 heads, 63 sectors/track, 48623 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

     Device Boot      Start         End      Blocks   Id  System
/dev/hioa1               1       48624   392101887+  ee  GPT

Thanks for your reply.
========================
Aegeaner

On 2014-09-19 14:02, Mark Kirkwood wrote:
On 19/09/14 15:11, Aegeaner wrote:

I noticed Ceph added a key/value store OSD backend feature in firefly, but
I can hardly find any documentation on how to use it. At last I found
that I can add a line in ceph.conf:

osd objectstore = keyvaluestore-dev

but creating OSDs with ceph-deploy failed. According to the log,
ceph-disk still tried to create a journal partition but failed.

The commands I used are:

ceph-deploy disk zap CVM-0-11:/dev/hioa

ceph-deploy osd prepare CVM-0-11:/dev/hioa

ceph-deploy osd activate CVM-0-11:/dev/hioa1

Can anyone help me create a kvstore-backend OSD?


The attached script should work (it is configured to use rocksdb, but just change it to leveldb in the obvious place). It does the whole job, assuming you want a MON and an OSD on the same host, so you may need to customize it - or use only the OSD part after editing your existing ceph.conf.

It expects OSD_DATAPATH to point to a mounted filesystem (i.e. it does not make use of ceph-disk).

Some assumption of Ubuntu (i.e. upstart) is also made when it tries to start the daemons.
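To make the backend choice concrete, the ceph.conf lines involved are presumably along these lines (only `osd objectstore = keyvaluestore-dev` appears earlier in the thread; the `keyvaluestore backend` option name is an assumption from firefly-era defaults):

```ini
[osd]
    osd objectstore = keyvaluestore-dev
    ;; change rocksdb to leveldb here if preferred
    keyvaluestore backend = rocksdb
```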

Best wishes

Mark



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
