Re: [ceph-users] ceph-volume does not support upstart

2017-12-29 Thread 赵赵贺东
Hello Cary!
It was a really big surprise for me to receive your reply!
Sincere thanks to you!
I know it's a fake executable, but it works!

>
$ cat /usr/sbin/systemctl 
#!/bin/bash   
exit 0
<
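
(For reference, a minimal sketch of how such a stub can be put in place; the
chmod step is my assumption, not something ceph-volume itself requires:)

>
# create a no-op systemctl so ceph-volume's systemctl calls succeed
$ printf '#!/bin/bash\nexit 0\n' > /usr/sbin/systemctl
# make the stub executable
$ chmod +x /usr/sbin/systemctl
<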

I can start my OSD with the following command:
/usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph

But there are still problems.
1. Though ceph-osd can start successfully, the prepare and activate logs look
like errors occurred.

Prepare log:
===>
# ceph-volume lvm prepare --bluestore --data vggroup/lv
Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-12
Running command: chown -R ceph:ceph /dev/dm-0
Running command: sudo ln -s /dev/vggroup/lv /var/lib/ceph/osd/ceph-12/block
Running command: sudo ceph --cluster ceph --name client.bootstrap-osd --keyring 
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o 
/var/lib/ceph/osd/ceph-12/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-12/keyring 
--create-keyring --name osd.12 --add-key 
AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
 stdout: creating /var/lib/ceph/osd/ceph-12/keyring
 stdout: added entity osd.12 auth auth(auid = 18446744073709551615 
key=AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/
Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore 
--mkfs -i 12 --monmap /var/lib/ceph/osd/ceph-12/activate.monmap --key 
 --osd-data /var/lib/ceph/osd/ceph-12/ 
--osd-uuid 827f4a2c-8c1b-427b-bd6c-66d31a0468ac --setuser ceph --setgroup ceph
 stderr: warning: unable to create /var/run/ceph: (13) Permission denied
 stderr: 2017-12-29 08:13:08.609127 b66f3000 -1 asok(0x850c62a0) 
AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to 
bind the UNIX domain socket to '/var/run/ceph/ceph-osd.12.asok': (2) No such 
file or directory
 stderr:
 stderr: 2017-12-29 08:13:08.643410 b66f3000 -1 
bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to decode 
label at offset 66: buffer::malformed_input: void 
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end 
of struct encoding
 stderr: 2017-12-29 08:13:08.644055 b66f3000 -1 
bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to decode 
label at offset 66: buffer::malformed_input: void 
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end 
of struct encoding
 stderr: 2017-12-29 08:13:08.644722 b66f3000 -1 
bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to decode 
label at offset 66: buffer::malformed_input: void 
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end 
of struct encoding
 stderr: 2017-12-29 08:13:08.646722 b66f3000 -1 
bluestore(/var/lib/ceph/osd/ceph-12/) _read_fsid unparsable uuid
 stderr: 2017-12-29 08:14:00.697028 b66f3000 -1 key 
AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
 stderr: 2017-12-29 08:14:01.261659 b66f3000 -1 created object store 
/var/lib/ceph/osd/ceph-12/ for osd.12 fsid 4e5adad0-784c-41b4-ab72-5f4fae499b3a
<===
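
(A note on the "/var/run/ceph: (13) Permission denied" warning above: the run
directory seems to be missing, so the admin socket cannot be created. A minimal
sketch of pre-creating it, assuming the default path and the ceph user/group:)

>
# create the runtime directory ceph-osd expects for its admin socket
$ mkdir -p /var/run/ceph
$ chown ceph:ceph /var/run/ceph
<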

Activate log:
===>
# ceph-volume lvm activate --bluestore 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
Running command: sudo ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev 
/dev/vggroup/lv --path /var/lib/ceph/osd/ceph-12
Running command: sudo ln -snf /dev/vggroup/lv /var/lib/ceph/osd/ceph-12/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
Running command: sudo systemctl enable 
ceph-volume@lvm-12-827f4a2c-8c1b-427b-bd6c-66d31a0468ac
Running command: sudo systemctl start ceph-osd@12
<===

After the activate operation, the OSD did not start, but I could start it with the
following command (before restarting the host):
/usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph



2. After a host reboot, everything about the ceph-osd is lost.
Because /var/lib/ceph/osd/ceph-12 was mounted on tmpfs before the reboot, I lost
everything under it after the reboot.
# df -h
/dev/root        15G  2.4G   12G  18% /
devtmpfs       1009M  4.0K 1009M   1% /dev
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            202M  156K  202M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1009M     0 1009M   0% /run/shm
none            100M     0  100M   0% /run/user
tmpfs          1009M   48K 1009M   1% /var/lib/ceph/osd/ceph-12


3. ceph-osd cannot start automatically.
I think something is wrong with the OSD upstart setup; I probably need to add an
upstart job for the OSD.
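
(A minimal sketch of what such an upstart job might look like on Ubuntu 14.04;
the file name, the start condition and the hard-coded OSD id/fsid are my
assumptions, not something ceph-volume ships:)

>
# /etc/init/ceph-volume-activate.conf  (hypothetical file name)
description "activate bluestore osd.12 via ceph-volume at boot"
start on local-filesystems
task
script
    # ceph-volume re-populates the tmpfs-backed OSD dir from the LV
    ceph-volume lvm activate 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
    # with the stubbed systemctl, "systemctl start ceph-osd@12" is a no-op,
    # so start the daemon explicitly (it forks to the background without -f)
    /usr/bin/ceph-osd --cluster=ceph -i 12 --setuser ceph --setgroup ceph
end script
<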


It seems that ceph-volume on Ubuntu 14.04 is not an easy problem for me, so any
suggestions or hints about these problems will be much appreciated.

Re: [ceph-users] ceph-volume does not support upstart

2017-12-29 Thread Cary
Hello,

I mount my Bluestore OSDs in /etc/fstab:

vi /etc/fstab

tmpfs   /var/lib/ceph/osd/ceph-12  tmpfs   rw,relatime 0 0
=
Then mount everything in fstab with:
mount -a
==
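(Optionally, a quick check that the tmpfs is actually mounted where ceph-volume
expects it; just an illustration:)
df -h /var/lib/ceph/osd/ceph-12
==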
I activate my OSDs this way on startup. You can find the fsid with:

cat /var/lib/ceph/osd/ceph-12/fsid

Then add a file named ceph.start so ceph-volume will run at startup.

vi /etc/local.d/ceph.start
ceph-volume lvm activate 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
==
Make it executable:
chmod 700 /etc/local.d/ceph.start
==
cd /etc/local.d/
./ceph.start
==
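(If local.d can run before the tmpfs from fstab is mounted, a slightly fuller
ceph.start could guard for that; a sketch only, the mountpoint check is my
addition and the OSD id/fsid are from my setup:)

#!/bin/sh
# /etc/local.d/ceph.start
# make sure the tmpfs OSD dir is mounted before activating
mountpoint -q /var/lib/ceph/osd/ceph-12 || mount /var/lib/ceph/osd/ceph-12
# re-create the OSD dir contents from the LV and start the OSD
ceph-volume lvm activate 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
==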
I am a Gentoo user and use OpenRC, so this may not apply to you.
==
cd /etc/init.d/
ln -s ceph ceph-osd.12
/etc/init.d/ceph-osd.12 start
rc-update add ceph-osd.12 default

Cary

Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-29 Thread Travis Nielsen
Since bluestore was declared stable in Luminous, is there any remaining
scenario to use filestore in new deployments? Or is it safe to assume that
bluestore is always better to use in Luminous? All documentation I can
find points to bluestore being superior in all cases.

Thanks,
Travis


Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-29 Thread Sage Weil
On Fri, 29 Dec 2017, Travis Nielsen wrote:
> Since bluestore was declared stable in Luminous, is there any remaining
> scenario to use filestore in new deployments? Or is it safe to assume that
> bluestore is always better to use in Luminous? All documentation I can
> find points to bluestore being superior in all cases.

The only real reason to run FileStore is for stability reasons: FileStore 
is older and well-tested, so the most conservative users may stick to 
FileStore for a bit longer.

sage
