I am relatively new to Ceph and need some advice on Bluestore migration. I
tried migrating a few of our test cluster nodes from Filestore to Bluestore
by following this guide
(http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/),
as the cluster is currently running 12.2.9. The cluster, originally set up
by my predecessors, was running Jewel until I upgraded it recently to
Luminous.
The OSDs in each OSD host are set up in such a way that for every 10 data
HDDs there is one SSD drive holding their journals.  For example, osd.0's
data is on /dev/sdh and its Filestore journal is on a partition of
/dev/sda. So lsblk shows something like

sda       8:0    0 447.1G  0 disk
├─sda1    8:1    0    40G  0 part      # journal for osd.0

sdh       8:112  0   3.7T  0 disk
└─sdh1    8:113  0   3.7T  0 part /var/lib/ceph/osd/ceph-0

It seems like this was all set up by my predecessor with the following
command:

ceph-deploy osd create osd0:sdh:/dev/sda


Since sda is an SSD drive, my plan is to keep the DB and WAL for sdh (and
the 9 other data disks) on /dev/sda even after the Bluestore migration.
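
For context, my rough plan for carving up the SSD was something along these
lines; the VG/LV names and sizes are just placeholders I made up, not
anything prescribed by the docs:

sudo vgcreate ceph-db-wal /dev/sda            # one VG on the shared SSD
sudo lvcreate -L 35G -n db-0  ceph-db-wal     # DB for osd.0
sudo lvcreate -L 2G  -n wal-0 ceph-db-wal     # WAL for osd.0
# ...and similarly db-1/wal-1 through db-9/wal-9 for the other OSDs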


I followed the guide all the way through step 6 of
http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/.
Then, instead of ceph-volume lvm zap $DEVICE, I used ceph-deploy disk zap
from an admin node to wipe the contents of those two drives.  Since
osd.0 - 9 share the SSD drive for their journals, I did the same for
osd.{1..9} as well as for /dev/sda.  I then destroyed osd.{0..9} using the
osd destroy command (step 8).
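
Concretely, what I ran was roughly the following (the ceph-deploy disk zap
syntax below is from memory, so the exact host/device form may differ
depending on the ceph-deploy version):

ceph-deploy disk zap osd0 /dev/sdh      # repeated for the other nine data disks
ceph-deploy disk zap osd0 /dev/sda      # the shared journal SSD

for i in {0..9}; do
    ceph osd destroy $i --yes-i-really-mean-it
done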

Where something definitely went wrong was the last part. The ceph-volume
lvm create command shown there assumes that the WAL and DB will be on the
same device as the data.  I tried adding --block.wal and --block.data to
it, but that did not work.  I tried various ceph-deploy commands taken from
various versions of the docs, but nothing seemed to work.  I even tried
manually creating LVs for the WAL and DB
(http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/),
but that did not work, either. At some point the following command seemed
to have worked (it was the only one that did not return an error), but then
all the OSDs on the node shut down and I could not bring any of them back
up.

sudo ceph-disk prepare --bluestore /dev/sdh --block.wal=/dev/sda \
     --block.db=/dev/sda --osd-id 0
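
And for reference, the kind of ceph-volume invocation I was attempting
(using the LVs from my sketch above; I am not at all sure this is the
correct form, which is really part of my question) was roughly:

sudo ceph-volume lvm create --bluestore \
     --data /dev/sdh \
     --block.db ceph-db-wal/db-0 \
     --block.wal ceph-db-wal/wal-0

My understanding is that ceph-volume creates a VG/LV on /dev/sdh itself
when --data is given a raw device, but please correct me if that assumption
is wrong for 12.2.9.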

Since this is a test cluster with essentially no data on it, I can always
start over.  But I do need to know how to properly migrate OSDs from
Filestore to Bluestore in this type of setting (with the Filestore journal
residing on an SSD) for our production clusters.  Please let me know if
there are any steps missing from the documentation, particularly for a case
like this, and what commands I need to run to achieve what I am trying to
do.  Also, if it is advisable to upgrade to Mimic first and then perform
the Filestore to Bluestore migration, that is an option as well.


-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521