Hello everybody,

I have just upgraded my ceph cluster from kraken to luminous. I want to keep 
using filestore as the objectstore for my OSDs until Red Hat announces 
bluestore as stable; it is still in technology preview.

So my question is: "What is the right procedure for adding a filestore-based 
OSD to the existing cluster with an NVMe journal?"

My NVMe journal device already contains ceph journal partitions for the 
existing OSDs, created under kraken:

root@ank-ceph10:~# lsblk /dev/nvme0n1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 2.9T 0 disk
|-nvme0n1p1 259:1 0 40G 0 part
|-nvme0n1p2 259:2 0 40G 0 part
|-nvme0n1p3 259:3 0 40G 0 part
|-nvme0n1p4 259:4 0 40G 0 part
|-nvme0n1p5 259:5 0 40G 0 part
|-nvme0n1p6 259:6 0 40G 0 part
|-nvme0n1p7 259:7 0 40G 0 part
|-nvme0n1p8 259:8 0 40G 0 part
|-nvme0n1p9 259:9 0 40G 0 part
|-nvme0n1p10 259:10 0 40G 0 part
|-nvme0n1p11 259:11 0 40G 0 part
`-nvme0n1p12 259:12 0 40G 0 part
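
If my math is right, the twelve 40G journal partitions only use 12 x 40G = 
480G of the 2.9T device, so there should be plenty of free space left for 
another journal partition.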

When I try to add a new OSD with the following command:

ceph-deploy osd create --filestore --journal /dev/nvme0n1 --data /dev/sdl ank-ceph10

I get the following error:

[ank-ceph10][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data /dev/sdl --journal /dev/nvme0n1
[ank-ceph10][WARNIN] --> RuntimeError: unable to use device
[ank-ceph10][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
[ank-ceph10][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2d203f03-e547-4a8a-9140-53f48ed52e06
[ank-ceph10][DEBUG ] Running command: vgcreate --force --yes ceph-35465726-457d-439d-9f59-a8a050f5a486 /dev/sdl
[ank-ceph10][DEBUG ] stderr: /run/lvm/lvmetad.socket: connect failed: No such file or directory
[ank-ceph10][DEBUG ] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
[ank-ceph10][DEBUG ] stdout: Physical volume "/dev/sdl" successfully created
[ank-ceph10][DEBUG ] stdout: Volume group "ceph-35465726-457d-439d-9f59-a8a050f5a486" successfully created
[ank-ceph10][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-data-2d203f03-e547-4a8a-9140-53f48ed52e06 ceph-35465726-457d-439d-9f59-a8a050f5a486
[ank-ceph10][DEBUG ] stderr: /run/lvm/lvmetad.socket: connect failed: No such file or directory
[ank-ceph10][DEBUG ] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
[ank-ceph10][DEBUG ] stdout: Logical volume "osd-data-2d203f03-e547-4a8a-9140-53f48ed52e06" created.
[ank-ceph10][DEBUG ] --> blkid could not detect a PARTUUID for device: /dev/nvme0n1
[ank-ceph10][DEBUG ] --> Was unable to complete a new OSD, will rollback changes
[ank-ceph10][DEBUG ] --> OSD will be fully purged from the cluster, because the ID was generated
[ank-ceph10][DEBUG ] Running command: ceph osd purge osd.119 --yes-i-really-mean-it
[ank-ceph10][DEBUG ] stderr: purged osd.119
[ank-ceph10][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data /dev/sdl --journal /dev/nvme0n1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

AFAIU, blkid looks up a PARTUUID for the journal device, but the whole disk 
does not have one; only its partitions do, as the blkid output below shows. 
I do not want to reformat my journal device. I have sketched what I have in 
mind after the blkid output; any recommendations?

root@ank-ceph10:~# blkid /dev/nvme0n1*
/dev/nvme0n1: PTUUID="a6431404-5693-4076-98c9-ffbe84224e1b" PTTYPE="gpt"
/dev/nvme0n1p1: PARTLABEL="ceph journal" PARTUUID="43440fab-a30f-4e42-9c15-35f375dde033"
/dev/nvme0n1p10: PARTLABEL="ceph journal" PARTUUID="c9c9f459-98a1-4a6a-9350-9942a6fc02f6"
/dev/nvme0n1p11: PARTLABEL="ceph journal" PARTUUID="3f64ddc1-ac5d-4b7b-ace3-ad35d44e4fd3"
/dev/nvme0n1p12: PARTLABEL="ceph journal" PARTUUID="0fdff4d6-2833-4e6e-a832-9fb2452bc396"
/dev/nvme0n1p2: PARTLABEL="ceph journal" PARTUUID="5ce0b4e8-3571-4297-974a-9ef648fac1a8"
/dev/nvme0n1p3: PARTLABEL="ceph journal" PARTUUID="228cee11-06e3-4691-963a-77e74e099716"
/dev/nvme0n1p4: PARTLABEL="ceph journal" PARTUUID="b7c09c3e-e4ae-42be-8686-5daf9e40c407"
/dev/nvme0n1p5: PARTLABEL="ceph journal" PARTUUID="60d9115c-ebb1-4eaf-85ae-31379a5e9450"
/dev/nvme0n1p6: PARTLABEL="ceph journal" PARTUUID="5a057b30-b697-4598-84c0-1794c608d70c"
/dev/nvme0n1p7: PARTLABEL="ceph journal" PARTUUID="c22c272d-5b75-40ca-970e-87b1b303944c"
/dev/nvme0n1p8: PARTLABEL="ceph journal" PARTUUID="ed9fd194-1490-42b1-a2b4-ae36b2a4f8ce"
/dev/nvme0n1p9: PARTLABEL="ceph journal" PARTUUID="d5589315-4e47-49c4-91f5-48e1a55011d2"

While using kraken, I used to add OSDs with journals via the following command:

ceph-deploy osd prepare ank-ceph10:sdl:/dev/nvme0n1
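
If ceph-deploy itself is the problem, I assume I could also run ceph-volume 
directly on the node, again with a partition (the hypothetical /dev/nvme0n1p13 
from the sketch above) rather than the whole disk:

ceph-volume lvm create --filestore --data /dev/sdl --journal /dev/nvme0n1p13

but I would prefer to keep using ceph-deploy if possible.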

Thanks for any recommendations.

Best regards,

Dr. Huseyin COTUK