Thanks - the prepare worked. I was able to use the partition for the block.db.
I have seen several posts/examples where NVMe drives are used that way as
well.



-----Original Message-----
From: Robert Sander <r.san...@heinlein-support.de> 
Sent: Monday, August 1, 2022 3:11 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Adding new drives to ceph with ssd DB+WAL

On 30.07.22 at 01:28, Robert W. Eckert wrote:
> Hi - I am trying to add a new HDD to each of my 3 servers, and I want to use a 
> spare SSD partition on each server for the DB+WAL.   My other OSDs are set 
> up the same way, but I can't seem to keep Ceph from creating the OSDs on the 
> drives before I can actually create them myself.
> 
> I am trying to use the commands
> 
> ceph orch device zap cube.robeckert.us /dev/sda --force
> cephadm ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/sdc1

Two things:

You cannot use "cephadm ceph-volume lvm create" in a cephadm orchestrated 
cluster because it will not create the correct container systemd units.

Use "cephadm ceph-volume lvm prepare" to create the OSD and "ceph cephadm osd 
activate $HOST" to let the orchestrator create the daemons (systemd units for 
containers).
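
A minimal sketch of that sequence (the hostname and device paths are only
placeholders, not taken from your setup):

  ceph orch device zap $HOST /dev/sda --force        # wipe the disk so it is free for re-use
  cephadm ceph-volume lvm prepare --bluestore --data /dev/sda --block.db vgname/lvname
  ceph cephadm osd activate $HOST                    # orchestrator creates the container systemd units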


AFAIK it is not possible to use a partition for a block DB device.
Create a physical volume on your SSD, a volume group, and then a logical volume 
for each DB device (a rough sketch follows below). Use the LV on the command line 
like this:

cephadm ceph-volume lvm prepare --bluestore --data /dev/sda --block.db vgname/lvname
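
A rough sketch of the LVM setup behind vgname/lvname (the SSD device, VG/LV
names, and the DB size are only examples):

  pvcreate /dev/sdc                     # physical volume on the SSD
  vgcreate ceph-db /dev/sdc             # one volume group for all DB LVs
  lvcreate -L 60G -n db-sda ceph-db     # one logical volume per DB device

ceph-db/db-sda would then take the place of vgname/lvname above.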

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
