I haven't seen a reply, but managed to piece together info from a few places.

First, I edited my CRUSH map to move everything to the hard drives, and let Ceph 
do its magic moving the pools (it ran overnight; I'm not sure how long it took overall).
Then I deleted the SSD OSDs one at a time - since they were empty, that part went 
pretty fast.
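
In command terms, the move boils down to something like the following - a rough 
sketch only, since I hand-edited the map rather than running these exact commands, 
and the rule name and OSD id below are just placeholders:

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set <pool> crush_rule replicated_hdd    # repeat per pool, then let backfill finish
ceph osd out osd.9
ceph osd purge osd.9 --yes-i-really-mean-it           # once the OSD is empty

(With cephadm, "ceph orch osd rm <id>" can also drain an OSD and remove its daemon 
in one step.)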

I then repartitioned the SSDs, one 512 GB partition per HDD, with a bit left 
over.   I used the storage option in RHEL's Cockpit, because it let me 
quickly rebuild the partition table and create the partitions.
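
The same partitioning can be done from the shell with sgdisk if you don't have 
Cockpit handy - a sketch, assuming the SSD is /dev/sdb and three DB partitions:

sgdisk --zap-all /dev/sdb
sgdisk --new=1:0:+512G --new=2:0:+512G --new=3:0:+512G /dev/sdb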

For my first server, I manually deleted each HDD OSD, waited for everything to 
rebalance (about 12 hours per OSD), then deleted the LVM volumes and re-added the drive using

sgdisk --zap-all /dev/sdX
ceph-volume lvm prepare --bluestore --data /dev/sdX --block.db /dev/sdbY

And waited for the OSD to settle in (maybe 6-8 hours).
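
The removal half of that loop looked roughly like the following (again a sketch - 
osd.3 and /dev/sdc stand in for the real OSD id and device):

ceph osd out osd.3
# wait until ceph -s shows everything active+clean again
ceph orch daemon stop osd.3
ceph osd purge osd.3 --yes-i-really-mean-it
ceph-volume lvm zap --destroy /dev/sdc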

I realized that each of those delete-OSD/re-add cycles generated a lot of cross-host 
traffic, so I decided to take a slight risk on hosts 2 and 3.
For those, I set the pools to 2-host replication instead of 3 (I do have a 
backup) and deleted all 3 OSDs at once.  This process still took surprisingly 
long, about 6 hours for all 3 to delete - I had assumed that since replication was 
set to 2 and there were already 3 replicas, I could get away with little data 
movement.  Presumably dropping the size from 3 to 2 had already discarded one copy 
per PG, so purging a whole host still forced backfill for any PG that had kept one 
of its two remaining copies there.
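
The size change itself is just this, per pool (and, optionally and at some extra 
risk, min_size can be dropped to 1 so I/O isn't blocked while a copy is missing):

ceph osd pool set <pool> size 2
ceph osd pool set <pool> min_size 1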

Once I cleared the OSDs from host 2, I zapped the LVM volumes on the host, ran the 
sgdisk and ceph-volume prepare steps, and let everything sit overnight.
The next day I did host 3; it took 8 hours to rebuild.  Then I set the pool 
replication back to 3 and let it rebuild again and stabilize.
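
Putting the replication back and watching it settle is the reverse:

ceph osd pool set <pool> size 3
ceph -s    # re-run (or watch) until all PGs report active+clean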

One of my uses is a set of Podman containers running ARK: Survival Evolved 
servers.   In the old configuration, I had to use XFS on mounted RBD images 
because startup on CephFS was so slow (the server appears to touch/stat each of 
the thousands of files in its directory, and XFS cached those writes fairly 
well).  I can now use CephFS as the file system again.   For this use case, it 
appears to be as fast on startup as XFS over an RBD image, if not faster.
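
For anyone comparing, the two setups look roughly like this - image name, size, 
paths, and mount options are illustrative only:

# old: XFS on a mapped RBD image
rbd create rbd/ark-data --size 200G
rbd map rbd/ark-data              # prints the mapped device, usually /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /srv/ark

# new: plain CephFS kernel mount
mount -t ceph :/ark /srv/ark -o name=admin,secretfile=/etc/ceph/admin.secret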



________________________________
From: Robert W. Eckert <r...@rob.eckert.name>
Sent: Monday, December 13, 2021 1:00 PM
To: ceph-users@ceph.io <ceph-users@ceph.io>
Subject: [ceph-users] reallocating SSDs

Hi - I have a 3-host cluster with 3 HDDs and 1 SSD per host.

The hosts are on RHEL 8.5, using Podman containers deployed via cephadm, with 
one OSD per HDD and SSD.

In my current CRUSH map, I have a rule each for the SSDs and the HDDs, and put the 
CephFS metadata pool and RBD on the SSD rule.

From things I have read, it probably would be smarter to partition the SSDs 
into 3 partitions (maybe 4 if I want to add an additional HDD in the future) 
and put the BlueFS DB/WAL on the SSD partitions, but I am not quite sure how to 
get there from where I am.

So my question is: what would be the procedure to remove an OSD, split the SSD, 
and attach the partitions to the other OSDs?

I can handle the movement of the pools and data.

Thanks,

Rob
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io