Hi - I have a drive that is starting to show errors, and was wondering what the 
best way to replace it is.

I am on Ceph 18.2.1 and using cephadm/containers.
I have 3 hosts; each host has 4 x 4 TB drives, a 2 TB NVMe device split
among them for WAL/DB, and 10 Gb networking.

Option 1: Stop the OSD, use dd to copy the old drive to the new one, remove the
old drive, and reboot so LVM recognizes the new drive as the volume the old one
was (rough sketch below).
Option 2: Use LVM to mirror the old drive onto the new one, then remove the old
drive once the mirroring is complete.  This way I don't have to remove and
reprovision the OSD, and the OSD doesn't need to be down at any point (pvmove
sketch below).
Option 3: Remove the OSD, let everything settle down, swap the drive, fight the
orchestrator to get the new OSD provisioned with its DB on the proper slice of
the NVMe, then let everything sync up again (orchestrator sketch below).
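
Roughly what I'm picturing for Option 1 (the OSD id and device names are
placeholders for my setup):

    ceph osd set noout                   # keep data from rebalancing while the OSD is down
    ceph orch daemon stop osd.7          # stop the cephadm-managed OSD daemon
    dd if=/dev/sdX of=/dev/sdY bs=4M conv=sync,noerror status=progress
    # pull the old drive before LVM rescans, since both disks now carry
    # the same PV UUID
    ceph osd unset noout

conv=sync,noerror is there because the source drive is already throwing errors,
so dd shouldn't abort on a bad sector.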
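
For Option 2, as I understand it pvmove does the live mirror-and-migrate (it
builds a temporary mirror under the hood, and the OSD stays up the whole time).
Sketch, with the VG name and devices as placeholders:

    pvcreate /dev/sdY                    # new drive
    vgextend ceph-<vg-uuid> /dev/sdY     # add it to the OSD's VG
    pvmove /dev/sdX /dev/sdY             # mirror all extents old -> new, live
    vgreduce ceph-<vg-uuid> /dev/sdX     # drop the old drive from the VG
    pvremove /dev/sdX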
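
And for Option 3, my understanding is the orchestrator path looks roughly like
this (OSD id, service name, and device filters are guesses for my layout):

    ceph orch osd rm 7 --replace --zap   # drain, mark destroyed, keep the OSD id

    # osd-spec.yaml -- re-applied after the drive swap:
    service_type: osd
    service_id: hdd-with-nvme-db         # placeholder service name
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0                      # the shared 2 TB NVMe

    ceph orch apply -i osd-spec.yaml

The --replace flag is supposed to mark the OSD as destroyed rather than
deleting it, so the replacement drive should come back with the same OSD id.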

I am leaning towards Option 2, because it should have the least impact/overhead 
on the rest of the drives, but am open to the other options as well.

Thanks,
Rob