Anthony D'Atri
Sent: Monday, November 11, 2024 8:41 PM
To: bre...@cfl.rr.com
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Cephadm Drive upgrade process
Hi,
> It would be nice if we could just copy the content to the new drive
> and go from there.
That's exactly what we usually do: we add a new drive and 'pvmove' the
contents of the failing drive onto it. The worst thing so far is that the
orchestrator still thinks it's /dev/sd{previous_letter}, but I
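In case it helps, a minimal sketch of that pvmove migration. All names
below are illustrative (osd.12, /dev/sdY as the failing disk, /dev/sdX as
the replacement); the actual ceph-* VG name has to be looked up with 'vgs':

    # stop the OSD daemon so the device is quiescent
    ceph orch daemon stop osd.12

    # add the new disk as a PV and extend the OSD's VG onto it
    pvcreate /dev/sdX
    vgextend ceph-<vg-name> /dev/sdX

    # move all extents off the failing disk onto the new one
    pvmove /dev/sdY /dev/sdX

    # drop the old disk from the VG and bring the OSD back up
    vgreduce ceph-<vg-name> /dev/sdY
    pvremove /dev/sdY
    ceph orch daemon start osd.12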
> 1. Pulled failed drive (after troubleshooting, of course)
>
> 2. Cephadm GUI - find OSD, purge OSD
> 3. Wait for rebalance
> 4. Insert new drive (let cluster rebalance after it automatically adds
> the drive as an OSD) (yes, we have auto-add on in the clusters)
> I imagine wi
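For anyone following the quoted purge-and-rebalance workflow on the CLI
rather than the GUI, it corresponds roughly to something like this (OSD
id 12 is illustrative, and exact steps depend on your OSD spec):

    # check the OSD can be taken out without losing redundancy
    ceph osd safe-to-destroy osd.12

    # drain the OSD and remove it via the orchestrator, wiping the old LVs
    ceph orch osd rm 12 --zap
    ceph orch osd rm status        # watch the drain/removal progress

    # wait for the rebalance to finish
    ceph -s

    # with an all-available-devices spec in place, the replacement disk is
    # deployed as a new OSD automatically after insertion; that spec is
    # typically created once with:
    ceph orch apply osd --all-available-devices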