Thank you for the advice.
Our CRUSH map is actually set up with replication set to 3 and at least one
copy in each cabinet, ensuring that no single host is a point of failure. We
fully intended to perform this maintenance over the course of many weeks, one
host at a time. We felt that the staggered approach would keep the impact on
the cluster to a minimum.
Sadly, this is not an option. Not only are there no free slots on the hosts,
but we're downgrading in size for each OSD because we decided to sacrifice
space to make a significant jump in drive quality.
We're not really too concerned about the rebalancing, as we monitor the
cluster closely and plan to pace the replacements so recovery stays
manageable.
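If recovery ever does start to get in the way, the usual knobs can be dialed
down on the fly. A sketch, with example values rather than anything we
necessarily run:

    # throttle backfill/recovery so client I/O keeps priority
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # keep an eye on recovery progress while each host drains and refills
    ceph -s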
> On 14 April 2016 at 15:29, Stephen Mercier wrote:
>
>
> Good morning,
>
> We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the
> past 20 months. We're very happy with our experience with the platform so far.
>
> Shortly, we will be embarking on an initiative to replace all 88 OSDs with
> new drives (Planned maintenance and lifecycle replacement).
If you have empty drive slots in your OSD hosts, I'd be tempted to insert the
new drive in a free slot, set noout, shut down one OSD, unmount the OSD
directory, dd the old drive to the new one, remove the old drive, and restart
the OSD. No rebalancing and minimal data movement when the OSD rejoins.
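Roughly, assuming OSD 12 mounted at /var/lib/ceph/osd/ceph-12, the old drive
at /dev/sdc, the new one at /dev/sdd, and a systemd-managed host (all of those
are placeholders):

    ceph osd set noout                  # don't mark the OSD out while it's down
    systemctl stop ceph-osd@12          # Upstart hosts: stop ceph-osd id=12
    umount /var/lib/ceph/osd/ceph-12
    dd if=/dev/sdc of=/dev/sdd bs=4M    # block-level copy, old drive -> new
    # pull the old drive, fix up the GPT if the sizes differ, then remount
    mount /dev/sdd1 /var/lib/ceph/osd/ceph-12
    systemctl start ceph-osd@12
    ceph osd unset noout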
-K.
On 04/14/2016 04:29 PM, Stephen Mercier wrote:
Good morning,
We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the past
20 months. We're very happy with our experience with the platform so far.
Shortly, we will be embarking on an initiative to replace all 88 OSDs with new
drives (Planned maintenance and lifecycle replacement).
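For each OSD we expect the swap to be essentially the textbook
out/remove/recreate cycle, something along these lines (the OSD id and device
names are placeholders):

    ceph osd out 12                     # start draining PGs off the old OSD
    # wait for ceph -s to report all PGs active+clean again
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # swap the physical drive, then recreate the OSD on the new device
    ceph-disk prepare /dev/sdc
    ceph-disk activate /dev/sdc1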