Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Thank you for the advice. Our CRUSH map is actually set up with a replication size of 3 and at least one copy in each cabinet, ensuring no one host is a single point of failure. We fully intend to perform this maintenance over the course of many weeks, one host at a time. We felt that the stag
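For readers unfamiliar with how that placement policy is expressed, a CRUSH rule along these lines would enforce one replica per cabinet. This is a sketch only, assuming the cabinets are declared as `rack` buckets under the default root; the actual bucket names and rule id in Stephen's map are not shown in the thread:

```
# Hypothetical CRUSH rule: spread the 3 replicas across distinct cabinets,
# assuming each cabinet is modelled as a "rack" bucket.
rule replicated_per_cabinet {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    step take default
    # chooseleaf picks one OSD from each of N distinct rack buckets;
    # "firstn 0" means "as many as the pool size requires" (3 here).
    step chooseleaf firstn 0 type rack
    step emit
}
```

With size=3 and three (or more) cabinets, this guarantees no two replicas of a PG share a cabinet, which is a stronger guarantee than the default host-level separation.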

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Sadly, this is not an option. Not only are there no free slots on the hosts, but we're also downgrading in size for each OSD, because we decided to sacrifice space to make a significant jump in drive quality. We're not really too concerned about the rebalancing, as we monitor the cluster closely and

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Wido den Hollander
> On 14 April 2016 at 15:29, Stephen Mercier wrote: > > > Good morning, > > We've been running a medium-sized (88 OSDs - all SSD) Ceph cluster for the > past 20 months. We're very happy with our experience with the platform so far. > > Shortly, we will be embarking on an initiative to repl

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread koukou73gr
If you have empty drive slots in your OSD hosts, I'd be tempted to: insert the new drive in a slot, set noout, shut down the OSD, unmount the OSD directory, dd the old drive to the new one, remove the old drive, and restart the OSD. No rebalancing and minimal data movement when the OSD rejoins. -K. On 04/14/2016 04:29 P
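Sketched as shell commands, the swap koukou73gr describes might look like the following. The OSD id and device paths are placeholders, and the approach assumes the replacement drive is at least as large as the old one (which, per Stephen's follow-up, is not the case for this cluster):

```shell
# Hypothetical clone-and-swap for one OSD (id 12); devices are examples only.
ceph osd set noout                        # keep CRUSH from marking the OSD out
systemctl stop ceph-osd@12                # stop the daemon (older releases: service ceph stop osd.12)
umount /var/lib/ceph/osd/ceph-12          # release the old drive
dd if=/dev/sdc of=/dev/sdd bs=4M conv=noerror,sync   # clone old -> new, block for block
# ...physically remove the old drive, then bring the OSD back up:
mount /dev/sdd1 /var/lib/ceph/osd/ceph-12
systemctl start ceph-osd@12
ceph osd unset noout                      # restore normal recovery behaviour
```

Because the clone is byte-identical, the OSD rejoins with the same id and data, so only the objects written while it was down need to be backfilled.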

[ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Good morning, We've been running a medium-sized (88 OSDs - all SSD) Ceph cluster for the past 20 months. We're very happy with our experience with the platform so far. Shortly, we will be embarking on an initiative to replace all 88 OSDs with new drives (planned maintenance and lifecycle replacement
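The conventional procedure the replies above converge on is to drain and replace one OSD at a time and let Ceph rebalance in between. A minimal sketch, with a placeholder OSD id and device; the provisioning tool depends on the release (ceph-disk in the Hammer/Jewel era shown here, ceph-volume on later releases):

```shell
# Hypothetical one-at-a-time OSD replacement (OSD id 12, new drive /dev/sdd).
ceph osd out 12                      # start draining PGs off the OSD
# wait until "ceph -s" reports all PGs active+clean, then:
systemctl stop ceph-osd@12
ceph osd crush remove osd.12         # drop it from the CRUSH map
ceph auth del osd.12                 # remove its cephx key
ceph osd rm 12                       # delete the OSD entry
# provision the replacement drive (Jewel-era tooling):
ceph-disk prepare /dev/sdd
ceph-disk activate /dev/sdd1
# let recovery finish before moving to the next OSD
```

Doing this host by host, as Stephen describes, bounds the amount of data in flight at any time to one host's worth of PGs.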