Hello,

The way Wido explained is the correct way. I won't deny, however, that last
year we had problems with our SSD disks: they did not perform well, so we
decided to replace all of them. As the replacement done by Ceph caused high
load/downtime on the clients (which was the reason we wanted to replace
them in the first place) …
Given that you are all-SSD, I would do exactly what Wido said - gracefully
remove the OSD and gracefully bring up the OSD on the new SSD.

Let Ceph do what it's designed to do. The rsync idea looks great on
paper - not sure what issues you would run into in practice.
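For reference, a minimal sketch of the graceful replacement on a Hammer-era
(0.94.x / trusty) cluster. The OSD id 12 and the device path /dev/sdX are
hypothetical placeholders - substitute your own, and wait for the cluster to
settle between steps:

```shell
# Mark the old OSD "out" so Ceph migrates its placement groups elsewhere
ceph osd out 12

# Watch until all PGs are active+clean before touching the daemon
ceph -s

# Stop the daemon (Upstart syntax on Ubuntu trusty), then remove the OSD
# from the CRUSH map, the auth database, and the OSD map
sudo stop ceph-osd id=12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12

# Physically swap in the new SSD, then prepare and activate it
# (ceph-disk was the standard tool in the Hammer era)
sudo ceph-disk prepare /dev/sdX
sudo ceph-disk activate /dev/sdX1
```

If client impact during the data movement is a concern, throttling recovery
with the osd_max_backfills and osd_recovery_max_active settings can help,
at the cost of a longer rebalance.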
On Fri, Dec 16, 2016 at 12:38
2016-12-16 10:19 GMT+01:00 Wido den Hollander :
>
> > Op 16 december 2016 om 9:49 schreef Alessandro Brega <alessandro.bre...@gmail.com>:
> >
> > 2016-12-16 9:33 GMT+01:00 Wido den Hollander :
> Op 16 december 2016 om 9:26 schreef Alessandro Brega:
>
> Hi guys,
>
> I'm running a ceph cluster using the 0.94.9-1trusty release on XFS for RBD
> only. I'd like to replace some SSDs because they are close to their TBW.
>
> I know I can simply shut down the OSD, replace the SSD, restart the OSD,
> and Ceph will take care of the rest. However, I don't want to do it this way …