Alright, thanks! :)

Kind Regards,
David Majchrzak
On 13 Jun 2014, at 11:21, Wido den Hollander <w...@42on.com> wrote:

> On 06/13/2014 11:18 AM, David wrote:
>> Thanks Wido,
>>
>> So with noout set, data will be degraded but not resynced, which won't
>> interrupt operations (we run the default 3 replicas and a standard CRUSH map,
>> so each OSD node holds only 1 replica of the data).
>> Do we need to do anything after bringing the node up again, or will it
>> resync automatically?
>>
> Correct. The OSDs will be marked as down, so that will cause the PGs to go
> into a degraded state, but they will stay marked as "in", not triggering data
> re-distribution.
>
> You don't have to do anything. Just let the machine and OSDs boot and Ceph
> will take care of the rest (assuming it's all configured properly).
>
> Afterwards, unset the noout flag.
>
> Wido
>
>> Kind Regards,
>> David Majchrzak
>>
>> On 13 Jun 2014, at 11:13, Wido den Hollander <w...@42on.com> wrote:
>>
>>> On 06/13/2014 10:56 AM, David wrote:
>>>> Hi,
>>>>
>>>> We're going to take down one OSD node for maintenance (adding CPU + RAM),
>>>> which might take 10-20 minutes.
>>>> What's the best practice here in a production cluster running dumpling
>>>> 0.67.7-1~bpo70+1?
>>>>
>>> I suggest:
>>>
>>> $ ceph osd set noout
>>>
>>> This way no OSD will be marked as out, which prevents data re-distribution.
>>>
>>> After the OSDs are back up and synced:
>>>
>>> $ ceph osd unset noout
>>>
>>>> Kind Regards,
>>>> David Majchrzak
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
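
For anyone finding this thread later, a minimal sketch of the whole maintenance
sequence as discussed above might look like the following. The health checks
before and after the maintenance window are my own additions, not something
Wido prescribed, but the set/unset commands are exactly as he suggested:

$ ceph health            # assumption: confirm the cluster is HEALTH_OK before starting
$ ceph osd set noout     # keep down OSDs marked "in" so no data re-distribution starts
# ... shut down the OSD node, add CPU + RAM, boot it again ...
$ ceph -s                # assumption: wait until the PGs have returned to active+clean
$ ceph osd unset noout   # clear the flag once the OSDs are back up and synced

Note that noout only suppresses the down -> out transition; the cluster will
still report degraded PGs while the node's OSDs are down, which is expected
during the maintenance window.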