On Tue, Mar 31, 2015 at 3:05 PM, Gregory Farnum wrote:
> On Tue, Mar 31, 2015 at 12:56 PM, Quentin Hartman wrote:
> >
> > My understanding is that the "right" method to take an entire cluster
> > offline is to set noout and then shut everything down. Is there a
> > better way?
>
> That's probably the best way right now, but you may also want to set
> nodown so the OSDs don't mark each other down while everything boots
> back up.
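
For reference, a minimal sketch of that procedure with both flags set,
assuming a planned shutdown of every OSD node:

    # before powering anything off: stop the cluster from rebalancing
    # data (noout) or marking unreachable OSDs down (nodown)
    ceph osd set noout
    ceph osd set nodown

    # ...shut down the OSD nodes, do the physical work, power back up...

    # once all OSDs are back up and peered, clear both flags
    ceph osd unset nodown
    ceph osd unset noout

The set/unset commands are standard ceph CLI; the unset ordering is just
one reasonable choice.
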
On Tue, Mar 31, 2015 at 2:05 PM, Gregory Farnum wrote:
> Github pull requests. :)
>
Ah, well that's easy:
https://github.com/ceph/ceph/pull/4237
QH
Thanks for the extra info Gregory. I did not also set nodown.

I expect I'll very rarely be shutting everything down in the normal
course of things, but it has come up a couple of times when having to do
some physical re-organizing of racks. Little irritants like this aren't a
big deal if people know to expect them.
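
A quick way to double-check which flags are actually set before (or
after) pulling the plug, for anyone following along:

    # lists any cluster-wide flags currently set, e.g. "flags noout,nodown"
    ceph osd dump | grep flags
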
I'm working on redeploying a 14-node cluster. I'm running giant 0.87.1.
Last Friday I got everything deployed and all was working well, and I set
noout and shut all the OSD nodes down over the weekend. Yesterday when I
spun it back up, the OSDs were behaving very strangely, incorrectly
marking each other down.
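
For anyone hitting the same thing: that flapping is usually easy to
confirm from the logs. A rough sketch (exact log wording varies by
release, so treat it as approximate):

    # watch cluster state changes live while the OSDs boot
    ceph -w

    # OSDs that are being incorrectly failed by their peers typically
    # log "wrongly marked me down" messages; setting nodown before the
    # shutdown avoids the churn
    grep "wrongly marked me down" /var/log/ceph/ceph-osd.*.log
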