> The more I think about this problem, the less I think there'll be an easy
> answer, and it's more likely that I'll have to reproduce the scenario and
> actually pause myself next time in order to troubleshoot it?
It is even possible to simulate those CRUSH problems. I reported a few
examples long ago.
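As a rough sketch of what such a simulation can look like (file names and
pool parameters here are illustrative, not taken from this thread), crushtool
can replay CRUSH placements offline against the cluster's own map:

  # Export the cluster's compiled CRUSH map to a local file
  ceph osd getcrushmap -o crushmap.bin

  # Replay placements for 3 replicas over 1024 sample inputs and
  # report any mappings that come back with too few OSDs
  crushtool -i crushmap.bin --test --num-rep 3 --min-x 0 --max-x 1023 \
      --show-bad-mappings

  # Repeat with osd.0 reweighted to zero to simulate it being marked out
  crushtool -i crushmap.bin --test --num-rep 3 --weight 0 0 \
      --show-bad-mappings

Anything reported by --show-bad-mappings is an input CRUSH could not map to
the requested number of OSDs, which is essentially the stuck-inactive
situation reproduced offline, without touching the live cluster.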
From: Craig Lewis [mailto:cle...@centraldesktop.com]
Sent: 19 December 2014 19:17
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] Placement groups stuck inactive after down & out of 1/9 OSDs
That seems odd. So you have 3 nodes, with 3 OSDs each. You should've been
able to mark osd.0 down and out, then stop the daemon without having those
issues.
It's generally best to mark an OSD down, then out, and wait until the
cluster has recovered completely before stopping the daemon and removing it.
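Roughly, that sequence looks like the following (a sketch using the standard
ceph CLI; the daemon-stop command depends on how the host's init system and
deployment are set up):

  # Mark osd.0 out so its placement groups migrate to other OSDs
  ceph osd out 0

  # Watch cluster status; wait until all PGs are active+clean again
  ceph -w

  # Only once recovery has finished, stop the daemon and remove the OSD
  service ceph stop osd.0      # or the equivalent for your init system
  ceph osd crush remove osd.0
  ceph auth del osd.0
  ceph osd rm 0

Stopping the daemon only after recovery completes means the cluster never
has fewer copies of any object than the pool's replication calls for.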
Hello,
I'm a newbie to Ceph, gaining some familiarity by hosting some virtual
machines on a test cluster. I'm using a virtualisation product called
Proxmox Virtual Environment, which conveniently handles cluster setup,
pool setup, OSD creation, etc.
During the attempted removal of an OSD, my pool