On Tue, Jun 4, 2013 at 1:59 PM, Sage Weil wrote:
> On Tue, 4 Jun 2013, Nigel Williams wrote:
>> Something else I noticed: ...
>
> Does the monitor data directory share a disk with an OSD? If so, that
> makes sense: compaction freed enough space to drop below the threshold...
Of course! that is it: the monitor data directory does share a disk with one of the OSDs.
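For reference, the relevant knobs appear to be roughly these (mon id 'a' and the default data path are only placeholders here):

    # see how full the disk under the mon data directory actually is
    df -h /var/lib/ceph/mon/ceph-a

    # the health warning is tied to the mon's free-space threshold
    # ('mon data avail warn' is a percentage, 30 by default I believe)
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep mon_data_avail

    # compact the monitor's leveldb store by hand to reclaim space
    ceph tell mon.a compact

    # or have the mon compact its store on every restart, via ceph.conf:
    #   [mon]
    #   mon compact on start = true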
On 4/06/2013 9:16 AM, Chen, Xiaoxi wrote:
> my 0.02: you really don't need to wait for HEALTH_OK between your
> recovery steps, just go ahead. Every time a new map is generated and
> broadcast, the old map and the in-progress recovery will be cancelled.
Thanks Xiaoxi, that is helpful to know.
It seems to
my 0.02: you really don't need to wait for HEALTH_OK between your recovery
steps, just go ahead. Every time a new map is generated and broadcast, the old
map and the in-progress recovery will be cancelled.
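For example (osd.3 and this particular sequence are just an illustration), when pulling a dead OSD out of the cluster the steps can be fired back to back; each command publishes a new osdmap and recovery simply restarts against the newest one:

    # no need to wait for HEALTH_OK between these
    ceph osd out 3
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3

    # then just watch the cluster converge on the final map
    ceph -w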
Sent from my iPhone
On 2013-6-2, 11:30, "Nigel Williams" wrote:
> Could I please have a critique of this approach: how could I have done it
> better, or does what I experienced simply reflect work still to be done?
Could I please have a critique of this approach: how could I have done it
better, or does what I experienced simply reflect work still to be done?
This is with Ceph 0.61.2 on a quite slow test cluster (logs shared with
OSDs, no separate journals, using CephFS).
I knocked the power co