> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dietmar Maurer
> Sent: Tuesday, 14 January 2014 10:40
> To: Wolfgang Hennerbichler; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 3 node setup with pools size=3
>
> Are you aware of this?
> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
> => Stopping w/out Rebalancing
What do you think is wrong with my setup? I want to re-balance. The problem is that it does not happen at all!
I do exactly the same test with and without 'ceph osd c
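
For reference, the "Stopping w/out Rebalancing" procedure linked above boils down to the noout flag, which deliberately suppresses re-replication while OSDs are down. A minimal sketch, assuming a sysvinit install and osd.3 purely as a placeholder id:

    ceph osd set noout               # cluster will not mark stopped OSDs out
    /etc/init.d/ceph stop osd.3      # stop the daemon for maintenance
    # ... do the maintenance work ...
    /etc/init.d/ceph start osd.3
    ceph osd unset noout             # restore normal out/rebalance behaviour

Conversely, if rebalancing is wanted but never starts, it is worth checking that no such flag is set, e.g. with 'ceph osd dump | grep flags'.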
On 01/14/2014 10:06 AM, Dietmar Maurer wrote:
> Yes, only a single OSD is down and marked out.
Sorry for the misunderstanding then.
>> Then there should definitely be backfilling in place.
>
> no, this does not happen. Many PGs stay in degraded state (I tested this several times now).

> Sorry, it seems as if I had misread your question: only a single OSD fails, not the whole server?
Yes, only a single OSD is down and marked out.
> Then there should definitely be backfilling in place.
no, this does not happen. Many PGs stay in degraded state (I tested this several times now).
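
When an OSD is down and marked out but PGs stay active+degraded, querying one of the stuck PGs usually shows why nothing is backfilled. A rough sketch of the checks (the PG id 2.5 is only a placeholder):

    ceph osd tree                  # the failed OSD should show as down and out
    ceph health detail             # lists the degraded PGs
    ceph pg dump_stuck unclean     # PGs that are not active+clean
    ceph pg 2.5 query              # compare the "up" and "acting" sets; if "up"
                                   # lists only two OSDs, CRUSH found no third target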
On 01/14/2014 09:44 AM, Dietmar Maurer wrote:
>>> When using a pool size of 3, I get the following behavior when one OSD fails:
>>> * the affected PGs get marked active+degraded
>>>
>>> * there is no data movement/backfill
>>
>> Works as designed, if you have the default crush map in place (all replicas must be on DIFFERENT hosts). You ...
> > When using a pool size of 3, I get the following behavior when one OSD fails:
> > * the affected PGs get marked active+degraded
> >
> > * there is no data movement/backfill
>
> Works as designed, if you have the default crush map in place (all replicas must be on DIFFERENT hosts). You ...
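
The "all replicas on DIFFERENT hosts" behaviour comes from the chooseleaf step in the default rule. Roughly what a stock rule looks like after decompiling the CRUSH map with 'ceph osd getcrushmap -o crush.bin' and 'crushtool -d crush.bin -o crush.txt' (the rule name varies per cluster):

    rule data {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick each replica from a different host bucket
            step chooseleaf firstn 0 type host
            step emit
    }

With only 3 hosts and size=3 that leaves no spare host when a whole node fails; a single OSD that is down and out, however, should still get its PGs remapped to another OSD on the same host, which is presumably why backfilling is expected here.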
On 01/13/2014 12:39 PM, Dietmar Maurer wrote:
> I am still playing around with a small setup using 3 nodes, each running 4 OSDs (=12 OSDs).
>
> When using a pool size of 3, I get the following behavior when one OSD fails:
> * the affected PGs get marked active+degraded
>
> * there is no data movement/backfill
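
A rough sketch of how such a test can be set up and observed, using the default 'rbd' pool and osd.3 purely as placeholders:

    ceph osd pool set rbd size 3            # three replicas per object
    ceph osd pool get rbd crush_ruleset     # which CRUSH rule the pool uses
    ceph osd tree                           # host/OSD hierarchy CRUSH places replicas into
    ceph osd out 3                          # mark one OSD out without stopping it
    ceph -w                                 # watch whether backfill/recovery starts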