io:
    client: 1897 kB/s wr, 0 op/s rd, 11 op/s wr
And it's frozen in that state: self-healing doesn't occur, it just stays
stuck with objects misplaced and PGs in active+clean+remapped.
I think something is wrong with my rule, and the cluster can't move
objects to rearrange them according to it.
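(A minimal sketch of commands for inspecting this state; the rule name
"pods" is taken from later in this thread, and `ceph pg ls remapped`
assumes Luminous or newer:)

    ceph health detail              # counts of misplaced objects and the PGs affected
    ceph pg ls remapped             # which PGs are sitting in the remapped state
    ceph osd crush rule dump pods   # the rule the pool is actually using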
m_name": "default"
>> },
>> {
>> "op": "chooseleaf_firstn",
>> "num": 0,
>> "type": "pod"
>> },
>> {
>> "op": "emit"
>> }
>> ]
>> }
>
>
> 1. Assign a device class to your crush rule (a sketch of applying it
> to a pool follows after this message):
>
> ceph osd crush rule create-replicated pods default pod hdd
>
> 2. Your crush hierarchy is imbalanced:
>
> *good*:
>
> root:
>   host1:
>     - osd0
>   host2:
>     - osd1
>   host3:
>     - osd3
>
> *bad*:
>
> root:
>   host1:
>     - osd0
>   host2:
>     - osd1
>     - osd2
>     - osd3
>
> k
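(A minimal sketch of applying both suggestions above; the pool name
"rbd" is only a placeholder, substitute your own:)

    # recreate the rule with the hdd device class, as suggested above
    ceph osd crush rule create-replicated pods default pod hdd
    # point the pool at the new rule
    ceph osd pool set rbd crush_rule pods
    # check that OSDs are spread evenly across the failure domains
    ceph osd tree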
--
With best regards,
Igor Gajsin
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "pod"
        },
        {
            "op": "emit"
        }
    ]
}
Konstantin Shalygin writes:
> On 04/26/2018 11:30 PM, Igor Gajsin wrote:
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}
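(For comparison, a dump of a rule created with a device class typically
looks like the sketch below; the ids and names are assumptions, and the
device-class shadow root shows up as "default~hdd":)

    {
        "rule_id": 1,
        "rule_name": "pods",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -10,
                "item_name": "default~hdd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }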
My data is replicated across hosts, not across OSDs; all hosts have
pieces of the data, and the situation looks like:

* host0 has a piece of data on osd.0
Thanks a lot for your help.
Konstantin Shalygin writes:
> On 04/27/2018 05:05 PM, Igor Gajsin wrote:
>> I have a crush rule like
>
>
> You still can use device classes!
>
>
>> * host0 has a piece of data on osd.0
> Not a piece, a full object. If we talk about non-EC (replicated)
> pools, each replica is a full copy of the object.
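(A small sketch of how to see this in practice; the pool and object
names here are just placeholders:)

    # prints the PG and the set of OSDs holding the full copies of one object
    ceph osd map rbd some-object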