On Thu, 4 Apr 2019 at 13:32, Dan van der Ster wrote:
>
> There are several more fixes queued up for v12.2.12:
>
> 16b7cc1bf9 osd/OSDMap: add log for better debugging
> 3d2945dd6e osd/OSDMap: calc_pg_upmaps - restrict optimization to
> origin pools only
> ab2dbc2089 osd/OSDMap: drop local pool filter in calc_pg_upmaps
> 119d8cb2a1 crush: fix upmap overkill
> 0729a7887
Yeah, I agree... the auto balancer is definitely doing a poor job for me.
I have been experimenting with this for weeks and I can make way better
optimizations than the balancer by looking at "ceph osd df tree" and
manually running various ceph upmap commands.
Too bad this is tedious work, and tend
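For anyone trying the same approach, a rough sketch of that manual workflow
(the PG id and the target OSD below are made up for illustration; osd.23 is
the full OSD mentioned later in this thread):

    # upmap needs luminous-or-newer clients
    ceph osd set-require-min-compat-client luminous

    # spot over- and under-full OSDs
    ceph osd df tree

    # list the PGs sitting on the full OSD, e.g. osd.23
    ceph pg ls-by-osd 23

    # move one PG off osd.23 onto a less-full OSD (osd.45 here is hypothetical)
    ceph osd pg-upmap-items 41.7f 23 45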
On Mon, 18 Mar 2019 at 16:42, Dan van der Ster wrote:
>
> The balancer optimizes # PGs / crush weight. That host looks already
> quite balanced for that metric.
>
> If the balancing is not optimal for a specific pool that has most of
> the data, then you can use the `optimize myplan <pool>` param.
>
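For reference, the pool-scoped balancer flow looks roughly like this (plan
name "myplan" as in Dan's mail; "ec82_pool" is just the pool that appears
later in this thread):

    ceph balancer mode upmap                 # optimize with pg-upmap-items
    ceph balancer optimize myplan ec82_pool  # build a plan for that pool only
    ceph balancer show myplan                # inspect the proposed upmap items
    ceph balancer eval myplan                # expected score vs. "ceph balancer eval"
    ceph balancer execute myplan             # apply the plan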
>>> balancer can't find further optimization.
>>> Specifically OSD 23 is getting way more PGs than the other 3 TB OSDs.
>>>
>>> See https://pastebin.com/f5g5Deak
>>>
>>> On Fri, Mar 1, 2019 at 10:25 AM wrote:
>>>>
>>> https://github.com/ceph/ceph/pull/26127 if you are eager to get out
>>> of the trap right now.
>>>
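For anyone in the same situation: the existing upmap exceptions can also be
listed and removed by hand while waiting for that fix (the PG id below is
made up):

    ceph osd dump | grep pg_upmap_items   # show current upmap exceptions
    ceph osd rm-pg-upmap-items 41.7f      # drop the exception for one PG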
> Sorry for the typo.
>>
>> Original mail
>> From: Xie Xingguo 10072465
>> To: d...@vanderster.com ;
>> Cc: ceph-users@lists.ceph.com ;
>> Date: 2019-03-01 17:09
>> Subject: Re: [ceph-users] ceph osd pg-upmap-items not working
>>
> Backports should be available in v12.2.11.
s/v12.2.11/v12.2.12/
Sorry for the typo.

To: Kári Bertilsson ;
Cc: ceph-users ; Xie Xingguo 10072465;
Date: 2019-03-01 14:48
Subject: Re: [ceph-users] ceph osd pg-upmap-items not working
It looks like that somewhat unusual crush rule is confusing the new
upmap cleaning.
(debug_mon 10 on the active mon should show those cleanups).
I'm copying Xie Xingguo, and probably you should create a tracker for this.
-- dan
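For reference, raising that debug level at runtime can be done roughly like
this (assuming the active mon is mon.a; adjust the id and log path for your
cluster):

    ceph tell mon.a injectargs '--debug_mon 10'   # verbose mon logging
    # reproduce the balancer/upmap change, then check /var/log/ceph/ceph-mon.a.log
    ceph tell mon.a injectargs '--debug_mon 1/5'  # back to the default level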
On Fri, Mar 1, 2019 at 3:12 AM Kári Bertilsson wrote:
This is the pool:
pool 41 'ec82_pool' erasure size 10 min_size 8 crush_rule 1 object_hash
rjenkins pg_num 512 pgp_num 512 last_change 63794 lfor 21731/21731 flags
hashpspool,ec_overwrites stripe_width 32768 application cephfs
removed_snaps [1~5]
Here is the relevant crush rule:
rule ec_pool
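(The rule text was cut off above. For context only, a generic erasure-code
rule for a size-10 pool usually has this shape; this is not the actual rule
from this cluster, which Dan describes above as somewhat unusual:)

    rule ec_pool {
            id 1
            type erasure
            min_size 8
            max_size 10
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step chooseleaf indep 0 type host
            step emit
    }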
Hi,
pg-upmap-items became more strict in v12.2.11 when validating upmaps.
E.g., it now won't let you put two PGs in the same rack if the crush
rule doesn't allow it.
Where are OSDs 23 and 123 in your cluster? What is the relevant crush rule?
-- dan
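(For reference, those two questions map onto a couple of commands; the rule
name below assumes the "ec_pool" rule / crush_rule 1 shown earlier in this
thread:)

    ceph osd find 23                   # crush location (host/rack) of osd.23
    ceph osd find 123                  # same for osd.123
    ceph osd pool ls detail            # which crush_rule each pool uses
    ceph osd crush rule dump ec_pool   # the rule the upmaps are validated against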
On Wed, Feb 27, 2019 at 9:17 PM Kári Bertilsson wrote: