[...]

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________
From: Eugen Block
Sent: Friday, May 24, 2024 2:51 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: unknown PGs after adding hosts in different subtree

I start to think that the root cause of the remapping is just the fact
that the crush rule(s) contain(s) th[...]
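To see what the rule(s) actually contain, the steps can be dumped per rule
or reviewed in a decompiled map (just a sketch; "ecpool-rule" is a
placeholder name):

  # show the steps of one rule (placeholder rule name)
  ceph osd crush rule dump ecpool-rule
  # or decompile the whole map to review all rules and the tree together
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt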
From: Frank Schilder
Sent: Thursday, May 23, 2024 6:32 PM
To: Eugen Block
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: unknown PGs after adding hosts in different subtree

Hi Eugen,

I'm at home now. Could you please check for all the remapped PGs that
they have no shards on the new OSDs, i.e. that it's just shuffling
around mappings within the same set of OSDs under rooms?

If this is the case, it is possible that this is partly intentional and
partly buggy. The remapping [...]

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
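A quick way to run the check Frank asks for above (a sketch only; the OSD
ids are placeholders for the OSDs on the newly added hosts):

  NEW_OSDS="100 101 102 103"   # placeholder ids, adjust to the cluster
  # pgs_brief prints: PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
  ceph pg dump pgs_brief 2>/dev/null | grep remapped > /tmp/remapped_pgs
  for osd in $NEW_OSDS; do
      # UP/ACTING sets look like [4,17,25]; match the id inside the brackets
      grep -E "\[([0-9]+,)*${osd}(,[0-9]+)*\]" /tmp/remapped_pgs
  done
  # no output means none of the remapped PGs has a shard on the new OSDs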
> Please correct if this is wrong. Assuming it's correct, I conclude
> the following.

You assume correctly. Now, from your description it [...]

[...] the crush map.

[...] and describe at which step exactly things start diverging from my
expectations.

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________
From: Eugen Block
Sent: Thursday, May 23, 2024 1:26 PM
To: Frank Schilder
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: unknown PGs after adding hosts in different subtree

Hi Frank,

thanks for chiming in here.

Please correct if this is wrong. Assuming it's correct, I conclude
the following. [...]
________________________________
From: Eugen Block
Sent: Thursday, May 23, 2024 12:05 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: unknown PGs after adding hosts in different subtree

Hi again,

I'm still wondering if I misunderstand some of the ceph concepts.
Let's assume the choose_tries value is too low and ceph can't find
enough OSDs for the remapping. I would expect that there are some PG
chunks in remapping state or unknown or whatever, but why would it
affect the [...]
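One way to check whether choose_tries really is the limiting factor is to
replay the rule offline with crushtool (a sketch; rule id 1 and
--num-rep 10 are placeholders for the EC pool's crush rule and its k+m):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 10 --show-bad-mappings
  # every reported bad mapping is an input for which the rule ran out of
  # tries before finding enough OSDs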
Thanks, Konstantin.

It's been a while since I was last bitten by the choose_tries being too
low... Unfortunately, I won't be able to verify that... But I'll
definitely keep that in mind, or at least I'll try to. :-D

Thanks!

Quoting Konstantin Shalygin:

Hi Eugen

> On 21 May 2024, at 15:26, Eugen Block wrote:
>
> step set_choose_tries 100

I think you should try to increase set_choose_tries to 200.
Last year we had a Pacific EC 8+2 deployment of 10 racks. And even with
50 hosts, the value of 100 did not work for us.

k
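For reference, the usual way to change that value is to edit the
decompiled crush map and inject it again (a sketch of the steps, using the
rule line quoted above):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # in crushmap.txt, inside the affected EC rule, raise the value, e.g.
  #     step set_choose_tries 200
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new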