[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-11 Thread Michel Jouvin
Hi, After 2 weeks, the increase of the number of PGs in an EC pool (9+6) from 256 to 1024 completed successfully! I was still wondering whether such a duration was expected or might be the sign of a problem... After the previous exchanges, I restarted the increase by setting both pg_num and pgp
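
For anyone following the same procedure, the restart described above presumably used the standard pool commands; a minimal sketch, with the pool name being a placeholder (since Nautilus the mgr ramps the actual values towards these targets gradually):

    # set the new targets; the mgr increases the effective values step by step
    ceph osd pool set <ec-pool> pg_num 1024
    ceph osd pool set <ec-pool> pgp_num 1024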

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Michel Jouvin
Hi Burkhard, Thanks for your answer. Your explanation seems to match our observations well, in particular the fact that new misplaced objects are added when we fall below something like 0.5% of misplaced objects. What is still not clear to me is that 'ceph osd pool ls detail' for the pool mo
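
A hedged sketch of how the current versus target values can be inspected while the split is in flight (pool name is a placeholder; the exact fields shown depend on the release):

    # per-pool settings, including pg_num/pgp_num and, while a change is
    # pending, the target values the mgr is working towards
    ceph osd pool ls detail | grep <ec-pool>
    ceph osd pool get <ec-pool> pg_num
    ceph osd pool get <ec-pool> pgp_num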

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Burkhard Linke
Hi, On 4/1/25 09:06, Michel Jouvin wrote: Hi, We are observing a new strange behaviour on our production cluster: we increased the number of PGs (from 256 to 2048) in an (EC) pool after a warning that there was a very high number of objects per pool (the pool has 52M objects). Background: t
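
A minimal sketch of how the remapping progress can be followed during such a PG split (output format varies slightly between releases):

    # overall misplaced/degraded ratios in the cluster status
    ceph -s
    # or poll just the relevant counters
    watch -n 60 "ceph -s | grep -E 'misplaced|remapped'"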

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Burkhard Linke
Hi, On 4/1/25 10:03, Michel Jouvin wrote: Hi Burkhard, Thanks for your answer. Your explanation seems to match our observations well, in particular the fact that new misplaced objects are added when we fall below something like 0.5% of misplaced objects. What is still not clear to me is tha
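
If the batched behaviour described in this thread is the mgr throttling the pgp_num increase, the option involved should be target_max_misplaced_ratio; a hedged sketch of checking it and, if desired, raising it (the value shown is only an example):

    ceph config get mgr target_max_misplaced_ratio
    # allow the mgr to keep a larger fraction of objects misplaced while
    # bumping pgp_num, at the cost of more concurrent backfill
    ceph config set mgr target_max_misplaced_ratio 0.10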