I wanted to revisit this - we're now on 15.2.9 and still have this one
cluster with 5 PGs "stuck" in pg_temp. Any idea how to clean this up,
or how it might have occurred? I'm fairly certain it showed up after
an autoscale-up and an autoscale-down that overlapped each other.
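For anyone hitting the same thing, a rough sketch of what can be tried
against a lingering pg_temp entry, assuming the Octopus CLI (the pg ID is
just an example, one of the five from the dump further down in the thread):

ceph osd dump | grep pg_temp   # confirm which PGs still carry a pg_temp mapping
ceph pg repeer 3.7af           # force a re-peer, which normally clears a pg_temp entry that is no longer needed

Restarting the primary OSD of an affected PG is another way to trigger a
fresh peering round.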
On Mon, Aug 10,
Hi,
I am not sure, but perhaps this could be an effect of the "balancer" module - if
you use it?
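If it is enabled, a quick check would be:

ceph balancer status   # shows whether the module is active and which mode it uses
ceph balancer off      # pause it while investigating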
Hth
Mehmet
On 10 August 2020 17:28:27 CEST, David Orman wrote:
We've gotten a bit further: after evaluating how this remapped count was
determined (pg_temp), we've found the PGs counted as being remapped:
root@ceph01:~# ceph osd dump |grep pg_temp
pg_temp 3.7af [93,1,29]
pg_temp 3.7bc [137,97,5]
pg_temp 3.7d9 [72,120,18]
pg_temp 3.7e8 [80,21,71]
pg_temp 3.7fd
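Comparing the up set with the acting set for these is a quick sanity check,
since a pg_temp mapping is only supposed to exist while the two differ (the
pg ID is one of the entries above):

ceph pg map 3.7af   # prints the osdmap epoch, up set and acting set; if up and acting already match, the pg_temp entry looks stale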
Still haven't figured this out. We went ahead and upgraded the entire
cluster to Podman 2.0.4, did OS/kernel upgrades in the process, and
rebooted every node one at a time. We still have 5 PGs stuck in the
'remapped' state according to 'ceph -s', but 0 in the pg dump output in
that state. Does anyone have any ideas?
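One way to see the mismatch side by side (counts per the description above):

ceph -s | grep remapped           # reports 5 pgs remapped
ceph pg ls remapped               # shows nothing in the remapped state
ceph osd dump | grep -c pg_temp   # counts the lingering pg_temp entries, which appears to be where the 5 comes from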