Hi Andras.
Assuming that you've already tightened the
mgr/balancer/upmap_max_deviation to 1, I suspect that this cluster
already has too many upmaps.
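To check where you stand, something along these lines should show the
current deviation setting (and tighten it) and count how many PGs already
carry pg-upmap-items entries:

  ceph config get mgr mgr/balancer/upmap_max_deviation
  ceph config set mgr mgr/balancer/upmap_max_deviation 1
  ceph osd dump | grep -c pg_upmap_items
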
Last time I checked, the balancer implementation is not able to
improve a pg-upmap-items entry if one already exists for a PG. (It can
add an OSD m
Hello Ceph user list!
I tried to update Ceph 15.2.10 to 16.2.0 via ceph orch. In the
beginning everything seemed to work fine and the new MGR and MONs were
deployed. But now I have ended up in a pulling loop and I am unable to
fix the issue by myself.
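For context, an orchestrator upgrade like this is normally started and
inspected with commands along these lines:

  ceph orch upgrade start --ceph-version 16.2.0
  ceph orch upgrade status
  ceph orch upgrade pause
  ceph orch upgrade resume
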
# ceph -W cephadm --watch-debug
2021-04-02T10:36:
Hi again,
Oops, I'd missed the part about some PGs being degraded, which
prevents the balancer from continuing.
So I assume that you have PGs which are simultaneously
undersized+backfill_toofull?
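To see which PGs and OSDs are involved, something like this usually does
the trick:

  ceph pg dump_stuck undersized
  ceph health detail | grep -i toofull
  ceph osd df tree
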
That case does indeed sound tricky. To solve that you would either
need to move PGs out of the toofull
Hi,
just installed pacific on our test-cluster. This really is a minimal, but
fully functional cluster.
Everything works as expected, except for the new (and much anticipated,
at least by me) cephfs-top.
When I run that tool, it says: "cluster ceph does not exist"
If I point it to the correct config file:
# cep
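As far as I can tell from the docs, cephfs-top expects the stats mgr
module to be enabled and a client.fstop user to exist, roughly:

  ceph mgr module enable stats
  ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
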
On Fri, Apr 2, 2021 at 2:59 PM Erwin Bogaard wrote:
>
> Hi,
>
> just installed pacific on our test-cluster. This really is a minimal, but
> fully functional cluster.
> Everything works as expected, except for the new (and much anticipated,
> at least by me) cephfs-top.
> When I run that tool, it says: "cluster c
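One thing worth trying is passing the cluster name, client id and config
file explicitly; if I remember the options correctly, that would be
something like:

  cephfs-top --cluster ceph --id fstop --conffile /etc/ceph/ceph.conf
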
On Fri, Apr 2, 2021 at 11:23 AM Dan van der Ster wrote:
>
> Hi again,
>
> Oops, I'd missed the part about some PGs being degraded, which
> prevents the balancer from continuing.
> any upmaps which are directing PGs *to* those toofull OSDs. Or maybe
> it will be enough to just reweight those OSDs to 0
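In concrete terms, dropping a PG's upmap entry and lowering an OSD's
reweight would look roughly like this (placeholders, adjust to the
affected PGs and OSDs):

  ceph osd rm-pg-upmap-items <pgid>
  ceph osd reweight <osd-id> 0.9
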
Dear ceph users,
On one of our clusters I have some difficulties with the upmap
balancer. We started with a reasonably well balanced cluster (using the
balancer in upmap mode). After a node failure, we crush reweighted all
the OSDs of the node to take it out of the cluster - and waited for t
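The crush reweighting itself was done per OSD, along the lines of

  ceph osd crush reweight osd.<id> 0

for each OSD on that node.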
Hi Alex,
Thanks for the report! I've opened
https://tracker.ceph.com/issues/50114. It looks like the
target_digests check needs to check for overlap instead of equality.
sage
On Fri, Apr 2, 2021 at 4:04 AM Alexander Sporleder wrote:
>
> Hello Ceph user list!
>
> I tried to update Ceph 15.2.10
I'm a bit confused by the log messages--I'm not sure why the
target_digests aren't changing. Can you post the whole
ceph-mgr.mon-a-02.tvcrfq.log? (ceph-post-file
/var/log/ceph/*/ceph-mgr.mon-a-02.tvcrfq.log)
Thanks!
s
Lowering the weight is what I ended up doing. But this isn't ideal,
because afterwards the balancer will remove too many PGs from the OSD
now that it has a lower weight. So I'll have to put the weight back
once the cluster recovers and the balancer goes back to its business.
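Restoring it afterwards is just the reverse, something like

  ceph osd reweight <osd-id> 1.0

once backfill has finished.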
But in any case -
On Fri, Apr 2, 2021 at 12:08 PM Alexander Sporleder wrote:
>
> Hello Sage, thank you for your response!
>
> I had some problems updating 15.2.8 -> 15.2.9 but after updating Podman
> to 3.0.1 and Ceph to 15.2.10 everything was fine again.
>
> Then I started the update 15.2.10 -> 16.2.0 and in the b