[ceph-users] Re: unbalanced OSDs

2023-08-03 Thread Spiros Papageorgiou
On 03-Aug-23 12:11 PM, Eugen Block wrote:
> ceph balancer status

I changed the PGs and it started rebalancing (and turned the autoscaler off), so now it will not report a status. It reports:

"optimize_result": "Too many objects (0.088184 > 0.05) are misplaced; try again later"

Let's wait a fe…
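
For reference, a minimal sketch of where that 0.05 threshold comes from: it is the mgr option target_max_misplaced_ratio, which caps how much data the balancer (and the PG autoscaler) will allow to be misplaced at once. Raising it is optional and shown here purely as an illustration:

    # inspect the balancer and the threshold the message refers to (default 0.05 = 5%)
    ceph balancer status
    ceph config get mgr target_max_misplaced_ratio

    # optionally allow more concurrent data movement, e.g. 7%
    ceph config set mgr target_max_misplaced_ratio 0.07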

[ceph-users] unbalanced OSDs

2023-08-03 Thread Spiros Papageorgiou
Hi all,

I have a Ceph cluster with 3 nodes, running Ceph version 16.2.9. There are 7 SSD OSDs on each server and one pool that resides on these OSDs. My OSDs are terribly unbalanced:

ID  CLASS  WEIGHT    REWEIGHT  SIZE  RAW USE  DATA  OMAP  META  AVAIL    %USE   VAR   PGS  STATUS  TYPE N…
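
For context, a minimal sketch of how such an imbalance is usually inspected and handed to the built-in balancer (assuming all clients are at least Luminous-capable, which a 16.2.9 cluster normally satisfies):

    # show per-OSD utilization, variance and PG counts
    ceph osd df tree

    # enable the automatic balancer in upmap mode
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status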