Hello,
what Kostis said, in particular with regard to changing crush weights
(NOT reweight).
Also the output of "ceph -s", if you please; insufficient PGs can make
OSD imbalances worse.
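If in doubt, a quick way to see the PG counts per pool (a minimal
check; "rbd" below is just a placeholder pool name):

    # pg_num / pgp_num for every pool
    ceph osd pool ls detail

    # or for a single pool, e.g. "rbd"
    ceph osd pool get rbd pg_num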
Look at your output of "ceph df detail" and "ceph osd tree".
Find the worst outliers and carefully (a few % at a time) adjust their
crush weights.
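Something along these lines (just a sketch; osd.17 and the weights are
made-up values, start from what "ceph osd tree" actually shows on your
cluster):

    # per-OSD utilisation, handy for spotting the outliers
    ceph osd df

    # say osd.17 sits at ~95% with a crush weight of 1.81898:
    # lower it a few percent, let backfill finish, re-check, repeat
    ceph osd crush reweight osd.17 1.75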
Hi Hauke,
you could increase the mon/osd full/nearfull ratios, but at this level
of disk space scarcity things may need your constant attention,
especially in case of a failure, given the risk of shutting down the
cluster IO. Modifying crush weights may be of use too.
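For the ratios, something like this (a sketch; the values are only an
example, and on Jewel these are still set via the PG map):

    # defaults are nearfull 0.85 / full 0.95; raising them buys a
    # little headroom but leaves less margin for failures
    ceph pg set_nearfull_ratio 0.90
    ceph pg set_full_ratio 0.97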
Regards,
Kostis
On 15 June 2016, Hauke wrote:
Hello,
I have a Ceph Jewel cluster with 5 servers and 40 OSDs.
The cluster is very full, but at the moment I cannot use 10 percent of
the volume because ceph health says some hard disks are too full. They
are between 75 and 95 percent full.