Hi,

Is the balancer on, and which mode is enabled?

ceph balancer status
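
If it turns out to be off, a rough sketch for enabling it in upmap mode (assuming all your clients are at least Luminous, which you can check with "ceph features"):

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on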

You should definitely split the PGs; aim for 100 - 150 PGs per OSD at first. I would inspect the PG sizes of the new OSDs:

ceph pg ls-by-osd 288 (column BYTES)

and compare them with the older OSDs. If your PGs are very large, just a few of them can fill up an OSD quite quickly, since your OSDs are "only" 1.7 TB.
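
For the split itself, something along these lines; the pool name and target are only placeholders. Assuming replicated pools with size 3, a target of ~128 PGs per OSD on 384 OSDs works out to roughly 384 * 128 / 3 = 16384 PGs cluster-wide across all pools, so put the extra PGs into the pool(s) that actually hold the bulk of the data:

ceph osd pool get <pool> pg_num
ceph osd pool set <pool> pg_num <new_pg_num>

Since Nautilus pgp_num follows automatically and the increase is applied gradually; with OSDs already nearfull I would raise it in steps and watch ceph osd df tree while the data moves.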

Quoting Yunus Emre Sarıpınar <yunusemresaripi...@gmail.com>:

I have 6 SATA SSDs and 12 OSDs per server in a 24-server cluster. This
environment was created on the Nautilus version.

I upgraded this environment to Octopus 6 months ago. The cluster is
working healthily.

I added 8 new servers and set them up the same way, with 6 SATA SSDs and
12 OSDs each.

I did not change the number of PGs in the environment; I have 8192 PGs.

The problem is that in my ceph -s output the remapped PG and misplaced
object states are gone, but there is now a warning of 6 nearfull OSDs and
4 pools nearfull.

I also saw in the ceph df output that my pools are fuller than normal.

In the output of the ceph osd df tree command, I observed that the
utilization of the newly added OSDs is around 80%, while the old OSDs are
at around 30%.

How can I even out this imbalance?

Note: I am sharing the crushmap and osd df tree output with you in the
attachment.
My new OSDs are 288-384.
My new servers are ekuark13,14,15,16 and bkuark13,14,15,16.

