There is a ceph command "reweight-by-utilization" you can run to
adjust the OSD weights automatically based on their utilization:
http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem
Some people run this on a periodic basis (e.g. from a cron script).
Check the mailing list archives for examples.
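A minimal sketch of such a periodic run (the threshold value here is illustrative, not from the thread) using the dry-run variant first:

```shell
#!/bin/sh
# Illustrative cron script; the 110% threshold is an assumption.
# Dry run: report which OSDs deviate more than 110% from the mean
# utilization and what their new weights would be, without changing anything.
ceph osd test-reweight-by-utilization 110

# Apply the reweight for real with the same threshold.
ceph osd reweight-by-utilization 110
```

Running the test variant first and eyeballing its output before applying is the usual precaution, since a reweight triggers data movement.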
How many PGs? How many pools, and how much data? Please post the output of rados df.
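The numbers asked for above can be gathered with these standard commands (no cluster-specific assumptions):

```shell
# Per-OSD utilization and PG count
ceph osd df tree

# Pool list with pg_num settings
ceph osd pool ls detail

# Per-pool object and byte counts
rados df
```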
On 13/09/2017 22:30, Sinan Polat wrote:
> Hi,
>
> I have 52 OSDs in my cluster, all with the same disk size and same weight.
>
> When I perform a:
>
> ceph osd df
>
> The disk with the least available space: 863G
Hi,
I have 52 OSDs in my cluster, all with the same disk size and same weight.
When I perform a:
ceph osd df
The disk with the least available space: 863G
The disk with the most available space: 1055G
I expect the available space or the usage on the disks to be the same, since
the disks all have the same size and weight.
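Some spread is expected even with identical weights: CRUSH places PGs pseudo-randomly, so per-OSD PG counts (and hence usage) vary statistically, and the relative variance shrinks as the PG count per OSD grows. A small sketch of that effect (the PG count is made up for illustration, and uniform random choice is only a crude stand-in for CRUSH):

```python
import random
from collections import Counter

def pg_spread(num_pgs, num_osds=52, seed=1):
    """Assign num_pgs placement groups uniformly at random to num_osds
    OSDs (a crude stand-in for CRUSH's pseudo-random placement) and
    return the per-OSD PG counts."""
    rng = random.Random(seed)
    counts = Counter(rng.randrange(num_osds) for _ in range(num_pgs))
    return [counts.get(i, 0) for i in range(num_osds)]

# 2048 PGs is a made-up example: ~39 PGs per OSD on average,
# but the min and max per-OSD counts will differ noticeably.
per_osd = pg_spread(2048)
print(min(per_osd), max(per_osd))
```

The fewer PGs each OSD holds, the larger this statistical imbalance is in relative terms, which is why the PG count question above matters.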