Hi!

I've been working on this for quite some time now and I think it's ready for 
some broader testing and feedback.

https://github.com/TheJJ/ceph-balancer

It's an alternative, standalone balancer implementation that optimizes for 
equal OSD storage utilization and balanced PG placement across all pools.

It doesn't change your cluster in any way; it just prints the commands you can 
run to apply the PG movements (see the example below).
Please play around with it :)
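
Just to illustrate the idea (the PG and OSD ids below are made up): the 
generated commands are plain `ceph osd pg-upmap-items` invocations, roughly 
like

    ceph osd pg-upmap-items 4.2d 103 56
    ceph osd pg-upmap-items 4.41 77 12 80 3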

Quickstart example: generate 10 PG movements on hdd to stdout

    ./placementoptimizer.py -v balance --max-pg-moves 10 --only-crushclass hdd | tee /tmp/balance-upmaps
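
Since the output is just those ceph commands, you can review the file and, if 
the proposed moves look sane, apply them by simply executing it, e.g.:

    bash /tmp/balance-upmaps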

When there are remapped PGs (e.g. after applying the upmaps generated above), 
you can inspect their progress with:

    ./placementoptimizer.py showremapped
    ./placementoptimizer.py showremapped --by-osd
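
To keep an eye on the backfill progress continuously, you can of course wrap 
that in watch:

    watch -n 60 ./placementoptimizer.py showremapped --by-osd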

And you can get a nice pool and OSD usage overview:

    ./placementoptimizer.py show --osds --per-pool-count --sort-utilization


Of course there are many more features and optimizations to be added, but it 
has already served us very well: we reclaimed terabytes of previously 
unavailable storage in situations where the `mgr balancer` could no longer 
optimize.

What do you think?

Cheers
  -- Jonas