Re: [ceph-users] Ceph Balancer Limitations

2019-09-13 Thread Adam Tygart
Thanks. I moved back to crush-compat mapping; the pool that was at "90% full" is now under 76% full. Before doing that, I had the automatic balancer off and ran 'ceph balancer optimize test'. It ran for 12 hours before I killed it. In upmap mode, it was "balanced", or at least as balanced as it could be…
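
For anyone following along, the sequence described above looks roughly like this; the plan name "test" is taken from the message, the rest is the standard balancer module CLI:

    ceph balancer off                 # stop the automatic balancer first
    ceph balancer mode crush-compat   # switch from upmap back to crush-compat
    ceph balancer optimize test       # build an optimization plan named "test"
    ceph balancer show test           # inspect the proposed changes
    ceph balancer execute test        # apply the plan
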

Re: [ceph-users] Ceph Balancer Limitations

2019-09-11 Thread Konstantin Shalygin
We're using Nautilus 14.2.2 (upgrading soon to 14.2.3) on 29 CentOS OSD servers. We've got a large variation of disk sizes and host densities, such that the default CRUSH mappings lead to an unbalanced data and PG distribution. We enabled the balancer manager module in pg upmap mode. The balancer…
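
To gauge how unbalanced the cluster actually is before and after a balancer run, the usual checks are roughly:

    ceph osd df tree      # per-OSD utilization and PG counts, grouped by host
    ceph balancer eval    # score of the current distribution (lower is better)
    ceph balancer status  # active mode and whether any plans are pending
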

[ceph-users] Ceph Balancer Limitations

2019-09-11 Thread Adam Tygart
Hello all, We're using Nautilus 14.2.2 (upgrading soon to 14.2.3) on 29 CentOS OSD servers. We've got a large variation of disk sizes and host densities, such that the default CRUSH mappings lead to an unbalanced data and PG distribution. We enabled the balancer manager module in pg upmap mode.
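
For reference, enabling the balancer in upmap mode on Nautilus is roughly the following; note that upmap requires all clients to be luminous or newer:

    ceph mgr module enable balancer                    # if not already enabled
    ceph osd set-require-min-compat-client luminous    # upmap needs luminous+ clients
    ceph balancer mode upmap
    ceph balancer on
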