Hello,

After reaching a certain ceiling of the host/PG ratio, moving an
empty bucket in causes a small rebalance:

ceph osd crush add-bucket 10.10.2.13 host
ceph osd crush move 10.10.2.13 root=default rack=unknownrack
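
In case it helps to reproduce, the bucket can be inspected in the
decompiled crushmap to confirm it really is empty (file names are
arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# the new host bucket should appear with no items and weight 0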

I have two pools. One is very large and keeps a proper PG/OSD ratio,
but the other actually contains fewer PGs than the number of active
OSDs, and after the empty bucket is inserted it goes through a
rebalance, even though the actual placement map has not changed.
Keeping in mind that this case is far from any sane production
configuration, is this expected behavior?
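
In case it is useful, this is roughly how I would compare the PG
mappings around the insert to see whether anything actually moved
(file names are arbitrary):

# dump PG -> OSD mappings before adding the empty bucket
ceph pg dump pgs_brief > pgs_before.txt
# ... add and move the bucket as above ...
ceph pg dump pgs_brief > pgs_after.txt
# any differing line is a PG whose up/acting set or state changed
diff pgs_before.txt pgs_after.txt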

Thanks!
