Hi all,

I wanted to test Dan's upmap-remapped script for adding new OSDs to a
cluster (then letting the balancer gradually move PGs to the new OSDs
afterwards).

I've created a fresh (virtual) 12.2.10 four-node cluster with very small
disks (16 GB each) and two OSDs per node, then put ~20 GB of data on the
cluster.

Now when I set the norebalance flag and add a new OSD, 99% of the PGs end
up recovering or in recovery_wait; only a few end up in backfill_wait.

Recovery starts as expected (norebalance only stops backfilling PGs) and
finishes eventually.
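For reference, the sequence I ran was roughly the following. This is a sketch against a live cluster; the device path is a placeholder, and the state-counting one-liner assumes the default pgs_brief column layout:

```shell
# Stop backfill data movement before the new OSD comes in
ceph osd set norebalance

# Add the new OSD on the target node (device path is a placeholder)
ceph-volume lvm create --data /dev/sdX

# Count PG states: most show recovering/recovery_wait, only a few backfill_wait
ceph pg dump pgs_brief 2>/dev/null | awk 'NR>1 {print $2}' | sort | uniq -c

# Once recovery settles, apply Dan's script and let the balancer take over
./upmap-remapped.py | sh
ceph osd unset norebalance
```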

The upmap-remapped script only works on PGs that need to be backfilled.
It does work for the handful of PGs in backfill_wait status, but my
question is:

When does Ceph do recovery instead of backfilling? Does that only happen
when the cluster is rather empty, or what are the criteria? Are the OSDs
too small?

Kind regards,
Caspar
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com