We are currently removing a bucket with a large number of objects:

    radosgw-admin bucket rm --bucket=$BUCKET --bypass-gc --purge-objects
This process was killed by the out-of-memory killer. Looking at the graphs, we see a continuous increase in memory usage for this process of about +24 GB per day. The removal rate is about 3 M objects per day.
It is not the fastest hardware, and the index pool is still without SSDs. The bucket is sharded with 1024 shards. We are on Nautilus 14.2.1, now with about 500 OSDs.
So for this bucket of 60 M objects, we would need about 480 GB of RAM to get through. Or is there a workaround? Should I open a tracker issue?
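For reference, the 480 GB figure follows directly from the observed rates above (a back-of-envelope estimate, not a measurement):

```shell
# 60 M objects at a removal rate of ~3 M objects/day -> ~20 days runtime.
# Memory grows ~24 GB/day, so peak usage is ~20 * 24 = 480 GB.
days=$(( 60 / 3 ))
ram_gb=$(( days * 24 ))
echo "${days} days, ~${ram_gb} GB RAM"
```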
The killed remove command can simply be run again, but it gets killed again before it finishes. It also has to run for some time before it actually continues removing objects, and this "wait time" is increasing as well: last time, with about 16 M objects already removed, the wait time was nearly 9 hours. Memory also ramps up during this phase, though less steeply.
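Since the killed command can simply be re-run and picks up where it left off, one workaround we are considering (an assumption on my part, not a documented feature) is to wrap it in a retry loop and cap its memory with ulimit, so the shell's limit kills it predictably instead of the host-wide OOM killer. The 32 GB cap and 60 s pause are placeholders to tune:

```shell
# Hedged sketch: retry the removal under a memory cap until it succeeds.
# The cap (32 GB) and sleep interval are assumptions; adjust to the host.
if command -v radosgw-admin >/dev/null; then
    ulimit -v $(( 32 * 1024 * 1024 ))   # cap virtual memory (KiB) at ~32 GB
    until radosgw-admin bucket rm --bucket="$BUCKET" --bypass-gc --purge-objects; do
        echo "bucket rm killed, restarting..."
        sleep 60
    done
fi
```

The downside is that each restart pays the growing "wait time" described above before removal resumes.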
BTW, it feels strange that removing objects is about three times slower than adding them.
Harry
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com