Hello,

I'm encountering an issue with Ceph when using it as the backend storage for 
OpenStack Cinder. Specifically, after deleting RBD snapshots through Cinder, 
I've noticed a significant increase in the removed_snaps_queue entries within 
the corresponding Ceph pool. This seems to degrade the pool's performance and 
space efficiency.
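For context, this is how I'm observing the growth (the pool name "volumes" is just an example for our Cinder backend pool; substitute your own):

```shell
# Show per-pool detail; since Octopus the output includes a
# removed_snaps_queue interval set for pools with deleted snapshots
ceph osd pool ls detail
```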

I understand that snapshot deletion in Cinder is an asynchronous operation, and 
Ceph itself uses a lazy deletion mechanism to handle snapshot removal. However, 
even after allowing sufficient time, the entries in removed_snaps_queue do not 
decrease as expected.
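In case it helps with diagnosis, these are the checks I'm running; the trim backlog and the OSD throttles below are stock Ceph commands, though defaults may differ between releases:

```shell
# Per-PG trim backlog: the SNAPTRIMQ_LEN column counts deleted
# snapshots still waiting to be trimmed on each PG
ceph pg dump pgs | head -5

# OSD-side throttles that pace snapshot trimming
ceph config get osd osd_snap_trim_sleep
ceph config get osd osd_pg_max_concurrent_snap_trims
```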

I have several questions for the community:

1. Are there recommended methods or best practices for managing or reducing 
   entries in removed_snaps_queue?
2. Is there any tool or command that can safely clear these residual snapshot 
   entries without affecting the integrity of active snapshots and data?
3. Is this issue known, and are there any bug reports or planned fixes related 
   to it?
Thank you very much for your assistance!
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io