Hey folks:
        I was wondering if the community could provide some advice. Over time, and 
due to some external issues, we have managed to accumulate thousands of 
snapshots of RBD images, which are now in need of cleaning up.  I recently 
attempted to roll through a “for” loop that runs “rbd snap rm” on each 
snapshot sequentially, waiting for each rbd command to finish before moving 
on to the next one, of course.  Shortly after starting this, I began seeing 
thousands of slow ops, and a few of our guest VMs naturally became 
unresponsive.
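
For reference, the loop is essentially the following sketch (POOL/IMAGE is a 
placeholder, and I’m pulling the snapshot names out of the second column of 
“rbd snap ls”):

    for SNAP in $(rbd snap ls POOL/IMAGE | awk 'NR > 1 {print $2}'); do
        # Remove one snapshot and wait for the rbd command to return
        # before moving on to the next one.
        rbd snap rm POOL/IMAGE@"${SNAP}"
    done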

My questions are:
        - Is this expected behavior?
        - Is the background cleanup asynchronous from the “rbd snap rm” command?
                - If so, are there any OSD parameters I can set to reduce the 
impact on production?
        - Would “rbd snap purge” (the single command sketched below) be any 
different?  I expect not, since fundamentally, rbd would be performing the 
same action that my loop does.
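
For comparison, my understanding is that the purge variant would simply be:

    # Removes every snapshot of the image in one command; as far as I can
    # tell, it iterates over the snapshots internally, much like my loop.
    rbd snap purge POOL/IMAGE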

Relevant details are as follows, though I’m not sure cluster size *really* has 
any effect here:
        - Ceph: version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
        - 5 storage nodes, each with:
                - 10x 2TB 7200 RPM SATA Spindles (for a total of 50 OSDs)
                - 2x Samsung MZ7LM240 SSDs (used as journals for the OSDs)
                - 64GB RAM
                - 2x Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
                - 20Gbit LACP port channel via Intel X520 dual-port 10GbE NIC

Let me know if I’ve missed something fundamental.

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com 
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 20000 / ISO 27001 / CMMI Level 3

Notice: This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) and may contain confidential and privileged 
information. Any unauthorized review, copy, use, disclosure, or distribution is 
STRICTLY prohibited. If you are not the intended recipient, please contact the 
sender by reply e-mail and destroy all copies of the original message.
