In the mists of time (Luminous) I watched a cluster react rather badly to a 
user issuing ~5000 RBD snap trims at once.  At the time I raised my local 
values of osd_snap_trim_cost and osd_snap_trim_sleep (now 
osd_snap_trim_sleep_???) way up to spread the impact.  The subject cluster was 
roughly half Filestore and half BlueStore; the BlueStore OSDs handled it much 
better than the Filestore OSDs did.

> On Apr 25, 2025, at 2:11 PM, Eugen Block <ebl...@nde.ag> wrote:
> 
> Interesting, I did something similar just a few weeks back, flattening some 
> images (I didn't look at their sizes or at snapshot overlaps), but I didn't 
> see any spikes in memory usage at all. At the time the cluster was running 
> the latest Pacific.
> 
> Quoting Dominique Ramaekers <dominique.ramaek...@cometal.be>:
> 
>> Hi,
>> 
>> Housekeeping... I was cleaning up my snapshots and flattening clones... 
>> Suddenly I ran out of memory on my nodes!
>> 
>> A 4-node cluster, each node with 10 SSD OSDs and a total storage size of 
>> 25 TiB. Each node has about 45 GiB of free (available) memory in normal 
>> operation.
>> 
>> After flattening several images and removing dozens of snapshots, free 
>> memory was probably already lower than the usual 45 GiB. While I was 
>> flattening a 75 GiB image, 2 out of 4 nodes ran out of memory. One node 
>> even started killing processes seemingly at random to free up memory... 
>> After regaining control over the system, I put that node in maintenance 
>> and rebooted it. After the reboot, free memory was around 70 GiB. 
>> Overnight, all of the nodes were back at the usual 45 GiB.
>> 
>> Today I checked the free memory of each node: 45 GiB free. So I flattened 
>> another 75 GiB image, and yes, free memory dropped from 45 GiB to 5 GiB 
>> really fast!
>> 
>> Is there a way to avoid this behavior of the cluster?
>> 
>> Greetings,
>> 
>> Dominique.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
