On 2020-08-07 09:27, Manuel Lausch wrote:
> I cannot confirm that a larger memory target will solve the problem
> completely. In my case the OSDs have a 14 GB memory target and I still had
> a huge user I/O impact during snaptrim (many slow ops the whole time). Since
> I set bluefs_buffered_io=true it seems to work without issue.
> In my cluster I don't use RGW, but I don't see why
> different types of access to the cluster would affect the way the kernel
> manages its memory. In my experience the kernel begins to swap
> mostly for NUMA-related reasons and/or because of memory fragmentation.

Can you share the amount of buffer cache available on your storage nodes?

We run the OSDs with osd_memory_target=11G and 22 GB of buffer cache
available, and with buffered I/O enabled (Mimic 13.2.8).
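For comparison, this is roughly how we check the buffer cache and the
relevant OSD settings on a node. This is only a sketch: the osd.0 ID and the
socket path are examples, and on older releases the settings may need to go
through the admin socket rather than the config database.

```shell
# Buffer/page cache available on the storage node (the "buff/cache" column)
free -h

# Current values for a running OSD via its admin socket (osd.0 is an example)
ceph daemon osd.0 config get osd_memory_target
ceph daemon osd.0 config get bluefs_buffered_io

# bluefs_buffered_io can be changed at runtime; whether it persists across
# restarts depends on where it is set (ceph.conf vs. config database)
ceph daemon osd.0 config set bluefs_buffered_io true
```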

Thanks,

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl