Running your OSDs with resource limitations is not so straightforward. I would 
guess that if you are running close to full resource utilization on your nodes, 
it makes more sense to ensure everything stays within its specified limits. 
(Setting aside the question of whether you would even want to operate such an 
environment, and whether you want to force OSDs into OOM kills.)

However, if you are not walking such a thin line and have, e.g., more memory 
available, it is simply wasteful not to use that memory. I do not really know 
how advanced most orchestrators are nowadays, or whether you can dynamically 
change resource limits on running containers. But if not, you will just not 
use that memory as cache, and not using memory as cache means increased disk 
I/O and decreased performance.
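For reference, letting OSDs use spare memory as cache is just a config change; 
a minimal ceph.conf sketch (the 8 GiB value is purely illustrative, not a 
recommendation):

```ini
[osd]
# Illustrative value only: let each OSD autotune its caches toward ~8 GiB.
# Note this is a soft target, not a hard limit, so a container with a hard
# memory limit needs headroom above it.
osd memory target = 8589934592
```

As far as I know this can also be changed at runtime via the monitors with 
`ceph config set osd osd_memory_target <bytes>`, which would sidestep the 
"can the orchestrator change limits dynamically" question at least on the 
Ceph side.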

I think the Linux kernel is probably better at deciding how to share resources 
among my OSDs than I am, and that is one reason why I do not put them in 
containers. (But I am still on Nautilus, so I will keep an eye on this 'noisy 
neighbor' issue when upgrading ;) )


> 
> and that is exactly why I run osds containerized with limited cpu and
> memory as well as "bluestore cache size", "osd memory target", and "mds
> cache memory limit".  Osd processes have become noisy neighbors in the
> last
> few versions.
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io