Of course:

free -h
              total        used        free      shared  buff/cache   available
Mem:          125Gi        96Gi       9.8Gi       4.0Gi        19Gi       7.6Gi
Swap:            0B          0B          0B
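
If it helps to narrow things down, something like this should show where the kernel's share goes (slab vs vmalloc vs page tables, all standard /proc/meminfo fields), plus the biggest slab caches by size:

# grep -E '^(Slab|SReclaimable|SUnreclaim|VmallocUsed|PageTables)' /proc/meminfo
# slabtop -o -s c | head -n 15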


Luis Domingues
Proton AG


------- Original Message -------
On Monday, July 24th, 2023 at 16:42, Konstantin Shalygin <k0...@k0ste.ru> wrote:


> Hi,
> 
> Can you paste `free -h` output for these hosts?
> 
> 
> k
> Sent from my iPhone
> 
> > On 24 Jul 2023, at 14:42, Luis Domingues luis.doming...@proton.ch wrote:
> > 
> > Hi,
> > 
> > So after looking into OSD memory usage, which seems to be fine, on a
> > v16.2.13 cluster running with cephadm on EL8, it seems that the kernel
> > is using a lot of memory.
> > 
> > # smem -t -w -k
> > Area                          Used       Cache    Noncache
> > firmware/hardware                0           0           0
> > kernel image                     0           0           0
> > kernel dynamic memory        65.0G       18.6G       46.4G
> > userspace memory             50.1G      260.5M       49.9G
> > free memory                   9.9G        9.9G           0
> > -------------------------------------------------------
> >                             125.0G       28.8G       96.3G
> > 
> > Comparing with another similar cluster, same OS, same Ceph version, but
> > running packages instead of containers, and whose machines have a little
> > bit more memory:
> > 
> > # smem -t -w -k
> > Area                          Used       Cache    Noncache
> > firmware/hardware                0           0           0
> > kernel image                     0           0           0
> > kernel dynamic memory        52.8G       50.5G        2.4G
> > userspace memory            123.9G      198.5M      123.7G
> > free memory                  10.6G       10.6G           0
> > -------------------------------------------------------
> >                             187.3G       61.3G      126.0G
> > 
> > Does anyone have an idea why the kernel needs a lot more memory when
> > running containers with podman?
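> > 
> > In case it is slab (for example dentry/inode caches from overlayfs),
> > dropping the reclaimable slab on a test host should release it, and the
> > per-cgroup kernel memory counters can be compared under cgroup v1 (the
> > EL8 default); the machine.slice path below is only an example:
> > 
> > # sync; echo 2 > /proc/sys/vm/drop_caches
> > # cat /sys/fs/cgroup/memory/machine.slice/memory.kmem.usage_in_bytes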
> > 
> > Luis Domingues
> > Proton AG
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
