I was talking about the hosts that the MDS containers are running on. The 
clients are all RHEL 9.
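
For reference, these are roughly the commands that can be used to watch the MDS cache and apply the suggested workaround. This is only a sketch: the filesystem name "cephfs" and the daemon name "mds.0" are placeholders, substitute your own.

```shell
# Show the filesystem layout, including active and standby-replay MDS ranks.
ceph fs status

# Compare actual MDS cache usage against the configured limit.
# "mds.0" is a placeholder for your MDS daemon name.
ceph tell mds.0 cache status
ceph config get mds mds_cache_memory_limit

# On each client, check whether it runs the default RHEL 8 4.18.x kernel.
uname -r

# If the clients are on the old kernel, drop back to a single active MDS
# as suggested. "cephfs" is a placeholder for your filesystem name.
ceph fs set cephfs max_mds 1
```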

Kind regards, 
Sake 

> Op 31-08-2024 08:34 CEST schreef Alexander Patrakov <patra...@gmail.com>:
> 
>  
> Hello Sake,
> 
> The combination of two active MDSs and RHEL8 does ring a bell, and I
> have seen this with Quincy, too. However, what's relevant is the
> kernel version on the clients. If they run the default 4.18.x kernel
> from RHEL8, please either upgrade to the mainline kernel or decrease
> max_mds to 1. If they run a modern kernel, then it is something I do
> not know about.
> 
> On Sat, Aug 31, 2024 at 1:21 PM Sake Ceph <c...@paulusma.eu> wrote:
> >
> > @Anthony: it's a small virtualized cluster and indeed swap shouldn't be 
> > used, but that doesn't change the problem.
> >
> > @Alexander: the problem is on the active nodes; the standby-replay ones 
> > don't have issues anymore.
> >
> > Last night's backup run increased the memory usage to 86% while rsync was 
> > running for app2, and it dropped to 77.8% when it was done. While the rsync for 
> > app4 was running it increased to 84%, dropping to 80% afterwards. After a few 
> > hours it has now settled at 82%.
> > It looks to me like the MDS is caching something forever even while it isn't 
> > being used.
> >
> > The underlying hosts are running RHEL 8. An upgrade to RHEL 9 is planned, but 
> > we hit some issues with automatically upgrading the hosts.
> >
> > Kind regards,
> > Sake
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 
> 
> -- 
> Alexander Patrakov