Ok, thanks for the clarification. This does disprove my theory.
On Wed, Sep 4, 2024 at 12:30 AM Sake Ceph wrote:
But the client which is doing the rsync doesn't hold any caps after the rsync.
Cephfs-top shows 0 caps. Even a system reboot of the client doesn't make a
change.
Kind regards,
Sake
> On 03-09-2024 04:01 CEST, Alexander Patrakov wrote:
MDS cannot release an inode if a client has cached it (and thus can
have newer data than OSDs have). The MDS needs to know at least which
client to ask if someone else requests the same file.
MDS does ask clients to release caps, but sometimes this doesn't work,
and there is no good troubleshooting [...]
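For what it's worth, the per-client view can at least be inspected from the MDS
side, and an MDS can be asked to trim its cache and recall caps; a rough sketch
(rank/daemon addressing may differ in your deployment):

  # Show each client session and how many caps it holds
  ceph tell 'mds.*' session ls | grep -E '"id"|"hostname"|"num_caps"'
  # Ask rank 0 to drop its cache and recall client state (timeout in seconds)
  ceph tell mds.0 cache drop 300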
The folders contain a couple of million files, but are really static. We have
another folder with a lot of updates, and the MDS server for that folder does
indeed show a continuous increase in memory usage. But I would focus on the
app2 and app4 folders, because those have far fewer changes in them.
Can you tell if the number of objects increases in your cephfs between
those bursts? I noticed something similar in a 16.2.15 cluster as
well. It's not that heavily used, but it contains home directories and
development working directories etc. And when one user checked out a
git project, [...]
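The per-pool object counts are easy to sample between the bursts, for example:

  # Object counts per pool, including the CephFS data and metadata pools
  ceph df detail
  # Alternative view straight from RADOS
  rados df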
As a workaround, to reduce the impact of the MDS slowed down by
excessive memory consumption, I would suggest installing earlyoom,
disabling swap, and configuring earlyoom as follows (usually through
/etc/sysconfig/earlyoom, but could be in a different place on your
distribution):
EARLYOOM_ARGS="-
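As a purely illustrative sketch (the free-memory threshold and the
prefer/avoid regexes below are assumptions, adjust them to your hosts), such a
file could look like:

  # /etc/sysconfig/earlyoom -- illustrative values only
  EARLYOOM_ARGS="-m 10 --prefer ^ceph-mds$ --avoid ^(ceph-mon|ceph-osd|sshd|systemd)$"

The idea behind preferring ceph-mds would be that the runaway MDS gets killed
first and a standby takes over, instead of the whole host becoming
unresponsive.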
Oh, it got worse after the upgrade to Reef (we were running Quincy). With
Quincy the memory usage was also often around 95%, with some swap usage, but
neither was ever exhausted to the point of crashing.
Kind regards,
Sake
> On 31-08-2024 09:15 CEST, Alexander Patrakov wrote:
>
>
> Got it.
>
It was worse with 1 MDS, therefore we moved to 2 active MDS with directory
pinning (so the balancer won't be an issue / make things extra complicated).
The number of caps stays mostly the same, with some ups and downs. I would
guess it maybe has something to do with caching the accessed directories [...]
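For reference, the pinning itself is just a per-directory xattr; a sketch (the
mount point and rank assignment are made up):

  # Pin each application subtree to a fixed MDS rank
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/app2
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/app4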
Got it.
However, to narrow down the issue, I suggest that you test whether it
still exists after the following changes:
1. Reduce max_mds to 1.
2. Do not reduce max_mds to 1, but migrate all clients from a direct
CephFS mount to NFS.
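For step 1 that would be something like this (filesystem name "cephfs"
assumed):

  # Go back to a single active MDS and verify that only rank 0 stays active
  ceph fs set cephfs max_mds 1
  ceph fs status cephfs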
On Sat, Aug 31, 2024 at 2:55 PM Sake Ceph wrote:
I was talking about the hosts where the MDS containers are running. The
clients are all RHEL 9.
Kind regards,
Sake
> On 31-08-2024 08:34 CEST, Alexander Patrakov wrote:
Hello Sake,
The combination of two active MDSs and RHEL8 does ring a bell, and I
have seen this with Quincy, too. However, what's relevant is the
kernel version on the clients. If they run the default 4.18.x kernel
from RHEL8, please either upgrade to the mainline kernel or decrease
max_mds to 1.
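The kernel each client runs can be checked straight from the MDS session
metadata, for example:

  # Hostname and kernel version reported by every connected client
  ceph tell 'mds.*' session ls | grep -E '"hostname"|"kernel_version"'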
@Anthony: it's a small virtualized cluster and indeed swap shouldn't be used,
but this doesn't change the problem.
@Alexander: the problem is in the active nodes; the standby-replay ones don't
have issues anymore.
Last night's backup run increased the memory usage to 86% when rsync was
running for [...]
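Two quick ways to watch MDS memory in a cephadm deployment (the daemon name
below is only an example):

  # Memory use per MDS daemon as reported by the orchestrator
  ceph orch ps --daemon-type mds
  # Cache usage of one MDS versus its configured limit
  ceph tell mds.cephfs.host1.abcdef cache status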
On Fri, Aug 30, 2024 at 9:22 PM Sake Ceph wrote:
>
> I hope someone can help us with an MDS caching problem.
>
> Ceph version 18.2.4 with cephadm container deployment.
>
> Question 1:
> For me it's not clear how much cache/memory you should allocate for the MDS.
> Is this based on the number of op [...]
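For context, the knob here is mds_cache_memory_limit, which is a target rather
than a hard limit (the MDS can overshoot it temporarily); a sketch with a
purely illustrative 8 GiB value:

  # Set the MDS cache target to 8 GiB and read it back
  ceph config set mds mds_cache_memory_limit 8589934592
  ceph config get mds mds_cache_memory_limit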