It was worse with 1 MDS, therefore we moved to 2 active MDS with directory
pinning (so the balancer won't be an issue or make things extra complicated).
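Just as an illustration of what the pinning looks like (the mount point and
directory names below are hypothetical, not our actual layout):

    # pin one top-level directory to each MDS rank via the ceph.dir.pin vxattr
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projectA
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projectB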
The number of caps stays mostly the same, with some ups and downs. I would
guess it has something to do with caching the accessed directories.
Hi,
We are using CephFS with 182 TB of raw data and replica 2, and a single MDS
seemed to regularly run at around 4002 req/s. How many MDS & MON servers are
required? Our current Ceph cluster servers are listed below:
Client : 60
MDS: 3 (2 Active + 1 Standby)
MON: 4
MGR: 3 (1 Active + 2 Standby)
OSD: 52
PG: auto
The number of mons should ideally be odd. For production, 5 is usually the
right number.
MDS is a complicated question.
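If the cluster happens to be managed by cephadm, adjusting the mon count is a
single orchestrator call; this is only a sketch, and the right number still
depends on your failure domains:

    # tell the orchestrator to maintain 5 monitors
    ceph orch apply mon 5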
> On Aug 30, 2024, at 2:24 AM, s.dhivagar@gmail.com wrote:
>
> Hi,
>
> We are using CephFS with 182 TB of raw data and replica 2, and a single MDS
> seemed to regularly run at around 4002 req/s.
On Sat, 31 Aug 2024 at 15:42, Tim Holloway wrote:
>
> I would greatly like to know what the rationale is for avoiding
> containers.
>
> Especially in large shops. From what I can tell, you need to use the
> containerized Ceph if you want to run multiple Ceph filesystems on a
> single host.
On Fri, 30 Aug 2024 at 20:43, Milan Kupcevic wrote:
>
> On 8/30/24 12:38, Tim Holloway wrote:
> > I believe that the original Ansible installation process is deprecated.
>
> This would be bad news, as I repeatedly hear from admins running large
> storage deployments that they prefer to stay away from containers.
I would greatly like to know what the rationale is for avoiding
containers.
Especially in large shops. From what I can tell, you need to use the
containerized Ceph if you want to run multiple Ceph filesystems on a
single host. The legacy installations only support dumping everything
directly under
Oh, it got worse after the upgrade to Reef (we were running Quincy). With
Quincy the memory usage was also often around 95%, with some swap usage, but
we never ran out of both to the point of crashing.
Kind regards,
Sake
> On 31-08-2024 09:15 CEST, Alexander Patrakov wrote:
>
>
> Got it.
>
As a workaround, to reduce the impact of an MDS being slowed down by
excessive memory consumption, I would suggest installing earlyoom,
disabling swap, and configuring earlyoom as follows (usually through
/etc/sysconfig/earlyoom, but could be in a different place on your
distribution):
EARLYOOM_ARGS="-
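(The exact arguments were cut off above; purely as a hedged sketch, with the
threshold values and the preference for the MDS process being my own
illustration rather than a known-good recommendation, it could look roughly
like this:)

    # /etc/sysconfig/earlyoom (path may differ on your distribution)
    # -m 5                  : act when available memory drops below 5%
    # -r 3600               : print a memory report only once per hour
    # --prefer '^ceph-mds$' : prefer killing the MDS so a standby can take over
    EARLYOOM_ARGS="-m 5 -r 3600 --prefer '^ceph-mds$'"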
Got it.
However, to narrow down the issue, I suggest that you test whether it
still exists after the following changes:
1. Reduce max_mds to 1 (a command sketch follows this list).
2. Do not reduce max_mds to 1, but migrate all clients from a direct
CephFS mount to NFS.
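A minimal sketch of step 1, assuming the filesystem is simply named "cephfs"
(substitute your real fs name):

    # drop back to a single active MDS; the extra rank is stopped and its
    # subtrees are handed back to rank 0
    ceph fs set cephfs max_mds 1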
On Sat, Aug 31, 2024 at 2:55 PM Sake Ceph wrote:
>
> I was ta