[ceph-users] Re: [ceph-ansible] rolling-upgrade variables not present

2021-08-19 Thread Dylan McCulloch
> I deployed Ceph 14.2.16 with ceph-ansible stable-4.0 a while back, and want to test upgrading. So for now I am trying rolling_update.yml for latest 14.x (before trying stable-5.0 and 15.x) but getting some errors, which seem to indicate empty or missing variables.
>
> Initially monitor_inte…
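
For anyone hitting similar missing-variable errors, a minimal sketch of how a stable-4.0 rolling upgrade is typically invoked (the paths, inventory name and example variables below are assumptions for illustration, not taken from this thread):

    # Sketch only: assumes group_vars/all.yml already defines the variables the
    # play reports as missing (e.g. monitor_interface or public_network).
    cd /path/to/ceph-ansible && git checkout stable-4.0
    # rolling_update.yml prompts for confirmation unless ireallymeanit=yes is passed
    ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml \
        -e ireallymeanit=yes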

[ceph-users] Re: As the cluster is filling up, write performance decreases

2021-04-13 Thread Dylan McCulloch
We noticed this degraded write performance recently too, when the nearfull flag is present (cephfs kernel client, kernel 4.19.154). It appears to be due to forced synchronous writes when nearfull: https://github.com/ceph/ceph-client/blob/558b4510f622a3d96cf9d95050a04e7793d343c7/fs/ceph/file.c#L1837-L1…
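
A minimal sketch (assumed commands, not from the thread) for confirming the nearfull condition that triggers that synchronous-write path and inspecting the thresholds involved:

    # Check which OSDs/pools are reported nearfull and what the ratios are set to.
    ceph health detail | grep -i nearfull
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
    # Raising the threshold only buys time; adding capacity or rebalancing is the real fix.
    ceph osd set-nearfull-ratio 0.90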

[ceph-users] Re: ceph slow at 80% full, mds nodes lots of unused memory

2021-02-25 Thread Dylan McCulloch
Simon Oosthoek wrote:
> On 24/02/2021 22:28, Patrick Donnelly wrote:
> > Hello Simon,
> >
> > On Wed, Feb 24, 2021 at 7:43 AM Simon Oosthoek wrote:
> >
> > On 24/02/2021 12:40, Simon Oosthoek wrote:
> > Hi
> >
> > we've been running our Ceph cluster for near…

[ceph-users] Re: Reducing RAM usage on production MDS

2020-06-10 Thread Dylan McCulloch
…). Cheers, Dylan

> On Wed, May 27, 2020 at 10:09 PM Dylan McCulloch wrote:
>>
>> Hi all,
>>
>> The single active MDS on one of our Ceph clusters is close to running out of RAM.
>>
>> MDS total system RAM = 528GB
>> MDS current free system RAM = 4GB…

[ceph-users] Reducing RAM usage on production MDS

2020-05-27 Thread Dylan McCulloch
Hi all,

The single active MDS on one of our Ceph clusters is close to running out of RAM.

MDS total system RAM = 528GB
MDS current free system RAM = 4GB
mds_cache_memory_limit = 451GB
current mds cache usage = 426GB

Presumably we need to reduce our mds_cache_memory_limit and/or mds_max_caps_pe…
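
As a rough illustration only (daemon name and values below are made up, not advice from this thread), both limits can be stepped down at runtime and the effect watched on the admin socket:

    # Step the cache limit down gradually so the MDS trims in stages rather than all at once.
    ceph tell mds.<name> injectargs '--mds_cache_memory_limit 322122547200'   # ~300 GiB, illustrative
    ceph tell mds.<name> injectargs '--mds_max_caps_per_client 500000'
    # Watch cache size and trimming progress via the admin socket:
    ceph daemon mds.<name> cache status
    ceph daemon mds.<name> perf dump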

[ceph-users] Re: upmap balancer and consequences of osds briefly marked out

2020-05-01 Thread Dylan McCulloch
> …Ster
> Sent: Friday, 1 May 2020 5:53 PM
> To: Dylan McCulloch
> Cc: ceph-users
> Subject: Re: [ceph-users] upmap balancer and consequences of osds briefly marked out
>
> Hi,
>
> You're correct that all the relevant upmap entries are removed when an OSD is marked…

[ceph-users] upmap balancer and consequences of osds briefly marked out

2020-05-01 Thread Dylan McCulloch
Hi all,

We're using the upmap balancer, which has made a huge improvement in evenly distributing data across our osds and has provided a substantial increase in usable capacity. We are currently on ceph version 12.2.13 (luminous). We ran into a firewall issue recently which led to a large number of osds being…
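
A hedged sketch (commands assumed, not taken from the thread) of the kind of guard rails worth considering during such an incident, plus a quick way to see how many upmap entries remain in the osdmap afterwards:

    # Prevent OSDs from being marked out automatically while the network issue is fixed.
    ceph osd set noout
    ceph osd set norebalance        # optional: also pause data movement while OSDs flap
    # ...restore connectivity, confirm OSDs are back up, then remove the flags...
    ceph osd unset norebalance
    ceph osd unset noout
    # Count the pg_upmap_items entries still present in the osdmap.
    ceph osd dump | grep -c pg_upmap_items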