>
>I deployed Ceph 14.2.16 with ceph-ansible stable-4.0 a while back, and
>now want to test upgrading. For now I am trying rolling_update.yml to get to
>the latest 14.x (before trying stable-5.0 and 15.x), but I am getting errors
>that seem to indicate empty or missing variables.
>
>Initially monitor_interface ...
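
For what it's worth, the variables rolling_update.yml complains about
(monitor_interface in this case) can be supplied explicitly on the command
line, which helps rule out group_vars not being picked up for the upgrade run.
A rough sketch only: the inventory path and interface name below are
placeholders for your own values, and -e ireallymeanit=yes is the usual way to
skip the confirmation prompt the playbook asks for:

    # run the ceph-ansible rolling upgrade with explicit variable overrides
    ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml \
        -e ireallymeanit=yes \
        -e monitor_interface=eth0

If that gets further, the underlying issue may simply be that rolling_update.yml
is being run with a different inventory/group_vars than the original site.yml
deployment.
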
We recently noticed this degraded write performance too when the nearfull flag
is present (cephfs kernel client, kernel 4.19.154).
It appears to be due to forced synchronous writes when the cluster is nearfull:
https://github.com/ceph/ceph-client/blob/558b4510f622a3d96cf9d95050a04e7793d343c7/fs/ceph/file.c#L1837-L1
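
If adding capacity or rebalancing will take a while, one short-term workaround
is to nudge the nearfull threshold up so the kernel client stops forcing
synchronous writes. A sketch only, and it hides the warning rather than fixing
it; 0.87 is just an example value (the default nearfull ratio is 0.85):

    # see which OSDs are triggering the nearfull flag
    ceph health detail
    ceph osd df

    # example only: raise the nearfull ratio slightly; this does not add capacity
    ceph osd set-nearfull-ratio 0.87
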
Simon Oosthoek wrote:
> On 24/02/2021 22:28, Patrick Donnelly wrote:
> > Hello Simon,
> >
> > On Wed, Feb 24, 2021 at 7:43 AM Simon Oosthoek wrote:
> >
> > > On 24/02/2021 12:40, Simon Oosthoek wrote:
> > > > Hi
> > > >
> > > > we've been running our Ceph cluster for near
Cheers,
Dylan
>On Wed, May 27, 2020 at 10:09 PM Dylan McCulloch wrote:
>>
>> Hi all,
>>
>> The single active MDS on one of our Ceph clusters is close to running out of
>> RAM.
>>
>> MDS total system RAM = 528GB
>> MDS current free system RAM = 4
Hi all,
The single active MDS on one of our Ceph clusters is close to running out of
RAM.
MDS total system RAM = 528GB
MDS current free system RAM = 4GB
mds_cache_memory_limit = 451GB
current mds cache usage = 426GB
Presumably we need to reduce our mds_cache_memory_limit and/or
mds_max_caps_per_client.
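
Note that the MDS routinely uses noticeably more RSS than
mds_cache_memory_limit, so the limit is normally set well below physical RAM.
A rough sketch of checking and lowering it at runtime; the values below are
examples only, and "ceph config set" assumes Mimic/Nautilus or newer (on older
releases the same options would go through injectargs or ceph.conf):

    # what the MDS itself reports for cache usage
    ceph daemon mds.<id> cache status

    # example values only: lower the cache limit and the per-client caps limit
    ceph config set mds mds_cache_memory_limit 274877906944    # 256 GiB
    ceph config set mds mds_max_caps_per_client 500000
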
>From: ...Ster
>Sent: Friday, 1 May 2020 5:53 PM
>To: Dylan McCulloch
>Cc: ceph-users
>
>Subject: Re: [ceph-users] upmap balancer and consequences of osds briefly
>marked out
>
>Hi,
>
>You're correct that all the relevant upmap entries are removed when an
>OSD is marked out.
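
A quick way to see the current upmap exceptions (and to confirm which of them
reference a given OSD) is to grep the osdmap dump; just a sketch of the usual
commands:

    # list the pg_upmap_items entries currently in the osdmap
    ceph osd dump | grep pg_upmap_items

    # current balancer mode and whether it is active
    ceph balancer status
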
Hi all,
We're using the upmap balancer, which has made a huge improvement in evenly
distributing data on our osds and has provided a substantial increase in usable
capacity.
Currently on ceph version: 12.2.13 luminous
We ran into a firewall issue recently which led to a large number of osds being
briefly marked out.
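
For context, this is roughly how the upmap balancer gets enabled on a Luminous
cluster; a sketch only, and set-require-min-compat-client assumes every client
already speaks Luminous or newer:

    # all clients must understand pg-upmap before it may be used
    ceph osd set-require-min-compat-client luminous

    # enable the mgr balancer module (if not already on) and switch it to upmap
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on
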