We recently noticed this degraded write performance as well when the nearfull
flag is present (CephFS kernel client, kernel 4.19.154).
It appears to be caused by the kernel client forcing synchronous writes while
the cluster is nearfull:
https://github.com/ceph/ceph-client/blob/558b4510f622a3d96cf9d95050a04e7793d343c7/fs/ceph/file.c#L1837-L1839
https://tracker.ceph.com/issues/49406

>It might be more accurate to say that the default nearfull is 85% for
>that reason, among others. Raising it will probably not get you enough
>storage to be worth the hassle.
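For anyone who still wants to experiment with the threshold despite the above, the ratios can be inspected and adjusted from the ceph CLI on a live cluster (0.90 below is only an example value, and leaves less headroom before backfillfull/full, so use with care):

```shell
# Show the current full / backfillfull / nearfull ratios from the osdmap.
ceph osd dump | grep ratio

# Raise the nearfull threshold from the default 0.85 to 0.90 (example only).
ceph osd set-nearfull-ratio 0.90
```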
>
>On Tue, Apr 13, 2021 at 7:18 AM zp_8483 <zp_8...@163.com> wrote:
>>
>> Backend:
>>
>> XFS for the filestore back-end.
>>
>>
>> In our testing, we found that performance decreases when cluster usage exceeds
>> the default nearfull ratio (85%). Is this by design?
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
