Good day

We've been struggling with this issue since upgrading from 16.2.11 to
16.2.15, and it persists now that we are on Reef 18.2.7. We did not have
this issue prior to the 16.2.11 to 16.2.15 upgrade.

The moment the fullest OSD's utilization lands between the nearfull and
backfillfull values, CephFS goes into a limp mode: our client IO drops from
~30 *GiB*/s to ~100 *MiB*/s. It doesn't matter what the threshold values
are set to.

If my nearfull ratio is 75% and backfillfull is 80%, with the highest OSD
at 75.01% it will limp.
If my nearfull ratio is 85% and backfillfull is 90%, with the highest OSD
at 85.01% it will go into limp mode.

The only way to get CephFS operational again is to set both ratios either
far below or far above the fullest OSD's value.

e.g. if the fullest OSD is at 72%, I need to run
ceph osd set-nearfull-ratio 0.76 and ceph osd set-backfillfull-ratio 0.81
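That workaround can be scripted. A minimal sketch, assuming the same
headroom offsets as above (+4 and +9 percentage points); the fullest-OSD
figure is hard-coded here, but on a live cluster you would pull it from
`ceph osd df -f json` (commented line below):

```shell
#!/bin/sh
# Re-arm the thresholds just above the fullest OSD so CephFS recovers.
# On a live cluster, with jq installed:
#   fullest=$(ceph osd df -f json | jq '[.nodes[].utilization] | max')
fullest=72.01   # hypothetical fullest-OSD utilization, in percent

# Add ~4 and ~9 points of headroom, expressed as ratios (0-1).
nearfull=$(awk -v u="$fullest" 'BEGIN { printf "%.2f", (u + 4) / 100 }')
backfillfull=$(awk -v u="$fullest" 'BEGIN { printf "%.2f", (u + 9) / 100 }')

# Printed rather than executed, so the script is a dry run by default.
echo "ceph osd set-nearfull-ratio $nearfull"
echo "ceph osd set-backfillfull-ratio $backfillfull"
```

Obviously this only papers over the symptom; it doesn't explain why IO
collapses the moment an OSD crosses nearfull.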

When I deploy a brand-new cluster from scratch on our testbed I get the
same issue (cephadm / Reef 18.2.4).

Back in the day I was able to manipulate these settings per OSD as well,
but I believe that since Pacific they are no longer tuneable there:

ceph tell osd.1231 config show | grep -E
'osd_nearfull_ratio|osd_backfillfull_ratio|osd_full_ratio'
"mon_osd_backfillfull_ratio": "0.900000",
"mon_osd_full_ratio": "0.950000",
"mon_osd_nearfull_ratio": "0.850000",

  ceph tell osd.$osd injectargs '--osd_nearfull_ratio=0.85'
  ceph tell osd.$osd injectargs '--osd_backfillfull_ratio=0.90'
  ceph tell osd.$osd injectargs '--osd_full_ratio=0.95'
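Since these became cluster-wide settings stored in the OSDMap, the
authoritative values come from `ceph osd dump` rather than per-OSD config.
A sketch of reading them back, with the dump output canned here so it runs
without a cluster:

```shell
#!/bin/sh
# Extract the current cluster-wide ratios from `ceph osd dump` output.
# Sample output is hard-coded; on a live cluster replace with:
#   dump=$(ceph osd dump)
dump='epoch 1234
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85'

nearfull=$(printf '%s\n' "$dump" | awk '/^nearfull_ratio/ { print $2 }')
backfillfull=$(printf '%s\n' "$dump" | awk '/^backfillfull_ratio/ { print $2 }')

echo "nearfull=$nearfull backfillfull=$backfillfull"
```

This at least confirms which values the cluster is actually enforcing,
independent of what the per-OSD `config show` keys report.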


URL to the issue: https://tracker.ceph.com/issues/70129

Any ideas would be greatly appreciated.

-- 



*Jeremi-Ernst Avenant, Mr.*
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy,
University of Cape Town

Tel: 021 959 4137
Web: www.idia.ac.za | www.ilifu.ac.za
E-mail (IDIA): jer...@idia.ac.za
Rondebosch, Cape Town, 7600, South Africa
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io