> ALL my HDD-based OSDs on ALL 7 hosts are complaining

Have you tried taking one OSD offline, compacting it, then restarting it?
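
For example (a sketch; the OSD id and fsid are placeholders, and under
cephadm the offline step is typically run inside `cephadm shell --name
osd.<id>`):

    # Online compaction; no downtime, but it adds some load:
    ceph tell osd.<id> compact

    # Offline compaction of the OSD's RocksDB:
    systemctl stop ceph-<fsid>@osd.<id>.service
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact
    systemctl start ceph-<fsid>@osd.<id>.service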

Also, what does your hardware look like?  How many spinners per host?  Do you 
have an IR (RAID) HBA, or a plain IT (JBOD) HBA?  SAS or SATA?  Internal WAL+DB 
or offloaded?

Using hdparm / sdparm (or an equivalent tool) to disable the HDDs' volatile 
write cache, at least when using a plain HBA, can make a distinct improvement 
in latency.
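
Something along these lines (device names are placeholders; note that the
setting usually does not persist across reboots, so it needs a udev rule or
boot script to stick):

    # SATA drive: disable the volatile write cache
    hdparm -W 0 /dev/sdX

    # SAS drive: clear the Write Cache Enable (WCE) bit
    sdparm --clear WCE /dev/sdX

    # Verify:
    hdparm -W /dev/sdX
    sdparm --get WCE /dev/sdX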

> [attached image: image.png]
> Steven
> 
> On Tue, 13 May 2025 at 03:07, Laimis Juzeliūnas 
> <laimis.juzeliu...@oxylabs.io> wrote:
>> Hi Steven,
>> 
>> This issue has come up on the mailing lists repeatedly since the new 
>> version came out:
>> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/S5XBS63NFIBJHN44RQ4NES7F6GVIMH4T/
>> 
>> This is a new warning; it can be adjusted or muted if it bothers you, but 
>> it's advisable to check what's causing it.
>> We had a few of these warnings after upgrading to 19.2.2, and a single OSD 
>> restart made them vanish. Haven't seen them since.
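>> 
>> If you do want to quiet it while investigating, something like the
>> following should work (the health code and option names are from memory,
>> so verify them with `ceph health detail` and `ceph config help`):
>> 
>>     # Mute the alert temporarily:
>>     ceph health mute BLUESTORE_SLOW_OP_ALERT 1w
>> 
>>     # Or relax the thresholds that trigger it:
>>     ceph config set osd bluestore_slow_ops_warn_threshold 10
>>     ceph config set osd bluestore_slow_ops_warn_lifetime 300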
>> 
>> 
>> Best,
>> Laimis J.
>> 
>>> On 13 May 2025, at 06:55, yite gu <yite...@gmail.com> wrote:
>>> 
>>> The issue with bdev_async_discard_threads greater than 1 was fixed in
>>> 18.2.7. Please use `top -H -p <osd pid>` to observe the OSD's per-thread
>>> CPU load: is anything abnormal there? And are there any abnormal messages
>>> in the OSD log?
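>>> 
>>> For example (a sketch; the exact process name and unit name vary with how
>>> the daemon was deployed):
>>> 
>>>     # Find the ceph-osd PID, then watch per-thread CPU usage:
>>>     ps aux | grep '[c]eph-osd'
>>>     top -H -p <osd pid>
>>> 
>>>     # Under cephadm, grep the OSD's journal for slow-op messages:
>>>     journalctl -u ceph-<fsid>@osd.<id>.service | grep -i slow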
>>> 
>>> On Tue, 13 May 2025 at 02:18, Steven Vacaroaia <ste...@gmail.com> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> After a cephadm upgrade from 18.2.2 to 18.2.7 that worked perfectly,
>>>> I am noticing lots of "slow operations in bluestore" warnings
>>>> (42 out of 161 OSDs).
>>>> 
>>>> My cluster has all 3 types of OSDs (NVMe, SSD, and HDD + journaling).
>>>> 
>>>> I found some articles mentioning the settings below, but they did not help.
>>>> 
>>>> Anyone else having this issue ?
>>>> 
>>>> Thanks
>>>> Steven
>>>> 
>>>>     bdev_enable_discard: "true" # quote
>>>>     bdev_async_discard_threads: "1" # quote
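>>>> 
>>>> (For reference, a sketch of setting these cluster-wide rather than in a
>>>> service spec; the value types may need quoting depending on the release:
>>>> 
>>>>     ceph config set osd bdev_enable_discard true
>>>>     ceph config set osd bdev_async_discard_threads 1
>>>> 
>>>> followed by restarting the affected OSDs.)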

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
