Hi Everyone,
Indeed, this warning was introduced in 18.2.6.
But I wouldn't say it's not an issue. Having it permanently visible
(particularly for a specific OSD only) might indicate a problem with
that OSD which could negatively impact overall cluster performance.
I'd recommend checking the OSD log for potential clues and doing more
research on the root cause.
And once again: this is likely not a regression in 18.2.6 but rather
additional diagnostics brought by the release, which reveal a
potential issue.
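As a starting point, something like the following might help. This is
just a rough sketch: it assumes the OSD id from your report and a
systemd/package deployment (for cephadm containers the log access
differs), so adjust as needed:

    # dump the OSD's perf counters and look for slow-op indicators
    ceph tell osd.247 perf dump | grep -i slow

    # on the node hosting osd.247, grep recent logs for slow operations
    journalctl -u ceph-osd@247 --since "2 days ago" | grep -i "slow operation"

If the log shows which BlueStore operation is slow (e.g. kv commit vs.
device reads), that usually narrows down whether the data disk or the
DB device is the bottleneck.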
Thanks,
Igor
On 02.05.2025 11:19, Frédéric Nass wrote:
Hi Michel,
This is not an issue. It's a new warning that can be adjusted or muted. Check
this thread [1] and this part [2] of the Reef documentation about this new
alert.
It came to Reef with PR #59466 [3].
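For example, you could mute the alert while investigating, or tune the
thresholds that drive it. The option names below are the ones described
in [2]; the values are only illustrative, pick ones that match your
tolerance:

    # mute the alert, e.g. for a week
    ceph health mute BLUESTORE_SLOW_OP_ALERT 1w

    # or raise the warning threshold / shorten the observation window
    ceph config set osd bluestore_slow_ops_warn_threshold 10
    ceph config set osd bluestore_slow_ops_warn_lifetime 600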
Cheers,
Frédéric.
[1] https://www.spinics.net/lists/ceph-users/msg86131.html
[2]
https://docs.ceph.com/en/latest/rados/operations/health-checks/#bluestore-slow-op-alert
[3] https://github.com/ceph/ceph/pull/59466
----- On 2 May 25, at 9:44, Michel Jouvin michel.jou...@ijclab.in2p3.fr wrote:
Hi,
Since our upgrade to 18.2.6 two days ago, our cluster has been
reporting the warning "1 OSD(s) experiencing slow operations in BlueStore":
[root@dig-osd4 bluestore-slow-ops]# ceph health detail
HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
[WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in
BlueStore
osd.247 observed slow operation indications in BlueStore
I have never seen this warning before, so I have the feeling it is
somehow related to the upgrade; it doesn't seem related to the
regression mentioned in another thread (which should result in an OSD
crash). Googling quickly, I found this reported on 19.2.1 with an SSD,
whereas in my case it is an HDD. I don't know whether the workaround
mentioned in the issue (bdev_xxx_discard=true) also applies to 18.2.6...
Has anybody seen this in 18.2.x? Any recommendations? Our plan,
following the best practices described recently in another thread, was
to move from 18.2.2 to 18.2.6 and then from 18.2.6 to 19.2.2... Will
19.2.2 clear this issue (at the risk of introducing others, as it is
probably not yet widely used)?
Best regards,
Michel
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io