Hi Anthony! My OSDs are 12TB 7200 RPM HDDs, with 960GB SSDs for WAL/DB.
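
The WAL/DB layout can also be confirmed from the cluster itself; a minimal sketch (osd.268 is just the ID taken from the slow-ops log quoted below, and the exact metadata field names can differ slightly between releases):

  # Show the devices backing one OSD, including any dedicated DB device
  ceph osd metadata 268 | grep -E 'devices|bluefs'

  # Or, on the OSD host, list the LVs and their block/db/wal roles
  ceph-volume lvm list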
Thanks Anthony!

On Tue, Jun 11, 2024 at 10:29 Anthony D'Atri <anthony.da...@gmail.com> wrote:

> What specifically are your OSD devices?
>
> On Jun 10, 2024, at 22:23, Phong Tran Thanh <tranphong...@gmail.com> wrote:
>
> Hi Ceph users!
>
> I am encountering a problem with IOPS and disk utilization on my OSDs.
> Sometimes a disk peaks in IOPS and its utilization becomes too high,
> which affects my cluster and causes slow ops to appear in the logs:
>
> 6/6/24 9:51:46 AM [WRN] Health check update: 0 slow ops, oldest one
> blocked for 36 sec, osd.268 has slow ops (SLOW_OPS)
>
> 6/6/24 9:51:37 AM [WRN] Health check update: 0 slow ops, oldest one
> blocked for 31 sec, osd.268 has slow ops (SLOW_OPS)
>
> <image.png>
>
> This is the config I set to reduce it, but it does not resolve my problem:
>
> global  advanced  osd_mclock_profile                                 custom
> global  advanced  osd_mclock_scheduler_background_best_effort_lim    0.100000
> global  advanced  osd_mclock_scheduler_background_best_effort_res    0.100000
> global  advanced  osd_mclock_scheduler_background_best_effort_wgt    1
> global  advanced  osd_mclock_scheduler_background_recovery_lim       0.100000
> global  advanced  osd_mclock_scheduler_background_recovery_res       0.100000
> global  advanced  osd_mclock_scheduler_background_recovery_wgt       1
> global  advanced  osd_mclock_scheduler_client_lim                    0.400000
> global  advanced  osd_mclock_scheduler_client_res                    0.400000
> global  advanced  osd_mclock_scheduler_client_wgt                    4
>
> Hope someone can help me.
>
> Thanks so much!
> --
> Email: tranphong...@gmail.com
> Skype: tranphong079

--
Best regards,
----------------------------------------------------------------------------
Tran Thanh Phong
Email: tranphong...@gmail.com
Skype: tranphong079
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
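
For reference, the mclock values quoted above correspond to plain "ceph config set" commands, roughly as in the sketch below. The per-class scheduler options are only honoured while osd_mclock_profile is "custom", and the exact meaning of the res/lim values depends on the Ceph release, so treat the numbers as illustrative:

  # Switch to the custom profile so the per-class options take effect
  ceph config set global osd_mclock_profile custom

  # Throttle background recovery/backfill
  ceph config set global osd_mclock_scheduler_background_recovery_res 0.1
  ceph config set global osd_mclock_scheduler_background_recovery_lim 0.1
  ceph config set global osd_mclock_scheduler_background_recovery_wgt 1

  # Throttle background best-effort work
  ceph config set global osd_mclock_scheduler_background_best_effort_res 0.1
  ceph config set global osd_mclock_scheduler_background_best_effort_lim 0.1
  ceph config set global osd_mclock_scheduler_background_best_effort_wgt 1

  # Give client I/O the larger share
  ceph config set global osd_mclock_scheduler_client_res 0.4
  ceph config set global osd_mclock_scheduler_client_lim 0.4
  ceph config set global osd_mclock_scheduler_client_wgt 4

  # Removing the override falls back to the release's default profile
  ceph config rm global osd_mclock_profile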