Are you observing something similar to this thread?
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z/#FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z
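
If so, it may help to gather a bit more state before digging into the mon logs. A minimal sketch of standard checks (daemon IDs and the timestamp are placeholders; 'ceph crash ls' assumes Nautilus or later):

ceph status                      # overall health, mon quorum, osd up/in counts
ceph health detail               # expanded health warnings
ceph osd tree down               # which OSDs are currently marked down
ceph crash ls                    # any recorded daemon crashes
journalctl -u ceph-mon@<mon-id> --since "2020-06-15 17:50"   # mon log around the scrub burst
journalctl -u ceph-osd@<osd-id>  # log from one of the affected OSDs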

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: hoann...@gmail.com <hoann...@gmail.com>
Sent: 16 June 2020 05:15:18
To: ceph-users@ceph.io
Subject: [ceph-users] Many osds down , ceph mon has a lot of scrub logs

Hi all.

My cluster has many OSDs down, and one mon's log contains many lines like:

2020-06-15 18:00:22.575 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2176495218})
2020-06-15 18:00:22.661 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=472876490})
2020-06-15 18:00:22.747 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2793143323})
2020-06-15 18:00:22.830 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=3517702147})
2020-06-15 18:00:22.916 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2566175247})
2020-06-15 18:00:22.999 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=1643204334})
2020-06-15 18:00:23.087 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=220430164})
2020-06-15 18:00:23.170 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=1336353918})
2020-06-15 18:00:23.296 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2573498114})
2020-06-15 18:00:23.421 7fa2deffe700  0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=1132070786})

After 10 minutes, one mon and one mgr restarted, and the cluster health returned to OK.
What happened to my cluster? How can I debug this?

Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io