51 nvme 0.97029 osd.51 up 1.0 1.0
52 nvme 0.97029 osd.52 up 1.0 1.0
53 nvme 0.97029 osd.53 up 1.0 1.0
Regards,
Vadim
--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-
Hello Cephers,
it is a mystery: my cluster is out of the error state, though I don't
really know how. I initiated deep scrubbing for the affected PGs
yesterday; maybe that fixed it.
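For reference, deep scrubs on individual PGs can be kicked off manually,
roughly like this (the PG ID is only a placeholder, not one from my cluster):

    ceph pg deep-scrub 2.1f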
Cheers,
Vadim
On 6/24/21 1:15 PM, Vadim Bulst wrote:
Dear List,
since my update yesterday from 14.2.18 to 14.2
t_state.old
1099539039016 -rw--- 1 svcslurm domain users 16 Oct 11 09:47 priority_last_decay_ran
1099539038974 -rw--- 1 svcslurm domain users 16 Oct 11 09:42 priority_last_decay_ran.old
1099539038998 -rw--- 1 svcslurm domain users 796 Oct 11 09:45 qos_usage
1099539038965 -rw
I removed all entries with:
ceph tell mds.$filesystem:0 damage rm $id
so that the cluster was no longer in the error state. It didn't take long
for new damage entries to appear, putting the cluster back into the error state.
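A rough sketch of how that cleanup can be scripted (assuming jq is
available, and using scfs, the filesystem name from this thread, in place
of $filesystem):

    # list all damage entries as JSON and remove each one by its id
    for id in $(ceph tell mds.scfs:0 damage ls | jq -r '.[].id'); do
        ceph tell mds.scfs:0 damage rm "$id"
    done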
On 10/11/21 10:49, Vadim Bulst wrote:
ceph tell mds.scfs:0 scrub start / recu
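Progress and results of such a scrub can be checked afterwards with, for example:

    ceph tell mds.scfs:0 scrub status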