Hi everyone, 

I've recently become aware of this bug [1] that affects Reef and Squid.

When the bug is triggered, the OSD will fail to start with "bluefs mount failed 
to replay log: (5) Input/output error" after crashing with 
"bluestore/AvlAllocator.cc: 173: FAILED ceph_assert(rs->start <= start)". 
The Red Hat KB solution [2] recommends redeploying the affected OSD as soon as 
possible to avoid having to deal with multiple crashed OSDs in the cluster.
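If you want to see whether any OSDs in a cluster have already hit this assert, 
here is a minimal Python sketch against the mgr crash module. It assumes the 
crash module is enabled and that "ceph crash ls --format json" entries carry 
'entity_name' and 'assert_condition' fields; field names can vary between 
releases, so treat it as a starting point rather than a recipe:

    #!/usr/bin/env python3
    # Sketch: flag crash reports matching the AvlAllocator assert above.
    # Assumes the mgr crash module is enabled; the 'entity_name' and
    # 'assert_condition' field names are assumptions and may differ per release.
    import json
    import subprocess

    SIGNATURE = "rs->start <= start"

    crashes = json.loads(subprocess.run(
        ["ceph", "crash", "ls", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)

    for crash in crashes:
        if SIGNATURE in crash.get("assert_condition", ""):
            print(crash.get("entity_name", "?"), crash.get("crash_id", ""))

Anything it prints would be a candidate for the redeploy procedure from the KB 
article [2].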

Reef v18.2.7 will fix this issue via PR #62840 [3], and Squid v19.2.3 should 
fix it via PR #62839 [4].

Thought it might be worth sharing.

Regards,
Frédéric.

[1] https://tracker.ceph.com/issues/70747
[2] https://access.redhat.com/solutions/7113657
[3] https://github.com/ceph/ceph/pull/62840
[4] https://github.com/ceph/ceph/pull/62839