Hello,

A few months ago we experienced an issue with Ceph v13.2.4:

1. All OSDs on one of the nodes were set to out, to drain them for 
replacement.
2. We noticed that a lot of snaptrim activity was running.
3. We set the nosnaptrim flag on the cluster (to improve performance).
4. Once the mon_osd_snap_trim_queue_warn_on health warning appeared, we 
removed the nosnaptrim flag.
5. All OSDs in the cluster crashed and started flapping, so we set the 
nosnaptrim flag back on.
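
For reference, the steps above roughly correspond to the following command 
sequence (a sketch only; the OSD IDs are examples, not the actual ones from 
our node):

```shell
# 1. Mark all OSDs on the node out so data drains off them
#    (IDs 10-13 are hypothetical examples):
ceph osd out 10 11 12 13

# 2. Observe snaptrim activity (PGs in snaptrim / snaptrim_wait states):
ceph status

# 3. Pause snap trimming cluster-wide:
ceph osd set nosnaptrim

# 4. Resume snap trimming (the step after which the OSDs crashed):
ceph osd unset nosnaptrim
```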

The issue is registered in the tracker, and additional logs were collected: 
https://tracker.ceph.com/issues/38124
However, it is still present.

What options do I have? I would like to know when/if this issue will be fixed 
(it was not fixed in the v13.2.5 release), or, alternatively, how to contact a 
developer who can resolve it.

--
Best regards,
Vytautas J.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com