Hi all,

When running containerized Ceph (Nautilus), is anyone else seeing a
steady memory leak in the ceph-mgr pod, along with constant
ms_handle_reset errors in the logs for the standby mgr instance?

---
0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
---
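
If it helps anyone compare numbers, a quick sketch like the one below can
track the mgr's RSS from the host over time (the pgrep pattern, poll
interval, and output format are just examples; adjust for your deployment):

---
#!/usr/bin/env python3
# Rough sketch: log the ceph-mgr RSS once a minute so the growth rate
# is easy to graph. Containerized daemons are still visible from the
# host's PID namespace, so plain /proc works.
import re
import subprocess
import time

def mgr_pid():
    # Hypothetical match pattern; adjust if more than one mgr runs on the host.
    out = subprocess.run(["pgrep", "-f", "ceph-mgr"],
                         capture_output=True, text=True)
    pids = out.stdout.split()
    return int(pids[0]) if pids else None

def rss_kib(pid):
    # VmRSS from /proc/<pid>/status, in KiB.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(re.search(r"(\d+)", line).group(1))
    return None

while True:
    pid = mgr_pid()
    if pid is not None:
        print(f"{time.strftime('%F %T')} pid={pid} rss_kib={rss_kib(pid)}",
              flush=True)
    time.sleep(60)
---

Graphing that output over a day or two makes the growth rate easy to see.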

I see a couple of related reports with no activity:

https://tracker.ceph.com/issues/36471
https://tracker.ceph.com/issues/40260

and one merged PR that looks related but doesn't seem to have corrected the issue:

https://github.com/ceph/ceph/pull/24233

thx
Frank
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
