We have the same mgr memory leak problem. I suspect it may be related to the PID, which is used to identify the peer address.
Maybe you could try setting 'PidMode' to 'host' in your deployment.
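For example, if you deploy the mgr with docker-compose, the setting would look something like this (a sketch only; the service name and image tag are placeholders, not from your deployment):

```yaml
# Hypothetical docker-compose fragment: run the ceph-mgr container in the
# host PID namespace so container PIDs match host PIDs.
services:
  ceph-mgr:
    image: ceph/daemon:latest-nautilus   # placeholder tag
    pid: "host"                          # compose equivalent of PidMode=host
```

With plain `docker run` the equivalent flag is `--pid=host`, and in a Kubernetes pod spec it is `hostPID: true`.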

> On Jul 28, 2020, at 2:44 AM, Frank Ritchie <frankaritc...@gmail.com> wrote:
> 
> Hi all,
> 
> When running containerized Ceph (Nautilus) is anyone else seeing a
> constant memory leak in the ceph-mgr pod with constant ms_handle_reset
> errors in the logs for the backup mgr instance?
> 
> ---
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> ---
> 
> I see a couple of related reports with no activity:
> 
> 
> https://tracker.ceph.com/issues/36471
> https://tracker.ceph.com/issues/40260
> 
> and one related merge that doesn't seem to have corrected the issue:
> 
> https://github.com/ceph/ceph/pull/24233
> 
> thx
> Frank
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io