Thanks, XuYun. Changing the ceph-mgr deployment to use:

hostPID: true

does stop the memory leak.
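
For reference, this is roughly where the setting sits in the pod spec. A
minimal sketch only; the names, labels, and image tag below are
placeholders, not taken from a real Rook or ceph-ansible manifest:

---
# Sketch of a Deployment running ceph-mgr in the host PID namespace.
# hostPID: true is the Kubernetes equivalent of Docker's PidMode=host
# (docker run --pid=host), which XuYun suggested below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ceph-mgr              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ceph-mgr
  template:
    metadata:
      labels:
        app: ceph-mgr
    spec:
      hostPID: true           # share the host PID namespace with the pod
      containers:
      - name: mgr
        image: ceph/ceph:v14  # any Nautilus image
---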

I wonder whether a bug report should be opened with Ceph. It's not
really a Ceph issue, but since containerized deployments are becoming
more popular it may be worth looking into.

Any opinions from the list?

Thanks,
Frank

On Tue, Jul 28, 2020 at 6:59 AM XuYun <yu...@me.com> wrote:
>
> We have the same mgr memory leak problem. I suspect it's related to the PID,
> which is used to identify the peer address.
> Maybe you could try setting ‘PidMode’ to ‘host’ in your deployment.
>
> > On Jul 28, 2020, at 2:44 AM, Frank Ritchie <frankaritc...@gmail.com> wrote:
> >
> > Hi all,
> >
> > When running containerized Ceph (Nautilus), is anyone else seeing a
> > constant memory leak in the ceph-mgr pod, with repeated ms_handle_reset
> > errors in the logs for the standby mgr instance?
> >
> > ---
> > 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> > 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> > 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> > ---
> >
> > I see a couple of related reports with no activity:
> >
> >
> > https://tracker.ceph.com/issues/36471
> > https://tracker.ceph.com/issues/40260
> >
> > and one related merged pull request that doesn't seem to have corrected the issue:
> >
> > https://github.com/ceph/ceph/pull/24233
> >
> > thx
> > Frank
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
