Hi @all,
On 02/08/2017 08:45 PM, Jim Kilborn wrote:
> I have had two ceph monitor nodes generate swap space alerts this week.
> Looking at the memory, I see ceph-mon using a lot of memory and most of the
> swap space. My ceph nodes have 128GB mem, with 2GB swap (I know the
> memory/swap ratio is odd)
and managed to address? Would you be willing to discuss this
further?
Many thanks
Andrei
----- Original Message -----
> From: "Jim Kilborn"
> To: "Joao Eduardo Luis" , "ceph-users"
>
> Sent: Thursday, 9 February, 2017 13:04:16
> Subject: Re: [ceph-users] ceph-mon memory issue jewel 10.2.5 kernel 4.4
From: Graham Allan<mailto:g...@umn.edu>
Sent: Thursday, February 9, 2017 11:24 AM
To: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] ceph-mon memory issue jewel 10.2.5 kernel 4.4
I've been trying to figure out the same thing recently - I had the same
issues as others with jewel 10.2.3 (?) but for my current problem I
don't think it's a ceph issue.
Specifically, ever since our last maintenance day, some of our OSD nodes
have been suffering OSDs killed by the OOM killer des
From: Joao Eduardo Luis<mailto:j...@suse.de>
Sent: Thursday, February 9, 2017 3:06 AM
To: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] ceph-mon memory issue jewel 10.2.5 kernel 4.4
Hi Jim,
On 02/08/2017 07:45 PM, Jim Kilborn wrote:
> I have had two ceph monitor nodes generate swap space alerts this week.
> Looking at the memory, I see ceph-mon using a lot of memory and most of the
> swap space. My ceph nodes have 128GB mem, with 2GB swap (I know the
> memory/swap ratio is odd)
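If you want to put numbers on how much of the mon's memory is resident versus
pushed into swap, the figures are right in /proc. A minimal Python sketch,
assuming a Linux host and processes named ceph-mon:

import os

def mon_memory():
    """Yield (pid, VmRSS, VmSwap) for every running ceph-mon process."""
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/comm' % pid) as f:
                if f.read().strip() != 'ceph-mon':
                    continue
            fields = {}
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    key, _, value = line.partition(':')
                    fields[key] = value.strip()
            # VmRSS and VmSwap are reported by the kernel in kB
            yield pid, fields.get('VmRSS', '0 kB'), fields.get('VmSwap', '0 kB')
        except (FileNotFoundError, PermissionError):
            continue  # process exited or is unreadable; skip it

if __name__ == '__main__':
    for pid, rss, swap in mon_memory():
        print('ceph-mon pid %s: RSS %s, swap %s' % (pid, rss, swap))

The same numbers can of course be pulled with ps or from smaps; this just
avoids parsing ps output.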
+1
Ever since upgrading to 10.2.x I have been seeing a lot of issues with our ceph
cluster. I have been seeing OSDs go down, and OSD servers running out of memory
and killing all ceph-osd processes. Again, 10.2.5 on a 4.4.x kernel.
It seems that with every release there are more and more problems with
We have alerting on our mons to notify us when the memory usage is above 80%,
and we go around and restart the mon services in that cluster. It is a memory
leak somewhere in the code, but the problem is so infrequent that it's hard to
get good enough logs to track it down. We restart the mons in a clus
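A minimal sketch of that kind of check, assuming systemd-managed mons with
ceph-mon@<hostname> units, /proc/meminfo for the usage figure, and the 80%
threshold mentioned above (adjust to your setup):

import socket
import subprocess

THRESHOLD = 80.0  # percent, matching the alert level mentioned above

def memory_used_percent():
    """Rough node-wide memory usage, from /proc/meminfo (values in kB)."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, _, rest = line.partition(':')
            info[key] = int(rest.split()[0])
    total = info['MemTotal']
    available = info.get('MemAvailable', info.get('MemFree', 0))
    return 100.0 * (total - available) / total

if __name__ == '__main__':
    used = memory_used_percent()
    if used > THRESHOLD:
        # assumes the mon id is the short hostname, as in a default
        # systemd deployment (ceph-mon@<id>); adjust for your cluster
        unit = 'ceph-mon@%s' % socket.gethostname().split('.')[0]
        print('memory at %.1f%%, restarting %s' % (used, unit))
        subprocess.run(['systemctl', 'restart', unit], check=True)
    else:
        print('memory at %.1f%%, nothing to do' % used)

Run from cron every few minutes, this roughly reproduces the alert-and-restart
workflow described above, though having the monitoring system do the alerting
and a human decide on the restart is the safer arrangement.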