You define the max size of your heap (-Xmx), but you do not cap the
off-heap side (-XX:MaxMetaspaceSize on JDK 8, -XX:MaxPermSize on JDK 7),
so the process can grow until it occupies all of the memory on the
instance. Your system killed the process to preserve itself. You should
also take into account that the per-thread stack size (-Xss) is allocated
on top of what you define for the heap and off-heap, so the number of
spawned threads could be a culprit as well if you tune your off-heap size
and keep seeing the same trouble. I'd figure out approximately how many
thread stacks are getting created, multiply that by 256k (or whatever
-Xss is set to), add that to your heap size, and subtract that total from
the memory available to the host to arrive at a proper off-heap size.
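That arithmetic sketched out, with purely illustrative numbers (the host
RAM, heap size, thread count, and -Xss value below are assumptions, not
figures from this thread):

```shell
#!/bin/sh
# Rough off-heap budget sketch -- every number here is an assumption.
HOST_MEM_MB=16384   # total RAM on the instance
HEAP_MB=8192        # -Xmx8g
THREADS=400         # approx. thread count, e.g. ls /proc/<pid>/task | wc -l
STACK_KB=256        # -Xss256k
STACK_MB=$(( THREADS * STACK_KB / 1024 ))
OFFHEAP_BUDGET_MB=$(( HOST_MEM_MB - HEAP_MB - STACK_MB ))
echo "thread stacks:   ${STACK_MB} MB"
echo "off-heap budget: ${OFFHEAP_BUDGET_MB} MB"
```

With these numbers, 400 stacks at 256k is only 100 MB, leaving roughly
8 GB of headroom for Metaspace, direct buffers, and the rest of the
process's native allocations.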

On Mon, Jul 23, 2018 at 2:44 PM, Mark Rose <markr...@markrose.ca> wrote:

> On 19 July 2018 at 10:43, Léo FERLIN SUTTON <lfer...@mailjet.com.invalid>
> wrote:
> > Hello list !
> >
> > I have a question about cassandra memory usage.
> >
> > My cassandra nodes are slowly using up all my ram until they get
> OOM-Killed.
> >
> > When I check the memory usage with nodetool info the memory
> > (off-heap+heap) doesn't match what the java process is really using.
>
> Hi Léo,
>
> It's possible that glibc is creating too many memory arenas. Are you
> setting/exporting MALLOC_ARENA_MAX to something sane before calling
> the JVM? You can check that in /proc/<pid>/environ.
>
> I would also turn on -XX:NativeMemoryTracking=summary and use jcmd to
> check out native memory usage from the JVM's perspective.
>
> -Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
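For anyone following along, Mark's two checks can be sketched like this
(the value 4 and the <pid> are placeholders, and /proc is Linux-only):

```shell
# 1) Export MALLOC_ARENA_MAX before the JVM starts, then confirm it
#    landed in the process environment. A child shell stands in for the
#    JVM here; for a real process, read /proc/<pid>/environ instead.
export MALLOC_ARENA_MAX=4
sh -c "tr '\0' '\n' < /proc/self/environ" | grep MALLOC_ARENA_MAX

# 2) For native memory tracking: add -XX:NativeMemoryTracking=summary
#    to the JVM startup flags, then query the running process with:
#      jcmd <pid> VM.native_memory summary
```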
