Hi John,

The default metaspace size is intended to work for the majority of jobs. We
are aware that for some jobs that need to load lots of classes, the default
value might not be large enough. However, a larger default would mean that
jobs which do not load many classes end up with unnecessarily high overall
memory requirements. (Imagine a task manager with the default total memory
of 1.5GB, where 512m of it is reserved for metaspace.)
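For reference, in Flink 1.10's memory model the metaspace budget comes out
of the total process size, so raising one affects the other. A minimal
flink-conf.yaml sketch (the values here are illustrative, not
recommendations for your setup):

```yaml
# Total memory for the TaskManager process; heap, network buffers,
# managed memory, metaspace, and overhead all come out of this budget.
taskmanager.memory.process.size: 1536m

# Raising metaspace without also raising process.size shrinks the
# budget left over for the other components.
taskmanager.memory.jvm-metaspace.size: 256m
```

This is why simply bumping the metaspace size can squeeze the heap if the
total process size stays fixed.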

Another possible problem is a metaspace leak. When you say "eventually task
nodes started shutting down with OutOfMemory Metaspace", does this happen
shortly after the job execution starts, or after the job has been running
for a while? Does the metaspace footprint keep growing, or does it become
stable after the initial growth? If the metaspace keeps growing over time,
that is usually an indicator of a metaspace memory leak.
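One way to watch the footprint (a rough sketch using standard JDK tools;
the pid 12345 is a placeholder, and the TaskManagerRunner process name is
an assumption about how your TaskManagers show up in `jps`):

```shell
# Find the TaskManager JVM pid first, e.g. with:
#   jps | grep TaskManagerRunner

# Sample GC and metaspace stats every 10 seconds. In the output,
# MU = metaspace used (KB) and MC = metaspace committed (KB).
# MU that keeps climbing long after the job has warmed up, across
# full GC cycles, points at a classloader/metaspace leak.
jstat -gc 12345 10000

# On JDK 10+, a detailed per-classloader metaspace breakdown:
jcmd 12345 VM.metaspace
```

If MU stabilizes after the initial class loading, you most likely just
need a larger metaspace; if it never stops growing, look for classes
being reloaded on every job restart or record.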

Thank you~

Xintong Song



On Tue, Feb 25, 2020 at 7:50 AM John Smith <java.dev....@gmail.com> wrote:

> Hi, I just upgraded to 1.10 and I started deploying my jobs. Eventually
> task nodes started shutting down with OutOfMemory Metaspace.
>
> I look at the logs and the task managers are started with:
> -XX:MaxMetaspaceSize=100663296
>
> So I configured: taskmanager.memory.jvm-metaspace.size: 256m
>
> It seems to be ok for now. What are your thoughts? And should I try 512m
> or is that too much?
>
