Hi,
It’s 3.4.10 and it does contain the bug. I’ll patch my Flink client and see if it
happens again.
Best regards,
Arnaud
From: LINZ, Arnaud
Sent: Wednesday, November 18, 2020 10:35
To: 'Guowei Ma'
Cc: 'user'
Subject: RE: Random Task executor shutdown
Hello,
We are won
Sent: Tuesday, November 17, 2020 00:49
To: LINZ, Arnaud <al...@bouyguestelecom.fr>
Cc: user <user@flink.apache.org>
Subject: Re: Random Task executor shutdown
Hi, Arnaud
Would you like to share the log of the shutdown task executor?
BTW could you check the gc log of the task executor?
1)
and the last one, there is only one minute. Are there other parameters to adjust
to make the Zookeeper synchronization more robust when the network is slowed
down?
Best,
Arnaud
From: Guowei Ma
Sent: Tuesday, November 17, 2020 00:49
To: LINZ, Arnaud
Cc: user
Subject: Re: Random Task executor shutdown
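[Editor's note on the Zookeeper question above: the relevant client timeouts can be tuned in flink-conf.yaml. A minimal sketch, assuming Zookeeper-based HA and default settings (the session timeout defaults to 60 s, which matches the "one minute" mentioned); exact keys may vary by Flink version and the values are illustrative, not recommendations:

    # flink-conf.yaml (illustrative values)
    high-availability.zookeeper.client.session-timeout: 120000    # ms, default 60000
    high-availability.zookeeper.client.connection-timeout: 30000  # ms, default 15000
    high-availability.zookeeper.client.retry-wait: 5000           # ms between retries
    high-availability.zookeeper.client.max-retry-attempts: 5      # default 3
]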
It is possible, but I am not entirely sure about the load order affecting
the metaspace usage.
To find out why your taskmanager container is exceeding the metaspace, we
would need to know what value the max metaspace size is set to, and then
find out how much of the metaspace is actually being used.
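[Editor's note, an assumption about the setup since the Flink version and memory configuration are not shown in the thread: on Flink 1.10+ the TaskManager metaspace limit comes from flink-conf.yaml and is passed to the JVM as -XX:MaxMetaspaceSize, so both sides can be checked like this:

    # flink-conf.yaml (illustrative value)
    taskmanager.memory.jvm-metaspace.size: 256m

    # On the task executor host, confirm the flag the JVM actually received:
    ps -ef | grep MaxMetaspaceSize
]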
Another big potential candidate is the fact that the JDBC libs I use in my job
are put into the Flink lib folder instead of into the fat jar... tomorrow I'll
try to see if the metaspace is getting cleared correctly after that change.
Unfortunately our jobs were written before the child-first
Hi, Arnaud
Would you like to share the log of the shutdown task executor?
BTW could you check the gc log of the task executor?
Best,
Guowei
On Mon, Nov 16, 2020 at 8:57 PM LINZ, Arnaud wrote:
> (reposted with proper subject line -- sorry for the copy/paste)
> -Original message-
> Hello,
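[Editor's note on the GC log request above, a minimal sketch of how GC logging could be enabled for the task executors; the env.java.opts.taskmanager key is standard Flink configuration, but the exact flags depend on the JDK in use, which the thread does not state:

    # flink-conf.yaml
    # JDK 8 style flags:
    env.java.opts.taskmanager: "-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/taskmanager-gc.log"
    # JDK 9+ (unified logging) would instead be something like:
    # env.java.opts.taskmanager: "-Xlog:gc*:file=/tmp/taskmanager-gc.log"
]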
Thank you Kye for your insights... in my mind, if the job runs without
problems one or more times, the heap size, and thus the metadata size, is
big enough and I should not increase it (on the same data of course).
So I'll try to understand who is leaking what... the advice to avoid the
dynamic class
Hello!
The JVM metaspace is where all the classes (not class instances or objects)
get loaded. jmap -histo is going to show you the heap space usage info, not
the metaspace.
You could inspect what is happening in the metaspace by using jcmd (e.g.,
jcmd JPID VM.native_memory summary) after restarting
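[Editor's note, a minimal sketch of that jcmd workflow, assuming the task executors can be restarted with Native Memory Tracking enabled (injecting the flag through env.java.opts.taskmanager is an assumption about this particular setup):

    # flink-conf.yaml: enable Native Memory Tracking on the task executors
    env.java.opts.taskmanager: "-XX:NativeMemoryTracking=summary"

    # Then, on the task executor host:
    jcmd <taskmanager-pid> VM.native_memory summary
    # The "Class" section shows reserved/committed metaspace; comparing it over
    # time against -XX:MaxMetaspaceSize helps tell a leak from a too-small limit.
]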
The exclusions should not have any impact on that, because what defines
which classloader will load which class is not the presence of a
particular class in a specific jar, but the configuration of
parent-first-patterns [1].
If you don't use any flink internal imports, then it still might be the
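[Editor's note: for context, these are the classloading settings being referred to; a minimal sketch of the relevant flink-conf.yaml fragment (the added pattern is purely illustrative):

    # flink-conf.yaml
    classloader.resolve-order: child-first
    # Classes whose names start with these prefixes are always loaded by the
    # parent (Flink) classloader, regardless of what the fat jar contains:
    classloader.parent-first-patterns.additional: "com.example.jdbc."
]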
I've tried to remove all possible imports of classes not contained in the
fat jar but I still face the same problem.
I've also tried to reduce as much as possible the excludes in the shade
section of the maven plugin (I took the one at [1]), so now I exclude only
a few dependencies... could it be that I
Yes, that could definitely cause this. You should probably avoid using
these flink-internal shaded classes and ship your own versions (not shaded).
Best,
Jan
On 11/16/20 3:22 PM, Flavio Pompermaier wrote:
Thank you Jan for your valuable feedback.
Could it be that I should not use import shad
Thank you Jan for your valuable feedback.
Could it be that I should not import shaded Jackson classes in my user
code?
For example import
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper?
Best,
Flavio
On Mon, Nov 16, 2020 at 3:15 PM Jan Lukavský wrote:
> Hi Flavi
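[Editor's note: following Jan's suggestion, a minimal sketch of what the non-shaded variant could look like in user code, assuming jackson-databind is declared as a normal dependency and bundled into the fat jar (the class and method names here are invented for illustration):

    // Depend on plain Jackson rather than Flink's relocated (shaded) copy.
    import java.io.IOException;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class PayloadParser {
        // Loaded by the job's child-first classloader and shipped in the fat jar,
        // so user code never references org.apache.flink.shaded.jackson2.*.
        private static final ObjectMapper MAPPER = new ObjectMapper();

        public static JsonNode parse(String json) throws IOException {
            return MAPPER.readTree(json);
        }
    }
]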
Hi Flavio,
when I encountered a quite similar problem to the one you describe, it was
related to static storage located in a class that was loaded
"parent-first". In my case it was in java.lang.ClassValue, but it
might (and probably will) be different in your case. The problem is that
if user-
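[Editor's note: the mechanism Jan is describing (the snippet is cut off in the archive) can be illustrated with a hedged sketch; the class and field names below are invented for illustration and are not from Flink or the thread:

    // Illustrative only: a class loaded "parent-first" (by a classloader that
    // outlives individual jobs) holding static storage keyed by user classes.
    public final class SharedRegistry {
        private static final java.util.Map<Class<?>, Object> CACHE =
                new java.util.concurrent.ConcurrentHashMap<>();

        public static void register(Class<?> userClass, Object value) {
            // A Class object pins its defining classloader, and that classloader pins
            // every class it ever loaded. If entries are never removed, the user-code
            // classloader of a restarted job can never be garbage collected, so its
            // classes stay in metaspace and usage grows with every restart.
            CACHE.put(userClass, value);
        }
    }
]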