Hi,
what changed between versions 1.4.2 and 1.5.2 was the addition of the
application-level flow control mechanism, which changed a bit how the
network buffers are configured. This could be a potential culprit.
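For reference, since 1.5 the network buffer pool is sized from memory-fraction settings in flink-conf.yaml rather than a fixed buffer count, so containers can end up with a different memory footprint than under 1.4.2. A hedged sketch of the relevant keys (the values shown are the 1.5 defaults, given only for illustration, not as a recommendation):

```yaml
# flink-conf.yaml -- network buffer sizing in Flink 1.5+
# (values below are the shipped defaults; tune them to your container limits)
taskmanager.network.memory.fraction: 0.1    # fraction of JVM memory used for network buffers
taskmanager.network.memory.min: 67108864    # lower bound in bytes (64 MB)
taskmanager.network.memory.max: 1073741824  # upper bound in bytes (1 GB)
```

If the container is killed by the resource manager for exceeding its physical memory limit, lowering these bounds (or raising the container memory) would be one thing to experiment with.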
Since you said that the container ran for some time, I'm wondering whether
there is some
We don't set it anywhere, so I guess it's the default of 16. Do you think
that's too much?
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
…could impact the JM's memory footprint.
Best
Yun
From: eSKa
Sent: Tuesday, September 25, 2018 14:45
To: user@flink.apache.org
Subject: Re: JobManager container is running beyond physical memory limits
anyone?
Hello,
after switching from 1.4.2 to 1.5.2 we started to have problems with the JM
container.
Our use case is as follows:
- we get a request from a user
- we run a DataProcessing job
- once it finishes, we store the details in the DB
We have ~1000 jobs per day. After the version update, our container is dying
after ~1-2 days