When YARN kills a job because of memory, it usually means that the job has
used more memory than it requested. Since Flink's memory model consists not
only of the Java on-heap memory but also of off-heap memory such as RocksDB's,
it's usually harder to stay within the boundaries. The general shortcoming
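In Flink 1.8 on YARN, the main knobs for this are in flink-conf.yaml. As a
rough sketch (the values below are illustrative assumptions, not tuned
recommendations), you can reserve more container headroom for off-heap
allocations like RocksDB via the containerized heap cutoff:

    # flink-conf.yaml (Flink 1.8.x) -- illustrative values only
    taskmanager.heap.size: 8192m          # on YARN, this is the total container size
    containerized.heap-cutoff-ratio: 0.3  # fraction cut off for off-heap use (default 0.25)
    containerized.heap-cutoff-min: 600    # minimum cutoff in MB (default 600)

Raising the cutoff ratio shrinks the JVM heap but leaves more native memory
for RocksDB, which is often what keeps the container under its YARN limit.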
Hi, everyone:
I'm a Flink SQL user, and the version is 1.8.2.
Recently I have been confused about memory and backpressure. I have two jobs
on YARN, and they are frequently killed by YARN for using too much memory.
For one job, I have 3 TaskManagers and a parallelism of 6, and each
TaskManager has 8 GB of memory. The job reads from Kafka, one minute
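For reference, a deployment with that shape could be launched roughly like
this on Flink 1.8 (the jar name and slot count are assumptions inferred from
the description, not the poster's actual command):

    # 3 TaskManagers x 2 slots = parallelism 6; 8192 MB per TaskManager container
    flink run -m yarn-cluster -yn 3 -ys 2 -ytm 8192 -p 6 my-sql-job.jar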