…an OOM for a component like the JM that does not run business logic (job parallelism is 3000, with multiple agg operations and sinks)?
Replied Message
| From | Geng Biao |
| Date | 07/18/2022 23:31 |
| To | SmileSmile |
| Cc | user |
| Subject | Re: flink on yarn job always restart |
1. why does it receive SIGNAL 15?
2. is it because of some configuration? (e.g. deploy timeout causing kill?)
Replied Message
| From | Geng Biao |
| Date | 07/18/2022 22:36 |
| To | SmileSmile、user |
| Cc | |
| Subject | Re: flink on yarn job always restart |
Hi,
One possible direction is to check…
Replied Message
| From | Zhanghao Chen |
| Date | 07/18/2022 21:19 |
| To | SmileSmile、user |
| Cc | |
| Subject | Re: flink on yarn job always restart |
Hi, could you provide the whole JM log?
Best,
Zhanghao Chen
From: SmileSmile
Sent: Monday, July 18, 2022 20:46
To: user
Hi all,
We have hit a situation: parallelism is 3000 and the job contains multiple agg operations. Whenever the job recovers from a checkpoint or savepoint it is unrecoverable and restarts repeatedly.
The JM error log is:
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
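SIGNAL 15 is an external SIGTERM, so on YARN this usually means the NodeManager killed the JM container, most often for exceeding its memory limit; with parallelism 3000 and several agg operators the JM heap holding the ExecutionGraph and checkpoint metadata can grow well beyond the defaults. A minimal sketch of the relevant options, with purely illustrative sizes (on YARN they would normally go into flink-conf.yaml or be passed as dynamic properties on the flink run command line, not set in code):

import org.apache.flink.configuration.Configuration;

public class JobManagerMemorySketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Illustrative sizes only: give the JM enough heap for the ExecutionGraph and
        // checkpoint metadata of a 3000-parallelism job with several agg operators.
        conf.setString("jobmanager.memory.process.size", "4096m");
        conf.setString("jobmanager.memory.jvm-overhead.max", "1g");

        // Printed here only to show these are ordinary Flink configuration entries;
        // in a YARN deployment they belong in flink-conf.yaml or CLI -D options.
        System.out.println(conf);
    }
}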
Hi
I use Flink 1.12.4 on YARN. The job topology is kafka -> source -> flatmap -> window 1 min agg -> sink -> kafka. Checkpointing is enabled with a 20 s interval. When I cancel my job, some TMs cancel successfully, but some TMs get stuck in CANCELING and then the TM kills itself with ta…
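A TM that hangs in CANCELING and then kills its own process matches the behaviour of Flink's task cancellation watchdog: if a task does not finish cancelling within task.cancellation.timeout, the TaskManager process is terminated on purpose rather than left stuck. A hedged sketch of the two knobs involved, with illustrative values only:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CancellationTimeoutSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Illustrative values: how often the cancelling interrupt is repeated, and how
        // long a task may stay in CANCELING before the watchdog kills the TM process
        // (setting the timeout to 0 disables the watchdog entirely).
        conf.setString("task.cancellation.interval", "30000");   // 30 s
        conf.setString("task.cancellation.timeout", "300000");   // 5 min

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
        System.out.println("Cancellation config: " + conf);
    }
}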
Hi,
After failover I still get OutOfOrderSequenceException. Even when I disable checkpointing, the Kafka broker still returns OutOfOrderSequenceException to me.
At 2021-06-04 17:52:22, "Yun Gao" wrote:
Hi,
Have you checked whether the error occurs during normal execution or right after a failover?
Best,
Yun
Dear all:
The Flink version is 1.12.4 and the Kafka version is 1.1.1. The topology is very simple, source --> flatmap --> sink, with checkpointing enabled; the job fails after a few hours. The error message is:
Caused by: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka…
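OutOfOrderSequenceException comes from the Kafka producer's idempotence/transaction machinery (the broker sees a sequence number it did not expect), so the FlinkKafkaException above usually points at the transactional sink path rather than at the flatmap logic. A sketch of how the 1.12 connector's semantic and transaction timeout are chosen, useful for ruling that path in or out; the broker address, topic and timeout below are placeholders, not values taken from this thread:

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaSinkSemanticSketch {

    static FlinkKafkaProducer<String> buildSink(boolean exactlyOnce) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");   // placeholder
        // Must not exceed the broker's transaction.max.timeout.ms, otherwise
        // transactional commits fail after recovery.
        props.setProperty("transaction.timeout.ms", "900000");   // 15 min, placeholder

        KafkaSerializationSchema<String> schema = new KafkaSerializationSchema<String>() {
            @Override
            public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
                return new ProducerRecord<>("output-topic",       // placeholder topic
                        element.getBytes(StandardCharsets.UTF_8));
            }
        };

        // AT_LEAST_ONCE disables the transactional producer; if the
        // OutOfOrderSequenceException disappears with it, the problem sits in the
        // transactional/idempotent path between Flink 1.12 and the 1.1.1 brokers.
        FlinkKafkaProducer.Semantic semantic = exactlyOnce
                ? FlinkKafkaProducer.Semantic.EXACTLY_ONCE
                : FlinkKafkaProducer.Semantic.AT_LEAST_ONCE;

        return new FlinkKafkaProducer<>("output-topic", schema, props, semantic);
    }
}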
Best
Yun Tang
From: SmileSmile
Sent: Friday, July 3, 2020 14:01
To: 'user@flink.apache.org'
Subject: Checkpoint is disable, will history data in rocksdb be leak when job
restart?
Hi,
My job runs on Flink 1.10.1 with event time. Container memory usage rises by about 2 GB after each restart, and after several restarts the pod is killed by the OS. I found that history data is cleared when new data arrives, which calls onEventTime() to clearAllState. But my job does not need checkpo…
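The cleanup the poster describes (onEventTime() calling clearAllState) is the event-time cleanup timer that the window operator registers at window end plus allowed lateness; it fires only when the watermark advances, i.e. when newer data arrives, and it does not depend on checkpointing at all. A sketch of the same pattern in user code, using a hypothetical KeyedProcessFunction that clears its state from an event-time timer:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical cleanup function: keeps one Long per key and clears it when the
// event-time timer fires, mirroring the window operator's onEventTime() cleanup.
public class TimerCleanupSketch extends KeyedProcessFunction<String, Long, Long> {

    private static final long CLEANUP_DELAY_MS = 60_000L; // placeholder retention

    private transient ValueState<Long> lastValue;

    @Override
    public void open(Configuration parameters) {
        lastValue = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-value", Long.class));
    }

    @Override
    public void processElement(Long value, Context ctx, Collector<Long> out) throws Exception {
        lastValue.update(value);
        out.collect(value);
        Long ts = ctx.timestamp();
        if (ts != null) {
            // The timer fires once the watermark passes this point, which only happens
            // when newer data arrives and advances event time.
            ctx.timerService().registerEventTimeTimer(ts + CLEANUP_DELAY_MS);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        // State is dropped here whether or not checkpointing is enabled; RocksDB
        // reclaims the entry during compaction.
        lastValue.clear();
    }
}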