[ https://issues.apache.org/jira/browse/FLINK-29545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
xiaogang zhou updated FLINK-29545:
----------------------------------
Description:
The task DAG is shown in the attached file. The task starts consuming from the earliest offset and stops when the first checkpoint triggers. Is this normal? The sink is 0% busy while the second operator shows 100% backpressure, and in the checkpoint summary some of the subtasks are n/a. I tried to debug this and found that in triggerCheckpointAsync, the call to triggerCheckpointAsyncInMailbox took a long time; it looks like this has something to do with logCheckpointProcessingDelay. Is there any fix for this issue? Can anybody help me with it? Thanks.

> Kafka consumption stops when the first checkpoint triggers
> ----------------------------------------------------------
>
>                 Key: FLINK-29545
>                 URL: https://issues.apache.org/jira/browse/FLINK-29545
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing, Runtime / Network
>    Affects Versions: 1.13.3
>            Reporter: xiaogang zhou
>            Priority: Critical
>         Attachments: backpressure 100 busy 0.png, task acknowledge na.png, task dag.png
>
>
> The task DAG is shown in the attached file. The task starts consuming from the earliest offset and stops when the first checkpoint triggers.
>
> Is this normal? The sink is 0% busy while the second operator shows 100% backpressure.
>
> Checking the checkpoint summary, we can see that some of the subtasks are n/a.
>
> I tried to debug this issue and found that in triggerCheckpointAsync, the call to triggerCheckpointAsyncInMailbox took a long time.
>
> It looks like this has something to do with logCheckpointProcessingDelay. Is there any fix for this issue?
>
> Can anybody help me with this issue?
>
> Thanks.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
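Editor's note, not part of the original report: the reported symptoms (100% backpressure upstream of an idle sink, and subtasks stuck at n/a while the first checkpoint is in flight) match the situation where aligned checkpoint barriers cannot pass a backpressured channel. In Flink 1.13 this can often be mitigated by enabling unaligned checkpoints, which let barriers overtake buffered in-flight data. A minimal flink-conf.yaml sketch, assuming otherwise-default checkpoint settings:

```yaml
# Hedged mitigation sketch (assumption, not a confirmed fix for FLINK-29545):
# with unaligned checkpoints, barriers overtake buffered records, so a
# backpressured channel no longer blocks barrier alignment.
execution.checkpointing.unaligned: true
# Start aligned; fall back to unaligned only if alignment takes longer than this.
execution.checkpointing.alignment-timeout: 30 s
# Allow the first checkpoint extra time to complete under heavy backpressure.
execution.checkpointing.timeout: 10 min
```

Whether this applies here depends on why triggerCheckpointAsyncInMailbox is delayed; if the mailbox thread itself is blocked, unaligned checkpoints alone may not help.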