Hi Mohammad,
Please share the logs in text format.
On Mon, Dec 30, 2024 at 1:06 PM Mohammad Aamir Iqubal wrote:
> Hi Samrat,
>
> I think the issue is the amount of data collected while checkpointing.
> With a 29 KB message the application is tuned properly, but with 102 KB we
> are not able to t…
Hi Md Aamir,
Apologies for the delayed response due to the festive season. Thank you for
providing the additional details.
The information shared is helpful but still too sparse to pinpoint the root
cause of the backpressure issue. To proceed further, it would be great if
you could share the logs fro…
Hi Samrat,
PFB details:
1) Flink version 1.17.
2) We are only modifying an existing JSON field, so in terms of state we
manage just that modification.
3) Compression type snappy is used; default fetch and batch sizes are used.
4) The process operator is the bottleneck, hence the source is getting
back pr…
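For reference, the sink settings described in point 3 (snappy compression with Kafka's default batch sizes) would correspond roughly to the producer properties sketched below. This is a minimal illustration, not your actual config; the explicit defaults are shown only for clarity:

```java
import java.util.Properties;

// Sketch of Kafka producer properties matching the setup described above:
// snappy compression, with batch.size and linger.ms left at Kafka defaults.
public class SinkProducerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("compression.type", "snappy");
        // Kafka client defaults, spelled out for clarity; omit them to
        // fall back to the defaults implicitly.
        props.setProperty("batch.size", "16384");
        props.setProperty("linger.ms", "0");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("compression.type"));
    }
}
```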
Hi Md Aamir,
Thank you for providing the details of your streaming application setup.
To assist you better in identifying and resolving the backpressure issue, I
have a few follow-up questions:
1. Which version of Apache Flink are you using?
2. Is the ProcessFunction stateful? If yes, w…
Hi Team,
I am running a streaming application in a performance environment. The
source is Kafka and the sink is also Kafka, but the sink Kafka cluster is
secured by Kerberos. Message size is 102 KB; source parallelism is 16,
process parallelism is 80, and sink parallelism is 12.
I am using a process function to re…
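For a Kerberos-secured Kafka sink like the one described above, the relevant flink-conf.yaml entries typically look like the sketch below. The keytab path, principal, and checkpoint interval are placeholders, not values from this thread:

```yaml
# Sketch of flink-conf.yaml entries for a Kerberos-secured Kafka sink
# (keytab path, principal, and interval are placeholders)
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
security.kerberos.login.contexts: KafkaClient
execution.checkpointing.interval: 60 s
```

With this in place, Flink registers a JAAS `KafkaClient` context from the keytab, so the Kafka sink authenticates without any per-job JAAS files.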