Hi Wei,
I had a similar issue when I changed from FlinkKafkaConsumer to
KafkaSource. In my case, the _metadata size increased inside the
checkpoint. I tried rolling back to the old Flink version with the
old checkpoint/savepoint, and then changing the uid of the Flink Kafka
source and sink
Hi Wei,
From the error message, I guess the cause of the issue is that the events
sent by the SplitEnumerator to the source exceed the default Akka frame
size. You can set the 'akka.framesize' option to increase the maximum Akka
message size, or try to decrease the event size.
When you use 'FlinkKafkaConsumer' to
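For reference, here is a minimal sketch of how 'akka.framesize' could be raised in flink-conf.yaml; the 31457280b value is only an illustrative example, not a recommendation, and the right value depends on the size of your enumerator events:

```yaml
# flink-conf.yaml
# Maximum size of messages transported between the JobManager and
# TaskManagers via Akka. The default is 10485760b (10 MiB); this
# example triples it. Choose a value larger than the oversized
# payload reported in the exception message.
akka.framesize: 31457280b
```

Note that raising the frame size only hides very large events; if the enumerator state keeps growing, shrinking the events themselves is the more sustainable fix.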
Hi Team,
We hit an issue after upgrading our job from Flink 1.12 to 1.15: there is
a consistent akka.remote.OversizedPayloadException after the job restarts:
Transient association error (association remains live)
akka.remote.OversizedPayloadException: Discarding oversized payload sent to
Actor[akka.