Good suggestion,

Turns out hard-coding the config works... Thank you for the suggestion.
It remains to be seen exactly what typo I made in the yaml.
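
For the record, the yaml entry itself should be nothing more than the
following (8mb is just an illustrative value; it only needs to exceed the
~5.4 MB record from the error):

  collect-sink.batch-size.max: 8mb

so whatever I typed must deviate from that in some subtle way.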

Best Regards,
JM



On Thu, Sep 18, 2025 at 4:41 PM Gabor Somogyi <[email protected]>
wrote:

> Hi Jean-Marc,
>
> Could you please double-check that your code includes the mentioned fix,
> and give simple repro steps?
> Please hardcode the batch size value in the code [1] to avoid any
> yaml-to-config issues.
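>
> For example, something along these lines in the job code (a sketch of the
> programmatic route, so no yaml parsing is involved; the 8mb value is
> illustrative):
>
>   import org.apache.flink.configuration.Configuration;
>   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>
>   // Hardcode the batch size so flink-conf.yaml is taken out of the picture.
>   Configuration conf = new Configuration();
>   conf.setString("collect-sink.batch-size.max", "8mb");
>   StreamExecutionEnvironment env =
>       StreamExecutionEnvironment.getExecutionEnvironment(conf);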
> I'd be happy to take a look if you can help out a bit.
>
> BR,
> G
>
> [1]
> https://github.com/apache/flink/pull/25764/files#diff-278bac11f68be56ee499b24afe5e1d53a7c61b4d636654fd96b4167e2a45cbacR125
>
>
> On Thu, Sep 18, 2025 at 5:30 PM Jean-Marc Paulin <[email protected]>
> wrote:
>
>> Hi,
>>
>> Using Flink 1.20.1, we get this error when trying to read a savepoint:
>>
>> Caused by: java.lang.RuntimeException: Record size is too large for
>> CollectSinkFunction. Record size is 5411925 bytes, but max bytes per batch
>> is only 2097152 bytes. Please consider increasing max bytes per batch value
>> by setting collect-sink.batch-size.max
>>         at
>> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction.invoke(CollectSinkFunction.java:288)
>> ...
>>
>> I tried setting collect-sink.batch-size.max in flink-conf.yaml, but I
>> still hit the same error; it seems the setting is not taken into account.
>> I see there is a fix for this in 1.20.1
>> (https://github.com/apache/flink/pull/25764), but I still face the same
>> issue.
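>>
>> For context, the read is driven roughly like this (a simplified sketch;
>> the savepoint path, operator uid, state name and state type are
>> placeholders):
>>
>>   import org.apache.flink.api.common.typeinfo.Types;
>>   import org.apache.flink.state.api.OperatorIdentifier;
>>   import org.apache.flink.state.api.SavepointReader;
>>   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>
>>   StreamExecutionEnvironment env =
>>       StreamExecutionEnvironment.getExecutionEnvironment();
>>   SavepointReader savepoint =
>>       SavepointReader.read(env, "/path/to/savepoint");
>>   savepoint
>>       .readListState(
>>           OperatorIdentifier.forUid("my-operator"), "state-name",
>>           Types.STRING)
>>       .executeAndCollect() // the collect sink that rejects large records
>>       .forEachRemaining(System.out::println);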
>>
>> JM
>>
>>
>>
