> [...] assumption?
>
> G
>
> On Tue, Jul 23, 2024 at 12:46 PM Salva Alcántara <
>
> [...] tweak. More precisely, this is how my code looks now (kudos to my dear
> colleague Den!):
>
> ```
> TypeSerializer serializer =
>     dataStream.getType().createSerializer(env.getConfig());
> String accumulatorName = "dataStreamCollect_" + [...]ator();
> CollectResultIterator iterator =
>     new CollectResultIterator<>(
>         operator.getOperatorIdFuture(),
>         serializer,
>         accumulatorName,
>         [...]
> iterator.setJobClient(jobClient);
>
> var clientAndIterator = new ClientAndIterator<>(jobClient, iterator);
> List results = new ArrayList<>(limit);
> while (limit [...]
> ```
>
> [...] to the CollectSinkOperatorFactory constructor here:
> - https://github.com/apache/flink/blob/release-1.15.4/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java#L1378
>
> This works but it's obviously in[...]
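The batch size handed to the CollectSinkOperatorFactory constructor is ultimately a byte budget per batch. As a side note, a minimal sketch of how a string like "10mb" maps to a byte count, assuming 1024-based units as in Flink's MemorySize — `parseMemorySize` below is a hypothetical simplified helper, not Flink's actual class:

```java
import java.util.Locale;

public class MemorySizeSketch {
    // Hypothetical helper: converts "10mb", "512kb", "1gb", or a plain byte
    // count into bytes, assuming binary (1024-based) units.
    static long parseMemorySize(String text) {
        String s = text.trim().toLowerCase(Locale.ROOT);
        long factor = 1L;
        if (s.endsWith("gb")) { factor = 1024L * 1024 * 1024; s = s.substring(0, s.length() - 2); }
        else if (s.endsWith("mb")) { factor = 1024L * 1024; s = s.substring(0, s.length() - 2); }
        else if (s.endsWith("kb")) { factor = 1024L; s = s.substring(0, s.length() - 2); }
        else if (s.endsWith("b")) { s = s.substring(0, s.length() - 1); }
        return Long.parseLong(s.trim()) * factor;
    }

    public static void main(String[] args) {
        long maxBatch = parseMemorySize("10mb");   // the value used in the thread
        long recordSize = 9_623_137L;              // size reported in the error
        long defaultMax = parseMemorySize("2mb");  // the 2097152-byte default cap
        System.out.println(recordSize > defaultMax);  // record exceeds the default
        System.out.println(recordSize <= maxBatch);   // but fits within 10mb
    }
}
```

With these numbers, any cap of roughly 9.2 MiB or more would accommodate the record, which is why "10mb" suffices.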
> On Mon, Jul 22, 2024 at 10:04 AM Salva Alcántara <
> salcantara...@gmail.com> wrote:
>
> The same happens with this slight variation:
>
> ```
> Configuration config = new Configuration();
> [...]
> ```
> Thanks for your suggestion. Unfortunately, this does not work, I still
> get the same error message:
>
> ```
> Record size is too large for CollectSinkFunction. Record size is
> 9623137 bytes, but [...]
> ```
> Configuration config = new Configuration();
> config.setString("collect-sink.batch-size.max", "10mb");
> StreamExecutionEnvironment env =
>     StreamExecutionEnvironment.getExecutionEnvironment(config);
> SavepointReader [...]
>
> var matcher = savepoint.readKeyedState("Raw Triggers", new
>     MatcherReader());
> var matcherState = matcher.executeAndCollect(1000);
> ```
>
> I have tried other ways but none has worked (the setting is always
> ignored in the end).
>
> Regards,
>
> Salva
>
> On Sun, Jul 21, 2024 at 9:10 AM Zhanghao Chen
> wrote:
>
>> Hi, you could increase it as follows:
>>
>> [...]
>> config.setString("collect-sink.batch-size.max", "10mb");
>> StreamExecutionEnvironment env =
>>     StreamExecutionEnvironment.getExecutionEnvironment(config);
>> --
>> *From:* Salva Alcántara
>> *Sent:* Saturday, July 20, 2024 15:05
>> *To:* user
>> *Subject:* SavepointReader: Record size is too large for CollectSinkFunction
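For completeness, Zhanghao's programmatic suggestion above could also be expressed as a config-file fragment — with the assumption that `collect-sink.batch-size.max` is honored when set globally, which, as the rest of the thread shows, may not reach the savepoint-reading code path:

```yaml
# flink-conf.yaml (sketch): raise the collect sink batch cap globally.
# Assumption: collect-sink.batch-size.max is read from the cluster config;
# the thread suggests this may be ignored for SavepointReader jobs.
collect-sink.batch-size.max: 10mb
```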
Hi all!

I'm trying to debug a job via inspecting its savepoints but I'm getting
this error message:

```
Caused by: java.lang.RuntimeException: Record size is too large for
CollectSinkFunction. Record size is 9627127 bytes, but max bytes per batch
is only 2097152 bytes. Please consider increasing [...]
```
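To make the failing check concrete, here is a simplified stand-in for the guard that produces this error — assumed logic, not Flink's actual implementation, with the message paraphrased and the sizes taken from the error above:

```java
public class BatchSizeCheck {
    // Simplified stand-in for the CollectSinkFunction guard: a serialized
    // record must fit within the configured max bytes per batch.
    // (Message text paraphrased from the thread, not Flink's exact wording.)
    static void checkRecordSize(long recordBytes, long maxBatchBytes) {
        if (recordBytes > maxBatchBytes) {
            throw new RuntimeException(
                    "Record size is too large for CollectSinkFunction. Record size is "
                            + recordBytes + " bytes, but max bytes per batch is only "
                            + maxBatchBytes + " bytes.");
        }
    }

    public static void main(String[] args) {
        try {
            checkRecordSize(9_627_127L, 2_097_152L); // the sizes from the error
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
        checkRecordSize(9_627_127L, 10L * 1024 * 1024); // no exception with a 10mb cap
        System.out.println("fits with a 10mb batch cap");
    }
}
```

The 2097152-byte default is 2 MiB, so a ~9.2 MiB record can never fit unless the cap is raised.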