Re: flight buffer local storage

2024-07-22 Thread Enric Ott
Thanks, Zhanghao. I think it's the async upload mechanism that helps mitigate the in-flight buffer materialization latency, and the execution vertex restart procedure just reads the in-flight buffers and the local TaskStateSnapshots to do its job.
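For readers landing here: the in-flight-buffer persistence being discussed is what unaligned checkpoints provide, and local recovery is what keeps a task-local copy of state snapshots. A minimal sketch of enabling both (not from this thread; the checkpoint interval is illustrative):

```
import org.apache.flink.configuration.CheckpointingOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// Keep a local copy of TaskStateSnapshots so a restarted execution vertex
// can recover from local disk instead of remote checkpoint storage.
conf.set(CheckpointingOptions.LOCAL_RECOVERY, true);

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.enableCheckpointing(10_000L); // checkpoint every 10s (illustrative)
// Unaligned checkpoints persist in-flight buffers as part of the checkpoint,
// which the async upload then ships to checkpoint storage.
env.getCheckpointConfig().enableUnalignedCheckpoints();
```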

Re: Flink state

2024-07-22 Thread Saurabh Singh
Hi Banu, RocksDB is intelligently built to clear any unneeded state from its purview, so you should be good: any required cleanup will be done automatically by RocksDB itself. From the current documentation, it looks quite hard to relate Flink-internal DS activity to RocksDB DS activity. In m...
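For context (not part of Saurabh's reply): when state TTL is configured, the RocksDB backend performs this cleanup through a compaction filter that has to be enabled on the state descriptor. A minimal sketch, with illustrative names and TTL values:

```
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// TTL config that lets RocksDB drop expired entries during compaction.
StateTtlConfig ttlConfig = StateTtlConfig.newBuilder(Time.hours(1))
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        // Re-check the expiration timestamp after every 1000 entries
        // processed during a compaction run.
        .cleanupInRocksdbCompactFilter(1000)
        .build();

ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("last-seen", String.class);
descriptor.enableTimeToLive(ttlConfig);
```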

Re: SavepointReader: Record size is too large for CollectSinkFunction

2024-07-22 Thread Salva Alcántara
The same happens with this slight variation:

```
Configuration config = new Configuration();
config.setString("collect-sink.batch-size.max", "100mb");
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.configure(config);
SavepointReader savepoint = Savepoint...
```
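One variation worth trying (an assumption on my side, not something confirmed in this thread): pass the Configuration when the environment is created, and set the value through the typed ConfigOption backing "collect-sink.batch-size.max", so the setting is part of the environment's initial configuration rather than applied afterwards via env.configure():

```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;

Configuration config = new Configuration();
// Typed option instead of a raw string key.
config.set(CollectSinkOperatorFactory.MAX_BATCH_SIZE, MemorySize.parse("100mb"));

// Hand the configuration to the environment at creation time.
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(config);
```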

Re: SavepointReader: Record size is too large for CollectSinkFunction

2024-07-22 Thread Salva Alcántara
Hi Zhanghao, thanks for your suggestion. Unfortunately, this does not work; I still get the same error message:

```
Record size is too large for CollectSinkFunction. Record size is 9623137 bytes,
but max bytes per batch is only 2097152 bytes. Please consider increasing max
bytes per batch value b...
```

Note that 2097152 bytes is the 2 MiB default of collect-sink.batch-size.max, so the configured 100mb value is apparently not taking effect.
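As a sanity check on the units involved (a standalone sketch, not from the thread), the values in the error message show that the configured limit would be ample if it were actually applied:

```
import org.apache.flink.configuration.MemorySize;

long recordSize = 9_623_137L;                              // from the error message
long defaultMax = MemorySize.parse("2mb").getBytes();      // 2097152 bytes
long configuredMax = MemorySize.parse("100mb").getBytes(); // 104857600 bytes

System.out.println(recordSize > defaultMax);    // true  -> fails with the default
System.out.println(recordSize > configuredMax); // false -> would fit if applied
```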

Re: Flink Slot request bulk is not fulfillable!

2024-07-22 Thread Saurabh Singh
Hi Li, the error suggests that the job is not able to acquire the required TaskManager task slots within the configured time duration of 5 minutes. Jobs run on the TaskManagers (worker nodes). Helpful link: https://nightlies.apache.org/flink/flink-docs-master/docs/concepts/flink-architecture/#anatomy-...
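Not from Saurabh's reply, but the 5-minute window matches Flink's default slot request timeout (slot.request.timeout, 300000 ms). A minimal sketch of the two usual knobs, assuming a locally configured environment (in a real cluster these would normally go into the Flink configuration file instead):

```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// Wait up to 10 minutes (in ms) for slots instead of the 5-minute default,
// e.g. when TaskManagers are provisioned slowly by an autoscaler.
conf.set(JobManagerOptions.SLOT_REQUEST_TIMEOUT, 600_000L);
// Offer more slots per TaskManager so the slot request bulk can be fulfilled.
conf.set(TaskManagerOptions.NUM_TASK_SLOTS, 4);

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
```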