Thanks, Zhanghao.
I think it's the async upload mechanism that helps mitigate the latency of
materializing the in-flight buffers, and the execution vertex restart
procedure just reads the in-flight buffers and the local TaskStateSnapshots
to do its job.
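If this refers to unaligned checkpoints combined with task-local recovery, a minimal sketch of enabling both is below; the option names and APIs are standard Flink ones, but treating them as the setup described above is an assumption.
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration config = new Configuration();
// Local recovery lets a restarted execution vertex read its TaskStateSnapshot
// from local disk instead of fetching it from remote checkpoint storage.
config.setString("state.backend.local-recovery", "true");

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(config);

// Unaligned checkpoints persist in-flight buffers as part of the checkpoint;
// the async upload keeps that work off the processing critical path.
env.enableCheckpointing(10_000);
env.getCheckpointConfig().enableUnalignedCheckpoints();
```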
Hi Banu,
RocksDB is built to clear any state it no longer needs from its purview, so
you should be good: any required cleanup will be done automatically by
RocksDB itself.
From the current documentation, it looks quite hard to relate Flink
internal DS activity to RocksDB DS activity. In m
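For state that should actually expire rather than just be overwritten, Flink's state TTL can delegate the cleanup to RocksDB's compaction filter. A minimal sketch (the descriptor name and TTL value here are made up):
```
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// Expire entries 7 days after they were written; RocksDB drops expired
// entries as a side effect of its normal compaction.
StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.days(7))
        // Refresh the timestamp used for expiry checks after every
        // 1000 entries processed by the compaction filter.
        .cleanupInRocksdbCompactFilter(1000)
        .build();

ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("my-state", String.class);
descriptor.enableTimeToLive(ttlConfig);
```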
The same happens with this slight variation:
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration config = new Configuration();
config.setString("collect-sink.batch-size.max", "100mb");
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
env.configure(config);
SavepointReader savepoint = Savepoint
```
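One more variation that might be worth trying, purely as a guess in case the option is only read when the environment is created, is passing the Configuration to getExecutionEnvironment directly:
```
// Sketch only: same option as above, but supplied at environment creation time.
Configuration config = new Configuration();
config.setString("collect-sink.batch-size.max", "100mb");
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(config);
```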
Hi Zhanghao,
Thanks for your suggestion. Unfortunately, this does not work; I still get
the same error message:
```
Record size is too large for CollectSinkFunction. Record size is 9623137
bytes, but max bytes per batch is only 2097152 bytes.
Please consider increasing max bytes per batch value b
```
Hi Li,
The error suggests that the job is not able to acquire the required TaskManager
task slots within the configured timeout of 5 minutes.
Jobs run on the TaskManagers (worker nodes). Helpful link -
https://nightlies.apache.org/flink/flink-docs-master/docs/concepts/flink-architecture/#anatomy-
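For completeness, a sketch of the usual remedies, assuming a standalone/session cluster: provide more slots, or raise the slot request timeout in flink-conf.yaml (its default of 300000 ms matches the 5 minutes above):
```
# flink-conf.yaml (sketch; adjust to the actual deployment)

# Each TaskManager offers this many slots, so fewer TaskManagers are
# needed to cover the job's parallelism.
taskmanager.numberOfTaskSlots: 4

# Timeout for acquiring slots; default is 300000 ms (5 minutes). Raise it
# only if TaskManagers genuinely need longer to become available.
slot.request.timeout: 600000
```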