Hello,
I have implemented a custom source that reads tables in parallel, with each
split corresponding to a table. The implementation can be found here:
https://github.com/adasari/mastering-flink/blob/main/app/src/main/java/org/example/paralleljdbc/DatabaseSource.java
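The one-split-per-table idea can be sketched without Flink as follows — a minimal, self-contained illustration, not the linked DatabaseSource itself; the `TableSplit` record and `readSplit` helper are hypothetical stand-ins for a Flink source split and its reader:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TableSplitDemo {

    // Hypothetical stand-in for a per-table source split; in a real source
    // each split would carry the table it queries over JDBC.
    record TableSplit(String tableName) {}

    // Simulated split reader (a real reader would run "SELECT ... FROM <table>").
    static String readSplit(TableSplit split) {
        return "rows-of-" + split.tableName();
    }

    public static void main(String[] args) throws Exception {
        List<TableSplit> splits = List.of(
                new TableSplit("orders"),
                new TableSplit("customers"),
                new TableSplit("products"));

        // Each split is handed to its own worker, mirroring the
        // one-table-per-split parallelism described above.
        ExecutorService pool = Executors.newFixedThreadPool(splits.size());
        List<Future<String>> results = new ArrayList<>();
        for (TableSplit split : splits) {
            results.add(pool.submit(() -> readSplit(split)));
        }
        for (int i = 0; i < splits.size(); i++) {
            System.out.println(splits.get(i).tableName() + " -> " + results.get(i).get());
        }
        pool.shutdown();
    }
}
```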
However, it see
Hi, Lim.
What about adding the 'sink.buffer-flush.max-rows' and 'sink.buffer-flush.interval' [1] options to merge records with the same key and reduce tombstone messages?
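For illustration, these two options would be set in the sink DDL roughly like this — the table name, topic, and schema below are hypothetical; only the buffer-flush options (which must be set together) come from the upsert-kafka docs in [1]:

```sql
CREATE TABLE orders_sink (
  order_id BIGINT,
  amount   DOUBLE,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json',
  -- buffer up to 1000 rows, flushing at least every second; rows with the
  -- same key are merged in the buffer, so fewer changelog records and
  -- tombstones reach Kafka
  'sink.buffer-flush.max-rows' = '1000',
  'sink.buffer-flush.interval' = '1s'
);
```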
best,
Yanquan
[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/upsert-kafka/

On 2024-10, Qing Lim wrote: