Hi folks,
Is there a way we can configure the *dead letter queue* (DLQ) for the Kafka
source connector with the *Table API*? Is the DataStream API the only option
for now?
Thanks,
Eric
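For readers unfamiliar with the pattern Eric is asking about, here is a minimal, framework-free sketch of dead-letter routing: records that fail to parse are diverted to a side channel instead of failing the whole job. The function and argument names below are illustrative, not Flink API.

```python
def process_with_dlq(records, parse):
    """Parse each record; divert failures to a dead-letter list."""
    good, dead_letters = [], []
    for record in records:
        try:
            good.append(parse(record))
        except Exception as exc:  # poison pill: divert, don't crash the job
            dead_letters.append((record, str(exc)))
    return good, dead_letters

# Usage: one malformed record is diverted, the rest flow through.
good, dlq = process_with_dlq(["1", "2", "oops", "4"], int)
print(good)                # [1, 2, 4]
print([r for r, _ in dlq]) # ['oops']
```

In Flink's DataStream API this shape is typically achieved with a deserialization error handler or a side output; whether the Table API exposes an equivalent knob is exactly the open question here.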
Hi all,
We are trying to ingest a large amount of data (20 TB) from S3 using the Flink
filesystem connector to bootstrap a Hudi table. The data is well partitioned
in S3 by date/time, but we have been facing OOM issues in the Flink jobs, so we
wanted to update the Flink job to ingest the data chunk by chunk (p
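The chunk-by-chunk idea being described can be sketched outside of Flink: rather than reading all 20 TB in one job, process one date/time partition at a time so only a bounded amount of data is in flight. The names `ingest_in_chunks` and `ingest_partition` are hypothetical, not Flink or Hudi API.

```python
def ingest_in_chunks(partitions, ingest_partition):
    """Ingest partitions one at a time, yielding per-chunk record counts,
    so memory use is bounded by the largest single partition."""
    for partition in sorted(partitions):
        yield partition, ingest_partition(partition)

# Usage with a stubbed-out ingest step standing in for the S3 read:
data = {"dt=2024-01-01": ["a", "b"], "dt=2024-01-02": ["c"]}
counts = dict(ingest_in_chunks(data, lambda p: len(data[p])))
print(counts)  # {'dt=2024-01-01': 2, 'dt=2024-01-02': 1}
```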
Hello,
Is anyone familiar with the "blob server connection"? We have constantly
been seeing the "Error while executing Blob connection" error, which
sometimes causes a job to get stuck in the middle of a run if there are too
many connection errors and eventually causes a failure, though most of the time