Hi, I am using Flink CDC to stream CDC changes into an Iceberg table. When I first run the Flink job against a topic that contains the full snapshot of a table, the job runs out of heap memory because Flink tries to buffer all of that data within my 15-minute checkpointing interval. The only workaround I have right now is to pass *-ytm 8192 -yjm 2048m* for a table with 10M rows, and then reduce the memory once Flink has consumed the whole backlog. Is there a way to tell the Flink CDC code to trigger a checkpoint earlier, or to throttle the consumption speed? (I would have expected backpressure to handle this.)
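To make the question concrete, here is a minimal sketch of the kind of setup I mean, assuming the flink-cdc MySQL connector (all connection details are placeholders, and the splitSize/fetchSize values are illustrative, not something I have validated):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;

public class CdcSnapshotTuning {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint more often during the initial snapshot so state is
        // flushed before the whole table accumulates in memory
        // (the 60s interval here is a guess, not a recommendation).
        env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000L);

        // Hypothetical connection parameters; splitSize/fetchSize are the
        // knobs I am wondering about for limiting the per-chunk memory
        // footprint of the initial snapshot phase.
        MySqlSource<String> source = MySqlSource.<String>builder()
                .hostname("db-host")
                .port(3306)
                .databaseList("mydb")
                .tableList("mydb.mytable")
                .username("user")
                .password("secret")
                .splitSize(4096)   // rows per snapshot chunk
                .fetchSize(1024)   // rows fetched per poll within a chunk
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "mysql-cdc-source")
           .print();

        env.execute("cdc-snapshot-tuning");
    }
}

If the job is instead reading Debezium records from a Kafka topic rather than directly from the database, I assume the same throttling question would apply to the Kafka source side instead.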
--
Ayush Chauhan
Software Engineer | Data Platform