We are pulling rows from a database, maybe 100k rows at a time, and for each
row we send it to Kafka.

Now the problem is, if any error happens during the DB pull (such as an
earthquake), we stop.
Next time we wake up, we can't write the same records into Kafka again,
otherwise there might be over-counting downstream.


Does Kafka support a "batch mode" with commit? Basically, at the start of my
session I would declare "start transaction", and after all the records have
been pulled and sent to Kafka, I would declare "commit".
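
To make it concrete, here is a minimal sketch of the pattern I have in mind,
using what I understand of the Java producer's transactional API. The broker
address, topic name, transactional.id, and the pullRowsFromDb() helper are
all placeholders, not our real setup:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BatchExport {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // A stable transactional.id lets the broker fence stale producer
            // instances and resolve in-flight transactions after a crash.
            props.put("transactional.id", "db-export-1");     // placeholder id

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                producer.beginTransaction();                  // "start transaction"
                try {
                    for (String row : pullRowsFromDb()) {     // hypothetical DB helper
                        producer.send(new ProducerRecord<>("rows", row));
                    }
                    producer.commitTransaction();             // "commit"
                } catch (Exception e) {
                    // On failure, nothing from this batch becomes visible to
                    // consumers reading with read_committed.
                    producer.abortTransaction();
                    throw e;
                }
            }
        }

        // Stand-in for the real DB pull; details omitted.
        private static List<String> pullRowsFromDb() {
            return List.of("row1", "row2");
        }
    }

(I believe downstream consumers would also need isolation.level=read_committed
so that records from aborted batches stay invisible, but please correct me if
I have that wrong.)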

Thanks
Yang
