Hi, I would like to know how to handle the following scenario while processing events in a Kafka Streams application:
1. The streams application needs data from a GlobalKTable, which is backed by a topic populated by some other service/application. If the streams application starts receiving events from its input topic before that other service has loaded the required data, the GlobalKTable lookup fails. The application currently handles the exception by logging an error and moving on to the next event. Because Kafka Streams commits offsets automatically (note that Streams forces enable.auto.commit to false and commits on its own schedule, controlled by commit.interval.ms), the consumer keeps polling the next batch and eventually commits the offsets of the previous one, so events that failed during processing are effectively lost. Instead of moving on to the next event when an error occurs, I want to halt processing and keep retrying the failed event until the application-level problem is resolved. How can I achieve this?
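One common pattern for this is to block inside the processing step itself: if `process()` does not return, the stream task makes no progress, polls no further records, and commits nothing, so the failed event is retried rather than skipped. Below is a minimal plain-Java sketch of that retry-until-available loop under my own assumptions; the name `retryUntilPresent` and the simulated store are hypothetical illustrations, not Kafka Streams APIs, and in a real application the `Supplier` would wrap the GlobalKTable store lookup inside a custom `Processor`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class BlockingLookup {

    // Retries a lookup with a fixed backoff until it yields a value.
    // Called from inside a Processor's process(), this blocks the stream
    // thread, so no later records are processed or committed meanwhile.
    public static <V> V retryUntilPresent(Supplier<Optional<V>> lookup,
                                          long backoffMillis) {
        while (true) {
            Optional<V> result = lookup.get();
            if (result.isPresent()) {
                return result.get();
            }
            try {
                Thread.sleep(backoffMillis); // wait before retrying the same record
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("interrupted while retrying lookup", e);
            }
        }
    }

    public static void main(String[] args) {
        // Simulated GlobalKTable state that another service populates late:
        // the value only appears on the third lookup attempt.
        Map<String, String> store = new HashMap<>();
        AtomicInteger attempts = new AtomicInteger();

        String value = retryUntilPresent(() -> {
            if (attempts.incrementAndGet() == 3) {
                store.put("order-1", "enriched");
            }
            return Optional.ofNullable(store.get("order-1"));
        }, 10);

        System.out.println(value + " after " + attempts.get() + " attempts");
        // → enriched after 3 attempts
    }
}
```

Note the trade-off: blocking stalls the entire stream thread (and stream time), so you would normally add a cap on retries or route the record to a dead-letter topic if the data never arrives.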