Hi Kafka Group, I need to pull data from a topic and index it into Elasticsearch with the Bulk API. I want to commit only the batches that have been successfully indexed, while continuing to read further from the same topic. Auto-commit is off.
My current loop looks roughly like this:

    List<Message> batch = ...;
    while (iterator.hasNext()) {
        batch.add(iterator.next().message());
        if (batch.size() == 50) {
            // Once the Bulk API call is successful, it should commit
            // the batch's offsets to ZooKeeper.
            executor.submit(/* process batch, then commit via consumerConnector */);
            batch = new ArrayList<>(); // new batch buffer
        }
    }

The problem is that the commitOffsets API commits all messages that have been read so far, not just the batch. What is the best way to keep reading while committing only after another thread has finished processing a batch successfully? This leads to fragmentation of the consumer offsets, so what is the best way to implement a continuous reading stream and commit a range of offsets? Is the SimpleConsumer a better approach for this?

Thanks,
Bhavesh
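For reference, here is a minimal, self-contained sketch of the at-least-once pattern I think I need, with the reading loop pausing until the bulk call succeeds before committing. Note the names here (runPipeline, indexBulk, BatchCommitSketch) are hypothetical stand-ins for the Kafka consumer iterator, the Elasticsearch Bulk API call, and the ZooKeeper offset commit; this is not the real Kafka/ES API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "commit only after the batch succeeds". The commit here is
// just remembering the last offset whose batch was fully indexed.
public class BatchCommitSketch {
    static final int BATCH_SIZE = 3;

    // Stand-in for the Elasticsearch bulk call; returns true on success.
    static boolean indexBulk(List<String> batch) {
        return !batch.isEmpty();
    }

    // Reads the whole "topic", indexing in batches of BATCH_SIZE and
    // committing only after each bulk call succeeds. Returns the last
    // committed offset, or -1 if nothing was committed.
    static long runPipeline(List<String> topic) {
        long committedOffset = -1;
        List<String> batch = new ArrayList<>();
        for (int offset = 0; offset < topic.size(); offset++) {
            batch.add(topic.get(offset));
            if (batch.size() == BATCH_SIZE) {
                // Pause reading until the bulk call succeeds, then commit.
                // Committing from a separate thread while this loop keeps
                // reading would commit offsets that are not yet indexed.
                if (indexBulk(batch)) {
                    committedOffset = offset;
                }
                batch = new ArrayList<>();
            }
        }
        return committedOffset;
    }

    public static void main(String[] args) {
        // Two full batches commit; "m6" stays buffered and uncommitted.
        System.out.println(runPipeline(
                List.of("m0", "m1", "m2", "m3", "m4", "m5", "m6")));
    }
}
```

The synchronous commit trades some throughput for correctness; the question above is whether there is a way to get the same guarantee without blocking the reader.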