Imran,
Remember too, that different threads will always be processing a different
set of partitions. No two threads will ever own the same partition
simultaneously. A consumer connector can own many partitions (split among
its threads), each with a different offset. So, yes, it is complicated.
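(A minimal sketch of that ownership with the 0.8 high-level consumer -- the
topic name, group id, and thread count are made up. Each thread drains its
own KafkaStream, and a given partition's messages only ever flow through one
of those streams at a time.)

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ThreadPerStreamSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumption: local ZooKeeper
        props.put("group.id", "my-group");                // hypothetical group id
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        int numThreads = 3; // hypothetical thread count
        // Ask for numThreads streams; the partitions of "my-topic" are split
        // among them, and no partition is assigned to two streams at once.
        Map<String, List<KafkaStream<byte[], byte[]>>> streamMap =
            connector.createMessageStreams(Collections.singletonMap("my-topic", numThreads));

        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        for (final KafkaStream<byte[], byte[]> stream : streamMap.get("my-topic")) {
            pool.submit(new Runnable() {
                public void run() {
                    // This thread only sees messages from the partitions its
                    // stream currently owns, each advancing its own offset.
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    while (it.hasNext()) {
                        MessageAndMetadata<byte[], byte[]> m = it.next();
                        System.out.println("partition " + m.partition()
                            + " offset " + m.offset());
                    }
                }
            });
        }
    }
}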
sorry, one more thought --
I've realized that the difficulties are also because I'm trying to
guarantee read *exactly* once. The standard consumer group guarantees
read *at least* once (with more than one read happening for those
messages that get read, but then the process dies before the offset is
committed).
Hi,
thanks again for the quick answer on this. However, though this
solution works, it is *really* complicated to get the user code
correct if you are reading data with more than 1 thread. Before I
begin processing one batch of records, I have to make sure all of the
workers reading from kafka stop pulling new messages, since
commitOffsets() commits the offsets for every partition the consumer
owns at once.
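(To make that coordination concrete, here is one way it could be done --
just a sketch with made-up names, not anything the consumer itself provides:
workers hold a read lock while a message is in flight, and the committer
takes the write lock so nothing is half-processed when commitOffsets() runs.)

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class QuiesceThenCommit {
    // Workers hold the read lock while pulling and processing a message;
    // the committer takes the write lock so no message is half-done at commit.
    private final ReadWriteLock inFlight = new ReentrantReadWriteLock();
    private final ConsumerConnector connector;

    public QuiesceThenCommit(ConsumerConnector connector) {
        this.connector = connector;
    }

    public void workerLoop(KafkaStream<byte[], byte[]> stream) {
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {            // may block waiting for data
            inFlight.readLock().lock();   // taken before next(), so the consumed
            try {                         // offset only advances while we hold it
                process(it.next().message());
            } finally {
                inFlight.readLock().unlock();
            }
        }
    }

    public void commitBatch() {
        inFlight.writeLock().lock();      // waits until every worker is between messages
        try {
            connector.commitOffsets();    // commits all partitions this connector owns
        } finally {
            inFlight.writeLock().unlock();
        }
    }

    private void process(byte[] payload) {
        // application-specific work (hypothetical)
    }
}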
perfect, thank you!
On Wed, Nov 20, 2013 at 8:44 AM, Neha Narkhede wrote:
> You can turn off automatic offset commit (auto.commit.enable=false) and use
> the commitOffsets() API. Note that this API will commit offsets for all
> partitions owned by the consumer.
>
> Thanks,
> Neha
> On Nov 20, 2013 6:39 AM, "Imran Rashid" wrote:
> [...]
You can turn off automatic offset commit (auto.commit.enable=false) and use
the commitOffsets() API. Note that this API will commit offsets for all
partitions owned by the consumer.
Thanks,
Neha
On Nov 20, 2013 6:39 AM, "Imran Rashid" wrote:
> Hi,
>
> I have an application which reads messages f
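(A minimal sketch of the sequence described above, for the single-threaded
case; the topic name, group id, and batch size are made up. Auto-commit is
turned off in the config, the batch is processed first, and only then is
commitOffsets() called.)

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // assumption: local ZooKeeper
        props.put("group.id", "batch-processor");          // hypothetical group id
        props.put("auto.commit.enable", "false");          // turn off automatic offset commit
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream keeps this sketch single-threaded.
        List<KafkaStream<byte[], byte[]>> streams = consumer
            .createMessageStreams(Collections.singletonMap("my-topic", 1))
            .get("my-topic");
        ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();

        List<byte[]> batch = new ArrayList<byte[]>();
        while (it.hasNext()) {
            batch.add(it.next().message());
            if (batch.size() == 100) {        // hypothetical batch size
                processBatch(batch);          // do the work first...
                consumer.commitOffsets();     // ...then record progress for all owned partitions
                batch.clear();
            }
        }
    }

    private static void processBatch(List<byte[]> batch) {
        // application-specific work (hypothetical)
    }
}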
Hi,
I have an application which reads messages from a kafka queue, builds
up a batch of messages, and then performs some action on that batch.
So far I have just used the ConsumerGroup api. However, I realized
there is a potential problem -- my app may die sometime in the middle
of the batch, and since the offsets are committed automatically, the
messages in that half-finished batch could be skipped entirely when
the app restarts.
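(For contrast with the manual-commit sketch further up, this is roughly the
shape of the loop with auto-commit left at its default; the names and batch
size are made up. The comments mark the window where a crash loses the
half-built batch.)

import java.util.ArrayList;
import java.util.List;
import kafka.consumer.ConsumerIterator;
import kafka.message.MessageAndMetadata;

public class AutoCommitCrashWindow {
    // Shape of the batch loop with auto.commit.enable left at its default
    // (true). The consumer commits offsets in the background on a timer,
    // independently of whether the batch action below has run yet.
    public static void consumeBatches(ConsumerIterator<byte[], byte[]> it) {
        List<byte[]> batch = new ArrayList<byte[]>();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> m = it.next();
            batch.add(m.message());
            // <-- If the background auto-commit fires here and the process
            //     then dies before actOnBatch() runs, the messages already in
            //     `batch` were never acted on, yet their offsets are already
            //     committed, so they are skipped after a restart.
            if (batch.size() == 100) {   // hypothetical batch size
                actOnBatch(batch);       // hypothetical batch action
                batch.clear();
            }
        }
    }

    private static void actOnBatch(List<byte[]> batch) {
        // application-specific work (hypothetical)
    }
}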