That makes sense. Thank you.
On Thu, Mar 27, 2014 at 8:32 PM, Neha Narkhede wrote:
>> When I call consumer.commitOffsets(); before killing the session, the unit
>> test succeeded. This problem would happen only with autoCommit enabled.
>
> That seems expected. If you call commitOffsets() explicitly before
> simulating a GC pause on the consumer, there will be no duplicates, since
> the next consumer instance resumes from the last committed offset.

When I call consumer.commitOffsets(); before killing the session, the unit test
succeeded. This problem would happen only with autoCommit enabled.
Could you fix this problem before releasing 0.8.1.1?
Thank you
Best, Jae
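
For reference, a minimal sketch of the pattern being discussed here, assuming the 0.8 high level consumer API; the topic name, ZooKeeper address, group id, and the killZkSession() helper are illustrative placeholders rather than anything from the attached test:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ExplicitCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // illustrative address
        props.put("group.id", "zk-resilience-test");        // illustrative group id
        props.put("auto.commit.enable", "true");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("test-topic").get(0).iterator();

        // Consume a few messages. With auto-commit alone, offsets consumed since the
        // last auto-commit interval are not persisted when the session is killed.
        for (int i = 0; i < 10 && it.hasNext(); i++) {
            byte[] message = it.next().message();
            // process(message);
        }

        // The workaround discussed above: persist the consumed offsets explicitly
        // before the ZK session is expired, so the consumer that picks up the
        // partitions after the rebalance starts from the committed position.
        connector.commitOffsets();

        // killZkSession();  // hypothetical helper that expires the consumer's ZK session

        connector.shutdown();
    }
}

With auto.commit.enable alone, anything consumed after the last auto-commit is not persisted when the session dies, which is why the replay shows up only with autoCommit enabled.
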
On Thu, Mar 27, 2014 at 3:57 PM, Bae, Jae Hyeon wrote:
> Hi
>
> While testing the kafka 0.8 consumer's zk resilience, I found that when the
> zk session is killed and handleNewSession() is called, the high level
> consumer replays messages.
>
> Is this a known issue? I am attaching the unit test source code.

package com.netflix.nfkafka.zktest;
import com.fasterxml.jackson.core.J
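
The attached test is cut off above. For context, one common way a test like this can force the consumer's ZooKeeper session to expire (a sketch only, not the attached code; the class name and the way the test obtains the session id and password are assumptions) is to open a second ZooKeeper handle with the same session credentials and then close it:

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/**
 * Hypothetical helper for a test like the one attached above: expires an
 * existing ZooKeeper session by opening a second connection with the same
 * session id and password and then closing it. How the test obtains the
 * consumer's session id and password (for example, from its internal ZkClient)
 * is not shown here and is an assumption.
 */
public final class ZkSessionKiller {

    public static void killSession(String zkConnect, int sessionTimeoutMs,
                                    long sessionId, byte[] sessionPasswd) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Connecting with an existing session's id/password and then closing the
        // handle causes the server to expire that session for all its holders.
        ZooKeeper zk = new ZooKeeper(zkConnect, sessionTimeoutMs, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        }, sessionId, sessionPasswd);
        connected.await();
        zk.close();
    }
}

When the original session expires, the consumer's session expiration listener fires and handleNewSession() runs, which is the code path this thread is exercising.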