…knowing that the ones I haven't committed will be picked up after the
rebalance completes.
Many thanks,
Phil
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 28 April 2016 21:34
To: users@kafka.apache.org
Subject: Re: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
Hi Phil,
I tested my code and it is correct. I do see heartbeats getting missed
sometimes, causing a session timeout for the consumer, whose generation is
then marked dead. I see that there are long time windows where there is no
heartbeat, even though I do commit in between these windows, and there is no…
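This matches how the 0.9.0.1 consumer works: there is no background heartbeat
thread, so heartbeats are only sent while the application is calling into the
client (poll, and the coordinator traffic around commits). A minimal sketch of
such a loop, where "my-topic", "my-group" and process() are placeholders, and
each batch must be processed well within session.timeout.ms:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "my-group");
    props.put("enable.auto.commit", "false");
    props.put("session.timeout.ms", "30000"); // window the heartbeats must fit in
    props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("my-topic"));
    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                process(record); // placeholder; must not stall past session.timeout.ms
            }
            consumer.commitSync(); // coordinator traffic, hence the heartbeat log lines
        }
    } finally {
        consumer.close();
    }

If processing a batch takes longer than session.timeout.ms, no heartbeat
reaches the coordinator in time and the consumer's generation is marked dead,
exactly as described above.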
Hi Phil,
This sounds great. Thanks for trying these settings. This probably means
something is wrong in my code or setup. I will check what is causing this
issue in my case.
I have a 3-broker, 1-ZooKeeper cluster and my topic has 3 partitions with
replication factor 3.
Regards,
Vinay Sharma
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 26 April 2016 17:29
To: users@kafka.apache.org
Subject: Re: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
Hi Phil,
Config ConsumerConfig.METADATA_MAX_AGE_CONFIG has a default of 300000 ms (5
minutes). This config drives a mechanism where a proactive metadata refresh
request is issued by the consumer periodically. I have seen that I get a log
about a successful heartbeat along with a commit only before this request.
Once this request…
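For reference, a sketch of the two settings being discussed, using the
ConsumerConfig constants (values shown are the 0.9 defaults):

    Properties props = new Properties();
    // cached cluster metadata is proactively refreshed once it is older
    // than this (default 300000 ms = 5 minutes)
    props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "300000");
    // the consumer is dropped from the group if the coordinator sees no
    // heartbeat within this window (default 30000 ms)
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");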
> …a consumer rebalance will also trigger a metadata refresh, but what else
> might?
>
> Thanks
> Phil Luckhurst
>
> -Original Message-
> From: vinay sharma [mailto:vinsharma.t...@gmail.com]
> Sent: 26 April 2016 13:24
> To: users@kafka.apache.org
> Subject: RE: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
> …the metadata request is useful to know about; I'll watch out for that if
> we change our commit logic.
>
> Thanks
> Phil Luckhurst
>
>
> -Original Message-
> From: vinay sharma [mailto:vinsharma.t...@gmail.com]
> Sent: 25 April 2016 20:30
> To: users@kafka.apache.org
> Subject: Re: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 25 April 2016 20:30
To: users@kafka.apache.org
Subject: Re: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
Hi Phil,
Regarding identifying a rebalance, how about comparing the array of partitions
used for consumer pause with the current assignment?
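A sketch of that idea against the 0.9.0.1 API, where pause()/resume() take
varargs and the pause state does not survive a rebalance. consumer is the
KafkaConsumer from earlier; the idea is to snapshot the assignment you paused
and compare it with the assignment after a later poll():

    // snapshot the assignment and pause it while we process the batch in hand
    Set<TopicPartition> paused = new HashSet<>(consumer.assignment());
    consumer.pause(paused.toArray(new TopicPartition[paused.size()]));

    // ... process records, calling poll(0) periodically; with everything
    // paused it returns no records but still drives heartbeats/rebalances ...
    consumer.poll(0);

    if (!consumer.assignment().equals(paused)) {
        // assignment changed: a rebalance happened, abort the rest of the batch
    }
    Set<TopicPartition> current = consumer.assignment();
    consumer.resume(current.toArray(new TopicPartition[current.size()]));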
> …detecting the rebalance in the consumer would allow us to safely abort the
> current processing loop, knowing that the remaining messages would be picked
> up by another consumer after the rebalance - that would stop us processing
> duplicates.
>
> Thanks
> Phil Luckhurst
>
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 22 April 2016 14:24
To: users@kafka.apache.org
Subject: Re: Detecting rebalance while processing ConsumerRecords (0.9.0.1)
Hi Phil,
Regarding pause and resume, I have not tried this approach but I think it may
not be feasible. If your consumer no longer has the partition assigned from
which the record being processed was fetched, or even if the partition is
somehow assigned to the consumer again, you may still not be able to…
Thanks for all the responses. Unfortunately it seems that currently there is
no foolproof solution to this. It's not a problem with the stored offsets, as
it will happen even if I do a commitSync after each record is processed. It's
the unprocessed records in the batch that get processed twice.
Hi,
By design Kafka ensures that the same record is not sent to multiple consumers
in the same consumer group. The issue arises because of a rebalance while
processing is going on and records are not yet committed. In my view there are
only 2 possible solutions to it:
1) As mentioned in the documentation, store offsets…
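Solution (1) is the pattern from the KafkaConsumer javadoc: write each offset
atomically with its result in an external store, and seek back to the stored
offset on (re)assignment, so a rebalance can never separate the data from its
offset. A minimal sketch, where readOffset() is a hypothetical lookup against
that external store:

    consumer.subscribe(Arrays.asList("my-topic"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // nothing to flush: each offset was stored together with its result
        }
        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            for (TopicPartition tp : partitions) {
                // resume from just after the last record whose result was stored
                consumer.seek(tp, readOffset(tp) + 1);
            }
        }
    });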
Regarding the pause and resume approach, I think there will still be a chance
that you end up processing duplicate records. A rebalance can still get
triggered for numerous reasons while you are processing records.
Note that Kafka is not designed to prevent duplicate records anyway. For
example, if your app writes into an external system (for example a database)
once per consumer record, and you do a synchronous offset commit after every
consumer record, you can still have duplicate messages: the write to the
database can succeed and the consumer fail before the offset commit completes,
so on restart the record is consumed and written again.
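A sketch of that window, committing synchronously per record;
writeToDatabase() is a hypothetical write to the external system:

    for (ConsumerRecord<String, String> record : records) {
        writeToDatabase(record); // hypothetical external write
        // a crash right here means the write succeeded but the offset was
        // never committed, so the record will be delivered and written again
        consumer.commitSync(Collections.singletonMap(
                new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1)));
    }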
I was also struggling with this problem. I have found one way to do it
without making consumers aware of each other's processing or assignment
state. You can set autocommit to true. Irrespective of the autocommit
interval, setting autocommit to true will make Kafka commit all records
already sent to consumers before a rebalance completes…
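A sketch of that configuration; as the poster describes, with auto-commit
enabled the 0.9 consumer also commits synchronously just before rejoining the
group, independent of the interval:

    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    // the interval only controls the periodic background commits; the
    // pre-rebalance commit happens regardless
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");

The trade-off is that offsets can be committed for records that were delivered
but not yet processed, so the duplicates problem can turn into a missed-records
problem.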
This is an example of the scenario I'm trying to avoid, where 2 consumers end
up processing the same records from a partition at the same time.
1. I have a topic with 2 partitions and two consumers A and B which have
each been assigned a partition from the topic.
2. Consumer A is processing a batch of records when a rebalance is
triggered…