…as you may have a dependency on external systems which may respond slowly
in some rare but possible scenarios. This is why I also implement a 3rd
approach, which alerts me well in advance when my consumer is marked dead
for some reason.
Regards,
Vinay Sharma
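For anyone looking for a concrete starting point: one way to get such an
early warning is a ConsumerRebalanceListener passed to subscribe(). This is
a minimal sketch, not Vinay's actual implementation; "consumer" is an
already-configured KafkaConsumer and "my-topic" is a placeholder:

    import java.util.Collection;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    consumer.subscribe(Collections.singletonList("my-topic"),
        new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Fires before partitions are taken away, e.g. when the
                // coordinator has decided this consumer is dead: raise an
                // alert and flush in-flight work here.
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Fires once the new assignment is known.
            }
        });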
On Mon, May 2, 2016 at 11:53 PM, David
ption I know I can
> safely abandon the current batch of records I am processing and return to
> my poll command knowing that the ones I haven’t committed will be picked up
> after the rebalance completes.
>
> Many thanks,
> Phil
>
> -----Original Message-----
> From:
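A sketch of the pattern Phil describes above, assuming an already-configured
consumer; this is illustrative, not his actual code:

    import org.apache.kafka.clients.consumer.CommitFailedException;

    try {
        consumer.commitSync();
    } catch (CommitFailedException e) {
        // The group rebalanced while this batch was in flight. Abandon
        // the batch and return to poll(); anything not committed will be
        // redelivered once the rebalance completes.
    }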
happened which is not yet categorized.
Regards,
Vinay
On Wed, Apr 27, 2016 at 7:52 AM, vinay sharma
wrote:
> Hi Phil,
>
> This sounds great. Thanks for trying these settings. This probably means
> something is wrong in my code or setup. I will check what is causing this
> issue in my case.
> >> code was 0.9.0. Simple mistake and one that I should have thought of
> >> sooner. Once I had them bump up to the latest kafka all was well.
> >> Thank you for your help!
> >>>
> >>> v
> >>>
> >>>
> >>>> On Apr 25, 2
Hi Phil,
This sounds great. Thanks for trying these settings. This probably means
something is wrong in my code or setup. I will check what is causing this
issue in my case.
I have a 3-broker, 1-ZooKeeper cluster and my topic has 3 partitions with a
replication factor of 3.
Regards,
Vinay Sharma
Hi Phil,
Config ConsumerConfig.METADATA_MAX_AGE_CONFIG has a default of 300000 ms,
i.e. 5 minutes. This config drives a mechanism where a proactive metadata
refresh request is issued by the consumer periodically. I have seen that I
get a log about a successful heartbeat along with a commit only before this
request. Once this r…
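For reference, a minimal sketch of where this setting lives on the
consumer; the broker address, group id, and the chosen value are
placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    // 300000 ms (5 minutes) is the default; the proactive refresh
    // described above fires at this interval even with no cluster change.
    props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "300000");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);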
s a consumer rebalance will also trigger a metadata refresh but what
> else might?
>
> Thanks
> Phil Luckhurst
>
> -----Original Message-----
> From: vinay sharma [mailto:vinsharma.t...@gmail.com]
> Sent: 26 April 2016 13:24
> To: users@kafka.apache.org
> Subject: RE: D
…defect, but it seems something was fixed related to the time reset of the
heartbeat task so that the next heartbeat request time is calculated
correctly. From the next version, commitSync will act as a heartbeat, as
per the defect.
Regards,
Vinay Sharma
On Apr 26, 2016 4:53 AM, "Phil Luckhurst"
wrote:
…committing at regular intervals (which sends a heartbeat) somehow does not
save the consumer from getting a timeout during a metadata refresh. This
issue does not happen if I commit after each record (that is, every 2-4
seconds) or if a commit happens right after the metadata refresh response.
Regards,
Vinay Sharma
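A sketch of the per-record commit pattern that worked in the scenario
above; process() is an assumed handler, and the consumer setup is omitted:

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            process(record); // assumed handler, 2-4 seconds per record
            // Commit just this record's offset before moving on.
            consumer.commitSync(Collections.singletonMap(
                new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1)));
        }
    }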
_AVAILABLE}"
Do you see any such error in your logs?
Regards,
Vinay Sharma
On Mon, Apr 25, 2016 at 9:38 AM, Fumo, Vincent <
vincent_f...@cable.comcast.com> wrote:
>
>
> My code is very straightforward. I create a producer, and then call it to
> send messages. Here i
…in a 3-broker, 1-ZooKeeper Kafka setup. I ran the test for more than a
minute and saw it just once for both producers, before their first send.
Regards,
Vinay Sharma
On Apr 22, 2016 3:15 PM, "Fumo, Vincent"
wrote:
> Hi. I've not set that value. My producer properties are as follows :
…
Generally a proactive metadata refresh request is sent by the producer and
the consumer every 5 minutes, but this interval can be overridden with the
property "metadata.max.age.ms", which has a default value of 300000 ms,
i.e. 5 minutes. Check whether you have set this property very low in your
producer.
On Fri, Apr 22, 2016
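For completeness, a sketch of where that property is set on the producer
side; all values are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
    // A very low value here (e.g. 30000) forces frequent proactive
    // metadata refreshes; 300000 ms is the default.
    props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "300000");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);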
ss by another consumer.
Regards,
Vinay Sharma
On Thu, Apr 21, 2016 at 2:09 PM, Phil Luckhurst
wrote:
> Thanks for all the responses. Unfortunately it seems that currently there
> is no foolproof solution to this. It's not a problem with the stored
> offsets as it will happen even i
may appear first."
You can turn retries off by setting this value to zero or less, and handle
send failures and retries yourself in your code.
Regards,
Vinay Sharma
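A sketch of that do-it-yourself approach; "props" is a producer config as
in the earlier sketch, and the topic and payload are placeholders:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    props.put(ProducerConfig.RETRIES_CONFIG, "0"); // client never retries
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.send(new ProducerRecord<>("my-topic", "key", "value"),
        (metadata, exception) -> {
            if (exception != null) {
                // Handle the failure yourself: log it, re-send it, or
                // park it. Ordering stays intact because the client
                // itself never retries.
            }
        });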
…never be processed if the consumer crashes while processing records which
are already marked committed due to a rebalance.
Regards,
Vinay Sharma
Regarding the pause and resume approach, I think there will still be a
chance that you end up processing duplicate records. A rebalance can still
be triggered for numerous reasons while you are processing records (a
sketch of the approach follows below the quote).
On Thu, Apr 21, 2016 at 10:34 AM, vinay sharma
wrote:
> I was also struggling w
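For readers skimming the thread, a sketch of the pause/resume approach
under discussion, assuming the 0.9.x varargs signatures and an
already-configured consumer; batchInProgress() is a hypothetical helper:

    import java.util.Set;
    import org.apache.kafka.common.TopicPartition;

    Set<TopicPartition> assigned = consumer.assignment();
    consumer.pause(assigned.toArray(new TopicPartition[0]));
    while (batchInProgress()) {
        // poll() keeps the group membership alive but returns no records
        // while the partitions are paused. As noted above, a rebalance
        // can still fire in here, which is the duplicate-processing risk.
        consumer.poll(0);
    }
    consumer.resume(assigned.toArray(new TopicPartition[0]));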
I was also struggling with this problem. I have found one way to do it
without making consumers aware of each other's processing or assignment
state. You can set autocommit to true. Irrespective of the autocommit
interval, setting autocommit to true will make Kafka commit all records
already sent to the consumer…
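A sketch of the autocommit settings this refers to, with "props" being a
consumer config as in the earlier sketch:

    import org.apache.kafka.clients.consumer.ConsumerConfig;

    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    // The interval only spaces out the background commits; as described
    // above, offsets of records already handed to the application can be
    // committed on rebalance regardless of it.
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");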
Hi Everyone,
I see that on each metadata refresh a rebalance is triggered, and any
consumer in the middle of processing starts throwing errors like
"UNKNOWN_MEMBER_ID" on commit. There is no change in partitions, partition
leadership, or brokers. Any idea what could cause this behavior?
What is
; or "REBALANCE_IN_PROGRESS"?
What is the ideal way to deal with this?
Any pointers will be much appreciated.
Regards,
Vinay Sharma
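One commonly suggested mitigation for this in 0.9.x, as a hedged sketch:
give the coordinator more slack before it evicts a slow consumer, and
shrink each poll's batch so it is processed sooner. The values are
illustrative, and session.timeout.ms is capped by the broker's
group.max.session.timeout.ms:

    import org.apache.kafka.clients.consumer.ConsumerConfig;

    // More slack before the coordinator marks this consumer dead.
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "60000");
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "20000");
    // Smaller fetches so each poll's batch finishes faster.
    props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "65536");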