Hello Sean,
We are adding a background thread that sends heartbeats as part of the adopted
KIP-62:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
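Once that is in, a rough sketch of how the relevant configs would look (assuming placeholder broker/group names; session.timeout.ms then only covers the background heartbeats, while the new max.poll.interval.ms bounds the time between poll() calls):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Kip62ConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // With KIP-62, session.timeout.ms only governs the background heartbeats...
        props.put("session.timeout.ms", "10000");
        // ...while max.poll.interval.ms bounds how long you may go between poll() calls.
        props.put("max.poll.interval.ms", "300000");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual; heartbeats come from the background thread
        }
    }
}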
Does this resolve your issue?
Guozhang
On Thu, Aug 11, 2016 at 2:47 PM, Sean Morr
Hello,
We will be sharing a live-stream as well as recording after the meetup.
Thanks for asking!
Ed
On Fri, Aug 12, 2016 at 2:56 PM, Prabhjot Bharaj
wrote:
> Hi,
>
> Thanks for the invitation. I won't be able to make it this soon.
> However, it'll be great if you could arrange to share the vi
Hi,
Thanks for the invitation. I won't be able to make it this soon.
However, it'll be great if you could arrange to share the video recordings.
Thanks,
Prabhjot
On Aug 12, 2016 4:33 PM, "Joel Koshy" wrote:
> Hi everyone,
>
> We would like to invite you to a Stream Processing Meetup at
> Linke
Typically "preferred leader election” would fail if/when one or more brokers
still did not come back online after being down for some time. Is that your
scenario?
-Zakee
> On Aug 11, 2016, at 12:42 AM, Sudev A C wrote:
>
> Hi,
>
> With *auto.leader.rebalance.enable=true* Kafka runs preferre
Hi there,
I created a standby broker using Kafka MirrorMaker, but the same messages get
consumed from both the source broker and the mirror broker.
Ex:
I send 1000 messages, say with offsets 1 to 1000, and consume 500
messages from the source broker. Now my broker goes down and I want to read the rest
Hi everyone,
We would like to invite you to a Stream Processing Meetup at LinkedIn's
*Mountain View campus on Tuesday, August 23 at 6pm*. Please RSVP here (only if you
intend to attend in person):
https://www.meetup.com/Stream-Processing-Meetup-LinkedIn/events/232864129
We have three great talks
Hi Yuanjia,
The new consumer's group registry information is stored on the Kafka brokers,
not in ZK anymore, and the brokers use heartbeats to detect whether the consumer
has failed and hence should be removed from the group.
In addition, the consumer's committed offsets (i.e. "positions" as you
me
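A minimal sketch of reading those broker-stored committed offsets back (assuming the 0.9+ Java consumer and made-up broker/group/topic names):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic
            // committed() asks the group coordinator broker, not ZooKeeper
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println(committed == null ? "no committed offset" : committed.offset());
        }
    }
}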
Is there a JIRA for it? Could you point to where the issue exists in the
code?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On August 12, 2016 at 5:15:33 PM, Oleg Zhurakousky (
ozhurakou...@hortonworks.com) wrote:
It hangs indefinitely in any container. It’s a known issue and has been
Increasing the number of retries and/or retry.backoff.ms will help reduce the data
loss. Figure out how long the NotLeaderForPartitionException persists (it happens only as
long as the cached metadata is stale), and configure the props below accordingly.
message.send.max.retries=3 (default)
retry.backoff.ms=100 (default)
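For context, a minimal sketch assuming the old (Scala) producer whose keys are listed above, with placeholder broker/topic names; the new Java producer's equivalents are retries and retry.backoff.ms:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class RetryConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Bump these so sends survive the window while cached metadata is stale:
        props.put("message.send.max.retries", "10"); // default 3
        props.put("retry.backoff.ms", "500");        // default 100
        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.send(new KeyedMessage<>("my-topic", "hello")); // placeholder topic
        producer.close();
    }
}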
> On Aug 12, 201
Ok, thanks for the info.
On Fri, Aug 12, 2016 at 8:29 AM Tom Crayford wrote:
> Hey David,
>
> There have been numerous issues with the log cleaner thread.
> https://issues.apache.org/jira/browse/KAFKA-3894 talks about the history
> of
> "log cleaner thread crashing" issues. Some of them are only
#1 turned out to be invalid; my logging was simply bad (I was logging the
number of messages I'd read so far for that partition, not the requested
read offset).
#2 is still valid, though. I'm thinking that a possible explanation might
be that the part of the log I was processing was deleted, but I
It hangs indefinitely in any container. It’s a known issue and has been brought
up many times on this list, yet there is no fix for it.
The problem is that while poll() attempts to create the illusion
that it is async and even allows you to set a timeout, it is essentially very
misle
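One commonly suggested workaround, not a fix for the underlying issue, is to call wakeup() from another thread so a stuck poll() throws WakeupException. A minimal sketch, assuming a 0.9/0.10 Java consumer and placeholder names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class PollWakeupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        // wakeup() is the one consumer method that is safe to call from another thread;
        // it forces a poll() that is stuck (e.g. on unreachable brokers) to throw.
        Thread watchdog = new Thread(() -> {
            try { Thread.sleep(30_000); } catch (InterruptedException ignored) { }
            consumer.wakeup();
        });
        watchdog.setDaemon(true);
        watchdog.start();

        try {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("got " + records.count() + " records");
        } catch (WakeupException e) {
            System.err.println("poll() did not return within 30s; aborted via wakeup()");
        } finally {
            consumer.close();
        }
    }
}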
Hi, I'm new to Kafka and messaging in general.
I have a simple Java application that contains a consumer and a producer. It
works on the host system, but if I try to run it in a Docker container (Kafka
is not in the container; it is still on the host), consumer.poll() hangs and
does not return
Hey David,
There have been numerous issues with the log cleaner thread.
https://issues.apache.org/jira/browse/KAFKA-3894 talks about the history of
"log cleaner thread crashing" issues. Some of them are only fixed in
0.10.0.0 (https://issues.apache.org/jira/browse/KAFKA-3587), and we're
working on
I am out of the office until 08/15/2016.
Note: This is an automated response to your message "Long commits lead to
rebalance" sent on 8/11/2016 4:47:32 PM.
This is the only notification you will receive while this person is away.
We are seeing data loss whenever we see a "NotLeaderForPartitionException".
We are using the 0.8.2 Java client publisher API with a callback; when I get
a callback with the error, I log it to a file and retry it
later.
So the number of errors and the number of logged events match, but ove
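For context, a minimal sketch of the pattern being described, assuming the new Java producer that ships with 0.8.2 and placeholder broker/topic names; bumping retries / retry.backoff.ms lets the client retry transient leader changes before the callback ever reports a failure:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackLoggingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Let the producer retry transient errors (e.g. NotLeaderForPartitionException)
        // itself instead of surfacing them straight to the callback.
        props.put("retries", "10");
        props.put("retry.backoff.ms", "500");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "hello"), (metadata, exception) -> {
                if (exception != null) {
                    // where the failed message would be logged for later replay
                    System.err.println("send failed: " + exception);
                } else {
                    System.out.println("acked at offset " + metadata.offset());
                }
            });
        }
    }
}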
You would have to give the client servers their own "cluster" and use
MirrorMaker to replicate to the main cluster.
I haven't found a way to clear messages on demand, but you can temporarily set the time
to live short, like one second, and wait for Kafka to clear the messages.
Dave
> On Aug 11, 20
I am occasionally seeing offset commits take up to 30 seconds, which is
leading to a rebalance because the consumer hasn't called poll() to heartbeat.
I currently have a "heartbeat" routine that runs periodically to handle the fact that I
have long processing times for data, and thus don't go back to
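A minimal sketch of the pause/poll/resume workaround often used for this (assuming a 0.10.x Java consumer, manual commits, and a placeholder process() method): the batch is handed to a worker thread while the main thread keeps calling poll() on paused partitions, which sends heartbeats without fetching more data. Once KIP-62 lands, max.poll.interval.ms addresses this more directly.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PausedPollSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                if (records.isEmpty()) {
                    continue;
                }
                // Hand the batch to a worker thread, then keep poll()ing while paused
                // so the consumer still heartbeats during the long processing.
                Thread worker = new Thread(() -> {
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // placeholder for the slow per-record work
                    }
                });
                consumer.pause(consumer.assignment());
                worker.start();
                while (worker.isAlive()) {
                    consumer.poll(100); // returns nothing while paused, but sends heartbeats
                }
                worker.join();
                consumer.resume(consumer.assignment());
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder: long-running work goes here
    }
}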