Hi,
I would like to know if the New Consumer API is GAed.
Regards,
Rajeswari
Thanks for finding and reporting, Liquan. I'll wait a day or two for
more testing and roll out a new RC.
In other news:
We keep running into last minute issues with our shell scripts,
because we have zero automated testing for them.
Contribution of automated tests for our scripts will be super helpful.
We found a blocking issue on the release
https://issues.apache.org/jira/browse/KAFKA-3692. This may cause the
external CLASSPATH not to be included in the final CLASSPATH in
kafka-run-class.sh. There is no easy workaround for this and we need a new
RC.
Thanks,
Liquan
On Mon, May 9, 2016 at 6:49 PM,
Thanks. I had a quick look at the code and it's not obvious how this could
happen. Let's see how your testing goes. :)
Ismael
On Wed, May 11, 2016 at 2:04 AM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:
> Ismael,
>
> We are setting up a 0.10.0 test cluster now. I will report
Hello all,
I have a design for a solution to the problem of "partition imbalances in
Kafka clusters".
It would be great to get some feedback on it.
https://soumyajitsahu.wordpress.com/2016/05/11/kafka-partition-reassignment-service-using-an-adoption-marketplace-model/
I have also put up a proof-of-concept.
Ismael,
We are setting up a 0.10.0 test cluster now. I will report on whether this bug
springs up in that cluster or not after a week or so.
Glad to hear that SocketServer bugs are being taken care of in 0.10.0 and hope
this issue is ironed out as a result.
Regards,
Buvana
And where is the documentation for this topic: "__consumers_offsets"
On Tue, May 10, 2016 at 1:16 AM, Spico Florin wrote:
> Hi!
> Yes both are possible. The new versions 0.9 and above store the offsets in
> a special Kafka topic named __consumer_offsets.
> Regards,
> florin
>
> On Tue, May 10
Or retry with a volumeMount/persistentVolume for your single ZK pod.
On Tue, May 10, 2016 at 9:01 AM, Paolo Patierno wrote:
> Ok .. thanks.
> I'll retry with a zookeeper cluster.
>
> Paolo.
>
> Paolo Patierno, Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT, Microsoft Azure Advisor
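To make the volumeMount/persistentVolume suggestion concrete, here is a minimal sketch of a ZooKeeper pod spec that keeps its data on a PersistentVolume, so a restarted pod comes back with its previous state. The pod/volume names, the image, and the claim name are all placeholders and assume a matching PersistentVolumeClaim already exists.

```yaml
# Sketch: ZooKeeper pod with its data directory on persistent storage.
apiVersion: v1
kind: Pod
metadata:
  name: zookeeper
spec:
  containers:
  - name: zookeeper
    image: zookeeper              # placeholder image
    volumeMounts:
    - name: zk-data
      mountPath: /var/lib/zookeeper
  volumes:
  - name: zk-data
    persistentVolumeClaim:
      claimName: zk-data-claim    # assumes this PVC exists
```

With this in place, killing and restarting the pod no longer wipes the broker/topic registrations that Kafka expects to find in ZooKeeper.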
Hi Paolo,
The best way to do this would be to have broker3 start up with the same
broker id as the failed broker2. broker3 will then rejoin the cluster,
begin catching up with broker1, and eventually rejoin the ISR. If it starts
up with a new broker id, you'll need to run the partition reassignment tool.
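As a sketch of the "reuse the id" approach, assuming the failed broker2 had broker.id=2, broker3's server.properties would carry the same id:

```properties
# server.properties for the replacement broker: reuse the dead
# broker's id so it takes over broker2's partition replicas.
broker.id=2
# log.dirs should point at fresh (or recovered) storage; the broker
# will re-replicate from the current leaders as it catches up.
```

With the same id, the controller treats broker3 as broker2 returning, so no reassignment is needed.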
Hi Sahitya,
I wonder if your consumers are experiencing soft failures because they're
busy processing a large collection of messages and not calling poll()
within session.timeout.ms? In this scenario, the group coordinator (a
broker) would not receive a heartbeat within session.timeout.ms and would
consider the consumer dead, triggering a rebalance.
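A hedged sketch of settings that reduce the chance of missing the session.timeout.ms deadline while processing large batches: raise the timeout and cap the bytes fetched per partition so each poll() returns a smaller batch. The broker address and group id are placeholders; the values shown are illustrative, not recommendations.

```java
import java.util.Properties;

public class SlowConsumerConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "slow-processing-group");    // placeholder
        // More headroom before the coordinator declares the consumer
        // dead (the 0.9 default is 30000 ms).
        props.put("session.timeout.ms", "60000");
        // Smaller fetches per partition mean each poll() returns fewer
        // records, so the consumer gets back to poll() sooner.
        props.put("max.partition.fetch.bytes", "262144");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("session.timeout.ms"));
    }
}
```

In 0.9 heartbeats are only sent from poll(), so calling poll() often enough is the real fix; these settings just buy slack.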
You may find this interesting, although I don't believe it's exactly what
you're looking for:
https://github.com/pinterest/secor
I'm not sure how stable and commonly used it is.
Additionally, I see a lot of users use MirrorMaker for a "backup," where
MirrorMaker copies all topics from one Kafka cluster to another.
Thanks Buvana. Is this happening in production only or can you also
reproduce it in a test cluster? If the latter, would you be able to test
the latest 0.10.0.0 release candidate? We fixed a few issues in the
SocketServer.
Ismael
On Tue, May 10, 2016 at 8:27 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:
Ismael,
Created bug:
https://issues.apache.org/jira/browse/KAFKA-3689
Hope to get a quick resolution.
Thanks,
Buvana
-Original Message-
From: isma...@gmail.com [mailto:isma...@gmail.com] On Behalf Of EXT Ismael Juma
Sent: Tuesday, May 10, 2016 10:54 AM
To: users@kafka.apache.org
Subject
Hi team,
I want to know what would happen if a consumer group rebalance takes a long
time, longer than the session timeout.
For example, I have two consumers A and B using the same group id. For some
reason, during a rebalance consumer A takes a long time to finish
onPartitionsRevoked. What would happen?
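For readers unfamiliar with the callback in question: this self-contained sketch mirrors the shape of the new consumer's ConsumerRebalanceListener (the real interface lives in org.apache.kafka.clients.consumer; the types here are simplified stand-ins). The point is that onPartitionsRevoked runs inside the rebalance, so blocking there past session.timeout.ms can get the member evicted and trigger yet another rebalance.

```java
import java.util.ArrayList;
import java.util.List;

public class RebalanceSketch {

    // Stand-in mirroring ConsumerRebalanceListener's two callbacks.
    interface RebalanceListener {
        void onPartitionsRevoked(List<String> partitions);
        void onPartitionsAssigned(List<String> partitions);
    }

    static class CommitBeforeRevoke implements RebalanceListener {
        final List<String> log = new ArrayList<>();

        @Override
        public void onPartitionsRevoked(List<String> partitions) {
            // Keep this fast: commit offsets and return. Long cleanup
            // here delays the whole group's rebalance.
            log.add("revoked:" + partitions);
        }

        @Override
        public void onPartitionsAssigned(List<String> partitions) {
            log.add("assigned:" + partitions);
        }
    }

    public static void main(String[] args) {
        CommitBeforeRevoke listener = new CommitBeforeRevoke();
        listener.onPartitionsRevoked(List.of("test-0", "test-1"));
        listener.onPartitionsAssigned(List.of("test-1"));
        System.out.println(listener.log);
    }
}
```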
Ok .. thanks.
I'll retry with a zookeeper cluster.
Paolo.
Paolo Patierno, Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT, Microsoft Azure Advisor
Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience
> Date: Tue, 10 May 2016 17:59:16 +0200
> From: ra..
Kafka is expecting the state to be there when the zookeeper comes back. One way
to protect yourself from what you see happening, is to have a zookeeper quorum.
Run a cluster of 3 zookeepers, then repeat your exercise.
Kafka will continue to work absolutely fine. Just remember, with 3 ZK
instances the ensemble tolerates the failure of only one of them.
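For reference, a minimal zoo.cfg sketch for such a 3-node ensemble (hostnames are placeholders; each server also needs a matching myid file in its data directory):

```properties
# zoo.cfg: 3-server ensemble; survives the loss of any one server.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk-1:2888:3888
server.2=zk-2:2888:3888
server.3=zk-3:2888:3888
```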
Yes correct ... the new restarted zookeeper instance is completely new ... it
has no information about previous topics and brokers of course.
Ah, but your restarted container does not have any data Kafka recorded
previously. Correct?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski
This is what Kubernetes tells me ...
Name: zookeeper
Namespace: default
Labels:
Selector: name=zookeeper
Type: ClusterIP
IP: 10.0.0.184
Port: zookeeper 2181/TCP
Endpoints: 172.17.0.4:2181
Session Affinity: None
So t
Are you sure you’re getting the same IP address?
Regarding zookeeper connection being closed, is kubernetes doing a soft
shutdown of your container? If so, zookeeper is asked politely to stop.
–
Best regards,
Radek Gruchalski
radek@gruchalski.com
de.linkedin.com/in/r
Hi all,
experimenting with Kafka on Kubernetes, I have the following error on Kafka
server reconnection ...
A cluster with one zookeeper and two kafka servers ... I turn off the zookeeper
pod; kubernetes restarts it and guarantees the same IP address for it, but the
kafka server just keeps retrying the connection.
Hi!
Thank you for your answer. It helps, since it confirms my observations (I'm
not the only one :)).
I could not find documentation that states this clearly: "Rebalance is
performed at the group level no matter what topic you are consuming from
that belongs to it."
The pictures that I've
OK, thanks. I suggest filing a bug in JIRA and please provide as much
information as possible (steps to reproduce would be ideal, but sometimes
that is hard to do). It does look like a Kafka bug.
Ismael
On Tue, May 10, 2016 at 2:45 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:
Ismael,
Version 0.9.0.1
Do you have any idea how to prevent this from happening? Is it a Kafka issue?
-Buvana
-Original Message-
From: isma...@gmail.com [mailto:isma...@gmail.com] On Behalf Of EXT Ismael Juma
Sent: Monday, May 09, 2016 8:02 PM
To: users@kafka.apache.org
Subject: Re: ERR
Hi Paolo,
We just jump on the box (we don't use kubernetes) and change the
metadata.properties file manually, then restart. We only handle a small
amount of non-prod traffic, so it's easy for us to manage.
Thanks,
Ben
On Tuesday, 10 May 2016, Paolo Patierno wrote:
> Hi Ben,
>
> in order to av
Hi Ben,
in order to avoid conflicts I have auto-generation of the broker id in the
server.properties file.
Of course it's a workaround, but I'm curious: how do you tell Kubernetes to
start a new Kafka pod with a previous broker id?
Btw, waiting for an answer from the Kafka team.
Paolo.
Paolo P
Hi Paolo,
I would be interested to hear the answer also. We've been getting around this
ourselves by setting the new broker that comes up back to id 1001.
Thanks,
Ben
On Tuesday, 10 May 2016, Paolo Patierno wrote:
> Hello,
>
> I'm experimenting with Kafka on Kubernetes and I don't understand
Hi folks,
Zookeeper is not required for storing offsets with the new consumer, but it
still has a variety of other uses for Kafka and will always be required for
coordination purposes. Please see Gwen's answer here as it's a pretty nice
write up:
https://www.quora.com/What-is-the-actual-role-of-
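The difference is visible in the client configuration alone. A sketch, with placeholder addresses and group id: the old (0.8) high-level consumer pointed at ZooKeeper, while the new (0.9+) consumer only talks to brokers and commits offsets to the internal __consumer_offsets topic.

```java
import java.util.Properties;

public class OffsetStorageConfigs {
    // Old (0.8) high-level consumer: connects to ZooKeeper directly.
    static Properties oldConsumer() {
        Properties p = new Properties();
        p.put("zookeeper.connect", "localhost:2181"); // placeholder
        p.put("group.id", "my-group");                // placeholder
        return p;
    }

    // New (0.9+) consumer: talks only to brokers; no ZooKeeper
    // connection string anywhere in the client config.
    static Properties newConsumer() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        p.put("group.id", "my-group");                // placeholder
        return p;
    }

    public static void main(String[] args) {
        System.out.println(newConsumer().containsKey("zookeeper.connect"));
    }
}
```

Note this only removes ZooKeeper from the *client's* view; the brokers themselves still depend on it, as Gwen's write-up explains.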
Hello,
I'm experimenting with Kafka on Kubernetes and I don't understand the
reason for the following behavior.
Starting with a pod with zookeeper and a pod with a kafka server (id = 1001).
Create a topic named "test" ... producer and consumer can send/receive messages
without problems.
I haven't browsed the source for the rebalance algorithm, but anecdotally it
appears this is the case. In our system we have a consumer group whose
application instances are not only scaled but also split by topics (some
topics have much higher message rates). When we perform a deployment of
one of
I am trying to understand the replication procedure and saw this document
https://cwiki.apache.org/confluence/display/KAFKA/kafka+Detailed+Replication+Design+V3
describing:
LeaderAndISR path: stores leader and ISR of a partition
/brokers/topics/[topic]/[partition_id]/leaderAndISR --> {leader_epoch
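For illustration only, the JSON stored at such a znode looks roughly like this; the field values are made up and the exact schema varies between Kafka versions:

```json
{
  "leader": 1,
  "leader_epoch": 7,
  "isr": [1, 2],
  "version": 1
}
```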
But if we set autocommit to false and fetch data using the simple consumer, will
it still use zookeeper for any purpose?
- Original Message -
From: "Spico Florin"
To: users@kafka.apache.org
Sent: Tuesday, May 10, 2016 1:55:39 PM
Subject: Re: Kafka 9 version offset storage mechanism chang
I have 3 topics A, B, C with the same number of partitions. I use the same group
name for all the consumers of these topics.
My questions are:
1. If a rebalance is triggered for a consumer of one of the topics/partitions,
will it also be triggered for the other two topics' consumers?
2. Same question when adding a new partition to one topic?
If you want to consume the __consumer_offsets topic, you also have to set
exclude.internal.topics=false in the consumer properties.
Regards,
Ashiq
On Tue, May 10, 2016 at 9:50 AM, Henry Cai
wrote:
> Which deserializer class to use?
>
> On Mon, May 9, 2016 at 5:14 PM, Guozhang Wang wro
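Pulling the thread's advice together, a hedged configuration sketch for reading __consumer_offsets with the new consumer (broker address and group id are placeholders). Since the topic's keys and values are in an internal binary format, reading raw bytes with ByteArrayDeserializer is a safe starting point; decoding them properly requires the broker-side offset message format.

```java
import java.util.Properties;

public class OffsetsTopicConsumerConfig {
    static Properties build() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        p.put("group.id", "offsets-inspector");       // placeholder
        // Without this, the new consumer filters out internal topics
        // such as __consumer_offsets.
        p.put("exclude.internal.topics", "false");
        // Keys/values are in an internal binary format; read raw bytes.
        p.put("key.deserializer",
              "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        p.put("value.deserializer",
              "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        return p;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("exclude.internal.topics"));
    }
}
```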
Hi!
It's just a guess (perhaps someone will correct me if I'm wrong). It
depends on the API you are using for consumers:
- the simple API uses ZK for storing the offsets
- the high-level API stores the offsets in the broker-side __consumer_offsets
topic.
I hope it helps.
Florin
On Tue, May 10, 2016 at 11:17
So is zookeeper not needed anymore?
> On May 10, 2016, at 1:46 PM, Spico Florin wrote:
>
> Hi!
> Yes both are possible. The new versions 0.9 and above store the offsets in
> a special Kafka topic named __consumer_offsets.
> Regards,
> florin
>
> On Tue, May 10, 2016 at 8:33 AM, Gerard Klijs
> wr
Hi!
Yes both are possible. The new versions 0.9 and above store the offsets in
a special Kafka topic named __consumer_offsets.
Regards,
florin
On Tue, May 10, 2016 at 8:33 AM, Gerard Klijs
wrote:
> Both are possible, but the 'new' consumer stores the offset in an __offset
> topic.
>
> On Tue,