KAFKA-1008 has been checked into the 0.8 branch and needs to be manually
double-committed to trunk. To avoid merging problems, I suggest that for
all future changes in the 0.8 branch, we double commit them to trunk. Any
objections?
Thanks,
Jun
On Mon, Oct 7, 2013 at 5:33 PM, Jun Rao wrote:
Sounds good to me.
Thanks,
Neha
On Wed, Oct 9, 2013 at 8:56 AM, Jun Rao wrote:
> KAFKA-1008 has been checked into the 0.8 branch and needs to be manually
> double-committed to trunk. To avoid merging problems, I suggest that for
> all future changes in the 0.8 branch, we double commit them to
Kafka's consumer rebalancing strategy is explained in detail here -
http://kafka.apache.org/documentation.html#distributionimpl
Hope that helps!
-Neha
On Tue, Oct 8, 2013 at 11:42 PM, Markus Roder wrote:
> Hi Neha,
>
> thanks for this information.
> Can you give me a hint for implementing a own
This is in regards to consumer group consumption in 0.7.2.
Say we have 3 machines with 3 partitions in each topic totaling 9 partitions.
Now if I create a consumer group with 9 threads on the same machine then all
partitions will be read from. Now what happens if I start another 9 threads on a
Any recommendation for NodeJS client for Kafka 0.8? thx
Sounds good to me too.
-Jay
On Wed, Oct 9, 2013 at 8:56 AM, Jun Rao wrote:
> KAFKA-1008 has been checked into the 0.8 branch and needs to be manually
> double-committed to trunk. To avoid merging problems, I suggest that for
> all future changes in the 0.8 branch, we double commit them to trun
Will the topics be distributed across both machines or will it still be all
read from the first process that spawned up the 9 threads?
It will read from the first process that spawned 9 threads.
In general, is it better to have 1 machine running 9 threads to read all
partitions or 9 machines runn
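The 9-threads-versus-9-partitions questions above come down to how a consumer
group divides partitions among its threads. As a toy sketch (not Kafka's actual
rebalancing code; the function name and the sorted-order tie-break are
assumptions for illustration), range-style assignment sorts both lists and hands
out contiguous chunks, so threads beyond the partition count simply sit idle:

```python
# Toy range-style partition assignment (illustrative only, not Kafka's code):
# sorted partitions are divided contiguously among sorted consumer threads.

def assign_partitions(partitions, consumers):
    """Assign each partition to exactly one consumer thread, range-style."""
    partitions = sorted(partitions)
    consumers = sorted(consumers)
    n, k = len(partitions), len(consumers)
    per, extra = n // k, n % k  # first `extra` consumers get one more partition
    assignment = {c: [] for c in consumers}
    start = 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        assignment[c] = partitions[start:start + count]
        start += count
    return assignment

# 9 partitions, 9 threads on one machine: every thread owns one partition.
one_machine = assign_partitions(range(9), [f"m1-t{i}" for i in range(9)])

# 9 partitions, 18 threads across two machines: 9 threads own nothing.
two_machines = assign_partitions(
    range(9), [f"m{m}-t{i}" for m in (1, 2) for i in range(9)]
)
```

With 18 threads in the same group, the 9 extra threads own no partitions, which
matches the behavior described above; which 9 end up idle depends on the
ordering rule, and the lexicographic sort here is only an illustration.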
Hello,
I am testing out the AddPartitionCommand in Kafka 0.8 and wanted to verify
the behavior I am seeing. Currently, running producers and consumers
appear unaffected by scaling out of partitions. i.e. Producers do not pick
up the new partition size for calls to the Partitioner and consumers a
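The behavior described above is consistent with the producer caching topic
metadata: the 0.8 producer only refreshes its partition list periodically
(see topic.metadata.refresh.interval.ms), so a hash-mod partitioner keeps using
the old partition count until then. A toy sketch (illustrative, not Kafka's
DefaultPartitioner) of why the new partition receives nothing before a refresh:

```python
# Toy hash-mod partitioner (illustrative, not Kafka's actual code): it spreads
# keys over however many partitions it *believes* the topic has.

def partition_for(key, num_partitions):
    return hash(key) % num_partitions

stale_count = 3  # partition count cached before AddPartitionCommand ran
new_count = 4    # actual count after adding a partition

# Until metadata is refreshed, the producer still uses the stale count,
# so the new partition (index 3) never receives a message.
chosen = {partition_for(k, stale_count) for k in range(1000)}

# After a metadata refresh picks up the new count, partition 3 is reachable.
after_refresh = {partition_for(k, new_count) for k in range(1000)}
```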
Hello Shone,
Could you do a git log to see if KAFKA-1075 has been included in your
repository? It was checked in this Monday, so if you checked out the code
earlier than that then the fix will not be included.
Guozhang
On Wed, Oct 9, 2013 at 1:34 PM, Shone Sadler wrote:
> Hello,
>
>
> I am te
Ahh yes, thanks Guozhang!
I thought I did a pull yesterday, but the last change I had was for
KAFKA-1073 (below). I did a pull again and I now see the change. I'll run
my tests again.
Thanks,
Shone
shonesadler$ git log
commit 71ed6ca3368ff38909f502565a4bf0f39e70fc6c
Author: Jun Rao
Date: Mo
Just verified the fix. Works great! Thanks for the quick response and fix
Guozhang.
Shone
On Wed, Oct 9, 2013 at 5:32 PM, Shone Sadler wrote:
> Ahh yes, thanks Guozhang!
>
> I thought I did a pull yesterday, but the last change I had was for
> KAFKA-1073 (below). I did a pull again and I no
We are seeing that the MirrorMaker consumer has started looping through
offset out of range and reset offset errors for some of the partitions (2 out
of 8). The ConsumerOffsetChecker reported a very high lag for these 2
partitions. It looks like this problem started after a consumer rebalance.
Here i
We are upgrading to Kafka 0.8 and have discovered we have broken our
brethren developers over in the PHP world of our shop. Does anyone know if
there's an impending release of the PHP client for 0.8 before we go off and
try to build a rounder wheel (or create a proxy via Java that they can submit
to i
> Will the topics be distributed across both machines or will it still be all
> read from the first process that spawned up the 9 threads?
>
> It will read from the first process that spawned 9 threads.
I should have actually asked the opposite of this. Say I have a consumer
running 3 threads on
I ran into the following error while doing some performance testing in Kafka
with producers running in multiple threads.
I can see the topic under /broker/topics in ZooKeeper.
I believe the producer tries to get the metadata info from ZooKeeper.
I have tried to restart the 2-node Kafka broker cluster
I uploaded a patch against trunk which also fixes KAFKA-1036, the other
known Windows issue. Review appreciated. Should be an easy one.
https://issues.apache.org/jira/browse/KAFKA-1008
-Jay
On Wed, Oct 9, 2013 at 8:56 AM, Jun Rao wrote:
> KAFKA-1008 has been checked into the 0.8 branch and ne
Are you using 0.8 HEAD?
Can you send around the full stack trace? One of the common reasons for
failed topic metadata requests is socket timeouts.
Thanks,
Neha
On Wed, Oct 9, 2013 at 4:30 PM, Shafaq wrote:
> I run into following error while doing some performance testing in Kafka
> wit
None so far that have added themselves to
https://cwiki.apache.org/confluence/display/KAFKA/Clients
On Wed, Oct 9, 2013 at 3:42 PM, Pete Laurina wrote:
> We are upgrading to Kafka 0.8 and have discovered we have broken our
> brethren developers over in the PHP world of our shop. Does anyone know
If one machine crashes is there anything I can do to make sure those 3
partitions are still read from?
If all those consumers are part of the same group, you don't need to do
anything. It will automatically rebalance to consume the partitions that were
consumed by the failed client.
Thanks,
Neha
On Wed, Oct 9, 2013
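The failover above can be pictured with a toy rebalance (a sketch, not Kafka's
implementation; the round-robin rule is just for illustration): when a group
member disappears, the survivors pick up its partitions, so every partition
stays consumed:

```python
# Toy simulation (not Kafka's code) of what a rebalance does when a consumer
# in the group dies: its partitions are redistributed among the survivors.

def rebalance(partitions, live_consumers):
    """Round-robin the sorted partitions over the sorted live consumers."""
    live = sorted(live_consumers)
    assignment = {c: [] for c in live}
    for i, p in enumerate(sorted(partitions)):
        assignment[live[i % len(live)]].append(p)
    return assignment

partitions = list(range(9))
before = rebalance(partitions, ["A", "B", "C"])  # 3 machines, 3 partitions each
after = rebalance(partitions, ["A", "B"])        # machine C crashed

# All 9 partitions are still covered by the two surviving consumers.
covered = sorted(p for ps in after.values() for p in ps)
```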
yes I'm using 0.8 head
Producer starting with option to multi-thread
compressed=true
Exception in thread "pool-1-thread-1" kafka.common.KafkaException: Failed
to fetch topic metadata for topic: 225topic1381362148396
at
kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerParti
Do you see WARN messages like -
"Error while fetching metadata [%s] for topic [%s]: %s "
OR
"Error while fetching metadata %s for topic partition [%s,%d]: [%s]"
On Wed, Oct 9, 2013 at 5:49 PM, Shafaq wrote:
> yes I'm using 0.8 head
>
>
> Producer starting with option to multi-thread
> compr
Not sure what the issue is. Are you using 0.8 beta1? Did you enable auto
offset commit?
Thanks,
Jun
On Wed, Oct 9, 2013 at 3:00 PM, Rajasekar Elango wrote:
> We are seeing that mirrormaker consumer started looping through offset out
> of range and reset offset errors for some of partitions (2
I could not see such messages ("Error while fetching metadata [%s] for topic
[%s]: %s").
In the perf test, I create new topic for sending the JSON blob in a new
producer thread.
The error happened when I increased no. of files (mapping to producer
threads). There are already topics of previous pe
Neha, Thank you for the response. We saw that page, which is what
ultimately prompted me to send out the message to the group to see if
anyone might have started working on it, but wasn't ready to tell the
public yet. I will discuss with the team tomorrow during our scrum to see
what our course of
Could you use the C++ library https://github.com/adobe-research/libkafka and
create a PHP extension?
On Thu, Oct 10, 2013 at 1:34 AM, Pete Laurina
wrote:
> Neha, Thank you for the response. We saw that page, which is what
> ultimately prompted me to send out the message to the group to see if
> anyon
I did some more debugging and found the corresponding debug message on the
broker for topic (417topic1381383668416) in the producer:
Seems like the broker is removing the fetcher for the partition as the
producer throws no metadata for the topic.
[2013-10-09 22:41:09,529] INFO [Log Manager on Broker 2] Created log