It covers any consumer rebalance - i.e., one that could be caused by a
consumer instance joining or leaving the group, session expiration, new
partitions showing up, etc.
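For context, in the 0.8 high-level consumer a rebalance is driven by group
membership changes recorded in ZooKeeper. A minimal Java sketch of the
join/leave events that trigger it (the ZK address, group id and topic name
below are made-up placeholders):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class RebalanceTriggerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder ZK address
        props.put("group.id", "my-group");                // placeholder group id

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Creating the streams registers this instance in the group in ZK,
        // which triggers a rebalance across every member of "my-group".
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));

        // ... consume from streams.get("my-topic").get(0) ...

        // Shutting down removes this instance from the group and triggers
        // another rebalance for the remaining members.
        connector.shutdown();
    }
}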
On Thu, Nov 07, 2013 at 07:56:25PM -0800, Vadim Keylis wrote:
> Joel, just to be clear: the consumer re-balances you mention in the FAQ
Excellent - thanks for putting that together! Will review it more
carefully tomorrow and suggest some minor edits if required.
On Thu, Nov 07, 2013 at 10:45:40PM -0500, Marc Labbe wrote:
> I've just added a page for purgatory, feel free to comment/modify at will.
> I hope I didn't misinterpret too
If I want to use kafka_2.10 0.8.0-beta1, which repo should I go to? It seems
the Apache repo doesn't have it, while there are com.sksamuel.kafka and
com.twitter.tormenta-kafka_2.10.
Which one should I use, or neither?
Best Regards,
Raymond Liu
Which class is not found?
Thanks,
Jun
On Thu, Nov 7, 2013 at 11:56 AM, Abhi Basu <9000r...@gmail.com> wrote:
> Let me describe my environment. Working on two nodes currently:
> 1. Single-node Hadoop cluster (referred to as Node1)
> 2. Single-node Kafka cluster (referred to as Node2)
>
> Node 2 ha
Joel, just to be clear: the consumer re-balances you mention in the FAQ refer to
a consumer restart, am I correct?
On Wed, Nov 6, 2013 at 5:51 PM, Joel Koshy wrote:
> This question seems to come up often - added this to the FAQ.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Idon%
I've just added a page for purgatory, feel free to comment/modify at will.
I hope I didn't misinterpret too much of the code.
https://cwiki.apache.org/confluence/display/KAFKA/Request+Purgatory+(0.8)
I added a few questions of my own.
On Fri, Nov 1, 2013 at 9:43 PM, Joe Stein wrote:
> To edit
The offset commit API could solve your problem; it's for the 0.8 version.
---Sent from Boxer | http://getboxer.com
Thanks Neha! I guess auto-commit it is for now...
On Tue, Nov 5, 2013 at 5:08 AM, Neha Narkhede wrote:
> Currently, the only way to achieve that is to use the SimpleConsumer API.
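For anyone reading this thread later, here is a rough sketch of what reading
from an explicit offset with the 0.8 SimpleConsumer looks like (the broker
host, topic, partition and offset are placeholders; leader lookup and error
handling are left out):

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

public class SimpleConsumerSketch {
    public static void main(String[] args) {
        // Connect directly to the broker that leads the partition (no ZK group management).
        SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "my-client");

        long startOffset = 0L; // the offset you want to resume from
        FetchRequest request = new FetchRequestBuilder()
                .clientId("my-client")
                .addFetch("my-topic", 0, startOffset, 100000) // topic, partition, offset, fetchSize
                .build();

        FetchResponse response = consumer.fetch(request);
        ByteBufferMessageSet messages = response.messageSet("my-topic", 0);
        for (MessageAndOffset messageAndOffset : messages) {
            System.out.println("read offset " + messageAndOffset.offset());
        }
        consumer.close();
    }
}

The trade-off is that your application then has to track offsets and handle
leader changes itself, which is what the high-level consumer otherwise does
for you.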
It might be that you are starting the consumer after the messages are produced.
Since your consumer is starting for the first time, when it registers with ZK
you won't see those messages, because the default is to start at the largest
offset.
So, try starting ZK, the broker and your consumer, and then start up t
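If the consumer group has never committed an offset to ZK, you can also tell
the high-level consumer to start from the earliest available offset instead of
the largest one. A small sketch of the relevant properties (the ZK address and
group id are placeholders):

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class FromBeginningConsumerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "test-group");
        // With no committed offset in ZK, start at the smallest (earliest)
        // offset rather than the default "largest".
        props.put("auto.offset.reset", "smallest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create streams and consume as usual ...
        connector.shutdown();
    }
}

The console consumer's --from-beginning flag has the same effect when you are
just testing from the command line.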
Hi,
This might be a really, really simple question but how do I get my test
Kafka program working out of the box? I followed the directions from
http://kafka.apache.org/documentation.html#quickstart. I started zk, the
server, the producer and consumer. I played with the producer, sending
msgs t
Can you see if this applies in your case:
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog%3F
Also, what version of Kafka 0.8 are you using? If not the beta, then
what's the git hash?
Joel
On Thu, Nov 07, 2013 at 02:51:41PM -0500, Ahmed H. wrote
>
> kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1, so we cannot use this
> tool with 0.8-beta1. If I download the latest 0.8 and compile it, can I use its
> kafka-add-partitions.sh to add partitions for the topics that already exist in
> our 0.8-beta1 Kafka? Thanks.
Unfortunately,
Hi team,
Here is what I want to do:
We are using 0.8-beta1 currently. We already have some topics and want to add
partitions for them.
kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1, so we cannot use this
tool with 0.8-beta1. If I download the latest 0.8 and compile it, can I use its
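For reference, my recollection of how the 0.8 add-partitions tool is invoked is
below; treat the flag names as assumptions and confirm them with the script's
--help output on whatever build you compile:

# Add partitions to an existing topic (flags from memory; verify with
# bin/kafka-add-partitions.sh --help).
bin/kafka-add-partitions.sh \
  --zookeeper localhost:2181 \
  --topic my-existing-topic \
  --partition 2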
Let me describe my environment. Working on two nodes currently:
1. Single-node Hadoop cluster (referred to as Node1)
2. Single-node Kafka cluster (referred to as Node2)
Node 2 has one broker started with a topic (iot.test.stream), plus one
command-line producer and one command-line consumer to test the k
Hello all,
I am not sure if this is a Kafka issue, or an issue with the client that I
am using.
We have a fairly small setup, where everything sits on one server (Kafka
0.8 and ZooKeeper). The message frequency is not too high (1-2 messages per second).
The setup works fine for a certain period of time
This is likely to happen if you don't do controlled shutdown -
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.ControlledShutdown
Thanks,
Neha
On Thu, Nov 7, 2013 at 7:40 AM, Shafaq wrote:
> Using 0.8 head
>
> I have a 2-node broker cluster, one of which
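The controlled-shutdown tool described on that wiki page is invoked roughly as
follows (the broker id, ZK string and retry settings are placeholders; the
exact options are listed on the linked page):

# Ask the controller to move partition leadership off broker 1 before
# stopping it (options from memory -- see the Replication tools wiki page).
bin/kafka-run-class.sh kafka.admin.ShutdownBroker \
  --zookeeper localhost:2181 \
  --broker 1 \
  --num.retries 3 \
  --retry.interval.ms 1000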
Using 0.8 head
I have a 2-node broker cluster, one of which I had to restart because it was
down while the producers were pushing data into the Kafka brokers for 2 topics.
When the 2nd broker came up, I got the exception below in the broker.
The consumer for the 2nd topic is not getting any data.
Why does not the lea
Yes, you are correct. I misunderstood the feature.
auto.create.topics.enable has an interesting side effect. Say your Kafka
configuration file allows for 10 partitions per topic. If the topic
auto-creates, you get 10 partitions. Doing it the way I did gives you
control over the number of partitions
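In other words, with auto-creation the partition count comes from the broker's
server.properties rather than from an explicit topic command. A sketch of the
two relevant broker settings (the values here are only examples):

# server.properties (broker side) -- example values only
# When true, a topic is created automatically the first time it is used.
auto.create.topics.enable=true
# Every auto-created topic gets this many partitions.
num.partitions=10

Creating the topic explicitly (with the 0.8 kafka-create-topic.sh tool) lets
you pick the partition count per topic instead of inheriting this default.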
auto.create.topics.enable
is true by default. For this test I relied on that property. I don't think
a real production class should rely on that, though. It's too easy to mess
things up with a typo. - cb
On Wed, Nov 6, 2013 at 9:28 PM, Edward Capriolo wrote:
> One thing I noticed about your c