This issue is about ZooKeeper resiliency.
What I have done is replace ephemeral node creation with Apache
Curator's PersistentEphemeralNode recipe, so that ephemeral nodes are
reinstated after a ZooKeeper blip. All watchers should also be reinstated.
Kafka internally only handles the session expired event but
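The core of the approach described above can be sketched in plain Java. This is a hedged stand-in, not Curator's actual implementation: `ZkLike` is a made-up interface replacing the real ZooKeeper client, and the sketch only shows the "remember what you created, re-create it on reconnect" idea behind the PersistentEphemeralNode recipe.

```java
// Hedged sketch of the idea behind Curator's PersistentEphemeralNode recipe:
// remember every ephemeral node we own and re-create them all when the
// session is re-established. `ZkLike` is a made-up stand-in interface, not
// a real ZooKeeper client API.
import java.util.LinkedHashMap;
import java.util.Map;

class PersistentEphemeral {
    interface ZkLike {
        void createEphemeral(String path, byte[] data);
    }

    private final ZkLike zk;
    private final Map<String, byte[]> owned = new LinkedHashMap<>();

    PersistentEphemeral(ZkLike zk) { this.zk = zk; }

    /** Create the node and remember it for later re-creation. */
    void create(String path, byte[] data) {
        owned.put(path, data);
        zk.createEphemeral(path, data);
    }

    /** Connection-state callback: after a blip, reinstate everything we own. */
    void onReconnected() {
        for (Map.Entry<String, byte[]> e : owned.entrySet())
            zk.createEphemeral(e.getKey(), e.getValue());
    }
}
```

In the real recipe this bookkeeping (plus guarding against races with the old session) is what `PersistentEphemeralNode.start()` handles for you; watchers need equivalent treatment, since the raw client does not reinstate them either.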
Hi,
We are prototyping Kafka + Storm for our stream processing / event
processing needs. One of the issues we face is a huge influx of stream data
from one of our customers. If we have a single topic for this stream for
all customers, other customers who are behind the big customer stream would
1. What does 'zookeeper state changed (Expired)' mean?
2. Has anyone seen issues like this before? Where zookeeper connections
are flaky enough to cause leader elections?
It means ZooKeeper expired the session. The most common reason for this is
a client-side GC pause (in your case, the client is the Kafk
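If a GC pause is indeed the cause, one common mitigation (besides tuning GC) is raising the session timeout so that a pause no longer outlives the session. A sketch of the relevant settings; the names and defaults below are the 0.8-era ones, adjust to your environment:

```properties
# broker / consumer configuration (0.8-era names)
zookeeper.session.timeout.ms=6000      # default; raise if GC pauses exceed it
zookeeper.connection.timeout.ms=6000
```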
Which brings up the question - Do we need ShutdownBroker anymore? It seems
like the config should handle controlled shutdown correctly anyway.
Thanks,
Neha
On Thu, Mar 20, 2014 at 9:16 PM, Jun Rao wrote:
> We haven't been testing the ShutdownBroker command in 0.8.1 rigorously
> since in 0.8.1,
ZookeeperConsumerConnector actually has a smart way to avoid doing n rebalances
when n consumers start one after the other in quick succession. It queues
up requests for more rebalances while the current rebalance is in progress,
effectively reducing the number of rebalance attempts. Look for
watcherEx
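The coalescing idea described above can be sketched as a small dirty-flag pattern. This is an illustrative stand-in, not the actual ZookeeperConsumerConnector code; `begin()`/`finish()` are exposed only so an overlapping rebalance can be simulated in one thread:

```java
// Illustrative stand-in for the rebalance-coalescing idea: watcher events
// that arrive while a rebalance is running only set a flag, so n quick
// membership changes cost at most one follow-up rebalance.
class RebalanceCoalescer {
    private boolean inProgress = false;
    private boolean pending = false;
    private int rebalanceCount = 0;

    /** Watcher event: start a rebalance now, or queue (at most) one. */
    void onMembershipChange() {
        if (inProgress) { pending = true; return; }
        begin();
        finish();
    }

    /** Start one full partition-reassignment pass. */
    void begin() { inProgress = true; rebalanceCount++; }

    /** End the pass; if events arrived meanwhile, run exactly one more. */
    void finish() {
        inProgress = false;
        if (pending) { pending = false; begin(); finish(); }
    }

    int rebalanceCount() { return rebalanceCount; }
}
```

Five consumers joining while one rebalance is in flight therefore cost two passes in total, not six.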
We haven't been testing the ShutdownBroker command in 0.8.1 rigorously
since in 0.8.1, one can do the controlled shutdown through the new config
"controlled.shutdown.enable". Instead of running the ShutdownBroker command
during the upgrade, you can also wait until under replicated partition
count d
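For reference, the config-based controlled shutdown mentioned above looks like this in server.properties (0.8.1 setting names; the values shown are the defaults):

```properties
# server.properties
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```

With this enabled, a normal shutdown signal makes the broker move its leaders off before exiting, without running the ShutdownBroker tool.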
An example of the consumer API is here:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
I read the source code of the consumer
[kafka.consumer.ZookeeperConsumerConnector], and I found the rebalance is
completed in the consumer. I want to know: when a consumer A watches the
/consume
Hm, just saw something a little fishy.
(In the following logs, analytics1021 (id 21) and analytics1022 (id 22) are
brokers, and analytics1023, analytics1024, analytics1025 are ZooKeepers.)
At 2014-03-20 21:12:26, analytics1021 lost its connection to zookeeper. It
reconnected to analytics1023, bu
While upgrading from 0.8.0 to 0.8.1 in place, I observed some surprising
behavior using kafka.admin.ShutdownBroker. At the start, there were no
under-replicated partitions. After running
bin/kafka-run-class.sh kafka.admin.ShutdownBroker --broker 10 ...
Partitions that had replicas on broker 10 w
Guozhang:
Nice approach. I'll give that a try as well. Thanks.
Todd
From: Guozhang Wang
To: "users@kafka.apache.org" ,
Date: 2014-03-19 07:02 PM
Subject:Re: Can a producer detect when a topic has no consumers?
Currently producers cannot detect if a topic is consumed by
1) The new producer and consumer are being designed to take care of
auto-balancing between partitions. Right?
That's correct.
2) With the currently available producer and consumer, is my current
setup (pls see attached file) a good design in terms of scalability?
Your Kafka setup seems reasonable t
Thanks Guozhang and Neha for your answers.
I will try Neha's approach as I want to replace the brokers.
On Thu, Mar 20, 2014 at 6:27 PM, Neha Narkhede wrote:
> Reshef,
>
> If you would like to just replace one broker at a time, then you can
> shutdown the broker and start up the broker on the n
Reshef,
If you would like to just replace one broker at a time, then you can
shutdown the broker and start up the broker on the new box with the *same*
broker.id. By doing this, the broker will automatically sync data for all
the partitions it hosts. You can wait for the under replicated partition
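Concretely, the only identity that matters here is broker.id: server.properties on the replacement machine keeps the old id (id 10 below is just an example):

```properties
# server.properties on the new machine
broker.id=10            # same id as the broker being replaced
log.dirs=/data/kafka    # new (empty) disks; replicas re-sync automatically
```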
Hello Reshef,
Have you checked this page?
http://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
Guozhang
On Thu, Mar 20, 2014 at 5:44 AM, Reshef Mann wrote:
> Hi,
>
> I hope someone can point me to the right place.
>
> I'm running a Kafka (0.8) cluster of 3 machines and woul
I would say those specs are probably a bit much for ZooKeeper, particularly
the memory and SAS disks, assuming your usage of ZooKeeper is consistent
with doing many more reads than writes, which is the typical ZooKeeper use
case. The CPU and network interface seem about right, but I would go with
lowe
I’m using jmxtrans to do this for Ganglia, but it should work the same for
Graphite:
http://www.jmxtrans.org/
Here’s an example Kafka jmxtrans json file.
https://github.com/wikimedia/puppet-kafka/blob/master/kafka-jmxtrans.json.md
You can change the output writers to use Graphite instead of Gan
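For reference, swapping the output writer to Graphite looks roughly like this in the jmxtrans JSON; the Graphite host/port and the `obj`/`attr` values are placeholders, so take the actual queries from the kafka-jmxtrans.json linked above:

```json
{
  "servers": [{
    "host": "localhost",
    "port": "9999",
    "queries": [{
      "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
      "attr": ["Count"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings": { "host": "graphite.example.com", "port": 2003 }
      }]
    }]
  }]
}
```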
Please, I need help with sending metrics to Graphite. Can anyone help me
resolve this?
Thank You.
Regards,
Sanjay Mengani
Extn : 4060
Mobile : +91-9985267763
Ian:
That's a nice way to look at the problem. Yes, I would welcome looking
at a snippet of your code.
Todd
From: Ian Friedman
To: users@kafka.apache.org, Todd Gatts/Raleigh/IBM@IBMUS,
Date: 2014-03-19 04:02 PM
Subject:Re: Can a producer detect when a topic has no consume
Hi Jun,
Thanks for your response and help. Yes indeed, BOTH the Kafka and ZooKeeper
versions were different, which was causing this error!
It's now working!
Thanks for your help and time!
Cheers,
Mo.
On 19 March 2014 14:07, Mo Firouz wrote:
> Hello.
>
> I am trying to migrate from Kafka 0.7 to
I will give it a look and see how much effort a native port would be.
/Magnus
2014-03-19 22:20 GMT+07:00 Tianning Zhang :
> Hi Magnus,
>
> our applications are running under Windows and use many Windows features.
> Therefore Cygwin cannot be used at runtime. However, it would be
> interesting
Hi,
I hope someone can point me to the right place.
I'm running a Kafka (0.8) cluster of 3 machines and would like to upgrade
to bigger machines with bigger disks by replacing the servers one by one.
What are the steps for doing this without compromising the stability of the
system?
Thanks,
Reshe
Hi Dan,
I am currently moving the code from our TFS repository to GitHub. I will have
to remove proprietary dependencies and also go through the company's internal
process for making it public. My tentative estimate is that it can be
available on public GitHub in May.
KR
Tianning
> Von: Dan H