You may want to take a look at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whycan'tmyconsumers/producersconnecttothebrokers ?
Thanks,
Jun
On Wed, Jun 11, 2014 at 12:12 PM, François Langelier
wrote:
> FYI, I found out what was wrong and I think it's a weird behaviour that you
> guys may want to fix...
Hi,
Good Morning!!
Could you please add my email to the Kafka mailing list?
My email id is solo...@gmail.com
--
Thanks
Satya Dandi
WebSphere Sr Consultant
832 721 2763
I’m having the same trouble using the Camus HDFS consumer. Were you able to
figure it out?
In your second case (1-broker cluster and putting your laptop to sleep) these
exceptions should be transient and disappear after a while.
In the logs you should see ZK session expirations (hence the initial/transient
exceptions, which in this case are expected and ok), followed by new ZK
sessions being established.
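For example, to spot the expirations you could grep the broker log -- assuming the default server.log under the logs/ directory of the Kafka install (adjust the path to your setup):
grep -i "zookeeper session" logs/server.log
grep -i "expired" logs/server.log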
There is no error in controller.log and nothing logged in state-change.log.
On Wed, Jun 11, 2014 at 11:03 PM, Jun Rao wrote:
> It sounds like broker 5 didn't get the needed message from the controller.
> Was there any error in the controller and state-change log when the above
> error started?
>
> Thanks,
> Jun
Take a look at Loggly.com's AWS setup for Kafka, e.g. as described on their
blog (very recently) as well as in their talk at AWS re:Invent 2013.
--Michael
> On 11.06.2014, at 19:43, S Ahmed wrote:
>
> For those of you hosting on ec2, could someone suggest a "minimum"
> recommended setup for kafka?
In the console producer you can specify producer properties on the command
line, e.g. metadata-expiry-ms.
You can type just ./kafka-console-producer.sh and it will show you all the
configs that you can specify.
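For instance -- assuming a 0.8.x-style console producer where this property is exposed as a --metadata-expiry-ms flag (run the script with no arguments, as above, to confirm the exact option name for your version):
./kafka-console-producer.sh --broker-list localhost:9092 --topic test2 --metadata-expiry-ms 1000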
Guozhang
On Wed, Jun 11, 2014 at 10:56 AM, Prakash Gowri Shankor <
prakash.shan...@gmail.co
Thanks for your response Michael.
In step 3, I am actually stopping the entire cluster and restarting it
without the 2nd broker. But I see your point. When I look in
/tmp/kafka-logs-2 (which is the log dir for the 2nd broker) I see it
holds test2-1 (i.e. the 1st partition of the test2 topic).
For /tmp/k
Prakash,
you configured the topic with a replication factor of only 1, i.e. no
additional replica beyond "the original one". This replication setting
of 1 means that only one of the two brokers will ever host the (single)
replica -- which is implied to also be the leader in-sync replica -- of
each partition.
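For the two-broker setup described earlier, a replication factor of 2 would place a replica of each partition on both brokers -- a sketch, mirroring the create command shown elsewhere in this thread:
./kafka-topics.sh --create --topic test2 --partitions 3 --replication-factor 2 --zookeeper localhost:2181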
FYI, I found out what was wrong and I think it's a weird behaviour that you
guys may want to fix...
First of all, my setup is the following:
I have 3 brokers named:
kaf1.kafka.mydns -> 50.50.50.51
kaf2.kafka.mydns -> 50.50.50.52
kaf3.kafka.mydns -> 50.50.50.53
Those are the names in the DNS.
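For anyone who lands on the FAQ entry linked earlier in this thread: with DNS names like these, the usual suspect is the hostname each broker registers in ZooKeeper. A hypothetical 0.8.x server.properties fragment for kaf1 (property names recalled from that era's broker config; verify against your version):
broker.id=1
port=9092
# The hostname handed to clients in metadata responses; if unset the
# broker advertises its locally resolved hostname, which remote
# clients may not be able to resolve.
advertised.host.name=kaf1.kafka.mydns
advertised.port=9092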
Guozhang,
I set this in my producer.properties
topic.metadata.refresh.interval.ms=1000
Then I start the console producer as
./kafka-console-producer.sh --broker-list localhost:9092 --topic test2
I still don't see data being written to different partitions after every 1
second.
I wonder if the
Yes, here are the steps:
Create the topic as: ./kafka-topics.sh --topic test2 --create --partitions 3
--zookeeper localhost:2181 --replication-factor 1
1) Start the cluster with 2 brokers, 3 consumers.
2) Don't start any producer.
3) Shut down the cluster and disable one broker from starting.
4) Restart the cluster
Is this what you want from kafka-topics? I took this dump just now, while
the exception is occurring:
./kafka-topics.sh --describe --topic test2 --zookeeper localhost:2181
Topic:test2    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: test2    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: test2    Parti
For those of you hosting on ec2, could someone suggest a "minimum"
recommended setup for kafka? i.e. the # and type of instance size that you
would say is the bare minimum to get started with kafka in ec2.
My guess is the suggested route is the m3 instance type?
How about:
m3.medium: 1 vCPU, 3.75 GB RAM
Thanks Joel, I tried to turn on the trace logging but I saw nothing... maybe
I did it wrong...
Anyway, I tried another way... I started the console consumer and producer
on a new topic against my remote brokers...
The topic was created successfully but when I produce a message, nothing
happens...
- The message is
The JMX metric should be of the form clientId*-ConsumerLag under kafka.server.
Pausing the iteration will indirectly pause the underlying fetcher.
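One hypothetical way to browse for that MBean -- assuming the JVM was started with JMX enabled (e.g. JMX_PORT=9999) and using the third-party jmxterm tool; the port, jar name, and exact bean name here are all assumptions to verify locally:
java -jar jmxterm-uber.jar
open localhost:9999
beans -d kafka.server
# pick the bean whose name contains ConsumerLag, then e.g.:
get -b <that-bean-name> Value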
Thanks,
Jun
On Wed, Jun 11, 2014 at 3:09 AM, Bogdan Dimitriu (bdimitri) <
bdimi...@cisco.com> wrote:
> Which JMX MBeans are you referring to, Jun? I couldn't find anything that
> gives me the same information as the ConsumerOffsetChecker tool.
It sounds like broker 5 didn't get the needed message from the controller.
Was there any error in the controller and state-change log when the above
error started?
Thanks,
Jun
On Tue, Jun 10, 2014 at 10:00 PM, Bongyeon Kim
wrote:
> No, broker 5 is alive with log.
>
>
> [2014-06-11 13:59:45,17
Which JMX MBeans are you referring to, Jun? I couldn’t find anything that
gives me the same information as the ConsumerOffsetChecker tool.
In any case, my main problem is that I don’t know when I should slow down
the iteration because I don’t know which stream the iteration is
consuming. I have the
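For reference, the tool mentioned above is typically run via kafka-run-class.sh -- a sketch assuming 0.8.x flag names and a hypothetical group name (check the tool's help output for your version):
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --group my-consumer-group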
Opened https://issues.apache.org/jira/browse/KAFKA-1489 .
Regards,
András
On 6/11/2014 6:19 AM, Jun Rao wrote:
Could you file a jira to track this?
Thanks,
Jun
On Tue, Jun 10, 2014 at 8:22 AM, András Serény
wrote:
Hi Kafka devs,
are there currently any plans to implement the global threshold