Hi Yashika,
No logs in the broker log is not normal. Can you verify whether you turned off
logging in your log4j properties file?
If it is off, please enable it, try again, and see what is in the logs.
Tim
On Thu, Apr 24, 2014 at 10:53 PM, Yashika Gupta
wrote:
Jun,
I am using Kafka 2.8.0-0.8.0 (the 0.8.0 release built for Scala 2.8.0).
There are no logs for the past month in the controller and state-change logs,
though I can see some GC logs in the kafka-home-dir/logs folder:
zookeeper-gc.log
kafkaServer-gc.log
Yashika
From: Jun Rao
Sent: Friday, April 25,
Which version of Kafka are you using? Any error in the controller and
state-change log?
Thanks,
Jun
On Thu, Apr 24, 2014 at 7:37 PM, Yashika Gupta
wrote:
I am running a single broker and the leader column has 0 as the value.
pushkar priyadarshi wrote:
you can use kafka-list-topic.sh to find out if the leader for a particular
topic is available. -1 in the leader column might indicate trouble.
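To spot that condition programmatically, the tool's output can be scanned for a leader of -1. This is only a sketch: the sample line below is an assumed output format (field names vary between 0.8.x versions), so adjust the pattern to what your kafka-list-topic.sh actually prints.

```python
import re

def partitions_without_leader(listing: str):
    """Return (topic, partition) pairs whose leader column is -1."""
    bad = []
    for line in listing.splitlines():
        # Assumed line shape: "topic: NAME partition: N leader: N ..."
        m = re.search(r"topic:\s*(\S+)\s+partition:\s*(\d+)\s+leader:\s*(-?\d+)", line)
        if m and int(m.group(3)) == -1:
            bad.append((m.group(1), int(m.group(2))))
    return bad

sample = "topic: LOGFILE04 partition: 0 leader: -1 replicas: 0 isr:"
print(partitions_without_leader(sample))  # [('LOGFILE04', 0)]
```

Piping the tool's output into a check like this makes it easy to alert on leaderless partitions instead of eyeballing the listing.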
On Fri, Apr 25, 2014 at 6:34 AM, Guozhang Wang wrote:
I had cleaned up the topics using the following command:
rm -rf /tmp/kafka-logs/*
And verified using the topics list command before executing the script.
Am I missing anything else?
Regards,
Yashika
Guozhang Wang wrote:
Could you double check if the topic LOGFILE04 is already created on the
servers?
I don't do any partition reassignment.
This phenomenon happens when the broker hits the following error:
[2014-03-14 12:11:44,310] INFO Partition
[nelo2-normal-logs,0] on broker 0: Shrinking ISR for partition
[nelo2-normal-logs,0] from 0,1 to 0 (kafka.cluster.Partition)
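Log lines like the one above can be parsed to track ISR shrinkage over time. A minimal sketch, keyed to the exact message format shown above (kafka.cluster.Partition's "Shrinking ISR" INFO line):

```python
import re

# Pattern mirrors the broker log line quoted above; topic and partition
# appear twice, so backreferences keep the match honest.
ISR_SHRINK = re.compile(
    r"Partition \[(?P<topic>[^,]+),(?P<part>\d+)\] on broker (?P<broker>\d+): "
    r"Shrinking ISR for partition \[(?P=topic),(?P=part)\] "
    r"from (?P<old>[\d,]+) to (?P<new>[\d,]+)"
)

def parse_isr_shrink(line: str):
    """Extract topic, partition, and old/new ISR broker lists, or None."""
    m = ISR_SHRINK.search(line)
    if not m:
        return None
    return {
        "topic": m.group("topic"),
        "partition": int(m.group("part")),
        "old_isr": [int(b) for b in m.group("old").split(",")],
        "new_isr": [int(b) for b in m.group("new").split(",")],
    }

line = ("[2014-03-14 12:11:44,310] INFO Partition [nelo2-normal-logs,0] "
        "on broker 0: Shrinking ISR for partition [nelo2-normal-logs,0] "
        "from 0,1 to 0 (kafka.cluster.Partition)")
print(parse_isr_shrink(line))
```

Feeding the broker log through this gives a timeline of which replicas dropped out of the ISR and when.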
Hi Sadhan,
Do you see any errors on the server logs?
Guozhang
On Thu, Apr 24, 2014 at 12:57 PM, Sadhan Sood wrote:
Could you double check if the topic LOGFILE04 is already created on the
servers?
Guozhang
On Thu, Apr 24, 2014 at 10:46 AM, Yashika Gupta wrote:
I had this error before and corrected it by increasing the nofile limit:
add an entry for the user running the broker to /etc/security/limits.conf:
kafka - nofile 98304
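To confirm the new limit actually applies, you can check it from a shell running as the broker user; the /proc lookup in the comment assumes Linux, and $KAFKA_PID is a placeholder for the broker's real PID:

```shell
# Soft open-file limit for the current shell session (should report
# the raised value, e.g. 98304, after re-login as the kafka user).
ulimit -Sn

# For an already-running broker, inspect its effective limits directly:
# grep 'Max open files' /proc/$KAFKA_PID/limits
```

Note that limits.conf changes only take effect on a new login session, so the broker must be restarted from a fresh session to pick them up.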
On Thu, Apr 24, 2014 at 1:46 PM, Yashika Gupta
wrote:
We are seeing some strange behavior from brokers after we had to change
our log retention policy on brokers yesterday. We had a huge spike in
producer data for a small period, which caused brokers to get very close to
the max disk space. Normally our retention policy is a good 6-7 days, but
since ou
Jun,
The detailed logs are as follows:
24.04.2014 13:37:31,812 INFO main kafka.producer.SyncProducer - Disconnecting
from localhost:9092
24.04.2014 13:37:38,612 WARN main kafka.producer.BrokerPartitionInfo - Error
while fetching metadata [{TopicMetadata for topic LOGFILE04 ->
No partition metadata
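A WARN like the one above means the producer could not obtain partition metadata for the topic. One way to surface these in a large producer log is a simple filter keyed on the exact strings in the lines above (the sample log text is reconstructed from this thread, not a definitive format):

```python
def metadata_errors(log_text: str, topic: str):
    """Return WARN lines reporting metadata-fetch failures for a topic."""
    return [
        line for line in log_text.splitlines()
        if "Error while fetching metadata" in line and topic in line
    ]

log = (
    "24.04.2014 13:37:31,812 INFO main kafka.producer.SyncProducer - "
    "Disconnecting from localhost:9092\n"
    "24.04.2014 13:37:38,612 WARN main kafka.producer.BrokerPartitionInfo - "
    "Error while fetching metadata [{TopicMetadata for topic LOGFILE04 -> ...}]"
)
print(metadata_errors(log, "LOGFILE04"))
```

If this turns up repeated failures for one topic, the usual suspects are a topic that was never created on the brokers or a partition with no available leader.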
Before that error message, the log should tell you the cause of the error.
Could you dig that out?
Thanks,
Jun
On Thu, Apr 24, 2014 at 10:12 AM, Yashika Gupta wrote:
0.8.1.1 is being voted now.
Thanks,
Jun
On Thu, Apr 24, 2014 at 10:07 AM, Drew Goya wrote:
Partition reassignment wasn't fully working in 0.8-beta. So you probably
will have to upgrade existing brokers to 0.8.1 before running partition
reassignment. Also, 0.8.1.1 will be out soon.
Thanks,
Jun
On Thu, Apr 24, 2014 at 9:49 AM, vimpy batra wrote:
Delete topic doesn't quite work yet and we will try to fix it in the next
release. https://issues.apache.org/jira/browse/KAFKA-1397
Thanks,
Jun
On Thu, Apr 24, 2014 at 9:49 AM, Drew Goya wrote:
Hi,
I am working on a POC where I have 1 Zookeeper and 2 Kafka Brokers on my local
machine. I am running 8 sets of Kafka consumers and producers running in
parallel.
Below are my configurations:
Consumer Configs:
zookeeper.session.timeout.ms=12
zookeeper.sync.time.ms=2000
zookeeper.connecti
Interesting. Which version of Kafka are you using? Were you doing some
partition reassignment?
Thanks,
Jun
On Wed, Apr 23, 2014 at 11:14 PM, 陈小军 wrote:
> Hi Team,
> I found a strange phenomenon of the isr list in my kafka cluster.
>
> When I use the tool that kafka provides to get the topic i
This just hit me this morning as well, any news on 0.8.1.1? My ops guy is
going to kill me, we just rolled off my older build of 0.8.1 to the
official release.
On Thu, Apr 3, 2014 at 11:55 PM, Krzysztof Ociepa <ociepa.krzysz...@gmail.com> wrote:
> Hi Guozhang,
> Hi Neha,
>
> Thanks a lot for y
Just tried my first topic delete today and it looks like something went
wrong on the controller. I issued the command on a test topic and shortly
after that a describe looked like:
Topic:TimeoutQueueTest PartitionCount:256 ReplicationFactor:3 Configs:
Topic: TimeoutQueueTest Partition: 0 Leader:
Hello,
We are currently running a kafka 0.8-beta cluster. We are planning to expand
the existing cluster and use 0.8.1 version on the new nodes. Before upgrading
the older ones we want the new ones to participate in the cluster. We plan to
use "reassign-partitions" tool in 0.8.1 to reassign pa
We typically run all of our Zookeeper instances separate, but we do have
one Kafka cluster that is colocated with the Zookeeper nodes. It works
just fine, probably in part because Zookeeper handles everything serially.
The caveat is that the cluster that we're doing this on is not designed
for perf
Oo, I’m curious about this as well! Wikimedia is considering doing this
if/when we install brokers in our web caching data centers.
On Apr 24, 2014, at 11:49 AM, Sudarshan Kadambi (BLOOMBERG/ 731 LEXIN)
wrote:
Are there any thoughts on running Zookeeper on the same physical nodes that run
the Kafka broker? So the loss of a node affects quorum and possibly requires
electing new leaders at both the ZK and the broker level. Are there race
conditions or other failure cases that could come about from eithe