Can someone help to analyze it?
> On Nov 10, 2017, at 11:08 AM, Json Tu wrote:
>
> I'm sorry for my poor English.
>
> What I really mean is that my broker machine has 8 cores and 16 GB of memory, but my
> JVM configuration is as below.
> java -Xmx1G -Xms1G -server -XX:+UseG1GC -X
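(Not part of the original mail: for reference, a minimal sketch of how one might size the heap through the environment variables that kafka-server-start.sh honours; the 4G figure and the extra G1 flags are illustrative assumptions, not advice from this thread.)
  # assumed example: a few GB of heap on a 16 GB machine, leaving the rest to the OS page cache
  export KAFKA_HEAP_OPTS="-Xms4G -Xmx4G"
  export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
  bin/kafka-server-start.sh config/server.properties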
wrote:
>
> It seems broker `4759750` is removed from the ISR of partition [Yelp, 5] in every round
> of ISR shrinking. Did you check whether everything is working correctly on that broker? (See
> the sketch below.)
>
>
> From: Json Tu
> Sent: November 10, 2017 11:08
> To: users@kafka.apache.org
> Cc: d...@kafka.apache.org;
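(Not from the quoted reply above: a quick way to watch whether that broker keeps falling out of the ISR is to describe the topic with the stock tooling; the ZooKeeper address below is a placeholder.)
  # 0.9-era tooling talks to ZooKeeper; compare the Isr column with the Replicas column for partition 5
  bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic Yelp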
suggestions.
> On Nov 9, 2017, at 9:59 PM, John Yost wrote:
>
> I've seen this before, and it was caused by long GC pauses, due in large part to
> a memory heap larger than 8 GB.
>
> --John
>
> On Thu, Nov 9, 2017 at 8:17 AM, Json Tu wrote:
>
>> Hi,
>>we have a kafka clus
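(An aside, not from the thread: one way to confirm long GC pauses is to read the GC log that kafka-server-start.sh enables by default via -loggc; the path below assumes the default LOG_DIR.)
  # G1 pause durations show up on the "GC pause" lines of the GC log
  grep -E "GC pause|Full GC" logs/kafkaServer-gc.log | tail -n 20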
Hi,
we have a kafka cluster made up of 6 brokers, each with 8 CPU cores and 16 GB of
memory, and we have about 1600 topics in the cluster, with about 1700 partition
leaders and 1600 partition replicas on each broker.
When we restart a normal broker, we find that there are 500
Hi all,
we have a cluster with 10 brokers, and our kafka version is 0.9.0.1. We
repeatedly collect metrics such as the offlinePartition metric from each broker
every 2 minutes to monitor the cluster,
but occasional timeouts occur when we fetch the data from some of the brokers.
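(Not part of the original mail: one lightweight way to poll a single metric such as OfflinePartitionsCount is kafka.tools.JmxTool; the broker host and JMX port 9999 below are assumptions that depend on your JMX_PORT setting.)
  # poll the controller's OfflinePartitionsCount over JMX every 2 minutes
  bin/kafka-run-class.sh kafka.tools.JmxTool \
    --object-name kafka.controller:type=KafkaController,name=OfflinePartitionsCount \
    --jmx-url service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi \
    --reporting-interval 120000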
Hi all,
We are now using kafka 0.9.0 in our production environment. We added one
broker to the cluster and executed a partition reassignment across all brokers, and we
found that network card and disk I/O were very high.
I know KIP-73 has resolved this problem, but I wonder whether I can merge it into my
kafk
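(Not from the original mail: for reference, in releases that already ship KIP-73, 0.10.1.0 and later, the reassignment tool can apply a replication throttle; the 50 MB/s value, file name, and ZooKeeper address are placeholders.)
  # execute the reassignment with a replication throttle of roughly 50 MB/s per broker
  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file reassign.json --execute --throttle 50000000
  # run --verify when it finishes; this also removes the throttle configs
  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file reassign.json --verify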
Would be grateful to hear opinions from experts out there. Thanks in advance
> On Dec 16, 2016, at 6:17 PM, Json Tu wrote:
>
> Hi all,
> we have a 0.9.0.0 cluster with 3 nodes, and we have a topic with 3
> replicas that we produce to with acks=-1; our average send latency is 7 ms. I prepare
Hi all,
we have a 0.9.0.0 cluster with 3 nodes, and we have a topic with 3
replicas that we produce to with acks=-1; our average send latency is 7 ms. I plan to
optimize the performance of the cluster by adjusting some parameters.
We find our brokers have the config item below set:
log.flush.inte
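(An aside, not part of the mail: the flush-related broker settings usually reviewed here are log.flush.interval.messages and log.flush.interval.ms; the grep below only shows how to see what a broker currently sets and does not reproduce the truncated value above.)
  # show any explicit flush settings in the broker config;
  # by default Kafka leaves flushing to the OS page cache and relies on replication for durability
  grep '^log.flush' config/server.properties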
Hi,
Can someone else help to review the PR in this JIRA:
https://issues.apache.org/jira/browse/KAFKA-4447.
> On Nov 23, 2016, at 11:28 PM, Json Tu wrote:
>
> Hi,
> We have a cluster of kafka 0.9.0.1 with 3 nodes, and
Thanks to Jason Gustafson; I hope more contributors can take part in this
discussion.
https://issues.apache.org/jira/browse/KAFKA-4447
> On Nov 27, 2016, at 9:20 PM, Json Tu wrote:
>
> Anybody? This is very disconcerting! If convenient, can s
Anybody? This is very disconcerting! If convenient, can somebody help to confirm
this strange issue?
> On Nov 26, 2016, at 1:35 AM, Json Tu wrote:
>
> Thanks Guozhang,
> if it's convenient, can we discuss it in the JIRA
> https://issues.apache.org/jira/browse/
ontinuously see broker
> 100's listener fires and it acts like a controller, then there may be an
> issue with the 0.9.0.1 version.
>
> Guozhang
>
> On Wed, Nov 23, 2016 at 7:28 AM, Json Tu wrote:
>
>> Hi,
>>We have a cluster of kafka 0.9.0.1 with 3 nodes
Hi,
We have a kafka 0.9.0.1 cluster with 3 nodes, and we found a strange
controller log entry, as shown below.
[2016-11-07 03:14:48,575] INFO [SessionExpirationListener on 100], ZK expired;
shut down all controller components and try to re-elect
(kafka.controller.KafkaController$SessionExpiration
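(Not part of the mail: after a ZK session expiration it can help to confirm which broker currently holds the controller role and the controller epoch; the ZooKeeper address is a placeholder.)
  # /controller records the broker id of the current controller, /controller_epoch the election count
  bin/zookeeper-shell.sh zk1:2181 get /controller
  bin/zookeeper-shell.sh zk1:2181 get /controller_epoch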
Hi, when I move __consumer_offsets from an old broker to a new broker, we encounter the
error below, and it keeps repeating, flooding the log.
server.log.2016-11-07-19:[2016-11-07 19:17:15,392] ERROR Found invalid messages
during fetch for partition [__consumer_offsets,10] offset 13973569 error
Message found with corru
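(Not from the mail: one way to check whether the on-disk segment itself is corrupt on the source broker is kafka.tools.DumpLogSegments; the log directory and segment file name below are placeholders.)
  # prints each record's offset, CRC and "isvalid" flag for the segment that covers the failing offset
  bin/kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration \
    --files /path/to/kafka-logs/__consumer_offsets-10/00000000000000000000.log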
//issues.apache.org/jira/browse/KAFKA-4360
>> Project: Kafka
>> Issue Type: Bug
>> Components: controller
>> Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>> Reporter: Json Tu
>> Labels
Key: KAFKA-4360
>> URL: https://issues.apache.org/jira/browse/KAFKA-4360
>> Project: Kafka
>> Issue Type: Bug
>> Components: controller
>> Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>>
Hi all,
We have a kafka cluster with 11 nodes, and we found that some
partitions' ISR size is not equal to their replica count. Because our data traffic is
small, we think the ISR size should eventually become equal to the replica count,
but it does not recover to normal, so we tried to shut down a brok
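(Not part of the mail: the stock tooling can list exactly which partitions currently have an ISR smaller than the assigned replica set; the ZooKeeper address is a placeholder.)
  # lists only the partitions whose ISR is smaller than the replica list
  bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions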
epeated error log entries since it should at most print one entry (and
> should be DEBUG not ERROR) for each delayed request whose partition leaders
> have migrated out.
>
>
>
> Guozhang
>
>
>
> On Wed, Oct 26, 2016 at 7:46 AM, Json Tu wrote:
>
>> it make t
in this request may not be
completely satisfied and returned to the fetching broker,
which causes some producers and consumers to fail for a long time. I don't know whether this
is correct
> On Oct 25, 2016, at 8:32 PM, Json Tu wrote:
>
> Hi all,
> I use Kafka 0.9.0.0, and we have a cluster with 6 nodes, when
Hi all,
I use Kafka 0.9.0.0, and we have a cluster with 6 nodes. When I restart
a broker, we find there are many log entries as below,
[2016-10-24 15:29:00,914] ERROR [KafkaApi-2141642] error when handling request
Name: FetchRequest; Version: 1; CorrelationId: 4928; ClientId:
ReplicaFetcherThre
Thanks. I patched it, and everything works fine now.
> On Oct 9, 2016, at 12:39 PM, Becket Qin wrote:
>
> Can you check if you have KAFKA-3003 when you run the code?
>
> On Sat, Oct 8, 2016 at 12:52 AM, Kafka wrote:
>
>> Hi all,
>> we found our consumers have high CPU load in our production
>> environment, as we
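(Not from the mail: independent of the KAFKA-3003 fix, a generic way to see which consumer thread is burning CPU is to map the hot native thread id to a JVM stack trace; the pid and thread id below are placeholders.)
  # find the hottest threads of the consumer JVM (pid 12345 is a placeholder)
  top -H -p 12345
  # convert the busy thread id (e.g. 12367) to hex and look it up in a thread dump
  printf '%x\n' 12367
  jstack 12345 | grep -A 20 'nid=0x304f'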
Hi all,
I have a kafka 0.9.0.0 cluster with 11 nodes.
First, I found server log entries as below,
server.log.2016-10-17-22:[2016-10-17 22:22:13,885] WARN
[ReplicaFetcherThread-0-4], Error in fetch
kafka.server.ReplicaFetcherThread$FetchRequest@367c9f98. Possible cause:
org.apache.kafka.common.pr