Reply: console consumer doesn't recognise localhost:2181 but only 127.0.0.1:2181

2017-07-07 Thread Hu Xi
"Caused by: java.net.UnknownHostException: localhsot" - "localhsot"? A typo?



From: M. Manna
Sent: July 7, 2017 16:59
To: users@kafka.apache.org
Subject: console consumer doesn't recognise localhost:2181 but only 127.0.0.1:2181

Hello,

As part of my PoC I wanted to check a setup with two Windows 10 boxes where

1) One box runs ZooKeeper
2) The other box runs Kafka

The idea was to physically separate ZooKeeper and Kafka to isolate issues.
As a trial, I set it up on my Windows 10 machine, following the documentation
to create a 3-node ZK and 3-broker Kafka cluster. For this, I downloaded
zookeeper-3.4.10 separately and started the ZK nodes (in order) independently
of the Kafka servers, so I am *not* using zookeeper-server-start.

When I ran the topics utility

C:/kafka/bin/windows/kafka-topics --create --topic "test1" --partitions 3
--replication-factor 3 --zookeeper
localhost:2181,localhost:2182,localhost:2183

it spits out an error since it cannot resolve localhost:

C:\kafka_2.10-0.10.2.1\bin\windows>kafka-topics.bat --create --topic test1
> --zookeeper localhsot:2181,localhost:2182,localhost:2183 --partitions 3
> --replication-factor 3
> Exception in thread "main" org.I0Itec.zkclient.exception.ZkException:
> Unable to connect to localhsot:2181,localhost:2182,localhost:2183
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
> at
> kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:53)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> Caused by: java.net.UnknownHostException: localhsot
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
> ... 7 more


but when I replace them with 127.0.0.1 it works! I know that I have to keep
correct values for the advertised.listeners or listeners property to make this
work correctly (and I have). But since the name "localhost" is being ignored
when ZooKeeper is started, I am thinking that for some reason it couldn't bind
127.0.0.1 to localhost.
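Besides the "localhsot" typo Hu Xi spotted in the command above, a general sanity check on Windows (a suggestion, not from the original thread) is to confirm that localhost resolves at all. The hosts file at C:\Windows\System32\drivers\etc\hosts should either contain, or the OS resolver should supply, a mapping like:

```
127.0.0.1    localhost
```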

Has anyone experienced this with a fully segregated ZK and Kafka setup? Or is
it just a Windows issue?

KR,


Reply: Kafka 0.9.0.1 partitions shrink and expand frequently after restart the broker

2017-11-09 Thread Hu Xi
It seems broker `4759750` is always the one removed for partition [Yelp,5] in every
round of ISR shrinking. Did you check whether everything works alright on that broker?



From: Json Tu
Sent: November 10, 2017 11:08
To: users@kafka.apache.org
Cc: d...@kafka.apache.org; Guozhang Wang
Subject: Re: Kafka 0.9.0.1 partitions shrink and expand frequently after restart the
broker

I'm sorry for my poor English.

What I really mean is that my broker machine has 8 cores and 16 GB of RAM, but my
JVM configuration is as below:
java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC
-Djava.awt.headless=true -Xloggc:/xx/yy/kafkaServer-gc.log -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=10M
-XX:+HeapDumpOnOutOfMemoryError

We have 30+ clusters with this JVM configuration, all deployed on machines with
8 cores and 16 GB of RAM. Compared to the other clusters, the current cluster
has more than 5 times as many partitions.
When we restart the other clusters, there is no such phenomenon.

Maybe some metrics or logs can lead to the root cause of this phenomenon.
Looking forward to more suggestions.


> On November 9, 2017, at 9:59 PM, John Yost wrote:
>
> I've seen this before and it was due to long GC pauses due in large part to
> a memory heap > 8 GB.
>
> --John
>
> On Thu, Nov 9, 2017 at 8:17 AM, Json Tu  wrote:
>
>> Hi,
>>we have a Kafka cluster made of 6 brokers, with 8 CPUs and
>> 16 GB of memory on each broker's machine. We have about 1600 topics in the
>> cluster, with about 1700 partition leaders and 1600 partition replicas on each
>> broker.
>>when we restart a normal broker, we find that 500+ partitions
>> shrink and expand their ISR frequently during the restart;
>> there are many logs as below.
>>
>>   [2017-11-09 17:05:51,173] INFO Partition [Yelp,5] on broker 4759726:
>> Expanding ISR for partition [Yelp,5] from 4759726 to 4759726,4759750
>> (kafka.cluster.Partition)
>> [2017-11-09 17:06:22,047] INFO Partition [Yelp,5] on broker 4759726:
>> Shrinking ISR for partition [Yelp,5] from 4759726,4759750 to 4759726
>> (kafka.cluster.Partition)
>> [2017-11-09 17:06:28,634] INFO Partition [Yelp,5] on broker 4759726:
>> Expanding ISR for partition [Yelp,5] from 4759726 to 4759726,4759750
>> (kafka.cluster.Partition)
>> [2017-11-09 17:06:44,658] INFO Partition [Yelp,5] on broker 4759726:
>> Shrinking ISR for partition [Yelp,5] from 4759726,4759750 to 4759726
>> (kafka.cluster.Partition)
>> [2017-11-09 17:06:47,611] INFO Partition [Yelp,5] on broker 4759726:
>> Expanding ISR for partition [Yelp,5] from 4759726 to 4759726,4759750
>> (kafka.cluster.Partition)
>> [2017-11-09 17:07:19,703] INFO Partition [Yelp,5] on broker 4759726:
>> Shrinking ISR for partition [Yelp,5] from 4759726,4759750 to 4759726
>> (kafka.cluster.Partition)
>> [2017-11-09 17:07:26,811] INFO Partition [Yelp,5] on broker 4759726:
>> Expanding ISR for partition [Yelp,5] from 4759726 to 4759726,4759750
>> (kafka.cluster.Partition)
>> …
>>
>>
>>and the shrink and expand repeats after 30 minutes, which is the default
>> value of leader.imbalance.check.interval.seconds; at that time
>> we can find the controller's auto-rebalance log, which can lead some
>> partitions' leaders to change to this restarted broker.
>>we have no shrink and expand while our cluster is running, except when
>> we restart it, so replica.fetch.thread.num is 1, and it seems enough.
>>
>>we can reproduce it at each restart. Can someone give some suggestions?
>> Thanks in advance.



Reply: kafka.admin.TopicCommand Failing

2017-11-16 Thread Hu Xi
Increasing `zookeeper.connection.timeout.ms` to a relatively larger value might
help. Besides, you could check the GC log to see if STW pauses expired the
ZK sessions.
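As a sketch of that advice (the values below are purely illustrative, not recommendations), the broker's server.properties could be adjusted like this:

```
# Allow more time for the broker to (re)establish its ZooKeeper connection
# (the poster currently has 6000 ms)
zookeeper.connection.timeout.ms=30000
# The session timeout can be raised similarly if GC pauses are long enough
# to expire ZK sessions
zookeeper.session.timeout.ms=18000
```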



From: Abhimanyu Nagrath
Sent: November 17, 2017 13:51
To: users@kafka.apache.org; u...@zookeeper.apache.org
Subject: Re: kafka.admin.TopicCommand Failing

One more thing: checking my kafka-server.log, it is filled with the warning

Attempting to send response via channel for which there is no open
connection, connection id 2 (kafka.network.Processor)

Is this the reason for the above issue? How do I resolve it? I need help;
production is breaking.


Regards,
Abhimanyu

On Thu, Nov 16, 2017 at 5:08 PM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:

> Hi, I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a
> single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer
> groups and 500 topics, each having 250 partitions. I am able to execute
> the commands which require only the Kafka broker, and they run fine,
> e.g.
>
> > ./kafka-consumer-groups.sh --bootstrap-server localhost:9092
> > --describe --group 
>
> But when I execute an admin command like create topic or alter topic, for
> example
>
> > ./kafka-topics.sh --create --zookeeper :2181
> > --replication-factor 1 --partitions 1 --topic 
>
> Following exception is being displayed:
>
>
>
> > Error while executing topic command : replication factor: 1 larger
> > than available brokers: 0 [2017-11-16 11:22:13,592] ERROR
> > org.apache.kafka.common.errors.InvalidReplicationFactorException:
> > replication factor: 1 larger than available brokers: 0
> > (kafka.admin.TopicCommand$)
>
> I checked that my broker is up. In server.log the following warnings are present:
>
> [2017-11-16 11:14:26,959] WARN Client session timed out, have not
> heard from server in 15843ms for sessionid 0x15aa7f586e1c061
> (org.apache.zookeeper.ClientCnxn)
> [2017-11-16 11:14:28,795] WARN Unable to reconnect to ZooKeeper
> service, session 0x15aa7f586e1c061 has expired (org.apache.zookeeper.
> ClientCnxn)
> [2017-11-16 11:21:46,055] WARN Unable to reconnect to ZooKeeper
> service, session 0x15aa7f586e1c067 has expired (org.apache.zookeeper.
> ClientCnxn)
>
> Below mentioned is my Kafka server configuration :
>
> broker.id=1
> delete.topic.enable=true
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/kafka/data/logs
> num.partitions=1
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=30
> zookeeper.connect=:2181
> zookeeper.connection.timeout.ms=6000
>
> Zookeeper Configuration is :
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=10
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=5
> # the directory where the snapshot is stored.
> # do not use /tmp for storage, /tmp here is just
> # example sakes.
> dataDir=/zookeeper/data
> # the port at which the clients will connect
> clientPort=2181
> # the maximum number of client connections.
> # increase this if you need to handle more clients
> #maxClientCnxns=60
> autopurge.snapRetainCount=20
> # Purge task interval in hours
> # Set to "0" to disable auto purge feature
> autopurge.purgeInterval=48
>
> I am not able to figure out which configuration to tune or what I am
> missing. Any help will be appreciated.
>
>
>
>
>  Regards,
> Abhimanyu
>


Reply: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-17 Thread Hu Xi
Congratulations, Rajini Sivaram.  Very well deserved!



From: Konstantine Karantasis
Sent: January 18, 2018 6:23
To: d...@kafka.apache.org
Cc: users@kafka.apache.org
Subject: Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

Congrats Rajini!

-Konstantine

On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin  wrote:

> Congratulations, Rajini!
>
> On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma  wrote:
>
> > Congratulations Rajini!
> >
> > On 17 Jan 2018 10:49 am, "Gwen Shapira"  wrote:
> >
> > Dear Kafka Developers, Users and Fans,
> >
> > Rajini Sivaram became a committer in April 2017.  Since then, she
> remained
> > active in the community and contributed major patches, reviews and KIP
> > discussions. I am glad to announce that Rajini is now a member of the
> > Apache Kafka PMC.
> >
> > Congratulations, Rajini and looking forward to your future contributions.
> >
> > Gwen, on behalf of Apache Kafka PMC
> >
>


Reply: kafka controller setting for detecting broker failure and re-electing a new leader for partitions?

2018-01-25 Thread Hu Xi
Yu Yang,


There does exist a broker-side config named 'controller.socket.timeout.ms'.
Decreasing it to a reasonably smaller value might help, but please use it
with caution.
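For illustration only (the value is made up; lowering this too far can make the controller declare healthy brokers failed, which is why the caution above applies):

```
# server.properties: how long the controller waits for a socket connection
# to a broker before treating the attempt as failed (default 30000 ms)
controller.socket.timeout.ms=10000
```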


From: Yu Yang
Sent: January 25, 2018 15:42
To: users@kafka.apache.org
Subject: kafka controller setting for detecting broker failure and re-electing a new
leader for partitions?

Hi everyone,

Recently we had a cluster in which the controller failed to connect to a
broker A for an extended period of time.  I had expected that the
controller would identify the broker as a failed broker, and re-elect
another broker as the leader for partitions that were hosted on broker A.
However, this did not happen in that cluster. What happened was that broker
A was still considered as the leader for some partitions, and those
partitions are marked as under replicated partitions. Is there any
configuration setting in kafka to speed up the broker failure detection?


2018-01-24 14:13:57,132] WARN [Controller-37-to-broker-4-send-thread],
Controller 37's connection to broker testkafka04:9092 (id: 4 rack: null)
was unsuccessful (kafka.controller.RequestSendThread)
java.net.SocketTimeoutException: Failed to connect within 3 ms
at
kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:231)
at
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:182)
at
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

Thanks!

Regards,
-Yu


Reply: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Hu Xi
Congrats, Dong Lin!



From: Matthias J. Sax
Sent: March 29, 2018 6:37
To: users@kafka.apache.org; d...@kafka.apache.org
Subject: Re: [ANNOUNCE] New Committer: Dong Lin

Congrats!

On 3/28/18 1:16 PM, James Cheng wrote:
> Congrats, Dong!
>
> -James
>
>> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
>>
>> Hello everyone,
>>
>> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
>> our invitation to be a new Kafka committer.
>>
>> Dong started working on Kafka about four years ago, since which he has
>> contributed numerous features and patches. His work on Kafka core has been
>> consistent and important. Among his contributions, most noticeably, Dong
>> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
>> overall cost, and added the deleteDataBefore() API (KIP-107) to allow users to
>> actively remove old messages. Dong has also been active in the community,
>> participating in KIP discussions and doing code reviews.
>>
>> Congratulations and looking forward to your future contribution, Dong!
>>
>> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>



Reply: Re: Re: [ANNOUNCE] New committer: Xi Hu

2020-06-25 Thread Hu Xi
Thank you, everyone. It is my great honor to be a part of the community. Will 
make a greater contribution in the coming days.


From: Roc Marshal
Sent: June 25, 2020 10:20
To: users@kafka.apache.org
Subject: Re: Re: [ANNOUNCE] New committer: Xi Hu

Congratulations ! Xi Hu.


Best,
Roc Marshal.

At 2020-06-25 01:30:33, "Boyang Chen"  wrote:
>Congratulations Xi! Well deserved.
>
>On Wed, Jun 24, 2020 at 10:10 AM AJ Chen  wrote:
>
>> Congratulations, Xi.
>> -aj
>>
>>
>>
>> On Wed, Jun 24, 2020 at 9:27 AM Guozhang Wang  wrote:
>>
>> > The PMC for Apache Kafka has invited Xi Hu as a committer and we are
>> > pleased to announce that he has accepted!
>> >
>> > Xi Hu has been actively contributing to Kafka since 2016, and is well
>> > recognized especially for his non-code contributions: he maintains a tech
>> > blog post evangelizing Kafka in the Chinese speaking community (
>> > https://www.cnblogs.com/huxi2b/), and is one of the most active
>> answering
>> > member in Zhihu (Chinese Reddit / StackOverflow) Kafka topic. He has
>> > presented in Kafka meetup events in the past and authored a
>> > book deep-diving on Kafka architecture design and operations as well (
>> > https://www.amazon.cn/dp/B07JH9G2FL). Code wise, he has contributed 75
>> > patches so far.
>> >
>> >
>> > Thanks for all the contributions Xi. Congratulations!
>> >
>> > -- Guozhang, on behalf of the Apache Kafka PMC
>> >
>>


Reply: Hello, I found a text error in your Kafka API documentation.

2020-07-01 Thread Hu Xi

Thanks for pointing this out. I have filed a JIRA ticket
[KAFKA-10222] and will get it fixed :)



From: Koray
Sent: June 30, 2020 23:06
To: users-subscribe ; users ; dev-subscribe ; dev ; dev ; dev-subscribe ; apache
Subject: Hello, I found a text error in your Kafka API documentation.

Hello, I found an error in your API documentation; I am reporting it so you can fix it, thanks.

The problem is: the KStreamBuilder class does not have a "from" method. What I am
looking at is the Kafka 10.0 version. I hope it can be corrected so that developers
are not misled. Thanks.
This is the page in question:
http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html







Reply: Failing to get Topic Configuration

2017-01-20 Thread Hu Xi
Seems you have to add some topic-level configs before you can see them. Run the
command below for an example:

 > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics 
 > --entity-name my-topic --alter --add-config max.message.bytes=128000




From: Barot, Abhishek
Sent: January 20, 2017 22:22
To: users@kafka.apache.org
Subject: Failing to get Topic Configuration

Hi Kafka-users,

Could anyone suggest why I am not able to get the topic configuration by
running the command below?

$ ./kafka-configs.sh --zookeeper *a.b.c.d:2181/kafka --describe --entity-name 
test --entity-type topics
Configs for topics:test are
$
*a.b.c.d - private url where zookeeper is hosted.

The only line that gets printed is "Configs for topics:test are" and then I get
the prompt back :(. Am I missing anything here?


Thanks,
Abhishek Barot



Reply: Re: How to choose one broker as Group Coordinator

2017-02-15 Thread Hu Xi
Correct me if I am wrong.


Firstly, determine the target partition of __consumer_offsets where the group's
offsets will be stored by calculating:

Math.abs(groupID.hashCode() % 50)

Secondly, find out the leader broker for that partition; that broker acts
as the coordinator.
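The two steps above can be sketched in plain Java; the class and method names here are made up for illustration, and 50 is the broker default for offsets.topic.num.partitions (the real broker code uses an equivalent non-negative hash, so results match for typical group IDs):

```java
public class CoordinatorPartition {
    // Number of __consumer_offsets partitions; the broker default
    // (offsets.topic.num.partitions) is 50.
    static final int OFFSETS_TOPIC_PARTITIONS = 50;

    // Partition of __consumer_offsets that stores this group's offsets.
    // The leader of that partition serves as the group coordinator.
    static int coordinatorPartition(String groupId) {
        // hashCode() may be negative; the remainder keeps |r| < 50,
        // so Math.abs safely maps it into [0, 50).
        return Math.abs(groupId.hashCode() % OFFSETS_TOPIC_PARTITIONS);
    }

    public static void main(String[] args) {
        System.out.println(coordinatorPartition("console-consumer-12345"));
    }
}
```

Given the partition number, `kafka-topics.sh --describe --topic __consumer_offsets` shows which broker leads it.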


From: Yuanjia
Sent: February 15, 2017 18:10
To: users
Subject: Re: Re: How to choose one broker as Group Coordinator

My question is the selection procedure.

Thanks.



From: 陈江枫
Date: 2017-02-15 18:01
To: users
Subject: Re: How to choose one broker as Group Coordinator
When a consumer joins a group, the selection is triggered, and then a
rebalance follows.

2017-02-15 17:59 GMT+08:00 Yuanjia :

> Hi all,
> The Group Coordinator can be different for different consumer groups. When
> a consumer wants to join a group, how is the Group Coordinator chosen?
>
> Thanks,
> Yuanjia Li
>


Reply: About "org.apache.kafka.common.protocol.types.SchemaException" Problem

2017-04-26 Thread Hu Xi
Seems it's similar to https://issues.apache.org/jira/browse/KAFKA-4599?


From: Yang Cui
Sent: April 27, 2017 11:55
To: users@kafka.apache.org
Subject: Re: About "org.apache.kafka.common.protocol.types.SchemaException" Problem

Hi All,
   Can anyone help answer this question? Thanks a lot!

On 26/04/2017, 8:00 PM, "Yang Cui"  wrote:

 Dear All,

  I am using Kafka cluster 2.11_0.9.0.1, and the new consumer of
2.11_0.9.0.1.
  When I set the quota configuration to:
  quota.producer.default=100
  quota.consumer.default=100
  and use the new consumer to consume data, the following error sometimes
happens:

  org.apache.kafka.common.protocol.types.SchemaException: Error reading 
field 'responses': Error reading array of size 1140343, only 37 bytes available
  at 
org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
  at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
  at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
  at com.fw.kafka.ConsumerThread.run(TimeOffsetPair.java:458)

  It does not occur every time, but when it happens, it repeats many
times.







Reply: Resetting offsets

2017-05-03 Thread Hu Xi
It seems there is no command-line tool for this out of the box, but you could write
a simple Java client application that first calls 'seek' or 'seekToBeginning' to
reset the offsets to what you expect and then invokes commitSync to commit them.
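A minimal sketch of such a client (the server, group, and topic values are placeholders; this assumes the standard kafka-clients Java API of that era and a reachable broker, so treat it as an outline rather than a tested tool):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class ResetGroupOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "my-group");                  // group whose offsets to reset
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        try {
            // Assign the partitions explicitly instead of subscribing,
            // so no group rebalance is involved.
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) {
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);   // or consumer.seek(tp, offset)
            // Reading position() forces each lazy seek to resolve to a
            // concrete offset before committing.
            for (TopicPartition tp : partitions) {
                consumer.position(tp);
            }
            consumer.commitSync();                  // persist the new offsets
        } finally {
            consumer.close();
        }
    }
}
```

Run it while the group's regular consumers are stopped, so the committed offsets are not overwritten immediately.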



From: Paul van der Linden
Sent: May 3, 2017 18:28
To: users@kafka.apache.org
Subject: Resetting offsets

I'm trying to reset the offsets for all partitions for all topics for a
consumer group, but I can't seem to find a working way.

The command line tool provides a tool to remove a consumer group (which
would be fine in this occasion), but this is not working with the "new"
style consumer groups. I tried to set consumer offsets with a client, which
also didn't work (we are using confluent-kafka-python with librdkafka).

Is there any way to reset the offsets (preferable with python or a command
line tool)?

Thanks


Reply: Trouble fetching messages with error "Skipping fetch for partition"

2017-05-04 Thread Hu Xi
This is a trace-level log, which means the consumer has already created a fetch
request to the given node from which you are reading data, so no further request
can be created until that one completes. Did you get any other warn-level or
error-level logs when failing to fetch messages?



From: Sachin Nikumbh
Sent: May 5, 2017 10:08
To: users@kafka.apache.org
Subject: Trouble fetching messages with error "Skipping fetch for partition"

Hello,
I am using Kafka 0.10.1.0 and failing to fetch messages, with the following error
message in the log:

Skipping fetch for partition
MYPARTITION because there is an in-flight request to MYMACHINE:9092 (id: 0
rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)

Has anyone seen this error? How do I fix it?
Any help would be greatly appreciated.
Thanks,
Sachin


Reply: 0.10.1.0 version Kafka replica sync slow

2017-05-11 Thread Hu Xi
If you confirm that is the reason, then try increasing `num.replica.fetchers` to
speed up the replication.
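For example, in each broker's server.properties (the value 4 is purely illustrative; the default is 1):

```
# Number of fetcher threads used to replicate messages from each source broker.
# More threads increase replication parallelism at the cost of more I/O.
num.replica.fetchers=4
```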


From: 蔡高年 <838199...@qq.com>
Sent: May 11, 2017 11:45
To: users
Subject: 0.10.1.0 version Kafka replica sync slow

Hello,
recently our prod environment had a problem: the Kafka leader shrinks and
expands the ISR frequently, resulting in the consumer not being able to
consume the messages. Do you have any advice?

Thank you.


Here are the related logs.





[serviceop@SZC-L0046001 kafka]$ ./bin/kafka-topics.sh --describe --zookeeper 
30.16.36.181:2181,30.16.36.182:2181,30.16.36.183:2181/ubasKafka --topic 
kafkaUbasTopicProd



Topic:kafkaUbasTopicProd    PartitionCount:8    ReplicationFactor:3    Configs:
    Topic: kafkaUbasTopicProd    Partition: 0    Leader: 1    Replicas: 1,7,0    Isr: 1,7,0
    Topic: kafkaUbasTopicProd    Partition: 1    Leader: 2    Replicas: 2,0,1    Isr: 1,2,0
    Topic: kafkaUbasTopicProd    Partition: 2    Leader: 3    Replicas: 3,1,2    Isr: 1,2,3
    Topic: kafkaUbasTopicProd    Partition: 3    Leader: 4    Replicas: 4,2,3    Isr: 2,3,4
    Topic: kafkaUbasTopicProd    Partition: 4    Leader: 5    Replicas: 5,3,4    Isr: 3,4,5
    Topic: kafkaUbasTopicProd    Partition: 5    Leader: 6    Replicas: 6,4,5    Isr: 4,5,6
    Topic: kafkaUbasTopicProd    Partition: 6    Leader: 7    Replicas: 7,5,6    Isr: 5,6,7
    Topic: kafkaUbasTopicProd    Partition: 7    Leader: 0    Replicas: 0,6,7    Isr: 0




[2017-05-09 18:28:34,533] INFO Partition [kafkaUbasTopicProd,7] on broker 0: 
Expanding ISR for partition [kafkaUbasTopicProd,7] from 0 to 0,6 
(kafka.cluster.Partition)

[2017-05-09 18:28:53,901] INFO Partition [kafkaUbasTopicProd,7] on broker 0: 
Shrinking ISR for partition [kafkaUbasTopicProd,7] from 0,6 to 0 
(kafka.cluster.Partition)

[2017-05-09 18:33:03,901] INFO Partition [__consumer_offsets,2] on broker 0: 
Shrinking ISR for partition [__consumer_offsets,2] from 0,5,6 to 0,5 
(kafka.cluster.Partition)





[2017-05-09 18:33:03,903] INFO Partition [__consumer_offsets,10] on broker 0: 
Shrinking ISR for partition [__consumer_offsets,10] from 0,6,7 to 0,7 
(kafka.cluster.Partition)