[jira] [Updated] (KAFKA-493) High CPU usage on inactive server

2016-02-27 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-493:
--
Fix Version/s: (was: 0.10.1.0)

> High CPU usage on inactive server
> -
>
> Key: KAFKA-493
> URL: https://issues.apache.org/jira/browse/KAFKA-493
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Jay Kreps
> Attachments: Kafka-2014-11-10.snapshot.zip, Kafka-sampling1.zip, 
> Kafka-sampling2.zip, Kafka-sampling3.zip, Kafka-trace1.zip, Kafka-trace2.zip, 
> Kafka-trace3.zip, backtraces.txt, stacktrace.txt
>
>
> > I've been playing with the 0.8 branch of Kafka and noticed that idle CPU 
> > usage is fairly high (13% of a core). Is that to be expected? I did look 
> > at the stack, but didn't see anything obvious. A background task?
> > I wanted to mention how I am getting into this state. I've set up two 
> > machines with the latest 0.8 code base and am using a replication factor 
> > of 2. On starting the brokers there is no idle CPU activity. Then I run a 
> > test that essentially does 10k publish operations, each followed by an 
> > immediate consume operation (I was measuring latency). Once this has run, 
> > the Kafka nodes seem to keep consuming CPU essentially forever.
> hprof results:
> THREAD START (obj=53ae, id = 24, name="RMI TCP Accept-0", 
> group="system")
> THREAD START (obj=53ae, id = 25, name="RMI TCP Accept-", 
> group="system")
> THREAD START (obj=53ae, id = 26, name="RMI TCP Accept-0", 
> group="system")
> THREAD START (obj=53ae, id = 21, name="main", group="main")
> THREAD START (obj=53ae, id = 27, name="Thread-2", group="main")
> THREAD START (obj=53ae, id = 28, name="Thread-3", group="main")
> THREAD START (obj=53ae, id = 29, name="kafka-processor-9092-0", 
> group="main")
> THREAD START (obj=53ae, id = 200010, name="kafka-processor-9092-1", 
> group="main")
> THREAD START (obj=53ae, id = 200011, name="kafka-acceptor", group="main")
> THREAD START (obj=574b, id = 200012, 
> name="ZkClient-EventThread-20-localhost:2181", group="main")
> THREAD START (obj=576e, id = 200014, name="main-SendThread()", 
> group="main")
> THREAD START (obj=576d, id = 200013, name="main-EventThread", 
> group="main")
> THREAD START (obj=53ae, id = 200015, name="metrics-meter-tick-thread-1", 
> group="main")
> THREAD START (obj=53ae, id = 200016, name="metrics-meter-tick-thread-2", 
> group="main")
> THREAD START (obj=53ae, id = 200017, name="request-expiration-task", 
> group="main")
> THREAD START (obj=53ae, id = 200018, name="request-expiration-task", 
> group="main")
> THREAD START (obj=53ae, id = 200019, name="kafka-request-handler-0", 
> group="main")
> THREAD START (obj=53ae, id = 200020, name="kafka-request-handler-1", 
> group="main")
> THREAD START (obj=53ae, id = 200021, name="Thread-6", group="main")
> THREAD START (obj=53ae, id = 200022, name="Thread-7", group="main")
> THREAD START (obj=5899, id = 200023, name="ReplicaFetcherThread-0-2 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200024, name="ReplicaFetcherThread-0-3 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200025, name="ReplicaFetcherThread-0-0 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200026, name="ReplicaFetcherThread-0-1 on 
> broker 1, ", group="main")
> THREAD START (obj=53ae, id = 200028, name="SIGINT handler", 
> group="system")
> THREAD START (obj=53ae, id = 200029, name="Thread-5", group="main")
> THREAD START (obj=574b, id = 200030, name="Thread-1", group="main")
> THREAD START (obj=574b, id = 200031, name="Thread-0", group="main")
> THREAD END (id = 200031)
> THREAD END (id = 200029)
> THREAD END (id = 200020)
> THREAD END (id = 200019)
> THREAD END (id = 28)
> THREAD END (id = 200021)
> THREAD END (id = 27)
> THREAD END (id = 200022)
> THREAD END (id = 200018)
> THREAD END (id = 200017)
> THREAD END (id = 200012)
> THREAD END (id = 200013)
> THREAD END (id = 200014)
> THREAD END (id = 200025)
> THREAD END (id = 200023)
> THREAD END (id = 200026)
> THREAD END (id = 200024)
> THREAD END (id = 200011)
> THREAD END (id = 29)
> THREAD END (id = 200010)
> THREAD END (id = 200030)
> THREAD END (id = 200028)
> TRACE 301281:
> sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:Unknown 
> line)
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:81)
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
> 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(Sock
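
A rough, hypothetical sketch of the reporter's reproduction (10k publishes, each followed by an immediate consume, against a replication-factor-2 cluster) is shown below. It uses the modern Java clients rather than the 0.8-era Scala clients the report was filed against, and the broker addresses, topic name, and group id are illustrative assumptions:

{code}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProduceConsumeLatencyTest {
    public static void main(String[] args) throws Exception {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "broker1:9092,broker2:9092"); // hypothetical brokers
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "broker1:9092,broker2:9092"); // hypothetical brokers
        consumerProps.put("group.id", "latency-test");                       // hypothetical group
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("latency-test"));   // hypothetical topic
            consumer.poll(Duration.ofSeconds(5)); // warm-up poll so the consumer has a partition assignment

            for (int i = 0; i < 10_000; i++) {
                long start = System.nanoTime();
                // Publish one record and block until the broker acknowledges it.
                producer.send(new ProducerRecord<>("latency-test", "key", "value-" + i)).get();
                // Immediately try to consume it back.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                long elapsedMicros = (System.nanoTime() - start) / 1_000;
                System.out.printf("iteration=%d records=%d latency=%dus%n", i, records.count(), elapsedMicros);
            }
        }
    }
}
{code}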

[jira] [Resolved] (KAFKA-3251) Requesting committed offsets results in inconsistent results

2016-02-27 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3251.

Resolution: Duplicate

Thanks for letting us know.

> Requesting committed offsets results in inconsistent results
> 
>
> Key: KAFKA-3251
> URL: https://issues.apache.org/jira/browse/KAFKA-3251
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.9.0.0
>Reporter: Dimitrij Denissenko
>Assignee: Jason Gustafson
>
> Hi,
> I am using github.com/Shopify/sarama to retrieve the committed offsets for a 
> high-volume topic, but the bug actually seems to originate in Kafka itself.
> I have written a little test to query the offsets of all partitions of one 
> topic, every second. The request looks like this:
> {code}
> OffsetFetchRequest{
>   ConsumerGroup:   "my-group-name",
>   Version:         1,
>   TopicPartitions: []TopicPartition{
>     {TopicName: "logs", Partitions: []int32{0, 1, 2, 3, 4, 5, 6, 7}},
>   },
> }
> {code}
> Most of the time the responses are correct, but every 10 minutes or so there 
> is a little glitch. I am not familiar with the Kafka internals, but it looks 
> like a little race. Here's my log output:
> {code}
> ...
> 2016/02/19 09:48:10 topic=logs partition=00 error=0 offset=206567925
> 2016/02/19 09:48:10 topic=logs partition=01 error=0 offset=206671019
> 2016/02/19 09:48:10 topic=logs partition=02 error=0 offset=206567995
> 2016/02/19 09:48:10 topic=logs partition=03 error=0 offset=205785315
> 2016/02/19 09:48:10 topic=logs partition=04 error=0 offset=206526677
> 2016/02/19 09:48:10 topic=logs partition=05 error=0 offset=206713764
> 2016/02/19 09:48:10 topic=logs partition=06 error=0 offset=206524006
> 2016/02/19 09:48:10 topic=logs partition=07 error=0 offset=206629121
> 2016/02/19 09:48:11 topic=logs partition=00 error=0 offset=206572870
> 2016/02/19 09:48:11 topic=logs partition=01 error=0 offset=206675966
> 2016/02/19 09:48:11 topic=logs partition=02 error=0 offset=206573267
> 2016/02/19 09:48:11 topic=logs partition=03 error=0 offset=205790613
> 2016/02/19 09:48:11 topic=logs partition=04 error=0 offset=206531841
> 2016/02/19 09:48:11 topic=logs partition=05 error=0 offset=206718513
> 2016/02/19 09:48:11 topic=logs partition=06 error=0 offset=206529762
> 2016/02/19 09:48:11 topic=logs partition=07 error=0 offset=206634037
> 2016/02/19 09:48:12 topic=logs partition=00 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=01 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=02 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=03 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=04 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=05 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=06 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=07 error=0 offset=-1
> 2016/02/19 09:48:13 topic=logs partition=00 error=0 offset=-1
> 2016/02/19 09:48:13 topic=logs partition=01 error=0 offset=206686020
> 2016/02/19 09:48:13 topic=logs partition=02 error=0 offset=206583861
> 2016/02/19 09:48:13 topic=logs partition=03 error=0 offset=205800480
> 2016/02/19 09:48:13 topic=logs partition=04 error=0 offset=206542733
> 2016/02/19 09:48:13 topic=logs partition=05 error=0 offset=206728251
> 2016/02/19 09:48:13 topic=logs partition=06 error=0 offset=206534794
> 2016/02/19 09:48:13 topic=logs partition=07 error=0 offset=206643853
> 2016/02/19 09:48:14 topic=logs partition=00 error=0 offset=206584533
> 2016/02/19 09:48:14 topic=logs partition=01 error=0 offset=206690275
> 2016/02/19 09:48:14 topic=logs partition=02 error=0 offset=206588902
> 2016/02/19 09:48:14 topic=logs partition=03 error=0 offset=205805413
> 2016/02/19 09:48:14 topic=logs partition=04 error=0 offset=206542733
> 2016/02/19 09:48:14 topic=logs partition=05 error=0 offset=206733144
> 2016/02/19 09:48:14 topic=logs partition=06 error=0 offset=206540275
> 2016/02/19 09:48:14 topic=logs partition=07 error=0 offset=206649392
> ...
> {code}
> As you can see, the returned error code is 0 and there is no obvious reason 
> why the returned offsets are suddenly wrong/blank. 
> I have also added some debugging to our offset committer to make absolutely 
> sure the offsets we are committing are correct, and they are. 
> Any help is greatly appreciated!
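
A roughly equivalent check written against the 0.9 Java consumer, which retrieves the group's committed offsets through the same OffsetFetch request, might look like the sketch below; the broker address and the byte-array deserializers are illustrative assumptions, while the group name, topic, and partition count come from the report:

{code}
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetPoller {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // hypothetical broker
        props.put("group.id", "my-group-name");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            while (true) {
                for (int partition = 0; partition < 8; partition++) {
                    // committed() returns the group's last committed offset for the
                    // partition, or null if no committed offset is found.
                    OffsetAndMetadata committed = consumer.committed(new TopicPartition("logs", partition));
                    long offset = committed == null ? -1L : committed.offset();
                    System.out.printf("topic=logs partition=%02d offset=%d%n", partition, offset);
                }
                Thread.sleep(1000); // query once per second, as in the report
            }
        }
    }
}
{code}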



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-27 Thread chen zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170945#comment-15170945
 ] 

chen zhu commented on KAFKA-3261:
-

[~guozhang], it looks like we cannot consolidate kafka.cluster.BrokerEndPoint 
and kafka.cluster.EndPoint because they actually have different contents and 
usages: BrokerEndPoint contains host, port and broker id, whereas EndPoint 
contains host, port and the security protocol. Do you think this ticket should 
be closed?

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes serve similar purposes and can be consolidated. Also, as 
> [~sasakitoa] suggested, we can remove their "uriParseExp" variables and use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.
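
As a small illustration (not from the ticket), the quoted pattern already extracts a host and port from both bare addresses and a listener-style string, which is presumably why a possibly modified version of it could replace the per-class "uriParseExp" variables; the input strings below are hypothetical examples:

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPortPatternDemo {
    // Copied from the snippet quoted above (org.apache.kafka.common.utils.Utils).
    private static final Pattern HOST_PORT_PATTERN =
            Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");

    public static void main(String[] args) {
        // Bare host:port, bracketed IPv6, and a listener-style string (hypothetical inputs).
        for (String address : new String[]{"myhost:9092", "[::1]:9093", "PLAINTEXT://myhost:9092"}) {
            Matcher matcher = HOST_PORT_PATTERN.matcher(address);
            if (matcher.matches()) {
                System.out.printf("address=%s host=%s port=%s%n",
                        address, matcher.group(1), matcher.group(2));
            }
        }
    }
}
{code}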



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)