[jira] [Updated] (KAFKA-1857) Kafka Broker ids are removed ( with zookeeper , Storm )

2015-01-11 Thread Yoonhyeok Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yoonhyeok Kim updated KAFKA-1857:
-
Description: 
Hi,
I am using a real-time analytics system built with ZooKeeper, Storm, and Kafka.

- versions
Storm : storm-core 0.9.2
Kafka : 0.8.1 (3 brokers)
storm-kafka : 0.9.2
ZooKeeper : 3.4.6 (standalone)

This problem occurs with earlier versions as well.

- exceptions
EndOfStreamException,
java.nio.channels.CancelledKeyException,
org.apache.zookeeper.KeeperException$BadVersionException

---

When I use the Kafka spout with Storm, I sometimes see ZooKeeper log entries like the following in zookeeper.out:
{code}
2015-01-10 19:19:00,836 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ab82c142b0658, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:619)
{code}

At this point ZooKeeper is still working well, and storm-kafka looks fine and transfers data correctly.
But as time goes by, these errors keep occurring, and then I see different logs like:

{code}
2015-01-10 23:22:11,022 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /70.7.12.38:48504 which had sessionid 0x14ab82c142b0644
2015-01-10 23:22:11,023 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ab82c142b001d, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,023 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /70.7.12.38:55885 which had sessionid 0x14ab82c142b001d
2015-01-10 23:22:11,023 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ab82c142b063e, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,026 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /70.7.12.38:48444 which had sessionid 0x14ab82c142b063e
2015-01-10 23:22:11,026 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ab82c142b0639, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,026 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /70.7.12.38:48380 which had sessionid 0x14ab82c142b0639
2015-01-10 23:22:11,027 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ab82c142b0658, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,027 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /70.7.12.38:56724 which had sessionid 0x14ab82c142b0658
2015-01-10 23:22:11,431 [myid:] - ERROR [SyncThread:0:NIOServerCnxn@178] - Unexpected Exception: 
java.nio.channels.CancelledKeyException
        at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
        at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
        at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151)
        at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081)
        at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:170)
        at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:200)
        at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:131)
{code}
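The log excerpt above pairs each EndOfStreamException with a later "Closed socket connection" entry for the same session id, which is how the dropped client sessions can be identified. As an illustration only (this helper is not part of the original report), a small sketch that extracts those session ids from a zookeeper.out excerpt:

```python
import re

# Session ids appear in both the EndOfStreamException lines and the
# "Closed socket connection" INFO lines; intersecting the two sets
# gives the client sessions that ZooKeeper actually dropped.
SESSION_RE = re.compile(r"sessionid (0x[0-9a-f]+)")

def dropped_sessions(log_text):
    """Return session ids that hit EndOfStreamException and were then closed."""
    eos, closed = set(), set()
    for line in log_text.splitlines():
        m = SESSION_RE.search(line)
        if not m:
            continue
        if "EndOfStreamException" in line:
            eos.add(m.group(1))
        elif "Closed socket connection" in line:
            closed.add(m.group(1))
    return sorted(eos & closed)

sample = (
    "EndOfStreamException: Unable to read additional data from client "
    "sessionid 0x14ab82c142b001d, likely client has closed socket\n"
    "2015-01-10 23:22:11,023 [myid:] - INFO - Closed socket connection for "
    "client /70.7.12.38:55885 which had sessionid 0x14ab82c142b001d\n"
)
print(dropped_sessions(sample))  # ['0x14ab82c142b001d']
```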

at 
org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151)
at 
org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081)
at 
org.apache.zookeeper.server.FinalRequestProcessor

[jira] [Commented] (KAFKA-1819) Cleaner gets confused about deleted and re-created topics

2015-01-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273260#comment-14273260
 ] 

Gwen Shapira commented on KAFKA-1819:
-

Hey [~nehanarkhede], can you double check that you tested on branches with 
KAFKA-1815 applied?
It seems to resolve the exact error you encountered.

> Cleaner gets confused about deleted and re-created topics
> -
>
> Key: KAFKA-1819
> URL: https://issues.apache.org/jira/browse/KAFKA-1819
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: KAFKA-1819.patch, KAFKA-1819_2014-12-26_13:58:44.patch, 
> KAFKA-1819_2014-12-30_16:01:19.patch
>
>
> I get an error like this after deleting a compacted topic and re-creating it. 
> I think it's because the brokers don't remove cleaning checkpoints from the 
> cleaner-offset-checkpoint file. This is from a build based off commit bd212b7.
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 
> 587607 but segment base offset is 0 for log foo-6.
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:502)
> at kafka.log.Cleaner.clean(LogCleaner.scala:300)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:214)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:192)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
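To make the failure mode above concrete, here is a small self-contained Java sketch (not Kafka's actual code; names and the simplified check are illustrative) of how a cleaning checkpoint that survives topic deletion trips the cleaner's sanity check once the topic is re-created with a fresh log:

```java
import java.util.HashMap;
import java.util.Map;

public class CleanerCheckpointDemo {
    // Simulated cleaner-offset-checkpoint file: log name -> last clean offset.
    static final Map<String, Long> checkpoints = new HashMap<>();

    // In the spirit of Cleaner.buildOffsetMap's require(): the saved offset
    // must fall inside the current log, or cleaning cannot start.
    static long cleaningStart(String log, long segmentBaseOffset, long logEnd) {
        long lastClean = checkpoints.getOrDefault(log, segmentBaseOffset);
        if (lastClean < segmentBaseOffset || lastClean > logEnd)
            throw new IllegalArgumentException("Last clean offset is " + lastClean
                + " but segment base offset is " + segmentBaseOffset + " for log " + log);
        return lastClean;
    }

    public static void main(String[] args) {
        checkpoints.put("foo-6", 587607L);     // checkpoint left over from the old topic
        // Topic deleted and re-created: the new log starts again at offset 0.
        try {
            cleaningStart("foo-6", 0L, 1000L); // stale entry -> same error as the report
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        checkpoints.remove("foo-6");           // dropping the entry on delete avoids it
        System.out.println(cleaningStart("foo-6", 0L, 1000L)); // 0
    }
}
```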



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: New consumer client

2015-01-11 Thread Bhavesh Mistry
Hi Jay,

One of the pain points of the existing consumer code is the occasional
CORRUPT_MESSAGE. Right now it is hard to pinpoint the cause of a
CORRUPT_MESSAGE, especially when it happens on the MirrorMaker side. Is there
any proposal to auto-skip corrupted messages and to have reporting visibility
into CRC errors (metrics, traceability to find the corruption, per topic, etc.)?
I am not sure if this is the correct email thread to address this; if not,
please let me know.
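As a sketch of what such visibility could look like (purely illustrative; these are not existing Kafka APIs or metric names), a consumer could verify each message's CRC, skip mismatches instead of failing, and keep a per-topic corruption counter:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.zip.CRC32;

public class CrcValidator {
    private final Map<String, Long> corruptCount = new HashMap<>();

    // Returns true if the payload matches its stored CRC; otherwise records
    // a per-topic corruption metric so the message can be skipped and reported.
    public boolean validate(String topic, byte[] payload, long storedCrc) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        if (crc.getValue() == storedCrc) return true;
        corruptCount.merge(topic, 1L, Long::sum);
        return false;
    }

    public long corrupted(String topic) {
        return corruptCount.getOrDefault(topic, 0L);
    }

    public static void main(String[] args) {
        CrcValidator v = new CrcValidator();
        byte[] msg = "hello".getBytes();
        CRC32 good = new CRC32();
        good.update(msg);
        System.out.println(v.validate("topicA", msg, good.getValue()));     // valid
        System.out.println(v.validate("topicA", msg, good.getValue() + 1)); // counted, skipped
        System.out.println(v.corrupted("topicA"));
    }
}
```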

Will provide feedback about new consumer api and changes.
Thanks,

Bhavesh

On Sun, Jan 11, 2015 at 7:57 PM, Jay Kreps  wrote:

> I uploaded an updated version of the new consumer client (
> https://issues.apache.org/jira/browse/KAFKA-1760). This is now almost
> feature complete, and has pretty reasonable testing and metrics. I think it
> is ready for review and could be checked in once 0.8.2 is out.
>
> For those who haven't been following, this is meant to be a new consumer
> client, like the new producer in 0.8.2, and intended to replace the
> existing "high level" and "simple" Scala consumers.
>
> This still needs the server-side implementation of the partition assignment
> and group management to be fully functional. I have just stubbed this out
> in the server to allow the implementation and testing of the server but
> actual usage will require it. However the client that exists now is
> actually a fully functional replacement for the "simple consumer" that is
> vastly easier to use correctly as it internally does all the discovery and
> failover.
>
> It would be great if people could take a look at this code, and
> particularly at the public apis which have several small changes from the
> original proposal.
>
> Summary
>
> What's there:
> 1. Simple consumer functionality
> 2. Offset commit and fetch
> 3. Ability to change position with seek
> 4. Ability to commit all or just some offsets
> 5. Controller discovery, failure detection, heartbeat, and fail-over
> 6. Controller partition assignment
> 7. Logging
> 8. Metrics
> 9. Integration tests including tests that simulate random broker failures
> 10. Integration into the consumer performance test
>
> Limitations:
> 1. There could be some lingering bugs in the group management support; it
> is hard to test fully with just the stub support on the server, so we'll
> need to get the server working to do better, I think.
> 2. I haven't implemented wild-card subscriptions yet.
> 3. No integration with console consumer yet
>
> Performance
>
> I did some performance comparison with the old consumer over localhost on
> my laptop. Usually localhost isn't good for testing but in this case it is
> good because it has near infinite bandwidth so it does a good job at
> catching inefficiencies that would be hidden with a slower network. These
> numbers probably aren't representative of what you would get over a real
> network, but help bring out the relative efficiencies.
> Here are the results:
> - Old high-level consumer: 213 MB/sec
> - New consumer: 225 MB/sec
> - Old simple consumer: 242 MB/sec
>
> It may be hard to get this client up to the same point as the simple
> consumer as it is doing very little beyond allocating and wrapping byte
> buffers that it reads off the network.
>
> The big thing that shows up in profiling is the buffer allocation for
> reading data. So one speed-up would be to pool these.
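The pooling idea could look roughly like this (a minimal, hypothetical sketch, not the client's actual allocator): keep a free list of fixed-size buffers and hand them back out instead of allocating a fresh one per network read.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public class BufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a pooled buffer if one is available, otherwise allocate.
    public ByteBuffer acquire() {
        ByteBuffer b = free.pollFirst();
        return b != null ? b : ByteBuffer.allocate(bufferSize);
    }

    // Reset position/limit and return the buffer to the pool for reuse.
    public void release(ByteBuffer b) {
        b.clear();
        free.addFirst(b);
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(64);
        ByteBuffer a = pool.acquire();
        pool.release(a);
        // The next acquire returns the same instance: no new allocation.
        System.out.println(pool.acquire() == a);
    }
}
```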
>
> Some things to discuss
>
> 1. What should the behavior of consumer.position() and consumer.committed()
> be immediately after initialization (prior to calling poll)? Currently
> these methods just fetch the current value from memory, but if the position
> isn't in memory the client will try to fetch it from the server; if no
> position is found it will use the auto-offset reset policy to pick one. I
> think this is the right thing to do, because otherwise you can't guarantee
> how many calls to poll() would be required before initialization is
> complete. But it is kind of weird.
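The fallback order described in point 1 can be sketched as follows (hypothetical names and a simulated "server" map, not the real client): answer from memory if possible, else from the committed offset, else from the reset policy.

```java
import java.util.HashMap;
import java.util.Map;

public class PositionLookup {
    enum Reset { EARLIEST, LATEST }

    final Map<Integer, Long> inMemory = new HashMap<>();   // partition -> known position
    final Map<Integer, Long> committed = new HashMap<>();  // simulated server-side offsets
    final long logStart = 0L, logEnd = 500L;               // simulated log bounds
    final Reset policy;

    PositionLookup(Reset policy) {
        this.policy = policy;
    }

    long position(int partition) {
        Long p = inMemory.get(partition);
        if (p != null) return p;                  // 1. already known locally
        Long c = committed.get(partition);        // 2. fetch the committed offset
        long resolved = (c != null) ? c
            : (policy == Reset.EARLIEST ? logStart : logEnd); // 3. reset policy
        inMemory.put(partition, resolved);
        return resolved;
    }

    public static void main(String[] args) {
        PositionLookup c = new PositionLookup(Reset.LATEST);
        c.committed.put(0, 42L);
        System.out.println(c.position(0)); // from the committed offset
        System.out.println(c.position(1)); // no offset anywhere -> reset to latest
    }
}
```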
> 2. Overall code structure improvement. These NIO network clients tend to be
> very imperative in nature. I'm not sure this is bad, but if anyone has any
> idea on improving the code I'd love to hear it.
>
> -Jay
>


[jira] [Commented] (KAFKA-1819) Cleaner gets confused about deleted and re-created topics

2015-01-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273265#comment-14273265
 ] 

Gwen Shapira commented on KAFKA-1819:
-

Ick, never mind. I see that the DeleteLog test I added doesn't shut down 
properly. That's probably it :(
I'll upload a new patch tomorrow morning.

I still have no idea why the error doesn't reproduce in my environment.

> Cleaner gets confused about deleted and re-created topics
> -
>
> Key: KAFKA-1819
> URL: https://issues.apache.org/jira/browse/KAFKA-1819
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: KAFKA-1819.patch, KAFKA-1819_2014-12-26_13:58:44.patch, 
> KAFKA-1819_2014-12-30_16:01:19.patch
>
>
> I get an error like this after deleting a compacted topic and re-creating it. 
> I think it's because the brokers don't remove cleaning checkpoints from the 
> cleaner-offset-checkpoint file. This is from a build based off commit bd212b7.
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 
> 587607 but segment base offset is 0 for log foo-6.
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:502)
> at kafka.log.Cleaner.clean(LogCleaner.scala:300)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:214)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:192)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review request for patch attached to KAFKA-1836

2015-01-11 Thread Jaikiran Pai
I've now updated the patch submission page to include instructions for 
installation on Debian-based systems, and also added an overview section 
to help understand what the Kafka patch submission tool is for: 
https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review


-Jaikiran

On Wednesday 07 January 2015 10:41 AM, Jaikiran Pai wrote:
Thanks Joe, I'll do that. I had indeed quickly looked at that document 
over the weekend, but I saw that it was missing instructions for Debian-based 
systems (I am on Linux Mint) and so decided to skip it for now 
until I understand the whole tool chain. I'll see which of these tools 
I need to install and how to set them up (and, if needed, add those 
instructions to that document).


-Jaikiran
On Wednesday 07 January 2015 10:04 AM, Joe Stein wrote:

Hi Jaikiran, do you mind utilizing the Kafka patch review tool, please?
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool 


It is helpful for folks to comment on your code lines, as it posts to
Review Board for you and updates the JIRA too.

/***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop 
/

On Tue, Jan 6, 2015 at 11:26 PM, Jaikiran Pai 
wrote:


Hello Kafka team,

I was just looking around some newbie bugs in the JIRA over the
weekend, and decided to take up https://issues.apache.org/
jira/browse/KAFKA-1836. I have a patch, attached to that JIRA a
few days back, which I would like to be reviewed. The patch handles the
case where the awaitUpdate timeout value for Metadata updates is less
than or equal to 0. It also has the documentation and config for the
metadata.fetch.timeout.ms property updated. Finally, it has a test case
which verifies this change.
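For illustration, the guard described above might look like this (a simplified, hypothetical version of the awaitUpdate logic, not the actual patch): a non-positive timeout should fail fast instead of blocking indefinitely on wait(0).

```java
public class MetadataWait {
    private long version = 0;

    // Block until the metadata version advances past lastVersion, but reject
    // non-positive timeouts up front rather than waiting forever.
    public synchronized void awaitUpdate(long lastVersion, long maxWaitMs)
            throws InterruptedException {
        if (maxWaitMs <= 0)
            throw new IllegalArgumentException(
                "Max time to wait for metadata updates should be positive, got " + maxWaitMs);
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (version <= lastVersion) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                throw new RuntimeException("Timed out waiting for metadata update");
            wait(remaining);
        }
    }

    public synchronized void update() {
        version++;
        notifyAll();
    }

    public static void main(String[] args) throws Exception {
        MetadataWait m = new MetadataWait();
        try {
            m.awaitUpdate(0, 0);   // the bug case: timeout <= 0
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        m.update();
        m.awaitUpdate(0, 100);     // version already advanced: returns immediately
        System.out.println("ok");
    }
}
```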

Could one of you please take a look?


-Jaikiran







Re: [DISCUSS] Compatability and KIPs

2015-01-11 Thread Ted Yu
For projects written in Java, there is
http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker

I searched for a similar tool for Scala but haven't found one yet.

Cheers

On Sat, Jan 10, 2015 at 10:40 AM, Ashish Singh  wrote:

> Jay,
>
> I totally agree with paying more attention to compatibility across
> versions. Incompatibility is indeed a big cause of customers' woes. Human
> checks and stringent reviews will help, but I think having compatibility
> tests will be more effective. +INT_MAX for compatibility tests.
>
> - Ashish
>
> On Friday, January 9, 2015, Jay Kreps  wrote:
>
> > Hey guys,
> >
> > We had a bit of a compatibility slip-up in 0.8.2 with the offset commit
> > stuff. We caught this one before the final release so it's not too bad.
> But
> > I do think it kind of points to an area where we could do better.
> >
> > One piece of feedback we have gotten from going out and talking to users
> is
> > that compatibility is really, really important to them. Kafka is getting
> > deployed in big environments where the clients are embedded in lots of
> > applications and any kind of incompatibility is a huge pain for people
> > using it and generally makes upgrade difficult or impossible.
> >
> > In practice what I think this means for development is a lot more
> pressure
> > to really think about the public interfaces we are making and try our
> best
> > to get them right. This can be hard sometimes as changes come in patches
> > and it is hard to follow every single rb with enough diligence to know.
> >
> > Compatibility really means a couple things:
> > 1. Protocol changes
> > 2. Binary data format changes
> > 3. Changes in public apis in the clients
> > 4. Configs
> > 5. Metric names
> > 6. Command line tools
> >
> > I think 1-2 are critical. 3 is very important. And 4, 5 and 6 are pretty
> > important but not critical.
> >
> > One thing this implies is that we are really going to have to do a good
> job
> > of thinking about apis and use cases. You can definitely see a number of
> > places in the old clients and in a couple of the protocols where enough
> > care was not given to thinking things through. Some of those were from
> long
> > long ago, but we should really try to avoid adding to that set because
> > increasingly we will have to carry around these mistakes for a long time.
> >
> > Here are a few things I thought we could do that might help us get better
> > in this area:
> >
> > 1. Technically we are just in a really bad place with the protocol
> because
> > it is defined twice--once in the old scala request objects, and once in
> the
> > new protocol format for the clients. This makes changes massively
> painful.
> > The good news is that the new request definition DSL was intended to make
> > adding new protocol versions a lot easier and clearer. It will also make
> it
> > a lot more obvious when the protocol is changed since you will be
> checking
> > in or reviewing a change to Protocol.java. Getting the server moved over
> to
> > the new request objects and protocol definition will be a bit of a slog
> but
> > it will really help here I think.
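As a toy illustration of why a declarative, versioned protocol definition makes changes visible in review (this mini-DSL is invented for the example; Kafka's real definitions live in Protocol.java and are far richer), each request version is just a named field list, so adding a version is an obvious data change:

```java
import java.util.List;
import java.util.Map;

public class ProtocolDefs {
    // Each protocol version is declared as data: adding "v2" would be a
    // single, clearly reviewable change to this table.
    static final Map<String, List<String>> OFFSET_COMMIT = Map.of(
        "v0", List.of("group_id", "topic", "partition", "offset"),
        "v1", List.of("group_id", "generation_id", "consumer_id",
                      "topic", "partition", "offset", "timestamp"));

    static List<String> schema(String version) {
        List<String> s = OFFSET_COMMIT.get(version);
        if (s == null)
            throw new IllegalArgumentException("unknown version " + version);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(schema("v0"));
        System.out.println(schema("v1").contains("generation_id"));
    }
}
```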
> >
> > 2. We need to get some testing in place on cross-version compatibility.
> > This is work and no tests here will be perfect, but I suspect with some
> > effort we could catch a lot of things.
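One shape such a cross-version test could take (the wire formats here are invented purely for illustration): a newer reader must still decode records written in an older layout.

```java
import java.nio.ByteBuffer;

public class CompatCheck {
    // v0 layout: [int length][payload]
    static ByteBuffer writeV0(byte[] payload) {
        ByteBuffer b = ByteBuffer.allocate(4 + payload.length);
        b.putInt(payload.length).put(payload);
        b.flip();
        return b;
    }

    // v1 layout adds a leading version byte (1).
    static ByteBuffer writeV1(byte[] payload) {
        ByteBuffer b = ByteBuffer.allocate(5 + payload.length);
        b.put((byte) 1).putInt(payload.length).put(payload);
        b.flip();
        return b;
    }

    // The v1 reader tolerates both layouts: if the first byte is not the
    // version marker, it rewinds and parses the buffer as v0. (Toy scheme:
    // assumes small payload lengths so v0's high length byte is never 1.)
    static byte[] read(ByteBuffer b) {
        b.mark();
        byte first = b.get();
        if (first != 1) b.reset();
        byte[] payload = new byte[b.getInt()];
        b.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] msg = "abc".getBytes();
        // The compatibility assertion: old writer, new reader, same payload.
        System.out.println(new String(read(writeV0(msg))));
        System.out.println(new String(read(writeV1(msg))));
    }
}
```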
> >
> > 3. I was also thinking it might be worth it to get a little bit more
> formal
> > about the review and discussion process for things which will have impact
> > to these public areas to ensure we end up with something we are happy
> with.
> Python has a PEP process (https://www.python.org/dev/peps/pep-0257/) by
> > which major changes are made, and it might be worth it for us to do a
> > similar thing. We have essentially been doing this already--major changes
> > almost always have an associated wiki, but I think just getting a little
> > more rigorous might be good. The idea would be to just call out these
> wikis
> > as official proposals and do a full Apache discuss/vote thread for these
> > important changes. We would use these for big features (security, log
> > compaction, etc) as well as for small changes that introduce or change a
> > public api/config/etc. This is a little heavier weight, but I think it is
> > really just critical that we get these things right and this would be a
> way
> > to call out this kind of change so that everyone would take the time to
> > look at them.
> >
> > Thoughts?
> >
> > -Jay
> >
>
>
> --
>
> Regards,
> Ashish
>


[jira] [Commented] (KAFKA-1855) Topic unusable after unsuccessful UpdateMetadataRequest

2015-01-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272979#comment-14272979
 ] 

Sriharsha Chintalapani commented on KAFKA-1855:
---

[~hpihkala] Can you give more info on your cluster? Is it a single-node 
cluster? And are there any steps to reproduce this? Thanks.

> Topic unusable after unsuccessful UpdateMetadataRequest
> ---
>
> Key: KAFKA-1855
> URL: https://issues.apache.org/jira/browse/KAFKA-1855
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2
>Reporter: Henri Pihkala
> Fix For: 0.8.2
>
>
> Sometimes, seemingly randomly, topic creation/initialization might fail with 
> the following lines in controller.log. Other logs show no errors. When this 
> happens, the topic is unusable (gives UnknownTopicOrPartition for all 
> requests).
> For me this happens 5-10% of the time. Feels like it's more likely to happen 
> if there is time between topic creations. Observed on 0.8.2-beta, have not 
> tried previous versions.
> [2015-01-09 16:15:27,153] WARN [Controller-0-to-broker-0-send-thread], 
> Controller 0 fails to send a request to broker 
> id:0,host:192.168.10.21,port:9092 (kafka.controller.RequestSendThread)
> java.io.EOFException: Received -1 when reading from channel, socket has 
> likely been closed.
>   at kafka.utils.Utils$.read(Utils.scala:381)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2015-01-09 16:15:27,156] ERROR [Controller-0-to-broker-0-send-thread], 
> Controller 0 epoch 6 failed to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:0;ControllerEpoch:6;CorrelationId:48;ClientId:id_0-host_192.168.10.21-port_9092;AliveBrokers:id:0,host:192.168.10.21,port:9092;PartitionState:[40963064-cdd2-4cd1-937a-9827d3ab77ad,0]
>  -> 
> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:6),ReplicationFactor:1),AllReplicas:0)
>  to broker id:0,host:192.168.10.21,port:9092. Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.nio.channels.ClosedChannelException
>   at kafka.network.BlockingChannel.send(BlockingChannel.scala:97)
>   at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-11 Thread Ashish Kumar Singh (JIRA)
Ashish Kumar Singh created KAFKA-1856:
-

 Summary: Add PreCommit Patch Testing
 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh


h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*

h2. Motivation
*With great power comes great responsibility* - Uncle Ben. As the Kafka user list 
is growing, a mechanism to ensure the quality of the product is required. Quality 
becomes hard to measure and maintain in an open source project because of its 
wide community of contributors. Luckily, Kafka is not the first open source 
project and can benefit from the learnings of prior projects.

PreCommit tests are the tests that are run for each patch that gets attached to 
an open JIRA. Based on the test results, the test execution framework (test bot) 
gives the patch a +1 or -1. Having PreCommit tests takes the load off committers 
to look at or test each patch.

h2. Tests in Kafka
h3. Unit and Integration Tests
[Unit and Integration 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
 are cardinal in helping contributors avoid breaking existing functionality 
while adding new functionality or fixing older issues. These tests, at least the 
ones relevant to the changes, must be run by contributors before attaching a 
patch to a JIRA.

h3. System Tests
[System 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] are 
much wider tests that, unlike unit tests, focus on end-to-end scenarios and not 
some specific method or class.

h2. Apache PreCommit tests
Apache provides a mechanism to automatically build a project and run a series 
of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
test framework will comment with a +1 or -1 on the JIRA.

You can read more about the framework here:
http://wiki.apache.org/general/PreCommitBuilds

h2. Plan
- Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
other projects) that will take a jira as a parameter, apply on the appropriate 
branch, build the project, run tests and report results. This script should be 
committed into the Kafka code-base. To begin with, this will only run unit 
tests. We can add code sanity checks, system_tests, etc in the future.
- Create a jenkins job for running the test (as described in 
http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
manually. This must be done by a committer with Jenkins access.
- Ask a Hadoop committer (or someone else with access to 
https://builds.apache.org/job/PreCommit-Admin/) to add Kafka to the list of 
projects that PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-11 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1856:
--
Description: 
h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*

h2. Motivation
*With great power comes great responsibility* - Uncle Ben. As the Kafka user list 
is growing, a mechanism to ensure the quality of the product is required. Quality 
becomes hard to measure and maintain in an open source project because of its 
wide community of contributors. Luckily, Kafka is not the first open source 
project and can benefit from the learnings of prior projects.

PreCommit tests are the tests that are run for each patch that gets attached to 
an open JIRA. Based on the test results, the test execution framework (test bot) 
gives the patch a +1 or -1. Having PreCommit tests takes the load off committers 
to look at or test each patch.

h2. Tests in Kafka
h3. Unit and Integration Tests
[Unit and Integration 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
 are cardinal in helping contributors avoid breaking existing functionality 
while adding new functionality or fixing older issues. These tests, at least the 
ones relevant to the changes, must be run by contributors before attaching a 
patch to a JIRA.

h3. System Tests
[System 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] are 
much wider tests that, unlike unit tests, focus on end-to-end scenarios and not 
some specific method or class.

h2. Apache PreCommit tests
Apache provides a mechanism to automatically build a project and run a series 
of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
test framework will comment with a +1 or -1 on the JIRA.

You can read more about the framework here:
http://wiki.apache.org/general/PreCommitBuilds

h2. Plan
- Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
other projects) that will take a jira as a parameter, apply on the appropriate 
branch, build the project, run tests and report results. This script should be 
committed into the Kafka code-base. To begin with, this will only run unit 
tests. We can add code sanity checks, system_tests, etc in the future.
- Create a jenkins job for running the test (as described in 
http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
manually. This must be done by a committer with Jenkins access.
- Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ to 
add Kafka to the list of projects that PreCommit-Admin triggers.

  was:
h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*

h2. Motivation
*With great power comes great responsibility* - Uncle Ben. As Kafka user list 
is growing, mechanism to ensure quality of the product is required. Quality 
becomes hard to measure and maintain in an open source project, because of a 
wide community of contributors. Luckily, Kafka is not the first open source 
project and can benefit from learnings of prior projects.

PreCommit tests are the tests that are run for each patch that gets attached to 
an open JIRA. Based on tests results, test execution framework, test bot, +1 or 
-1 the patch. Having PreCommit tests take the load off committers to look at or 
test each patch.

h2. Tests in Kafka
h3. Unit and Integraiton Tests
[Unit and Integration 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
 are cardinal to help contributors to avoid breaking existing functionalities 
while adding new functionalities or fixing older ones. These tests, atleast the 
ones relevant to the changes, must be run by contributors before attaching a 
patch to a JIRA.

h3. System Tests
[System 
tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] are 
much wider tests that, unlike unit tests, focus on end-to-end scenarios and not 
some specific method or class.

h2. Apache PreCommit tests
Apache provides a mechanism to automatically build a project and run a series 
of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
test framework will comment with a +1 or -1 on the JIRA.

You can read more about the framework here:
http://wiki.apache.org/general/PreCommitBuilds

h2. Plan
- Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
other projects) that will take a jira as a parameter, apply on the appropriate 
branch, build the project, run tests and report results. This script should be 
committed into the Kafka code-base. To begin with, this will only run unit 
tests. We can add code sanity checks, system_tests, etc in the future.
- Create a jenkins job for running the test (as described in 
http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
manually. This must be done by a committer with Jenkins access.
- Ask an Hadoop committer (or someone else with access 

[jira] [Commented] (KAFKA-1855) Topic unusable after unsuccessful UpdateMetadataRequest

2015-01-11 Thread Henri Pihkala (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273077#comment-14273077
 ] 

Henri Pihkala commented on KAFKA-1855:
--

Yes - a single node cluster, running on CentOS Linux 64-bit. 

I was basically creating topics in small bursts, say 3-10 topics quickly one 
after another. I can't say if this pattern was important in producing the bug 
or not. I was creating topics by sending topic metadata requests using the Java 
API and having auto.create.topics.enable=true.

Sometimes all the topics would be created perfectly fine, and sometimes a topic 
or two would fail as described.

Let me know if I can provide any further information.

> Topic unusable after unsuccessful UpdateMetadataRequest
> ---
>
> Key: KAFKA-1855
> URL: https://issues.apache.org/jira/browse/KAFKA-1855
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2
>Reporter: Henri Pihkala
> Fix For: 0.8.2
>
>
> Sometimes, seemingly randomly, topic creation/initialization might fail with 
> the following lines in controller.log. Other logs show no errors. When this 
> happens, the topic is unusable (gives UnknownTopicOrPartition for all 
> requests).
> For me this happens 5-10% of the time. Feels like it's more likely to happen 
> if there is time between topic creations. Observed on 0.8.2-beta, have not 
> tried previous versions.
> [2015-01-09 16:15:27,153] WARN [Controller-0-to-broker-0-send-thread], 
> Controller 0 fails to send a request to broker 
> id:0,host:192.168.10.21,port:9092 (kafka.controller.RequestSendThread)
> java.io.EOFException: Received -1 when reading from channel, socket has 
> likely been closed.
>   at kafka.utils.Utils$.read(Utils.scala:381)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2015-01-09 16:15:27,156] ERROR [Controller-0-to-broker-0-send-thread], 
> Controller 0 epoch 6 failed to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:0;ControllerEpoch:6;CorrelationId:48;ClientId:id_0-host_192.168.10.21-port_9092;AliveBrokers:id:0,host:192.168.10.21,port:9092;PartitionState:[40963064-cdd2-4cd1-937a-9827d3ab77ad,0]
>  -> 
> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:6),ReplicationFactor:1),AllReplicas:0)
>  to broker id:0,host:192.168.10.21,port:9092. Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.nio.channels.ClosedChannelException
>   at kafka.network.BlockingChannel.send(BlockingChannel.scala:97)
>   at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27799: Patch for KAFKA-1760

2015-01-11 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 12, 2015, 12:57 a.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-1760


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description (updated)
---

New consumer.


Diffs (updated)
-

  build.gradle ba52288031e2abc70e35e9297a4423dd5025950b 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
1bce50185273dbdbc131fbc9c7f5f3e9c346517a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
a5fedce9ff05ccfdb58ef083118d23bfa7a9bd4a 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
8cab16c0a0bdb671fea1fc2fc2694247f66cc971 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
1d30f9edd95337f86e632a09fc8f4126a67c238b 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
84a7a07269c51ccc22ebb4ff9797292d07ba778e 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
4dd2cdf773f7eb01a93d7f994383088960303dfc 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
2fc471f64f4352eeb128bbd3941779780076fb8c 
  clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 
99364c1ca464f7b81be7d3da15b40ab66717a659 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 
3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java 
6b7c269ad7679df57c6bd505516075add39b7534 
  clients/src/main/java/org/apache/kafka/common/serialization/Deserializer.java 
3c001d33091c0f04ac3bf49a6731ab9e9f2bb0c4 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
527dd0f9c47

[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-11 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273125#comment-14273125
 ] 

Jay Kreps commented on KAFKA-1760:
--

Updated reviewboard https://reviews.apache.org/r/27799/diff/
 against branch trunk

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch
>
>
> Implement a consumer client.





[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-11 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1760:
-
Attachment: KAFKA-1760_2015-01-11_16:57:15.patch

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch
>
>
> Implement a consumer client.





[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-11 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273128#comment-14273128
 ] 

Jay Kreps commented on KAFKA-1760:
--

I updated this patch and consider it more or less complete. I think we should 
review it and check it in as soon as 0.8.2 is out. Reviews would be appreciated.

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch
>
>
> Implement a consumer client.





New consumer client

2015-01-11 Thread Jay Kreps
I uploaded an updated version of the new consumer client (
https://issues.apache.org/jira/browse/KAFKA-1760). This is now almost
feature complete, and has pretty reasonable testing and metrics. I think it
is ready for review and could be checked in once 0.8.2 is out.

For those who haven't been following, this is meant to be a new consumer
client, like the new producer in 0.8.2, and is intended to replace the
existing "high level" and "simple" Scala consumers.

This still needs the server-side implementation of the partition assignment
and group management to be fully functional. I have just stubbed this out
in the server to allow the implementation and testing of the client, but
actual usage will require it. However, the client that exists now is
already a fully functional replacement for the "simple consumer" that is
vastly easier to use correctly, as it internally does all the discovery and
failover.

It would be great if people could take a look at this code, and
particularly at the public APIs, which have several small changes from the
original proposal.

Summary

What's there:
1. Simple consumer functionality
2. Offset commit and fetch
3. Ability to change position with seek
4. Ability to commit all or just some offsets
5. Controller discovery, failure detection, heartbeat, and fail-over
6. Controller partition assignment
7. Logging
8. Metrics
9. Integration tests including tests that simulate random broker failures
10. Integration into the consumer performance test
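To make the listed functionality concrete, here is a toy in-memory model of the consumer-facing semantics described above (poll, seek, and committing all or some offsets). This is an illustrative sketch only; the class and method names are hypothetical and this is not the actual Kafka client API.

```python
# Toy model of the consumer semantics described above. Not the real client:
# there is no network, no broker, no group management, just the offset
# bookkeeping that poll/seek/commit imply.

class ToyConsumer:
    def __init__(self, log):
        self.log = log              # partition -> list of records
        self.position = {}          # partition -> next offset to read
        self.committed = {}         # partition -> last committed offset

    def subscribe(self, *partitions):
        for p in partitions:
            self.position.setdefault(p, 0)

    def poll(self, max_records=10):
        """Return up to max_records per partition and advance the position."""
        out = {}
        for p, pos in self.position.items():
            records = self.log.get(p, [])[pos:pos + max_records]
            out[p] = records
            self.position[p] = pos + len(records)
        return out

    def seek(self, partition, offset):
        """Change the read position explicitly (capability 3 above)."""
        self.position[partition] = offset

    def commit(self, offsets=None):
        """Commit the given offsets, or all current positions (capability 4)."""
        to_commit = offsets if offsets is not None else dict(self.position)
        self.committed.update(to_commit)


c = ToyConsumer({"t-0": ["a", "b", "c", "d"]})
c.subscribe("t-0")
c.poll(max_records=2)   # consumes "a", "b"; position for "t-0" is now 2
c.commit()              # commits all positions: {"t-0": 2}
c.seek("t-0", 0)        # rewind and re-read from the beginning
```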

Limitations:
1. There could be some lingering bugs in the group-management support; it
is hard to test fully with just the stub support on the server, so we'll
need to get the server side working to do better.
2. I haven't implemented wild-card subscriptions yet.
3. No integration with the console consumer yet

Performance

I did some performance comparisons with the old consumer over localhost on
my laptop. Usually localhost isn't good for testing, but in this case it is
useful because it has near-infinite bandwidth, so it does a good job of
exposing inefficiencies that would be hidden by a slower network. These
numbers probably aren't representative of what you would get over a real
network, but they help bring out the relative efficiencies.
Here are the results:
- Old high-level consumer: 213 MB/sec
- New consumer: 225 MB/sec
- Old simple consumer: 242 MB/sec

It may be hard to get this client all the way to the simple consumer's
numbers, since the simple consumer does very little beyond allocating and
wrapping the byte buffers it reads off the network.

The big thing that shows up in profiling is the buffer allocation for
reading data. So one speed-up would be to pool these.
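The pooling idea can be sketched as a simple free list of fixed-size buffers that are reused across reads instead of being freshly allocated per response. This is a hypothetical design sketch, not the code under review.

```python
# Minimal buffer-pool sketch: reuse fixed-size read buffers rather than
# allocating a new one for every response off the network.

class BufferPool:
    def __init__(self, buffer_size, max_pooled):
        self.buffer_size = buffer_size  # size of every pooled buffer
        self.max_pooled = max_pooled    # cap on idle buffers we retain
        self._free = []                 # free list of reusable buffers

    def acquire(self):
        # Reuse a pooled buffer if one is available, else allocate fresh.
        if self._free:
            return self._free.pop()
        return bytearray(self.buffer_size)

    def release(self, buf):
        # Return the buffer for reuse, capping how many we keep around
        # so the pool doesn't grow without bound.
        if len(self._free) < self.max_pooled:
            self._free.append(buf)


pool = BufferPool(buffer_size=64 * 1024, max_pooled=8)
buf = pool.acquire()           # freshly allocated on first use
pool.release(buf)
assert pool.acquire() is buf   # the same buffer object is reused
```

A real implementation would also need to reset buffer state on release and think about thread safety, since reads happen on the network thread.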

Some things to discuss

1. What should the behavior of consumer.position() and consumer.committed()
be immediately after initialization (prior to calling poll)? Currently
these methods just fetch the current value from memory; if the position
isn't in memory the consumer will try to fetch it from the server, and if
no position is found there it will use the auto-offset-reset policy to
pick one. I think this is the right thing to do, because otherwise you
can't guarantee how many calls to poll() would be required before
initialization is complete. But it is kind of weird.
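The lookup order described in point 1 can be written down as a small resolution function: local memory first, then the server-side committed offset, then the reset policy. All names and the structure here are hypothetical, a sketch of the described behavior rather than the real client code.

```python
# Sketch of the position-resolution order from point 1: in-memory value,
# then server-side committed offset, then the auto-offset-reset policy.

def position(partition, in_memory, fetch_committed, reset_policy, log_bounds):
    """Resolve the consumption position for a partition.

    partition:       partition identifier
    in_memory:       dict of positions already known locally
    fetch_committed: callable returning the server's committed offset or None
    reset_policy:    "earliest" or "latest"
    log_bounds:      (earliest_offset, latest_offset) of the partition's log
    """
    # 1. Fast path: the position is already known locally.
    if partition in in_memory:
        return in_memory[partition]
    # 2. Otherwise ask the server for a committed offset.
    committed = fetch_committed(partition)
    if committed is not None:
        return committed
    # 3. No position anywhere: fall back to the reset policy.
    earliest, latest = log_bounds
    return earliest if reset_policy == "earliest" else latest


# Example: nothing in memory, nothing committed on the server, so the
# "earliest" reset policy picks the start of the log.
pos = position("t-0", {}, lambda p: None, "earliest", (5, 100))
assert pos == 5
```

This makes the trade-off visible: position() may block on a server round trip, but the caller never has to guess how many poll() calls are needed before the value is defined.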
2. Overall code structure improvement. These NIO network clients tend to be
very imperative in nature. I'm not sure this is bad, but if anyone has any
ideas on improving the code I'd love to hear them.

-Jay


[jira] [Created] (KAFKA-1857) Kafka Broker ids are removed ( with zookeeper , Storm )

2015-01-11 Thread Yoonhyeok Kim (JIRA)
Yoonhyeok Kim created KAFKA-1857:


 Summary: Kafka Broker ids are removed ( with zookeeper , Storm )
 Key: KAFKA-1857
 URL: https://issues.apache.org/jira/browse/KAFKA-1857
 Project: Kafka
  Issue Type: Bug
  Components: consumer, controller
Affects Versions: 0.8.1
 Environment: Ubuntu, with storm-kafka and ZooKeeper 3.4.6
Reporter: Yoonhyeok Kim
Assignee: Neha Narkhede


Hi,
I am using a kind of real-time analytics system with
ZooKeeper, Storm & Kafka.
Versions:
Storm 0.9.2
Kafka 0.8.1
ZooKeeper 3.4.6

This problem occurs with earlier versions as well.

When I use the Kafka spout with Storm, sometimes there are ZooKeeper logs
like this (zookeeper.out):
{quote}
2015-01-10 19:19:00,836 [myid:] - WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x14ab82c142b0658, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:619)
{quote}


Still, ZooKeeper keeps working, and storm-kafka looks fine, transferring
data correctly. But as time goes by, these kinds of errors keep occurring,
and then I saw different logs like...

{quote}
2015-01-10 23:22:11,022 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket 
connection for client /70.7.12.38:48504 which had sessionid 0x14ab82c142b0644
2015-01-10 23:22:11,023 [myid:] - WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x14ab82c142b001d, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,023 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket 
connection for client /70.7.12.38:55885 which had sessionid 0x14ab82c142b001d
2015-01-10 23:22:11,023 [myid:] - WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x14ab82c142b063e, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,026 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket 
connection for client /70.7.12.38:48444 which had sessionid 0x14ab82c142b063e
2015-01-10 23:22:11,026 [myid:] - WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x14ab82c142b0639, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,026 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket 
connection for client /70.7.12.38:48380 which had sessionid 0x14ab82c142b0639
2015-01-10 23:22:11,027 [myid:] - WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x14ab82c142b0658, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:619)
2015-01-10 23:22:11,027 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket 
connection for client /70.7.12.38:56724 which had sessionid 0x14ab82c142b0658
2015-01-10 23:22:11,431 [myid:] - ERROR [SyncThread:0:NIOServerCnxn@178] - 
Unexpected Exception: 
java.nio.channels.CancelledKeyException
at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
at 
org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151)
at 
org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081)
at 
org.apache.zookeeper.server.FinalRequestPro