[jira] [Created] (KAFKA-1689) automatic migration of log dirs to new locations

2014-10-08 Thread Javier Alba (JIRA)
Javier Alba created KAFKA-1689:
--

 Summary: automatic migration of log dirs to new locations
 Key: KAFKA-1689
 URL: https://issues.apache.org/jira/browse/KAFKA-1689
 Project: Kafka
  Issue Type: New Feature
  Components: config, core
Affects Versions: 0.8.1.1
Reporter: Javier Alba
Priority: Minor


There is no automated way in Kafka 0.8.1.1 to migrate log data if we want to 
reconfigure our cluster nodes to use several data directories, where we have 
mounted new disks, instead of our original data directory.

For example, say we have our brokers configured with:

  log.dirs = /tmp/kafka-logs

And we added 3 new disks and now we want our brokers to use them as log.dirs:

  log.dirs = /srv/data/1,/srv/data/2,/srv/data/3

It would be great to have an automated way of doing such a migration, of course 
without losing current data in the cluster.

It would be ideal to be able to do this migration without losing service.
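
A rough sketch of the bookkeeping such a migration tool would need 
(illustrative only, not Kafka code): with the broker stopped, assign each 
partition directory under the old log dir to one of the new log.dirs 
round-robin and emit the copy commands to run. A real tool would also move 
the data, verify it, and update server.properties.

{code}
import java.io.File

object LogDirMigrationPlan {
  def main(args: Array[String]): Unit = {
    val oldDir = new File("/tmp/kafka-logs")
    val newDirs = Array("/srv/data/1", "/srv/data/2", "/srv/data/3")
    // Each topic-partition lives in its own subdirectory of the log dir.
    val partitionDirs = oldDir.listFiles().filter(_.isDirectory).sortBy(_.getName)
    partitionDirs.zipWithIndex.foreach { case (dir, i) =>
      val target = newDirs(i % newDirs.length)
      println(s"cp -a ${dir.getPath} $target/")
    }
  }
}
{code}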






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1644) Inherit FetchResponse from RequestOrResponse

2014-10-08 Thread Anton Karamanov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14163449#comment-14163449
 ] 

Anton Karamanov commented on KAFKA-1644:


Thanks.

Is there a chance it could be included in 0.8.2 release?

> Inherit FetchResponse from RequestOrResponse
> 
>
> Key: KAFKA-1644
> URL: https://issues.apache.org/jira/browse/KAFKA-1644
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anton Karamanov
>Assignee: Anton Karamanov
> Fix For: 0.8.3
>
> Attachments: 
> 0001-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch, 
> 0002-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch, 
> 0003-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch
>
>
> Unlike all other Kafka API responses {{FetchResponse}} is not a subclass of 
> RequestOrResponse, which requires handling it as a special case while 
> processing responses.
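
A minimal sketch of the special case this would remove (simplified stand-ins, 
not the actual kafka.api classes): once FetchResponse extends 
RequestOrResponse, the response-handling path can treat every response 
through the common supertype.

{code}
abstract class RequestOrResponse(val correlationId: Int)

class ProducerResponse(correlationId: Int) extends RequestOrResponse(correlationId)
class FetchResponse(correlationId: Int) extends RequestOrResponse(correlationId)

object ResponseHandling {
  // No out-of-band branch for FetchResponse: it arrives like any other.
  def handle(response: RequestOrResponse): Unit =
    println(s"received ${response.getClass.getSimpleName}, " +
      s"correlationId=${response.correlationId}")
}
{code}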



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Schema Based Topics

2014-10-08 Thread Joe Stein
I started to put thoughts on what a schema based topic would look like
https://cwiki.apache.org/confluence/display/KAFKA/Schema+based+topics

The main idea around this feature is that each topic will have a registry of
the schemas it supports, validate messages on produce, and return (in addition
to the key and message) a schemaHash.
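
As a speculative sketch of that shape (all names here are assumptions, not
taken from the wiki page): a per-topic registry that validates a schema on
produce and hands back the hash identifying it.

import java.security.MessageDigest

class TopicSchemaRegistry {
  // topic -> accepted schema definitions
  private var schemas = Map.empty[String, Set[String]]

  def register(topic: String, schema: String): String = {
    schemas += topic -> (schemas.getOrElse(topic, Set.empty) + schema)
    schemaHash(schema)
  }

  // Validate a produce against the topic's registry; the returned hash is
  // what would accompany the (key, message) back to the client.
  def validate(topic: String, schema: String): Option[String] =
    if (schemas.getOrElse(topic, Set.empty).contains(schema))
      Some(schemaHash(schema))
    else None

  private def schemaHash(schema: String): String =
    MessageDigest.getInstance("MD5")
      .digest(schema.getBytes("UTF-8"))
      .map("%02x".format(_)).mkString
}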

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/


[jira] [Commented] (KAFKA-1681) Newly elected KafkaController might not start deletion of pending topics

2014-10-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14163604#comment-14163604
 ] 

Sriharsha Chintalapani commented on KAFKA-1681:
---

[~nehanarkhede] [~junrao]
adding this code to the TopicDeletionManager.start method should cover this 
issue, right?

  if (topicsToBeDeleted.size > 0)
    resumeTopicDeletionThread()


> Newly elected KafkaController might not start deletion of pending topics
> 
>
> Key: KAFKA-1681
> URL: https://issues.apache.org/jira/browse/KAFKA-1681
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> As part of KAFKA-1663 deleteTopicStateChanged.set(true) is removed from 
> start(). This will cause newly elected kafka controller not to process the 
> existing delete topic requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1681) Newly elected KafkaController might not start deletion of pending topics

2014-10-08 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1681:
-
Reviewer: Neha Narkhede

> Newly elected KafkaController might not start deletion of pending topics
> 
>
> Key: KAFKA-1681
> URL: https://issues.apache.org/jira/browse/KAFKA-1681
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> As part of KAFKA-1663 deleteTopicStateChanged.set(true) is removed from 
> start(). This will cause newly elected kafka controller not to process the 
> existing delete topic requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-10-08 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1477:
-
Fix Version/s: (was: 0.9.0)
   0.8.3

> add authentication layer and initial JKS x509 implementation for brokers, 
> producers and consumer for network communication
> --
>
> Key: KAFKA-1477
> URL: https://issues.apache.org/jira/browse/KAFKA-1477
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
> Fix For: 0.8.3
>
> Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
> KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
> KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-10-08 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1477:
-
Issue Type: Sub-task  (was: New Feature)
Parent: KAFKA-1682

> add authentication layer and initial JKS x509 implementation for brokers, 
> producers and consumer for network communication
> --
>
> Key: KAFKA-1477
> URL: https://issues.apache.org/jira/browse/KAFKA-1477
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
> Fix For: 0.8.3
>
> Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
> KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
> KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1691) new java consumer needs ssl support as a client

2014-10-08 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1691:


 Summary: new java consumer needs ssl support as a client
 Key: KAFKA-1691
 URL: https://issues.apache.org/jira/browse/KAFKA-1691
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1690) new java producer needs ssl support as a client

2014-10-08 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1690:


 Summary: new java producer needs ssl support as a client
 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-10-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164017#comment-14164017
 ] 

Joe Stein commented on KAFKA-1477:
--

We need to have it support configurable ports so that we can turn secured and 
non-secured access on/off independently, or even have both on at the same 
time. *advertised.port.ssl* should be added along with *port.ssl*
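
As a sketch of what that could look like in server.properties (the ssl 
property names come from this comment and are a proposal, not an implemented 
config):

{code}
# existing plaintext listener
port=9092
advertised.port=9092

# proposed SSL listener, enabled alongside (or instead of) the plaintext one
port.ssl=9093
advertised.port.ssl=9093
{code}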

> add authentication layer and initial JKS x509 implementation for brokers, 
> producers and consumer for network communication
> --
>
> Key: KAFKA-1477
> URL: https://issues.apache.org/jira/browse/KAFKA-1477
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
> Fix For: 0.8.3
>
> Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
> KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
> KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1684) Implement TLS/SSL authentication

2014-10-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164147#comment-14164147
 ] 

Joe Stein commented on KAFKA-1684:
--

Three related tickets for this. I want to get one more change in there at 
least (multi-port support) and then commit to trunk. The producer and consumer 
work will go a lot faster iterating on it on trunk. I also opened tickets for 
the Go, Python and C++ libraries to add support too. We can do snapshot trunk 
builds for that too, which will be smoother.

> Implement TLS/SSL authentication
> 
>
> Key: KAFKA-1684
> URL: https://issues.apache.org/jira/browse/KAFKA-1684
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0
>Reporter: Jay Kreps
>
> Add an SSL port to the configuration and advertise this as part of the 
> metadata request.
> If the SSL port is configured the socket server will need to add a second 
> Acceptor thread to listen on it. Connections accepted on this port will need 
> to go through the SSL handshake prior to being registered with a Processor 
> for request processing.
> SSL requests and responses may need to be wrapped or unwrapped using the 
> SSLEngine that was initialized by the acceptor. This wrapping and unwrapping 
> is very similar to what will need to be done for SASL-based authentication 
> schemes. We should have a uniform interface that covers both of these and we 
> will need to store the instance in the session with the request. The socket 
> server will have to use this object when reading and writing requests. We 
> will need to take care with the FetchRequests, as the current 
> FileChannel.transferTo mechanism will be incompatible with wrap/unwrap, so 
> we can only use this optimization for unencrypted sockets that don't require 
> userspace translation (wrapping).
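
To illustrate that last constraint (an assumed interface, not Kafka's socket 
server; handshake and SSLEngineResult handling are omitted): plaintext 
connections can stream a segment with zero copy, while SSL connections must 
route every chunk through SSLEngine.wrap().

{code}
import java.nio.ByteBuffer
import java.nio.channels.{FileChannel, SocketChannel}
import javax.net.ssl.SSLEngine

object SegmentSend {
  def sendLogSegment(file: FileChannel, socket: SocketChannel,
                     sslEngine: Option[SSLEngine]): Unit = sslEngine match {
    case None =>
      // Plaintext: kernel zero copy straight from the log segment.
      file.transferTo(0, file.size(), socket)
    case Some(engine) =>
      // SSL: bytes must pass through userspace to be encrypted (wrapped).
      val plain = ByteBuffer.allocate(16384)
      val encrypted = ByteBuffer.allocate(engine.getSession.getPacketBufferSize)
      while (file.read(plain) > 0) {
        plain.flip()
        engine.wrap(plain, encrypted)
        encrypted.flip()
        while (encrypted.hasRemaining) socket.write(encrypted)
        plain.clear(); encrypted.clear()
      }
  }
}
{code}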



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1684) Implement TLS/SSL authentication

2014-10-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164147#comment-14164147
 ] 

Joe Stein edited comment on KAFKA-1684 at 10/8/14 8:59 PM:
---

Three related tickets for this. I want to get one more change into KAFKA-1477 
at least (multi-port support) and then commit to trunk. The producer and 
consumer work will go a lot faster iterating on it on trunk. I also opened 
tickets for the Go, Python and C++ libraries to add support too. We can do 
snapshot trunk builds for that too, which will be smoother.


was (Author: joestein):
Three related tickets for this. I want to get one more change in there at 
least (multi-port support) and then commit to trunk. The producer and consumer 
work will go a lot faster iterating on it on trunk. I also opened tickets for 
the Go, Python and C++ libraries to add support too. We can do snapshot trunk 
builds for that too, which will be smoother.

> Implement TLS/SSL authentication
> 
>
> Key: KAFKA-1684
> URL: https://issues.apache.org/jira/browse/KAFKA-1684
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0
>Reporter: Jay Kreps
>
> Add an SSL port to the configuration and advertise this as part of the 
> metadata request.
> If the SSL port is configured the socket server will need to add a second 
> Acceptor thread to listen on it. Connections accepted on this port will need 
> to go through the SSL handshake prior to being registered with a Processor 
> for request processing.
> SSL requests and responses may need to be wrapped or unwrapped using the 
> SSLEngine that was initialized by the acceptor. This wrapping and unwrapping 
> is very similar to what will need to be done for SASL-based authentication 
> schemes. We should have a uniform interface that covers both of these and we 
> will need to store the instance in the session with the request. The socket 
> server will have to use this object when reading and writing requests. We 
> will need to take care with the FetchRequests, as the current 
> FileChannel.transferTo mechanism will be incompatible with wrap/unwrap, so 
> we can only use this optimization for unencrypted sockets that don't require 
> userspace translation (wrapping).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1692) [Java New Producer] IO Thread Name Must include Client ID

2014-10-08 Thread Bhavesh Mistry (JIRA)
Bhavesh Mistry created KAFKA-1692:
-

 Summary: [Java New Producer]  IO Thread Name Must include  Client 
ID
 Key: KAFKA-1692
 URL: https://issues.apache.org/jira/browse/KAFKA-1692
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 0.8.2
Reporter: Bhavesh Mistry
Assignee: Jun Rao
Priority: Trivial


Please add the client id so people looking at JConsole or a profiling tool 
can see the thread by client id, since a single JVM can have multiple producer 
instances.

org.apache.kafka.clients.producer.KafkaProducer
{code}
String ioThreadName = "kafka-producer-network-thread";
if (clientId != null) {
    ioThreadName = ioThreadName + " | " + clientId;
}
this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1692) [Java New Producer] IO Thread Name Must include Client ID

2014-10-08 Thread Bhavesh Mistry (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164275#comment-14164275
 ] 

Bhavesh Mistry commented on KAFKA-1692:
---

The description is just a suggestion. Sorry, I could not submit a patch since 
I have a release next week.

Thanks,

Bhavesh

> [Java New Producer]  IO Thread Name Must include  Client ID
> ---
>
> Key: KAFKA-1692
> URL: https://issues.apache.org/jira/browse/KAFKA-1692
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Bhavesh Mistry
>Assignee: Jun Rao
>Priority: Trivial
>
> Please add the client id so people looking at JConsole or a profiling tool 
> can see the thread by client id, since a single JVM can have multiple 
> producer instances.
> org.apache.kafka.clients.producer.KafkaProducer
> {code}
> String ioThreadName = "kafka-producer-network-thread";
> if (clientId != null) {
>     ioThreadName = ioThreadName + " | " + clientId;
> }
> this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Java New Producer IO Thread Name

2014-10-08 Thread Bhavesh Mistry
Hi Jun,

There is ticket with suggestion.
https://issues.apache.org/jira/browse/KAFKA-1692

Thanks for your support.

Thanks,

Bhavesh

On Tue, Oct 7, 2014 at 3:29 PM, Jun Rao wrote:

> That sounds reasonable. Could you file a jira to track this?
>
> Thanks,
>
> Jun
>
> On Tue, Oct 7, 2014 at 7:47 AM, Bhavesh Mistry wrote:
>
> > Hi Kafka Dev Team,
> >
> > Since we can have multiple producer instances within one JVM, it would be
> > a good idea to name the network IO thread based on the client.id
> > configuration.
> >
> > kafka-producer-network-thread + client.id
> >
> > Thanks,
> > Bhavesh
> >
>


Re: Review Request 26373: Patch for KAFKA-1647

2014-10-08 Thread Jiangjie Qin


> On Oct. 7, 2014, 12:15 a.m., Guozhang Wang wrote:
> > Since the first "if" statement is now only used for logging, could we 
> > just merge it into the second check?

Guozhang, thanks for the review. I actually thought about it before. I agree 
that the code looks simpler if we just call partition.makeFollower without 
checking whether the leader is up or not. The reason I retained the first if 
statement is that it seems more reasonable to create the remote replicas only 
when the leader broker is online, which is done in partition.makeFollower. And 
I'm not 100% sure whether it matters if we just create the remote replica 
objects without checking the liveness of the leader broker.


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26373/#review55615
---


On Oct. 6, 2014, 5:06 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26373/
> ---
> 
> (Updated Oct. 6, 2014, 5:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1647
> https://issues.apache.org/jira/browse/KAFKA-1647
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fix for Kafka-1647.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 78b7514cc109547c562e635824684fad581af653 
> 
> Diff: https://reviews.apache.org/r/26373/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



Re: Review Request 26291: Patch for KAFKA-1648

2014-10-08 Thread Mayuresh Gharat


> On Oct. 7, 2014, 9:06 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/consumer/PartitionAssignor.scala, line 76
> > 
> >
> > Did you mean to return this?

Yes. I did not want to change the return type. If there are no topics, the map 
will be empty.


> On Oct. 7, 2014, 9:06 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala, line 
> > 655
> > 
> >
> > I don't think this is required. (Or if it is can you explain?)

If there are no topics, the map is empty and we can just return, as there is 
nothing to rebalance.


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26291/#review55702
---


On Oct. 9, 2014, 12:29 a.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26291/
> ---
> 
> (Updated Oct. 9, 2014, 12:29 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1648
> https://issues.apache.org/jira/browse/KAFKA-1648
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Removed the unnecessary comment
> 
> 
> Made a change to the way the condition for no topics is checked
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
> 8ea7368dc394a497164ea093ff8e9f2e6a94b1de 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
> 
> Diff: https://reviews.apache.org/r/26291/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Commented] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-10-08 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164465#comment-14164465
 ] 

Mayuresh Gharat commented on KAFKA-1648:


Updated reviewboard https://reviews.apache.org/r/26291/diff/
 against branch origin/trunk

> Round robin consumer balance throws an NPE when there are no topics
> ---
>
> Key: KAFKA-1648
> URL: https://issues.apache.org/jira/browse/KAFKA-1648
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Todd Palino
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Attachments: KAFKA-1648.patch, KAFKA-1648_2014-10-04_17:40:47.patch, 
> KAFKA-1648_2014-10-08_17:29:14.patch
>
>
> If you use the roundrobin rebalance method with a wildcard consumer, and 
> there are no topics in the cluster, rebalance throws a NullPointerException 
> in the consumer and fails. It retries the rebalance, but will continue to 
> throw the NPE.
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
> queues for this fetcher
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data 
> chunks in all the consumer message iterators
> 2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all 
> offsets after clearing the fetcher queues
> 2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
> consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
> 2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
> [kafka-console-audit] [] Boot listener 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
> kafka.common.ConsumerRebalanceFailedException: 
> kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
> retries
>   at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:931)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
>   at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
>   at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
>   at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
>   at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(

[jira] [Updated] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-10-08 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1648:
---
Attachment: KAFKA-1648_2014-10-08_17:29:14.patch

> Round robin consumer balance throws an NPE when there are no topics
> ---
>
> Key: KAFKA-1648
> URL: https://issues.apache.org/jira/browse/KAFKA-1648
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Todd Palino
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Attachments: KAFKA-1648.patch, KAFKA-1648_2014-10-04_17:40:47.patch, 
> KAFKA-1648_2014-10-08_17:29:14.patch
>
>
> If you use the roundrobin rebalance method with a wildcard consumer, and 
> there are no topics in the cluster, rebalance throws a NullPointerException 
> in the consumer and fails. It retries the rebalance, but will continue to 
> throw the NPE.
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
> queues for this fetcher
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data 
> chunks in all the consumer message iterators
> 2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all 
> offsets after clearing the fetcher queues
> 2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
> consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
> 2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
> [kafka-console-audit] [] Boot listener 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
> kafka.common.ConsumerRebalanceFailedException: 
> kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
> retries
>   at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:931)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
>   at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
>   at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
>   at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
>   at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:706)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppCon

Re: Review Request 26291: Patch for KAFKA-1648

2014-10-08 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26291/
---

(Updated Oct. 9, 2014, 12:29 a.m.)


Review request for kafka.


Bugs: KAFKA-1648
https://issues.apache.org/jira/browse/KAFKA-1648


Repository: kafka


Description (updated)
---

Removed the unnecessary comment


Made a change to the way the condition for no topics is checked


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
8ea7368dc394a497164ea093ff8e9f2e6a94b1de 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fbc680fde21b02f11285a4f4b442987356abd17b 

Diff: https://reviews.apache.org/r/26291/diff/


Testing
---


Thanks,

Mayuresh Gharat



Re: Review Request 26291: Patch for KAFKA-1648

2014-10-08 Thread Mayuresh Gharat


> On Oct. 4, 2014, 11:33 p.m., Neha Narkhede wrote:
> > Can you try to add a unit test for this?

Yes, I will try to add a unit test for this.


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26291/#review55456
---


On Oct. 9, 2014, 12:29 a.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26291/
> ---
> 
> (Updated Oct. 9, 2014, 12:29 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1648
> https://issues.apache.org/jira/browse/KAFKA-1648
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Removed the unnecessary comment
> 
> 
> Made a change to the way the condition for no topics is checked
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
> 8ea7368dc394a497164ea093ff8e9f2e6a94b1de 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
> 
> Diff: https://reviews.apache.org/r/26291/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



Re: Review Request 26291: Patch for KAFKA-1648

2014-10-08 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26291/
---

(Updated Oct. 9, 2014, 12:46 a.m.)


Review request for kafka.


Bugs: KAFKA-1648
https://issues.apache.org/jira/browse/KAFKA-1648


Repository: kafka


Description (updated)
---

Removed the unnecessary comment


Made a change to the way the condition for no topics is checked


Cleaned unnecessary code and modified the test case to handle the no topic 
scenario


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
8ea7368dc394a497164ea093ff8e9f2e6a94b1de 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fbc680fde21b02f11285a4f4b442987356abd17b 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
9ceae222ca5bf63e8131de0db2a94126c8b57b59 

Diff: https://reviews.apache.org/r/26291/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Updated] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-10-08 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1648:
---
Attachment: KAFKA-1648_2014-10-08_17:46:45.patch

> Round robin consumer balance throws an NPE when there are no topics
> ---
>
> Key: KAFKA-1648
> URL: https://issues.apache.org/jira/browse/KAFKA-1648
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Todd Palino
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Attachments: KAFKA-1648.patch, KAFKA-1648_2014-10-04_17:40:47.patch, 
> KAFKA-1648_2014-10-08_17:29:14.patch, KAFKA-1648_2014-10-08_17:46:45.patch
>
>
> If you use the roundrobin rebalance method with a wildcard consumer, and 
> there are no topics in the cluster, rebalance throws a NullPointerException 
> in the consumer and fails. It retries the rebalance, but will continue to 
> throw the NPE.
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
> queues for this fetcher
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data 
> chunks in all the consumer message iterators
> 2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all 
> offsets after clearing the fetcher queues
> 2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
> consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
> 2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
> [kafka-console-audit] [] Boot listener 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
> kafka.common.ConsumerRebalanceFailedException: 
> kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
> retries
>   at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:931)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
>   at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
>   at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
>   at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
>   at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:706)
>   at 
> org.eclipse.jetty.

[jira] [Commented] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-10-08 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164487#comment-14164487
 ] 

Mayuresh Gharat commented on KAFKA-1648:


Updated reviewboard https://reviews.apache.org/r/26291/diff/
 against branch origin/trunk

> Round robin consumer balance throws an NPE when there are no topics
> ---
>
> Key: KAFKA-1648
> URL: https://issues.apache.org/jira/browse/KAFKA-1648
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Todd Palino
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Attachments: KAFKA-1648.patch, KAFKA-1648_2014-10-04_17:40:47.patch, 
> KAFKA-1648_2014-10-08_17:29:14.patch, KAFKA-1648_2014-10-08_17:46:45.patch
>
>
> If you use the roundrobin rebalance method with a wildcard consumer, and 
> there are no topics in the cluster, rebalance throws a NullPointerException 
> in the consumer and fails. It retries the rebalance, but will continue to 
> throw the NPE.
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
> queues for this fetcher
> 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data 
> chunks in all the consumer message iterators
> 2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all 
> offsets after clearing the fetcher queues
> 2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
> [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
> consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
> 2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
> [kafka-console-audit] [] Boot listener 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
> kafka.common.ConsumerRebalanceFailedException: 
> kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
> retries
>   at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:931)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
>   at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
>   at 
> com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
>   at 
> com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
>   at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
>   at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
>   at 
> com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
>   at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
>   at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
>   at 
> org.eclipse.jetty.

Review Request 26474: KAFKA-1654 Provide a way to override server configuration from command line

2014-10-08 Thread Jarek Cecho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26474/
---

Review request for kafka.


Bugs: KAFKA-1654
https://issues.apache.org/jira/browse/KAFKA-1654


Repository: kafka


Description
---

I'm assuming that we might want to add additional arguments in the future as 
well, so I've added a general facility for parsing arguments to the Kafka main 
class and added an argument --set that defines/overrides any property in the 
config file. I've decided to use --set rather than exposing each property that 
is available in the KafkaConfig class as its own argument, so that we don't 
have to keep those two classes in sync.

This is the first "bigger" patch that I've written in Scala, so I'm 
particularly interested to hear feedback on the coding style.
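
The shape of such a facility might be (a minimal sketch under assumed names,
not the patch itself): load the properties file first, then overlay each
--set key=value on top.

import java.io.FileInputStream
import java.util.Properties

object ConfigOverrides {
  def load(propsFile: String, overrides: Seq[String]): Properties = {
    val props = new Properties()
    val in = new FileInputStream(propsFile)
    try props.load(in) finally in.close()
    // Each --set argument arrives as "key=value" and wins over the file.
    overrides.foreach { kv =>
      val Array(key, value) = kv.split("=", 2)
      props.setProperty(key, value)
    }
    props
  }
}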


Diffs
-

  core/src/main/scala/kafka/Kafka.scala 2e94fee 
  core/src/test/scala/unit/kafka/KafkaTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/26474/diff/


Testing
---

I've added unit tests and verified the functionality on real "cluster".


Thanks,

Jarek Cecho



[jira] [Updated] (KAFKA-1654) Provide a way to override server configuration from command line

2014-10-08 Thread Jarek Jarcec Cecho (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jarek Jarcec Cecho updated KAFKA-1654:
--
Fix Version/s: 0.8.3
Affects Version/s: 0.8.1.1
   Status: Patch Available  (was: Open)

> Provide a way to override server configuration from command line
> 
>
> Key: KAFKA-1654
> URL: https://issues.apache.org/jira/browse/KAFKA-1654
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 0.8.3
>
> Attachments: KAFKA-1654.patch
>
>
> I've recently been playing with Kafka and I found the current way of server 
> configuration quite inflexible. All the configuration options have to be 
> inside a properties file and there is no way to override them for a single 
> execution. In order to temporarily change one property I had to copy the 
> config file and change the property there. Hence, I'm wondering if people 
> would be open to providing a way to specify and override the configs from 
> the command line when starting Kafka?
> Something like:
> {code}
> ./bin/kafka-server-start.sh -Dmy.cool.property=X kafka.properties
> {code}
> or 
> {code}
> ./bin/kafka-server-start.sh --set my.cool.property=X kafka.properties
> {code}
> I'm more than happy to take a stab at it, but I would like to see if there 
> is interest in such a capability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1654) Provide a way to override server configuration from command line

2014-10-08 Thread Jarek Jarcec Cecho (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jarek Jarcec Cecho updated KAFKA-1654:
--
Attachment: KAFKA-1654.patch

Finally attaching the patch. My apologies for the delay [~nehanarkhede] - I've 
been sidetracked by my day job :(

> Provide a way to override server configuration from command line
> 
>
> Key: KAFKA-1654
> URL: https://issues.apache.org/jira/browse/KAFKA-1654
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 0.8.3
>
> Attachments: KAFKA-1654.patch
>
>
> I've recently been playing with Kafka and I found the current way of server 
> configuration quite inflexible. All the configuration options have to be 
> inside a properties file and there is no way to override them for a single 
> execution. In order to temporarily change one property I had to copy the 
> config file and change the property there. Hence, I'm wondering if people 
> would be open to providing a way to specify and override the configs from 
> the command line when starting Kafka?
> Something like:
> {code}
> ./bin/kafka-server-start.sh -Dmy.cool.property=X kafka.properties
> {code}
> or 
> {code}
> ./bin/kafka-server-start.sh --set my.cool.property=X kafka.properties
> {code}
> I'm more than happy to take a stab at it, but I would like to see if there 
> is interest in such a capability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2014-10-08 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164557#comment-14164557
 ] 

Jay Kreps commented on KAFKA-1646:
--

Interesting. What is the behavior of RandomAccessFile.setLength(size) when 
setting a size larger than the current file size on Windows? Does it fully 
preallocate all the blocks up to the full length? If so, does that cause any 
kind of pause if you are allocating, say, 1GB? On Linux it does "sparse" 
allocation, which doesn't actually assign any blocks; it just fills the file 
in with fake zeros, so that happens very quickly but doesn't help with getting 
linear reads. Presumably Windows is also treating the unwritten part of the 
file as all zeros.

How do you handle file close? If the broker is running and is hard killed, we 
run recovery and would truncate off any trailing zeros in a log segment. 
However, if the broker is stopped gracefully, I don't see how the trailing 
zeros are truncated off, so on restart don't you end up with a bunch of zeros 
in the log?
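
A quick way to answer the first question empirically (a throwaway probe, not 
Kafka code) is to time the call on each platform:

{code}
import java.io.RandomAccessFile

object PreallocateProbe {
  def main(args: Array[String]): Unit = {
    val file = new RandomAccessFile("probe.log", "rw")
    val start = System.nanoTime()
    file.setLength(1024L * 1024 * 1024) // ask for 1GB up front
    println(s"setLength took ${(System.nanoTime() - start) / 1e6} ms")
    file.close()
    // Near-zero time suggests sparse allocation; a long pause suggests the
    // filesystem physically allocated or zero-filled the blocks.
  }
}
{code}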

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch
>
>
> This patch is for the Windows platform only. On Windows, if more than one 
> replica is writing to disk, the segment log files will not be laid out 
> contiguously on disk and consumer read performance drops greatly. This fix 
> allocates more disk space when rolling a new segment, which improves 
> consumer read performance on the NTFS file system.
> This patch doesn't affect file allocation on other filesystems, as it only 
> adds statements like 'if(Os.iswindow)' or methods used on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25886: KAFKA-1555: provide strong consistency with reasonable availability

2014-10-08 Thread Gwen Shapira


> On Oct. 8, 2014, 2:04 a.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java, line 
> > 271
> > 
> >
> > Where is this used?

Used in ProducerConfig line 180:

.define(ACKS_CONFIG,
        Type.STRING,
        "1",
        in(Arrays.asList("all", "-1", "0", "1")),
        Importance.HIGH,
        ACKS_DOC)

The acks config is defined as a String and not a Short. I don't want to change 
a public API, and there are discussions of turning it into an enum anyway, so 
validating the input against a list of strings seems appropriate.


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25886/#review55745
---


On Oct. 8, 2014, 1:46 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25886/
> ---
> 
> (Updated Oct. 8, 2014, 1:46 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1555: provide strong consistency with reasonable availability
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> f9de4af 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java addc906 
>   
> clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java d434f42 
>   core/src/main/scala/kafka/cluster/Partition.scala ff106b4 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 3fae791 
>   
> core/src/main/scala/kafka/common/NotEnoughReplicasAfterAppendException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/common/NotEnoughReplicasException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/log/LogConfig.scala 5746ad4 
>   core/src/main/scala/kafka/producer/SyncProducerConfig.scala 69b2d0c 
>   core/src/main/scala/kafka/server/KafkaApis.scala c584b55 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 165c816 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 39f777b 
>   core/src/test/scala/unit/kafka/producer/ProducerTest.scala dd71d81 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 24deea0 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 2dbdd3c 
> 
> Diff: https://reviews.apache.org/r/25886/diff/
> 
> 
> Testing
> ---
> 
> With 3 broker cluster, created 3 topics each with 1 partition and 3 replicas, 
> with 1,3 and 4 min.insync.replicas.
> * min.insync.replicas=1 behaved normally (all writes succeeded as long as a 
> broker was up)
> * min.insync.replicas=3 returned NotEnoughReplicas when required.acks=-1 and 
> one broker was down
> * min.insync.replicas=4 returned NotEnoughReplicas when required.acks=-1
> 
> See notes about retry behavior in the JIRA.
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>



[jira] [Commented] (KAFKA-1681) Newly elected KafkaController might not start deletion of pending topics

2014-10-08 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164710#comment-14164710
 ] 

Jun Rao commented on KAFKA-1681:


Yes, or just set deleteTopicStateChanged to true instead of calling 
resumeTopicDeletionThread().
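
In sketch form (assumed member shapes, not the actual TopicDeletionManager), 
the two variants being discussed:

{code}
import java.util.concurrent.atomic.AtomicBoolean
import scala.collection.mutable

class TopicDeletionManagerSketch {
  val topicsToBeDeleted = mutable.Set.empty[String]
  val deleteTopicStateChanged = new AtomicBoolean(false)

  def start(): Unit = {
    // A newly elected controller may inherit deletions already in flight;
    // flag the state change so the deletion thread picks them up,
    // or equivalently wake it via resumeTopicDeletionThread().
    if (topicsToBeDeleted.nonEmpty)
      deleteTopicStateChanged.set(true)
  }
}
{code}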

> Newly elected KafkaController might not start deletion of pending topics
> 
>
> Key: KAFKA-1681
> URL: https://issues.apache.org/jira/browse/KAFKA-1681
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> As part of KAFKA-1663 deleteTopicStateChanged.set(true) is removed from 
> start(). This will cause newly elected kafka controller not to process the 
> existing delete topic requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 26373: Patch for KAFKA-1647

2014-10-08 Thread Guozhang Wang


> On Oct. 7, 2014, 12:15 a.m., Guozhang Wang wrote:
> > Since the first "if" statement is now only used for logging, could we 
> > just merge it into the second check?
> 
> Jiangjie Qin wrote:
> Guozhang, thanks for the review. I actually thought about it before. I 
> agree that the code looks simpler if we just call partition.makeFollower 
> without checking whether the leader is up or not. The reason I retained the 
> first if statement is that it seems more reasonable to create the remote 
> replicas only when the leader broker is online, which is done in 
> partition.makeFollower. And I'm not 100% sure whether it matters if we just 
> create the remote replica objects without checking the liveness of the 
> leader broker.

The check of the leader's liveness is just an assertion (i.e. the leader 
should always be alive when the leaderAndISR request is received; otherwise 
something bad has happened and we need to log the errors, but still proceed by 
skipping this request).


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26373/#review55615
---


On Oct. 6, 2014, 5:06 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26373/
> ---
> 
> (Updated Oct. 6, 2014, 5:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1647
> https://issues.apache.org/jira/browse/KAFKA-1647
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fix for Kafka-1647.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 78b7514cc109547c562e635824684fad581af653 
> 
> Diff: https://reviews.apache.org/r/26373/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Resolved] (KAFKA-1666) Issue for sending more message to Kafka Broker

2014-10-08 Thread rajendram kathees (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajendram kathees resolved KAFKA-1666.
--
Resolution: Won't Fix

> Issue for sending more message to Kafka Broker
> --
>
> Key: KAFKA-1666
> URL: https://issues.apache.org/jira/browse/KAFKA-1666
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.1.1
> Environment: Ubuntu 14
>Reporter: rajendram kathees
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I tried to send 5000 messages to the Kafka broker using JMeter (10 threads 
> and 500 messages per thread; one message is 105 bytes). After 2100 messages 
> I am getting the following exception, and I changed the buffer size 
> (socket.request.max.bytes) value in the server.properties file but am still 
> getting the same exception. When I send 2000 messages, all messages are sent 
> to the Kafka broker. Can you give a solution?
> [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics 
> [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
> kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
> from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
>   at kafka.utils.Utils$.swallow(Utils.scala:167)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:106)
>   at kafka.utils.Utils$.swallowError(Utils.scala:46)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
>   at kafka.producer.Producer.send(Producer.scala:76)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
>   at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:28)
>   at 
> org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
>   at 
> org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>   at 
> org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
>   at 
> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
>   at 
> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>   at 
> org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
>   at 
> org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
>   at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
>   at 
> org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:423)
>   at sun.nio.ch.Net.socket(Net.java:416)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:104)
>   at 
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:48)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.client.ClientUtils$.fetchTopicM

[jira] [Created] (KAFKA-1693) Issue sending more messages to a single Kafka server (Load testing for Kafka transport)

2014-10-08 Thread rajendram kathees (JIRA)
rajendram kathees created KAFKA-1693:


 Summary: Issue sending more messages to a single Kafka server (Load 
testing for Kafka transport)
 Key: KAFKA-1693
 URL: https://issues.apache.org/jira/browse/KAFKA-1693
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
 Environment: Ubuntu 14, Java 6
Reporter: rajendram kathees


I tried to send 5 messages to a single Kafka server. I sent the messages to the 
ESB using JMeter, and the ESB forwarded them to the Kafka server. After 28000 
messages I get the following exception. Do I need to change any parameter value 
on the Kafka server? Please give me the solution.
 
[2014-10-06 11:41:05,182] ERROR - Utils$ fetching topic metadata for topics 
[Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at 
kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
at kafka.utils.Utils$.swallow(Utils.scala:167)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:46)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:27)
at 
org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at 
org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
at 
org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
at 
org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at 
org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
at 
org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at 
org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
at 
org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
at 
org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
at 
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
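
The java.net.BindException ("Cannot assign requested address") raised from 
connect() typically means the client has exhausted its local ephemeral ports: 
a new producer, and with it a new TCP connection, is opened for every message, 
and the discarded sockets linger in TIME_WAIT. Below is a minimal sketch of 
the usual remedy, reusing one long-lived producer across requests; the holder 
class is hypothetical, and the broker address and serializer are illustrative 
values taken from the log above.

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  // Hypothetical holder: one producer per JVM instead of one per request.
  public final class SharedKafkaProducer {
      private static final Producer<String, String> PRODUCER = create();

      private static Producer<String, String> create() {
          Properties props = new Properties();
          props.put("metadata.broker.list", "localhost:9092");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          return new Producer<String, String>(new ProducerConfig(props));
      }

      public static void send(String topic, String message) {
          // The 0.8 producer can be shared across threads, so concurrent
          // JMeter-driven requests reuse the same broker connections.
          PRODUCER.send(new KeyedMessage<String, String>(topic, message));
      }

      public static void shutdown() {
          PRODUCER.close(); // release the sockets when the service stops
      }
  }

With a single shared instance, connections are established once per broker 
rather than once per message, so the ephemeral-port range is no longer drained.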



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-1666) Issue for sending more messages to Kafka Broker

2014-10-08 Thread rajendram kathees (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajendram kathees closed KAFKA-1666.


The issue is that the producer connection was not closed properly.
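
A minimal sketch of the corresponding fix, assuming (as the stack trace 
suggests) that the connector was building a producer per request; the class 
name is hypothetical, and the broker address and topic are taken from the logs:

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  public final class KafkaSendOnce {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("metadata.broker.list", "localhost:9092");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          Producer<String, String> producer =
                  new Producer<String, String>(new ProducerConfig(props));
          try {
              producer.send(new KeyedMessage<String, String>("test1", "hello"));
          } finally {
              producer.close(); // releases the socket; without this, every
                                // request leaks a file descriptor
          }
      }
  }

Note that socket.request.max.bytes is a broker-side limit on request size; it 
has no effect on the client's file-descriptor usage, which is why changing it 
did not help here.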

> Issue for sending more messages to Kafka Broker
> --
>
> Key: KAFKA-1666
> URL: https://issues.apache.org/jira/browse/KAFKA-1666
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.1.1
> Environment: Ubuntu 14
>Reporter: rajendram kathees
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I tried to send 5000 messages to the Kafka broker using JMeter (10 threads 
> and 500 messages per thread; one message is 105 bytes). After 2100 messages I 
> get the following exception. I changed the buffer size 
> (socket.request.max.bytes) value in the server.properties file, but I still 
> get the same exception. When I send 2000 messages, all of them reach the 
> Kafka broker. Can you give a solution?
> [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics 
> [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
> kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
> from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
>   at kafka.utils.Utils$.swallow(Utils.scala:167)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:106)
>   at kafka.utils.Utils$.swallowError(Utils.scala:46)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
>   at kafka.producer.Producer.send(Producer.scala:76)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
>   at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:28)
>   at 
> org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
>   at 
> org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>   at 
> org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
>   at 
> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
>   at 
> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>   at 
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>   at 
> org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
>   at 
> org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
>   at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
>   at 
> org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
>   at 
> org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:423)
>   at sun.nio.ch.Net.socket(Net.java:416)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:104)
>   at 
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:48)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.c

[jira] [Comment Edited] (KAFKA-1666) Issue for sending more messages to Kafka Broker

2014-10-08 Thread rajendram kathees (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164804#comment-14164804
 ] 

rajendram kathees edited comment on KAFKA-1666 at 10/9/14 5:58 AM:
---

The issue is that the producer connection was not closed properly.

Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.Net.socket0(Native Method)
at sun.nio.ch.Net.socket(Net.java:423)

Now I am getting the following exception (the same BindException reported in 
KAFKA-1693).

[2014-10-06 11:41:05,182] ERROR - Utils$ fetching topic metadata for topics 
[Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at 
kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
at kafka.utils.Utils$.swallow(Utils.scala:167)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:46)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:27)
at 
org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at 
org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
at 
org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
at 
org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at 
org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at 
org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
at 
org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at 
org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
at 
org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
at 
org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
at 
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)


was (Author: kathees):
The issue is that the producer connection was not closed properly.

> Issue for sending more messages to Kafka Broker
> --
>
> Key: KAFKA-1666
> URL: https://issues.apache.org/jira/browse/KAFKA-1666
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.1.1
> Environment: Ubuntu 14
>Reporter: rajendram kathees
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I tried to send 5000 messages to the Kafka broker using JMeter (10 threads 
> and 500 messages per thread; one message is 105 bytes). After 2100 messages I 
> get the following exception. I changed the buffer size 
> (socket.request.max.bytes) value in the server.properties file, but I still 
> get the same exception. When I send 2000 messages, all of them reach the 
> Kafka broker. Can you give a solution?
> [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics 
> [Set(test1)] f