[jira] [Created] (KAFKA-2842) BrokerEndPoint regex doesn't support hostname with _

2015-11-16 Thread Sachin Pasalkar (JIRA)
Sachin Pasalkar created KAFKA-2842:
--

 Summary: BrokerEndPoint regex doesn't support hostname with _
 Key: KAFKA-2842
 URL: https://issues.apache.org/jira/browse/KAFKA-2842
 Project: Kafka
  Issue Type: Bug
Reporter: Sachin Pasalkar
Priority: Minor


BrokerEndPoint.scala defines the regex uriParseExp, which is used to validate 
broker endpoints. However, it fails to match hostnames that contain an 
underscore, e.g. adfs_212:9092.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2842) BrokerEndPoint regex doesn't support hostname with _

2015-11-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006589#comment-15006589
 ] 

Ismael Juma commented on KAFKA-2842:


I think this is intentional because of RFC952:

<hname> ::= <name>*["."<name>]
<name>  ::= <let>[*[<let-or-digit-or-hyphen>]<let-or-digit>]
https://tools.ietf.org/html/rfc952
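For illustration, a minimal Scala sketch of an RFC 952-style host:port pattern (this is not Kafka's actual `uriParseExp`, whose exact pattern is not shown in this thread; `HostnameRegexDemo` and `parse` are hypothetical names). Letters, digits, dots, and hyphens are allowed; underscores are not, which reproduces the behavior reported above:

```scala
object HostnameRegexDemo {
  // RFC 952-style host characters: letters, digits, dots, hyphens.
  // Underscore is deliberately absent from the character class.
  private val hostPort = """([0-9a-zA-Z.-]+):([0-9]+)""".r

  // In pattern position, a Scala Regex must match the entire string,
  // so "adfs_212:9092" falls through to None.
  def parse(s: String): Option[(String, Int)] =
    s match {
      case hostPort(host, port) => Some((host, port.toInt))
      case _                    => None
    }
}
```

A quick check with the hostname from the report shows `parse("adfs_212:9092")` yielding `None` while the hyphenated equivalent parses fine.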

> BrokerEndPoint regex doesn't support hostname with _
> ---
>
> Key: KAFKA-2842
> URL: https://issues.apache.org/jira/browse/KAFKA-2842
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sachin Pasalkar
>Priority: Minor
>
> BrokerEndPoint.scala defines the regex uriParseExp, which is used to validate 
> broker endpoints. However, it fails to match hostnames that contain an 
> underscore, e.g. adfs_212:9092.





[jira] [Updated] (KAFKA-2757) Consolidate BrokerEndPoint and EndPoint

2015-11-16 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2757:
---
Summary: Consolidate BrokerEndPoint and EndPoint  (was: Consoliate 
BrokerEndPoint and EndPoint)

> Consolidate BrokerEndPoint and EndPoint
> ---
>
> Key: KAFKA-2757
> URL: https://issues.apache.org/jira/browse/KAFKA-2757
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jeff Holoman
> Fix For: 0.9.0.1
>
>
> For code simplicity, it's better to consolidate these two classes.





[jira] [Commented] (KAFKA-2842) BrokerEndPoint regex doesn't support hostname with _

2015-11-16 Thread Sachin Pasalkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006593#comment-15006593
 ] 

Sachin Pasalkar commented on KAFKA-2842:


Ohh, thanks for this update :)

> BrokerEndPoint regex doesn't support hostname with _
> ---
>
> Key: KAFKA-2842
> URL: https://issues.apache.org/jira/browse/KAFKA-2842
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sachin Pasalkar
>Priority: Minor
>
> BrokerEndPoint.scala defines the regex uriParseExp, which is used to validate 
> broker endpoints. However, it fails to match hostnames that contain an 
> underscore, e.g. adfs_212:9092.





[GitHub] kafka pull request: KAFKA-2824: MiniKDC based tests don't run in V...

2015-11-16 Thread benstopford
Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/520


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2824) MiniKDC based tests don't run in VirtualBox

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006634#comment-15006634
 ] 

ASF GitHub Bot commented on KAFKA-2824:
---

Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/520


> MiniKDC based tests don't run in VirtualBox
> ---
>
> Key: KAFKA-2824
> URL: https://issues.apache.org/jira/browse/KAFKA-2824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> When running system tests in virtualbox the miniKDC server isn't reachable. 
> Works fine in EC2





[jira] [Created] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread wanghongjiang (JIRA)
wanghongjiang created KAFKA-2843:


 Summary: when consumer got empty messageset, 
fetchResponse.highWatermark != current_offset?
 Key: KAFKA-2843
 URL: https://issues.apache.org/jira/browse/KAFKA-2843
 Project: Kafka
  Issue Type: Bug
  Components: offset manager
Affects Versions: 0.8.2.1
Reporter: wanghongjiang


I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = lastOffset
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}
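As a rough sketch of the comparison being asked about (all names here are hypothetical and simplified; the real FetchResponse API differs): the high watermark marks the end of committed data, so an empty fetch at an offset below it generally suggests the fetch should be retried, not that offsets are inconsistent:

```scala
object HighWatermarkDemo {
  // Hypothetical, simplified stand-in for a fetch response.
  final case class FetchResult(messages: Seq[String], highWatermark: Long)

  // The high watermark is one past the last committed message. An empty
  // fetch with fetchOffset < highWatermark can occur transiently (e.g. the
  // response was built before newly appended messages became readable).
  def interpret(fetchOffset: Long, r: FetchResult): String =
    if (r.messages.nonEmpty) "consume"
    else if (fetchOffset == r.highWatermark) "caught up"
    else if (fetchOffset < r.highWatermark) "behind: retry the fetch"
    else "ahead of high watermark: likely an offset bookkeeping bug"
}
```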





[jira] [Updated] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread netcafe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

netcafe updated KAFKA-2843:
---
Description: 
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}

  was:
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = lastOffset
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}


> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>
> I use the simple consumer to fetch messages from brokers. When the consumer 
> gets an empty messageSet, e.g.:
> val offset = nextOffset
> val msgSet = fetchResponse.messageSet(topic, partition)
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our case, I found that highWatermark may not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }





[jira] [Updated] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread netcafe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

netcafe updated KAFKA-2843:
---
Description: 
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val request = buildRequest(offset)
val fetchResponse = consumer.fetch(request)
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}

  was:
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}


> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>
> I use the simple consumer to fetch messages from brokers. When the consumer 
> gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
>
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our case, I found that highWatermark may not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }





[jira] [Resolved] (KAFKA-2842) BrokerEndPoint regex doesn't support hostname with _

2015-11-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2842.
-
Resolution: Not A Problem

> BrokerEndPoint regex doesn't support hostname with _
> ---
>
> Key: KAFKA-2842
> URL: https://issues.apache.org/jira/browse/KAFKA-2842
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sachin Pasalkar
>Priority: Minor
>
> BrokerEndPoint.scala defines the regex uriParseExp, which is used to validate 
> broker endpoints. However, it fails to match hostnames that contain an 
> underscore, e.g. adfs_212:9092.





[jira] [Assigned] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2843:
---

Assignee: jin xing

> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use the simple consumer to fetch messages from brokers. When the consumer 
> gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
>
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our case, I found that highWatermark may not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }





Re: setting up kafka github

2015-11-16 Thread Grant Henke
Hi Mojhaha,

You will not have write access to the actual Apache Kafka repo. Everyone
contributes via their own fork, then asks for the changes to be pulled into
the Apache Kafka repo by opening a pull request. The guide linked earlier is a
great resource for the GitHub process.

Thanks,
Grant

On Sat, Nov 14, 2015 at 5:50 AM, mojhaha kiklasds 
wrote:

> Hello,
>
> In this approach, I think setting it up according to the second method that
> I described in my earlier email should work.
> But, is this method what other contributors are also using ?
> Or are they using the first method that I described?
>
> Thanks,
> Mojhaha
>
> On Sat, Nov 14, 2015 at 4:45 PM, jeanbaptiste lespiau <
> jeanbaptiste.lesp...@gmail.com> wrote:
>
> > Hello,
> >
> > I'm new to kafka too, but I think this page can help you :
> > https://help.github.com/articles/using-pull-requests/
> >
> > It describes exactly the process to follow.
> >
> > Regards.
> >
> > 2015-11-14 11:49 GMT+01:00 mojhaha kiklasds :
> >
> > > Hello,
> > >
> > > I'm new to github usage but I want to contribute to kafka. I am trying
> to
> > > setup my github repo based on the instructions mentioned here:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
> > >
> > > I have one doubt though. Which repo shall I configure as the remote -
> > > apache-kafka or my fork ?
> > >
> > > If I configure apache-kafka as remote, will I be able to submit pull
> > > requests?
> > >
> > > If I sync my committed changes to my fork (hosted on github), will I
> > issue
> > > pull requests from this fork to apache-kafka ?
> > >
> > > Thanks,
> > > Mojhaha
> > >
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Updated] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread netcafe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

netcafe updated KAFKA-2843:
---
Description: 
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val request = buildRequest(offset)
val fetchResponse = consumer.fetch(request)
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(topic, partition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}

  was:
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val request = buildRequest(offset)
val fetchResponse = consumer.fetch(request)
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(cli.kafkaTopic, cli.kafkaPartition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // In our case, I found that highWatermark may not equal the current
    // offset, but we could not reproduce it.
    // Can this case happen? If so, why?
  }
}


> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use the simple consumer to fetch messages from brokers. When the consumer 
> gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
>
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our case, I found that highWatermark may not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }





Re: setting up kafka github

2015-11-16 Thread mojhaha kiklasds
Hello,

That answers my question. Thanks

Mojhaha

On Mon, Nov 16, 2015 at 7:55 PM, Grant Henke  wrote:

> Hi Mojhaha,
>
> You will not have write access to the actual Apache Kafka repo. Everyone
> contributes via their own fork, then asks for the changes to be pulled into
> the Apache Kafka repo by opening a pull request. The guide linked earlier is
> a great resource for the GitHub process.
>
> Thanks,
> Grant
>
> On Sat, Nov 14, 2015 at 5:50 AM, mojhaha kiklasds  >
> wrote:
>
> > Hello,
> >
> > In this approach, I think setting it up according to the second method
> that
> > I described in my earlier email should work.
> > But, is this method what other contributors are also using ?
> > Or are they using the first method that I described?
> >
> > Thanks,
> > Mojhaha
> >
> > On Sat, Nov 14, 2015 at 4:45 PM, jeanbaptiste lespiau <
> > jeanbaptiste.lesp...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I'm new to kafka too, but I think this page can help you :
> > > https://help.github.com/articles/using-pull-requests/
> > >
> > > It describes exactly the process to follow.
> > >
> > > Regards.
> > >
> > > 2015-11-14 11:49 GMT+01:00 mojhaha kiklasds :
> > >
> > > > Hello,
> > > >
> > > > I'm new to github usage but I want to contribute to kafka. I am
> trying
> > to
> > > > setup my github repo based on the instructions mentioned here:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
> > > >
> > > > I have one doubt though. Which repo shall I configure as the remote -
> > > > apache-kafka or my fork ?
> > > >
> > > > If I configure apache-kafka as remote, will I be able to submit pull
> > > > requests?
> > > >
> > > > If I sync my committed changes to my fork (hosted on github), will I
> > > issue
> > > > pull requests from this fork to apache-kafka ?
> > > >
> > > > Thanks,
> > > > Mojhaha
> > > >
> > >
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[jira] [Updated] (KAFKA-2732) Add test cases with ZK Auth, SASL and SSL

2015-11-16 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2732:

Description: Add test cases to verify the security functionality being 
added in 0.9.   (was: Extend SaslSslConsumerTest to use ZK Auth and add support 
to enable it to work properly. )

> Add test cases with ZK Auth, SASL and SSL
> -
>
> Key: KAFKA-2732
> URL: https://issues.apache.org/jira/browse/KAFKA-2732
> Project: Kafka
>  Issue Type: Test
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> Add test cases to verify the security functionality being added in 0.9. 





[jira] [Updated] (KAFKA-2732) Add test cases with ZK Auth, SASL and SSL

2015-11-16 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2732:

Summary: Add test cases with ZK Auth, SASL and SSL  (was: Add support for 
consumer test with ZK Auth, SASL and SSL)

> Add test cases with ZK Auth, SASL and SSL
> -
>
> Key: KAFKA-2732
> URL: https://issues.apache.org/jira/browse/KAFKA-2732
> Project: Kafka
>  Issue Type: Test
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> Extend SaslSslConsumerTest to use ZK Auth and add support to enable it to 
> work properly. 





[jira] [Commented] (KAFKA-2841) Group metadata cache loading is not safe when reloading a partition

2015-11-16 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007035#comment-15007035
 ] 

Jun Rao commented on KAFKA-2841:


I thought the current logic is that if a group is being loaded, the group is 
not accessible until the load completes? Once the load completes, the group 
should have the latest info.

> Group metadata cache loading is not safe when reloading a partition
> ---
>
> Key: KAFKA-2841
> URL: https://issues.apache.org/jira/browse/KAFKA-2841
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> If the coordinator receives a leaderAndIsr request which includes a higher 
> leader epoch for one of the partitions that it owns, then it will reload the 
> offset/metadata for that partition again. This can happen because the leader 
> epoch is incremented for ISR changes which do not result in a new leader for 
> the partition. Currently, the coordinator replaces cached metadata values 
> blindly on reloading, which can result in weird behavior such as unexpected 
> session timeouts or request timeouts while rebalancing.
> To fix this, we need to check that the group being loaded has a higher 
> generation than the cached value before replacing it. Also, if we have to 
> replace a cached value (which shouldn't happen except when loading), we need 
> to be very careful to ensure that any active delayed operations won't affect 
> the group. 





[jira] [Commented] (KAFKA-2746) Add support for using ConsumerGroupCommand on secure install

2015-11-16 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007047#comment-15007047
 ] 

Jun Rao commented on KAFKA-2746:


[~singhashish], do you have time to work on this today? If not, perhaps we can 
have someone else work on this.

> Add support for using ConsumerGroupCommand on secure install
> 
>
> Key: KAFKA-2746
> URL: https://issues.apache.org/jira/browse/KAFKA-2746
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KAFKA-2490 adds support for new-consumer to ConsumerGroupCommand. This JIRA 
> intends to make ConsumerGroupCommand work for secure installations.





[jira] [Updated] (KAFKA-2746) Add support for using ConsumerGroupCommand on secure install

2015-11-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2746:
---
Fix Version/s: 0.9.0.0

> Add support for using ConsumerGroupCommand on secure install
> 
>
> Key: KAFKA-2746
> URL: https://issues.apache.org/jira/browse/KAFKA-2746
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KAFKA-2490 adds support for new-consumer to ConsumerGroupCommand. This JIRA 
> intends to make ConsumerGroupCommand work for secure installations.





[jira] [Commented] (KAFKA-2841) Group metadata cache loading is not safe when reloading a partition

2015-11-16 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007059#comment-15007059
 ] 

Jason Gustafson commented on KAFKA-2841:


[~junrao] That is correct. The problem is that the group may be loaded more 
than once and the cached metadata object which holds group and member state may 
be replaced. When this happens, you can get very strange behavior since 
join/sync response callbacks may be lost and delayed operations (which still 
refer to the original metadata object) can cause conflicts. My patch makes this 
safer by preventing this replacement from taking place when a partition is 
loaded and by cleaning up the group state when the cached metadata is unloaded 
due to partition emigration to a new leader.
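The generation guard described here can be sketched roughly as follows (hypothetical, simplified types, not the actual coordinator code; `GroupMetadata` is pared down to the two fields the check needs):

```scala
object GroupCacheDemo {
  // Simplified, hypothetical group metadata.
  final case class GroupMetadata(groupId: String, generationId: Int)

  private val cache = scala.collection.mutable.Map.empty[String, GroupMetadata]

  // Replace a cached group only when the loaded copy has a strictly higher
  // generation; otherwise keep the current object so delayed operations that
  // reference it stay valid. Returns true if the cache was updated.
  def maybeReplace(loaded: GroupMetadata): Boolean =
    cache.get(loaded.groupId) match {
      case Some(current) if current.generationId >= loaded.generationId => false
      case _ =>
        cache.update(loaded.groupId, loaded)
        true
    }
}
```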

> Group metadata cache loading is not safe when reloading a partition
> ---
>
> Key: KAFKA-2841
> URL: https://issues.apache.org/jira/browse/KAFKA-2841
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> If the coordinator receives a leaderAndIsr request which includes a higher 
> leader epoch for one of the partitions that it owns, then it will reload the 
> offset/metadata for that partition again. This can happen because the leader 
> epoch is incremented for ISR changes which do not result in a new leader for 
> the partition. Currently, the coordinator replaces cached metadata values 
> blindly on reloading, which can result in weird behavior such as unexpected 
> session timeouts or request timeouts while rebalancing.
> To fix this, we need to check that the group being loaded has a higher 
> generation than the cached value before replacing it. Also, if we have to 
> replace a cached value (which shouldn't happen except when loading), we need 
> to be very careful to ensure that any active delayed operations won't affect 
> the group. 





[jira] [Created] (KAFKA-2844) Use different keyTab for client and server in SASL tests

2015-11-16 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2844:
--

 Summary: Use different keyTab for client and server in SASL tests
 Key: KAFKA-2844
 URL: https://issues.apache.org/jira/browse/KAFKA-2844
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Ismael Juma
Assignee: Ismael Juma


We currently use the same keyTab, which could hide problems in the 
implementation.





[GitHub] kafka pull request: KAFKA-2844; Separate keyTabs for sasl tests

2015-11-16 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/533

KAFKA-2844; Separate keyTabs for sasl tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka separate-keytabs-for-sasl-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/533.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #533


commit 7cd7fba5f3fb23991b93a0180d41b6fd67cd0b99
Author: Ismael Juma 
Date:   2015-11-16T17:10:37Z

Move `FourLetterWords` to its own file and clean-up its usage

commit 4f423a8d6e14660ae25b7c2eba2901e51e75b00e
Author: Ismael Juma 
Date:   2015-11-16T18:51:51Z

Use a different keyTab for server and client in SASL tests

In order to do this, replace templated `kafka_jaas.conf` file
with programmatic approach.
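A minimal sketch of what generating a JAAS configuration programmatically with a separate keytab per principal might look like (all names, principals, and paths here are hypothetical; the actual test utilities in the PR differ):

```scala
object JaasConfigDemo {
  // Render one Kerberos JAAS login context with its own keytab.
  def krb5Entry(context: String, keyTab: String, principal: String): String =
    s"""$context {
       |  com.sun.security.auth.module.Krb5LoginModule required
       |  useKeyTab=true
       |  storeKey=true
       |  keyTab="$keyTab"
       |  principal="$principal";
       |};""".stripMargin

  // Client and server get distinct keytabs, so a test that accidentally
  // authenticates with the wrong credentials will fail instead of passing.
  def clientAndServer(clientTab: String, serverTab: String): String =
    krb5Entry("KafkaClient", clientTab, "client@EXAMPLE.COM") + "\n" +
      krb5Entry("KafkaServer", serverTab, "kafka/localhost@EXAMPLE.COM")
}
```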






[jira] [Commented] (KAFKA-2844) Use different keyTab for client and server in SASL tests

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007118#comment-15007118
 ] 

ASF GitHub Bot commented on KAFKA-2844:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/533

KAFKA-2844; Separate keyTabs for sasl tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka separate-keytabs-for-sasl-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/533.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #533


commit 7cd7fba5f3fb23991b93a0180d41b6fd67cd0b99
Author: Ismael Juma 
Date:   2015-11-16T17:10:37Z

Move `FourLetterWords` to its own file and clean-up its usage

commit 4f423a8d6e14660ae25b7c2eba2901e51e75b00e
Author: Ismael Juma 
Date:   2015-11-16T18:51:51Z

Use a different keyTab for server and client in SASL tests

In order to do this, replace templated `kafka_jaas.conf` file
with programmatic approach.




> Use different keyTab for client and server in SASL tests
> 
>
> Key: KAFKA-2844
> URL: https://issues.apache.org/jira/browse/KAFKA-2844
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We currently use the same keyTab, which could hide problems in the 
> implementation.





[jira] [Updated] (KAFKA-2844) Use different keyTab for client and server in SASL tests

2015-11-16 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2844:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> Use different keyTab for client and server in SASL tests
> 
>
> Key: KAFKA-2844
> URL: https://issues.apache.org/jira/browse/KAFKA-2844
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We currently use the same keyTab, which could hide problems in the 
> implementation.





[GitHub] kafka pull request: Kafka 2746

2015-11-16 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/534

Kafka 2746



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2746

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/534.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #534


commit 3c33198d9cdb731ae6273c80c0d81699c8e92ce0
Author: Ismael Juma 
Date:   2015-11-13T21:43:34Z

Do not use ZKUtils in `ConsumerGroupCommand` if `new-consumer` is used

commit 28d5992bf91388623c3b4ed91f947dbc9b37a8da
Author: Ismael Juma 
Date:   2015-11-13T22:31:14Z

Reuse consumer and accept consumer.config option

Also improve error message if delete is used with new-consumer.

commit 2bc64b56493935eab5c704ef13751ceb6a8eb57c
Author: Ismael Juma 
Date:   2015-11-13T22:49:57Z

Remove `consumer.config` for now

It may be better to reuse `config`.

commit 0e586a6b766cb9674586e346fdd29154162f3fa1
Author: Ismael Juma 
Date:   2015-11-13T23:12:43Z

Fix NPE when committed returns null

commit fe6070408f6042d22033c4d31a604ef0b3cbaa01
Author: Ismael Juma 
Date:   2015-11-13T23:48:12Z

Set GROUP_ID_CONFIG in consumer correctly based on received option

Bug spotted by Jason.

commit d9fe30ed24eaaeb59c3aabaff736359f3b38a375
Author: Ismael Juma 
Date:   2015-11-14T11:26:41Z

Create consumer lazily in `KafkaConsumerGroupService`

We need a group-id to create a consumer, but a consumer is not
needed for `list()`. This avoids an NPE while trying to get a non-existent
group-id during `list()`, spotted by Jun.
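The lazy-creation approach in this commit can be sketched generically: build the consumer only on first use, so an operation like `list()` that never needs it cannot fail while creating it. This is a hypothetical Python illustration; the class, method names, and the dict standing in for a consumer are all invented, not the actual `KafkaConsumerGroupService` code.

```python
# Hypothetical sketch of lazy initialization: the resource that cannot be
# built without a group id is only created on first use, so operations that
# never touch it (like list()) cannot fail while constructing it.

class GroupService:
    def __init__(self, group_id=None):
        self._group_id = group_id
        self._consumer = None  # not created yet

    def _get_consumer(self):
        # Created lazily, and only for operations that actually need it.
        if self._consumer is None:
            if self._group_id is None:
                raise ValueError("a group id is required for this operation")
            self._consumer = {"group": self._group_id}  # stands in for a real consumer
        return self._consumer

    def list(self):
        # Listing groups does not need a per-group consumer.
        return ["group-a", "group-b"]

    def describe(self):
        return self._get_consumer()["group"]

svc = GroupService()        # no group id, as with a bare `list()` invocation
groups = svc.list()         # works: the consumer is never created
```

The point of the pattern is that the failure mode moves from construction time to the first operation that genuinely requires the missing argument.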

commit 426bbf032a95f974e2929f3ef080a41f060399e2
Author: Ashish Singh 
Date:   2015-11-16T00:32:01Z

KAFKA-2746: Add support for using ConsumerGroupCommand on secure install




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2746) Add support for using ConsumerGroupCommand on secure install

2015-11-16 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007139#comment-15007139
 ] 

Ashish K Singh commented on KAFKA-2746:
---

[~junrao], [~ijuma] PR is up at https://github.com/apache/kafka/pull/534.

> Add support for using ConsumerGroupCommand on secure install
> 
>
> Key: KAFKA-2746
> URL: https://issues.apache.org/jira/browse/KAFKA-2746
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KAFKA-2490 adds support for new-consumer to ConsumerGroupCommand. This JIRA 
> intends to make ConsumerGroupCommand work for secure installations.





[GitHub] kafka pull request: KAFKA-2811: add standby tasks

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/526




[jira] [Resolved] (KAFKA-2811) Add standby tasks

2015-11-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2811.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 526
[https://github.com/apache/kafka/pull/526]

> Add standby tasks
> -
>
> Key: KAFKA-2811
> URL: https://issues.apache.org/jira/browse/KAFKA-2811
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Restoring local state from state change-log topics can be expensive. To 
> alleviate this, we want an option to keep replicas of local state that are 
> kept up to date. The task assignment logic should be aware of the existence 
> of such replicas.
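The standby-replica idea described above can be sketched as a toy assignment: each stateful task gets a warm replica on a worker other than its active owner, so failover can restore from the warm copy instead of replaying the whole changelog. This is a hypothetical simplification in Python, not the actual Kafka Streams assignor; the task and worker names are made up.

```python
# Hypothetical sketch of standby-aware task assignment: only tasks with
# local state stores get a standby replica, placed on a different worker
# than the active task. Not the real Kafka Streams assignment logic.

def assign_with_standbys(tasks, workers, stateful):
    """tasks: list of task ids; workers: list of worker ids;
    stateful: set of task ids that have local state stores."""
    active = {}
    standby = {}
    for i, task in enumerate(tasks):
        owner = workers[i % len(workers)]      # round-robin active placement
        active[task] = owner
        if task in stateful and len(workers) > 1:
            # Put the standby on a worker other than the active owner.
            standby[task] = workers[(i + 1) % len(workers)]
    return active, standby

active, standby = assign_with_standbys(
    tasks=["t0", "t1", "t2"],
    workers=["w0", "w1"],
    stateful={"t0", "t2"},  # t1 is stateless: no standby needed
)
```

Note that the stateless task gets no standby at all, which is also the point of the "do not create a StandbyTask if there is no state store" follow-up in this thread.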





[jira] [Commented] (KAFKA-2811) Add standby tasks

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007355#comment-15007355
 ] 

ASF GitHub Bot commented on KAFKA-2811:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/526


> Add standby tasks
> -
>
> Key: KAFKA-2811
> URL: https://issues.apache.org/jira/browse/KAFKA-2811
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Restoring local state from state change-log topics can be expensive. To 
> alleviate this, we want an option to keep replicas of local state that are 
> kept up to date. The task assignment logic should be aware of the existence 
> of such replicas.





[GitHub] kafka pull request: MINOR: do not create a StandbyTask if there is...

2015-11-16 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/535

MINOR: do not create a StandbyTask if there is no state store in the task

@guozhangwang 
An optimization that may reduce unnecessary polling for standby tasks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka remove_empty_standby_task

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/535.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #535


commit b24fc838fb8201e1d162904e4d9388c3057d493b
Author: Yasuhiro Matsuda 
Date:   2015-11-16T21:38:45Z

do not create a StandbyTask if there is no state store in the task






[GitHub] kafka pull request: MINOR: add KStream merge operator

2015-11-16 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/536

MINOR: add KStream merge operator

@guozhangwang 

Added KStreamBuilder.merge(KStream...).
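As a rough illustration of what a stream merge operator does (records from all input streams flow into one downstream stream; order is preserved within each input but unspecified across inputs), here is a hypothetical Python sketch. It is not the actual `KStreamBuilder.merge` implementation; the round-robin interleaving is chosen purely to make the sketch deterministic.

```python
# Hypothetical sketch of a stream "merge": every record from every input
# is forwarded to the same downstream. Per-input order is kept; the
# cross-input interleaving (round-robin here) is an arbitrary choice.

def merge(*streams):
    """Merge several iterables of records into one stream."""
    iters = [iter(s) for s in streams]
    while iters:
        for it in list(iters):          # snapshot: we may remove during the loop
            try:
                yield next(it)
            except StopIteration:
                iters.remove(it)        # this input is exhausted

merged = list(merge(["a1", "a2"], ["b1", "b2", "b3"]))
```

Downstream operators see a single stream and need not know how many inputs fed it.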

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka kstream_merge_operator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/536.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #536


commit 9fd01e9d9e2afa435816c903d32cd011c0796b11
Author: Yasuhiro Matsuda 
Date:   2015-11-05T22:36:02Z

kstream merge

commit d62f1335628aaa2de41434c00b6870235e1f9d49
Author: Yasuhiro Matsuda 
Date:   2015-11-16T21:46:06Z

Merge branch 'trunk' of github.com:apache/kafka into kstream_merge_operator

commit a7ce2995f15e6a3dbb35f77f47ba88317e4eee2a
Author: Yasuhiro Matsuda 
Date:   2015-11-16T21:57:14Z

add test






[GitHub] kafka pull request: MINOR: do not create a StandbyTask if there is...

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/535




[GitHub] kafka pull request: KAFKA-2831; Do not use ZKUtils in `ConsumerGro...

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/528




[jira] [Commented] (KAFKA-2831) kafka-consumer-groups requires zookeeper url when using the new-consumer option

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007436#comment-15007436
 ] 

ASF GitHub Bot commented on KAFKA-2831:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/528


> kafka-consumer-groups requires zookeeper url when using the new-consumer 
> option
> ---
>
> Key: KAFKA-2831
> URL: https://issues.apache.org/jira/browse/KAFKA-2831
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
> test-consumer-group --new-consumer --describe
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.zookeeper.client.ConnectStringParser.(ConnectStringParser.java:50)
>   at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:443)
>   at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
>   at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1218)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:155)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:129)
>   at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
>   at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
>   at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:54)
>   at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)





[jira] [Updated] (KAFKA-2831) kafka-consumer-groups requires zookeeper url when using the new-consumer option

2015-11-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2831:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> kafka-consumer-groups requires zookeeper url when using the new-consumer 
> option
> ---
>
> Key: KAFKA-2831
> URL: https://issues.apache.org/jira/browse/KAFKA-2831
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
> test-consumer-group --new-consumer --describe
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.zookeeper.client.ConnectStringParser.(ConnectStringParser.java:50)
>   at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:443)
>   at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
>   at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1218)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:155)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:129)
>   at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
>   at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
>   at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:54)
>   at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)





Build failed in Jenkins: kafka-trunk-jdk8 #156

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2811: add standby tasks

[wangguoz] MINOR: do not create a StandbyTask if there is no state store in the

[junrao] KAFKA-2831; Do not use ZKUtils in `ConsumerGroupCommand` if

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5fc4546de7f238a8ee9c6f0b4fe276f0da47707c 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5fc4546de7f238a8ee9c6f0b4fe276f0da47707c
 > git rev-list 356544caba6448c6ba3bcdb38bea787e1fbc277b # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4829036269632803092.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.059 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson897203572076301643.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.649 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[GitHub] kafka pull request: KAFKA-2809: Improve documentation linking

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/498




[jira] [Commented] (KAFKA-2809) Improve documentation linking

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007469#comment-15007469
 ] 

ASF GitHub Bot commented on KAFKA-2809:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/498


> Improve documentation linking
> -
>
> Key: KAFKA-2809
> URL: https://issues.apache.org/jira/browse/KAFKA-2809
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.0
>
>
> Often it is useful to link to a specific header within the documentation. 
> Especially when referencing docs in the mailing lists. 
> This Jira is to add anchors and links for all headers in the docs.





[jira] [Updated] (KAFKA-2809) Improve documentation linking

2015-11-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2809:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 498
[https://github.com/apache/kafka/pull/498]

> Improve documentation linking
> -
>
> Key: KAFKA-2809
> URL: https://issues.apache.org/jira/browse/KAFKA-2809
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.0
>
>
> Often it is useful to link to a specific header within the documentation. 
> Especially when referencing docs in the mailing lists. 
> This Jira is to add anchors and links for all headers in the docs.





[jira] [Commented] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-11-16 Thread Will Funnell (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007477#comment-15007477
 ] 

Will Funnell commented on KAFKA-2500:
-

[~hachikuji] Thanks for the update and for the workaround, which I will try out 
when we upgrade.

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71
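The consume-until-high-watermark pattern described above can be sketched like this: capture the end offset when the snapshot begins, consume up to it, and stop, rather than reading forever. This is a hypothetical Python simulation (the partition log is just a list); it is not the consumer API, and the real mechanism would obtain the end offset from the broker.

```python
# Hypothetical sketch of snapshotting a partition: read records only up to
# the end offset observed when the snapshot started (the high watermark /
# maxEndOffset idea), ignoring anything appended afterwards.

def snapshot(log, fetch_end_offset):
    """Read records from offset 0 up to (but not including) the end offset
    captured at the start; later appends are not included."""
    end = fetch_end_offset()            # captured once, at snapshot start
    return [log[off] for off in range(end)]

log = ["r0", "r1", "r2"]                # simulated partition contents
snap = snapshot(log, fetch_end_offset=lambda: len(log))
log.append("r3")                        # arrives after the snapshot began
```

Knowing the end offset per partition is exactly what lets a backup job terminate instead of polling indefinitely.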





[jira] [Created] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2845:
-

 Summary: Add 0.9 clients vs 0.8 brokers compatibility test
 Key: KAFKA-2845
 URL: https://issues.apache.org/jira/browse/KAFKA-2845
 Project: Kafka
  Issue Type: Task
Reporter: Geoff Anderson
Assignee: Geoff Anderson


Add a simple test or two to document and understand what behavior to expect if 
users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
against an 0.8.X broker cluster






[GitHub] kafka pull request: KAFKA-2845: new client old broker compatibilit...

2015-11-16 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/537

KAFKA-2845: new client old broker compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2845-new-client-old-broker-compatibility

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/537.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #537


commit f8cb7f9ba8f4c6f08d15af4c1df8d439739e15a9
Author: Geoff Anderson 
Date:   2015-10-29T21:43:40Z

Sketch of compatibility test

commit 2c3b65ef7c26fef035b83e44d6ed1f301cc80a34
Author: Geoff Anderson 
Date:   2015-11-16T22:15:46Z

Add test for 0.9 consumer against 0.8 brokers

commit 924a281eeace248f852551d30bf55e8c5d79a7b8
Author: Geoff Anderson 
Date:   2015-11-16T22:40:03Z

Added compatibility test






[jira] [Updated] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2845:
--
Reviewer: Ewen Cheslack-Postava

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007529#comment-15007529
 ] 

ASF GitHub Bot commented on KAFKA-2845:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/537

KAFKA-2845: new client old broker compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2845-new-client-old-broker-compatibility

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/537.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #537


commit f8cb7f9ba8f4c6f08d15af4c1df8d439739e15a9
Author: Geoff Anderson 
Date:   2015-10-29T21:43:40Z

Sketch of compatibility test

commit 2c3b65ef7c26fef035b83e44d6ed1f301cc80a34
Author: Geoff Anderson 
Date:   2015-11-16T22:15:46Z

Add test for 0.9 consumer against 0.8 brokers

commit 924a281eeace248f852551d30bf55e8c5d79a7b8
Author: Geoff Anderson 
Date:   2015-11-16T22:40:03Z

Added compatibility test




> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Updated] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2845:
--
Status: Patch Available  (was: Open)

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





Jenkins build is back to normal : kafka-trunk-jdk7 #823

2015-11-16 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007575#comment-15007575
 ] 

Grant Henke commented on KAFKA-2845:


This is a great test to have. I recently tested this manually and found the 
same result. 

A few questions, based on the expected result and on the stance that Kafka 
clients are not backward compatible with old brokers:

* I understand our current position is that clients do not need to be backward 
compatible with brokers across major releases. However, that doesn't mean that 
we _have_ to break unnecessarily. The buffer underflow is due to an extra 
field added for quotas. Is there a way we could handle things more gracefully 
instead of breaking? If so I am happy to brainstorm and help.
* If we can't avoid the break, I think we should at least fail more gracefully 
and improve the error messaging. I can open a Jira/pr for that as well.
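To make the buffer-underflow point concrete, here is a sketch of why a fixed-layout parser breaks across protocol versions when one version carries an extra field, and how dispatching on version avoids it. The wire layout below is invented for illustration (Python `struct`, a single offset field plus a v1-only throttle field); the real case involves the Kafka fetch response and the quota throttle-time field, not this toy format.

```python
import struct

# Hypothetical wire formats: a v0 response is a single 8-byte offset;
# v1 prepends a 4-byte throttle_time_ms field (the "extra field added
# for quotas"). Field names and sizes are invented for this sketch.

def encode_v1(throttle_ms, offset):
    return struct.pack(">iq", throttle_ms, offset)

def decode(payload, version):
    # Version-aware parsing: only read the extra field when it is present.
    if version >= 1:
        throttle_ms, offset = struct.unpack(">iq", payload)
    else:
        (offset,) = struct.unpack(">q", payload)
        throttle_ms = 0
    return throttle_ms, offset

payload = encode_v1(throttle_ms=100, offset=42)
parsed = decode(payload, version=1)     # parses cleanly with the right version
try:
    decode(payload, version=0)          # wrong version: 12 bytes don't fit ">q"
    mismatch = False
except struct.error:
    mismatch = True                     # analogous to the client's buffer error
```

The graceful-handling question above amounts to asking whether the parser can detect this size mismatch and report a clear "incompatible broker version" error instead of surfacing a raw buffer exception.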

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





Build failed in Jenkins: kafka-trunk-jdk8 #157

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2809; Improve documentation linking

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6cbd97597ccf456a4f01f19553da5a03e12c9366 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6cbd97597ccf456a4f01f19553da5a03e12c9366
 > git rev-list 5fc4546de7f238a8ee9c6f0b4fe276f0da47707c # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3099543273398265855.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.208 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson9053469917474443552.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 652525 found in cache 
> '

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 16.042 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Created] (KAFKA-2846) Add Ducktape test for kafka-consumer-groups

2015-11-16 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2846:
-

 Summary: Add Ducktape test for kafka-consumer-groups
 Key: KAFKA-2846
 URL: https://issues.apache.org/jira/browse/KAFKA-2846
 Project: Kafka
  Issue Type: Test
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.9.1.0


kafka-consumer-groups is a user-facing tool. Having system tests will make sure 
that we are not changing its behavior unintentionally.





Jenkins build is back to normal : kafka_0.9.0_jdk7 #25

2015-11-16 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #824

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: do not create a StandbyTask if there is no state store in the

[junrao] KAFKA-2831; Do not use ZKUtils in `ConsumerGroupCommand` if

[junrao] KAFKA-2809; Improve documentation linking

--
[...truncated 2809 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.Log

[jira] [Resolved] (KAFKA-2721) Avoid handling duplicate LeaderAndISR requests

2015-11-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2721.

   Resolution: Fixed
Fix Version/s: (was: 0.9.0.1)
   0.9.0.0

Issue resolved by pull request 436
[https://github.com/apache/kafka/pull/436]

> Avoid handling duplicate LeaderAndISR requests
> --
>
> Key: KAFKA-2721
> URL: https://issues.apache.org/jira/browse/KAFKA-2721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> Upon controller migration, the new controller will try to resend all the 
> LeaderAndISR requests for any ongoing partition reassignments. This can lead 
> to duplicates of such requests being sent to the same broker.
> Upon receiving such a request, today's brokers do not detect that they are, 
> for example, already the leader for the requested becoming-leader partition, 
> and so they unnecessarily run logic such as 1) stopping fetchers and 2) 
> coordinator migration.
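The duplicate check the ticket asks for can be sketched as a leader-epoch comparison. This is a hypothetical sketch of the idea only; the class and method names (`LeaderAndIsrDedup`, `maybeBecomeLeader`) are illustrative and are not Kafka's actual controller or replica-manager code:

```java
import java.util.HashMap;
import java.util.Map;

public class LeaderAndIsrDedup {
    // partition -> leader epoch of the last become-leader request actually applied
    private final Map<String, Integer> appliedEpochs = new HashMap<>();

    /** Returns true if the request was applied, false if skipped as a duplicate. */
    public boolean maybeBecomeLeader(String partition, int leaderEpoch) {
        Integer current = appliedEpochs.get(partition);
        if (current != null && leaderEpoch <= current) {
            // Already leader at this (or a newer) epoch: likely a request resent
            // by a migrated controller. Skip stop-fetcher / coordinator-migration work.
            return false;
        }
        appliedEpochs.put(partition, leaderEpoch);
        return true;
    }

    public static void main(String[] args) {
        LeaderAndIsrDedup broker = new LeaderAndIsrDedup();
        System.out.println(broker.maybeBecomeLeader("topic-0", 5)); // applied
        System.out.println(broker.maybeBecomeLeader("topic-0", 5)); // skipped duplicate
    }
}
```

A stale request (an epoch at or below the one already applied) is ignored, so the resent LeaderAndISR from the new controller becomes a no-op.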





[GitHub] kafka pull request: KAFKA-2721; Avoid handling duplicate LeaderAnd...

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/436


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2721) Avoid handling duplicate LeaderAndISR requests

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007647#comment-15007647
 ] 

ASF GitHub Bot commented on KAFKA-2721:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/436


> Avoid handling duplicate LeaderAndISR requests
> --
>
> Key: KAFKA-2721
> URL: https://issues.apache.org/jira/browse/KAFKA-2721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> Upon controller migration, the new controller will try to resend all the 
> LeaderAndISR requests for any ongoing partition reassignments. This can lead 
> to duplicates of such requests being sent to the same broker.
> Upon receiving such a request, today's brokers do not detect that they are, 
> for example, already the leader for the requested becoming-leader partition, 
> and so they unnecessarily run logic such as 1) stopping fetchers and 2) 
> coordinator migration.





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-16 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007673#comment-15007673
 ] 

Geoff Anderson commented on KAFKA-2845:
---

[~granthenke] credit to [~nehanarkhede] for pushing for these tests

There seem to be a couple of issues at play here:
- If you poke around RequestKeys.scala, KafkaApis.scala, and 
RequestChannel.scala, it seems that neither 0.8.X nor 0.9.X does any version 
validation on the broker side; instead, the broker sends back the latest 
request/response version it knows.
In the case of the producer test, the producer sends a v1 produce request, the 
broker returns a v0 response, and the producer then fails with a 
BufferUnderflowException when it tries to parse the response as v1. Similarly, 
if the consumer issues a v1 fetch request, it receives a v0 response from the 
old broker, resulting in a parse error on the client side.

- Protocol.java at least tries to handle this by closing the connection on the 
broker side, but not all requests have been ported to Protocol.java.

- Note that there is a proposal on the table to improve how both brokers and 
clients handle unsupported requests and request versions:
[KIP-35|https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version]
 - See "Improved handling of unsupported requests on broker"

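The v1-parser-on-v0-bytes failure mode described above can be reproduced in miniature. The field layout below is simplified and hypothetical (it is not Kafka's actual wire format); the point is only that a parser expecting one extra field reads past the end of the older response and hits a BufferUnderflowException:

```java
import java.nio.ByteBuffer;

public class VersionMismatchDemo {
    // Toy "v0" response body: just an error code and an offset.
    public static ByteBuffer encodeV0(short errorCode, long offset) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 8);
        buf.putShort(errorCode);
        buf.putLong(offset);
        buf.flip();
        return buf;
    }

    // Toy "v1" parser: expects the v0 fields plus a trailing int that v1 added.
    public static long[] parseAsV1(ByteBuffer buf) {
        short errorCode = buf.getShort();
        long offset = buf.getLong();
        int extraFieldV1 = buf.getInt(); // not present in a v0 response -> underflow
        return new long[] { errorCode, offset, extraFieldV1 };
    }

    public static void main(String[] args) {
        ByteBuffer v0Response = encodeV0((short) 0, 42L);
        try {
            parseAsV1(v0Response);
        } catch (java.nio.BufferUnderflowException e) {
            // This mirrors what the 0.9 producer sees against an 0.8 broker.
            System.out.println("v1 parser ran past the end of the v0 response");
        }
    }
}
```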

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run the 0.9 Java producer or the 0.9 Scala consumer ("old 
> consumer") against an 0.8.X broker cluster.





Build failed in Jenkins: kafka-trunk-jdk8 #158

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2721; Avoid handling duplicate LeaderAndISR requests

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6df9e7ff2c6cfb3c7ca16f94928d0e86f3d087e2 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6df9e7ff2c6cfb3c7ca16f94928d0e86f3d087e2
 > git rev-list 6cbd97597ccf456a4f01f19553da5a03e12c9366 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1334187287611214393.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 7.836 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7287262904411475806.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 8.839 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka_0.9.0_jdk7 #26

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2721; Avoid handling duplicate LeaderAndISR requests

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 21ea9cbc0de08c303a9d12b2d36e2f7ee38ff113 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 21ea9cbc0de08c303a9d12b2d36e2f7ee38ff113
 > git rev-list 99d9ddc8e0e82c2e0ea579c85462f5d701319c81 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson9001606973235403748.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.108 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson6429456144431844125.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 651053 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache.put(DefaultMultiProcessSafePersistentIndexedCache.java:49)
at 
org.gradle.api.internal.changedetect

Re: JIRA contributor list

2015-11-16 Thread Guozhang Wang
Hi Konrad,

I have added you to the list.

Cheers,
Guozhang


On Sun, Nov 15, 2015 at 9:08 AM, Konrad Kalita  wrote:

> Hi,
> I'm interested in contributing to Kafka, could you add my account
> (username: Konrad Kalita, email: konkal...@gmail.com) to contributor list?
>



-- 
-- Guozhang


[GitHub] kafka pull request: KAFKA-2820: systest log level

2015-11-16 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/538

KAFKA-2820: systest log level

This restores control over log level in KafkaService, and adds SASL debug 
logging when SASL is enabled

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2820-systest-log-level

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/538.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #538


commit 0f27b3b547226d20dc7161acf02753ebbe9b5235
Author: Geoff Anderson 
Date:   2015-11-17T00:38:44Z

Add debug logging for SASL

commit 0fc43371e3094cdb9df336430ed5145490b0010c
Author: Geoff Anderson 
Date:   2015-11-17T00:39:04Z

Restore control over log level in KafkaService






[jira] [Commented] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007706#comment-15007706
 ] 

ASF GitHub Bot commented on KAFKA-2820:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/538

KAFKA-2820: systest log level

This restores control over log level in KafkaService, and adds SASL debug 
logging when SASL is enabled

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2820-systest-log-level

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/538.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #538


commit 0f27b3b547226d20dc7161acf02753ebbe9b5235
Author: Geoff Anderson 
Date:   2015-11-17T00:38:44Z

Add debug logging for SASL

commit 0fc43371e3094cdb9df336430ed5145490b0010c
Author: Geoff Anderson 
Date:   2015-11-17T00:39:04Z

Restore control over log level in KafkaService




> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding kafka tools etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.





[jira] [Created] (KAFKA-2847) remove principal.builder.class from client configs

2015-11-16 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2847:
--

 Summary: remove principal.builder.class from client configs
 Key: KAFKA-2847
 URL: https://issues.apache.org/jira/browse/KAFKA-2847
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Jun Rao


Since only the broker needs to know the principal name, we can remove 
principal.builder.class from both the producer and the consumer client config.





Jenkins build is back to normal : kafka-trunk-jdk7 #825

2015-11-16 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka_0.9.0_jdk7 #27

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] trivial doc change for building customized user name

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 1a7f37bcafae47a7f38e96c62d39c38d5479a776 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1a7f37bcafae47a7f38e96c62d39c38d5479a776
 > git rev-list 21ea9cbc0de08c303a9d12b2d36e2f7ee38ff113 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson3198554452388130013.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 10.069 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson6884522510736361508.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 651053 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache.put(DefaultMultiProcessSafePersistentIndexedCache.java:49)
at 
org.gradle.api.internal.changedetection.st

[jira] [Created] (KAFKA-2848) Use withClientSslSupport/withClientSaslSupport in Kafka Connect

2015-11-16 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2848:


 Summary: Use withClientSslSupport/withClientSaslSupport in Kafka 
Connect
 Key: KAFKA-2848
 URL: https://issues.apache.org/jira/browse/KAFKA-2848
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.1.0


These were introduced in KAFKA-2687 and applied to the ProducerConfig and 
ConsumerConfig classes, but did not get applied to Kafka Connect configs.
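The pattern the ticket refers to is defining the client SSL/SASL keys once and mixing them into several config definitions instead of re-declaring them in each. A hypothetical sketch of that reuse pattern follows; `SimpleConfigDef` and its methods are illustrative stand-ins modeled loosely on Kafka's ConfigDef style, not the actual API:

```java
import java.util.LinkedHashSet;
import java.util.Set;

class SimpleConfigDef {
    private final Set<String> keys = new LinkedHashSet<>();

    SimpleConfigDef define(String key) {
        keys.add(key);
        return this;
    }

    // Shared helper: the one place that knows the client SSL key names.
    SimpleConfigDef withClientSslSupport() {
        return define("ssl.keystore.location").define("ssl.truststore.location");
    }

    Set<String> keys() {
        return keys;
    }
}

public class ConfigReuseDemo {
    public static void main(String[] args) {
        // Producer-style and Connect-style defs both pick up the same SSL keys
        // without duplicating the definitions.
        SimpleConfigDef producerDef = new SimpleConfigDef().define("acks").withClientSslSupport();
        SimpleConfigDef connectDef = new SimpleConfigDef().define("group.id").withClientSslSupport();
        System.out.println(producerDef.keys());
        System.out.println(connectDef.keys());
    }
}
```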





[GitHub] kafka pull request: KAFKA-2848: Use client SSL/SASL config utiliti...

2015-11-16 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/539

KAFKA-2848: Use client SSL/SASL config utilities in Kafka Connect to avoid 
duplication of configs.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2848-reuse-ssl-sasl-client-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/539.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #539


commit 752d208cb3b607150cd016cc2776008dda0984b8
Author: Ewen Cheslack-Postava 
Date:   2015-11-17T02:45:53Z

KAFKA-2848: Use client SSL/SASL config utilities in Kafka Connect to avoid 
duplication of configs.






[jira] [Commented] (KAFKA-2848) Use withClientSslSupport/withClientSaslSupport in Kafka Connect

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007911#comment-15007911
 ] 

ASF GitHub Bot commented on KAFKA-2848:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/539

KAFKA-2848: Use client SSL/SASL config utilities in Kafka Connect to avoid 
duplication of configs.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2848-reuse-ssl-sasl-client-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/539.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #539


commit 752d208cb3b607150cd016cc2776008dda0984b8
Author: Ewen Cheslack-Postava 
Date:   2015-11-17T02:45:53Z

KAFKA-2848: Use client SSL/SASL config utilities in Kafka Connect to avoid 
duplication of configs.




> Use withClientSslSupport/withClientSaslSupport in Kafka Connect
> ---
>
> Key: KAFKA-2848
> URL: https://issues.apache.org/jira/browse/KAFKA-2848
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> These were introduced in KAFKA-2687 and applied to the ProducerConfig and 
> ConsumerConfig classes, but did not get applied to Kafka Connect configs.





[jira] [Updated] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread netcafe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

netcafe updated KAFKA-2843:
---
Description: 
I use the simple consumer to fetch messages from brokers (fetchSize > 
messageSize). When the consumer gets an empty messageSet, e.g.:

val offset = nextOffset
val request = buildRequest(offset)
val fetchResponse = consumer.fetch(request)
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(topic, partition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // in our scenario, I found that highWatermark may not equal the current
    // offset, but we have not reproduced it.
    // Can this case happen? If so, why?
  }
}

  was:
I use the simple consumer to fetch messages from brokers. When the consumer 
gets an empty messageSet, e.g.:

val offset = nextOffset
val request = buildRequest(offset)
val fetchResponse = consumer.fetch(request)
val msgSet = fetchResponse.messageSet(topic, partition)

if (msgSet.isEmpty) {
  val hwOffset = fetchResponse.highWatermark(topic, partition)

  if (offset == hwOffset) {
    // ok, doSomething...
  } else {
    // in our scenario, I found that highWatermark may not equal the current
    // offset, but we have not reproduced it.
    // Can this case happen? If so, why?
  }
}


> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use the simple consumer to fetch messages from brokers (fetchSize > 
> messageSize). When the consumer gets an empty messageSet, e.g.:
> 
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
> 
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
> 
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // in our scenario, I found that highWatermark may not equal the current
>     // offset, but we have not reproduced it.
>     // Can this case happen? If so, why?
>   }
> }





Build failed in Jenkins: kafka-trunk-jdk8 #159

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] trivial doc change for building customized user name

--
[...truncated 2799 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups 

[jira] [Assigned] (KAFKA-2847) remove principal.builder.class from client configs

2015-11-16 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2847:
--

Assignee: Ismael Juma

> remove principal.builder.class from client configs
> --
>
> Key: KAFKA-2847
> URL: https://issues.apache.org/jira/browse/KAFKA-2847
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> Since only the broker needs to know the principal name, we can remove 
> principal.builder.class from both the producer and the consumer client config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008054#comment-15008054
 ] 

Jun Rao commented on KAFKA-2843:


Did you check the error code in the response? For example, if the fetch request
is sent to a non-leader replica, you will get an error code and an empty
response.
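Jun's point can be sketched as a small decision helper. This is a Python sketch only; `FetchResult` and its fields are hypothetical stand-ins for the real FetchResponse API, shown just to make the order of checks explicit — error code first, then high watermark:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FetchResult:
    # Hypothetical, simplified stand-in for a fetch response; the real
    # Kafka FetchResponse API differs.
    error_code: int        # 0 means no error; non-zero e.g. for a non-leader replica
    messages: List[str]
    high_watermark: int

def classify_empty_fetch(offset: int, r: FetchResult) -> str:
    # Check the error code before interpreting an empty message set.
    if r.error_code != 0:
        return "error"           # e.g. fetch sent to a non-leader replica
    if r.messages:
        return "has_messages"
    if offset == r.high_watermark:
        return "caught_up"       # consumer is at the log end; empty set is expected
    return "unexpected_gap"      # the case reported in this issue

print(classify_empty_fetch(10, FetchResult(0, [], 10)))  # caught_up
print(classify_empty_fetch(10, FetchResult(6, [], 0)))   # error
```

The point of the ordering is that an empty message set plus a mismatched high watermark is only meaningful once the error code is known to be zero.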

Thanks,

Jun

> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use SimpleConsumer to fetch messages from brokers (fetchSize >
> messageSize). When the consumer gets an empty messageSet, e.g.:
>
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
>
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our scenario, highWatermark sometimes did not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2848) Use withClientSslSupport/withClientSaslSupport in Kafka Connect

2015-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008106#comment-15008106
 ] 

ASF GitHub Bot commented on KAFKA-2848:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/539


> Use withClientSslSupport/withClientSaslSupport in Kafka Connect
> ---
>
> Key: KAFKA-2848
> URL: https://issues.apache.org/jira/browse/KAFKA-2848
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> These were introduced in KAFKA-2687 and applied to the ProducerConfig and 
> ConsumerConfig classes, but did not get applied to Kafka Connect configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2848: Use client SSL/SASL config utiliti...

2015-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/539


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2848) Use withClientSslSupport/withClientSaslSupport in Kafka Connect

2015-11-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2848.

   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   0.9.0.0

Issue resolved by pull request 539
[https://github.com/apache/kafka/pull/539]

> Use withClientSslSupport/withClientSaslSupport in Kafka Connect
> ---
>
> Key: KAFKA-2848
> URL: https://issues.apache.org/jira/browse/KAFKA-2848
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> These were introduced in KAFKA-2687 and applied to the ProducerConfig and 
> ConsumerConfig classes, but did not get applied to Kafka Connect configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Merge pull request #1 from apache/trunk

2015-11-16 Thread prabcs
GitHub user prabcs opened a pull request:

https://github.com/apache/kafka/pull/540

Merge pull request #1 from apache/trunk

Merging the kafka latest to my fork

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/prabcs/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/540.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #540


commit 98c3d774a2c452314631b80a618f728345a96053
Author: prabcs 
Date:   2015-11-14T11:28:12Z

Merge pull request #1 from apache/trunk

Merging the kafka latest to my fork




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Merge pull request #1 from apache/trunk

2015-11-16 Thread prabcs
Github user prabcs closed the pull request at:

https://github.com/apache/kafka/pull/540


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-16 Thread netcafe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008112#comment-15008112
 ] 

netcafe commented on KAFKA-2843:


Yes, before parsing fetchResponse I had checked response.hasError, and there
was no error.
If fetchResponse had an error, our logic would not continue.

> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use SimpleConsumer to fetch messages from brokers (fetchSize >
> messageSize). When the consumer gets an empty messageSet, e.g.:
>
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
>
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
>
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // In our scenario, highWatermark sometimes did not equal the current
>     // offset, but we could not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #160

2015-11-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2848; Use client SSL/SASL config utilities in Kafka Connect to

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f1169f1da8728db842aca23dcb6fde740a400699 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f1169f1da8728db842aca23dcb6fde740a400699
 > git rev-list ae315264dbc8a9efc5b5fbb1967a8a5df76dddb3 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1428757820591630108.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 20.078 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7022509435532923263.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {
 

Re: [gradle build] The wrapper should be in the repository

2015-11-16 Thread Ewen Cheslack-Postava
Hi,

Those instructions simply fell out of date -- you can see in the README in
the repository that the first step after checkout is to bootstrap the
gradle wrapper. The wrapper is not included due to licensing issues when
creating packages from the repository. I've updated the wiki to explain how
to generate the wrapper before building (which is still useful since you
can generate the wrapper with different versions of gradle, but it will
build with the version specified by the project).

-Ewen

On Fri, Nov 13, 2015 at 3:46 PM, jeanbaptiste lespiau <
jeanbaptiste.lesp...@gmail.com> wrote:

> Hi everyone,
>
> When following the setup page [
> https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup]
> running:
>
> ./gradlew eclipse
>
> I hit an error:
>
> Could not find or load main class org.gradle.wrapper.GradleWrapperMain
>
> When looking at the gradle documentation [
> https://docs.gradle.org/current/userguide/gradle_wrapper.html]
>
> the explanation is simple: gradle/wrapper/ *should* be committed to your
> version control system.
>
> Thus, two solutions:
> - add the gradle/wrapper/* files and pin the gradle version in
> build.gradle
> - remove the wrapper entirely, since one has to run, for instance,
> gradle wrapper --gradle-version 2.0 first, which means gradle is already
> installed anyway.
>
> The first solution seems to be the one to take.
>
> I can do the modification and push a pull request, but I still want to be
> sure to check with you before doing so.
>
> Regards.
>



-- 
Thanks,
Ewen