[GitHub] kafka pull request: HOTFIX: wrong keyvalue equals logic when keys ...

2016-04-30 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1293




Re: [VOTE] 0.10.0.0 RC2

2016-04-30 Thread Ben Davison
Hi Gwen,

The release notes lead to a 404, this is the correct url:
http://home.apache.org/~gwenshap/0.10.0.0-rc2/RELEASE_NOTES.html

Thanks for leading the RC effort.

Regards,

Ben

On Sat, Apr 30, 2016 at 1:01 AM, Gwen Shapira  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.10.0.0. This
> is a major release that includes: (1) New message format including
> timestamps (2) client interceptor API (3) Kafka Streams. (4)
> Configurable SASL authentication mechanisms (5) API for retrieving
> protocol versions supported by the broker.
>
> Since this is a major release, we will give people more time to try it
> out and give feedback.
>
> Contributions that are especially welcome are:
> * Critical bugs found while testing
> * Especially testing related to the new functionality
> * More tests
> * Better docs
> * Doc reviews related to new functionality and upgrade
>
> Release notes for the 0.10.0.0 release:
> http://home.apache.org/~gwenshap/0.10.0.0-rc2/RELEASE_NOTES.HTML
>
> Release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.0
>
> *** Please download, test and vote by Monday, May 9, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/0.10.0.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc2/scaladoc
>
> * java-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc2/javadoc/
>
> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0-rc2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=da2745e104ba31fc980265ad835d9233652c
>
> * Documentation:
> http://kafka.apache.org/0100/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> /**
>
> Thanks,
>
> Gwen
>
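For anyone verifying the staged artifacts before voting, here is a minimal
sketch (assuming the usual Apache companion .md5/.asc files sit next to each
tarball; the file name below is hypothetical):

{code}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class VerifyArtifact {
    public static void main(String[] args) throws Exception {
        // Hypothetical local copy of a staged artifact.
        byte[] data = Files.readAllBytes(Paths.get("kafka_2.11-0.10.0.0.tgz"));
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        // Compare against the .md5 file published next to the artifact.
        System.out.println(hex);
    }
}
{code}

Signatures can likewise be checked with gpg --verify against the KEYS file
linked above.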



Re: [VOTE] KIP-45: Standardize KafkaConsumer API to use Collection

2016-04-30 Thread Harsha
Hi Jason,
Yes, I am in favor of removing them in 0.11; that definitely gives
everyone one major version to update their clients and remove
the deprecated calls.

Thanks,
Harsha

On Fri, Apr 29, 2016, at 11:02 PM, Ewen Cheslack-Postava wrote:
> I agree with Grant that we really need to indicate to consumers of APIs
> that when we mark something as unstable, it *really* means unstable. This
> is a more general problem of needing to define our APIs and stability --
> but I'd say that while we probably were too hasty in adding APIs, it was
> still better to add *some* indication of stability and support than to
> just add APIs with no promises.
> 
> On the other hand, since I helped introduce the Unstable annotation (and
> even then it wasn't entirely clear what it meant), and since I am a firm
> believer in attempting to provide *some* migration period for incompatible
> changes, I would be more than happy to adapt the public API to provide
> backwards compatibility for those APIs for *at least* one release.
> 
> Is there a strong reason for not doing this?
> 
> And shouldn't we try to be as helpful to consumers of our *new* APIs as
> possible -- we want them to adopt new APIs! If there's a small amount of
> effort on our part that keeps things compatible, at least over the course
> of a major release, it encourages downstream projects to try our APIs
> earlier, and that's a good thing. It won't always be perfect; sometimes
> we'll need to break major new features in a minor release; but in general,
> won't it be better?
> 
> We should be very clear that we are going to remove these APIs with the
> 0.11 release, which should hopefully make it clear what Storm can expect
> from us in terms of compatibility (noting, of course, that we currently
> make no real promises about how long 0.10.x releases will be made -- we
> already make few guarantees about long-term support).
> 
> I know it would be ideal if all "external" stakeholders could get their
> vote in with the KIP, but it's probably unrealistic to expect that to
> happen any time soon -- not everyone follows developments in the Kafka
> project closely. I think we should give *a bit* of flexibility, especially
> for stuff we were all on the fence about, when these types of issues come
> up.
> 
> Everyone seemed to be on the fence previously. Is there a good reason not
> to adopt the suggested changes, at the cost of a bit of compatibility
> pain?
> 
> -Ewen
> 
> 
> On Fri, Apr 29, 2016 at 7:58 PM, Jason Gustafson wrote:
> 
> > Hey Harsha,
> >
> > Just to clarify, are you ok with removing the methods in a later release
> > (say 0.11)? As I mentioned above, the only weird ones are subscribe() and
> > assign(), which will each have a deprecated overload that accepts a List.
> > Users will have to change their code to use another collection type or add
> > a typecast to avoid deprecation warnings. That's annoying, but maybe better than
> > breaking compatibility. Does it make sense to update the KIP with your
> > proposal and request a new vote?
> >
> > Thanks,
> > Jason
> >
> > On Fri, Apr 29, 2016 at 4:25 PM, Harsha  wrote:
> >
> > > Grant,
> > >  I am sure this was discussed and voted on; I've seen the
> > >  discussion. Given that there is an opportunity to make it less
> > >  painful for the users who shipped consumers against 0.9.x, we
> > >  should consider that.
> > >
> > > > "However, for now the documentation of the Unstable annotation
> > > > says, "No guarantee is provided as to reliability or stability
> > > > across any level of release granularity." If we can't leverage the
> > > > Unstable annotation to make breaking changes where necessary, it
> > > > will be tough to vet new apis without generating a lot of
> > > > deprecated code."
> > >
> > > Yes, we can tell everyone that because we marked the API unstable we
> > > are going to break it in a future release and not even consider
> > > making it compatible. With that approach I am sure no one would be
> > > interested in writing against or using any of the APIs until they are
> > > stable, and that's no way to vet new APIs.
> > >
> > > -Harsha
> > >
> > >
> > >
> > > On Fri, Apr 29, 2016, at 10:39 AM, Grant Henke wrote:
> > > > If anyone wants to review the KIP call discussion we had on this just
> > > > before the vote, here is a link to the relevant session (6 minutes in):
> > > > https://youtu.be/Hcjur17TjBE?t=6m
> > > >
> > > > On Fri, Apr 29, 2016 at 12:21 PM, Grant Henke wrote:
> > > >
> > > > > I think you are right Jason. People were definitely on the fence
> > about
> > > > > this and we went back and forth for quite some time.
> > > > >
> > > > > I think the main point in the KIP discussion that made this
> > > > > decision is that the Consumer was annotated with the Unstable
> > > > > annotation. Given how new the Consumer is, we wanted to leverage
> > > > > that to make sure the interface is clean. The same 
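A hedged sketch of the deprecated-overload approach discussed in this thread
(the interface below is illustrative, not the actual KafkaConsumer source).
Note how a caller must typecast or switch collection types to avoid the
deprecation warning, exactly as Jason describes:

{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

interface ConsumerLike {
    @Deprecated
    void subscribe(List<String> topics);       // old 0.9-style signature, kept for one release

    void subscribe(Collection<String> topics); // new KIP-45 signature
}

class Caller {
    static void use(ConsumerLike consumer) {
        List<String> topics = Arrays.asList("events");
        // Overload resolution picks the more specific List overload,
        // so this call compiles but emits a deprecation warning.
        consumer.subscribe(topics);
        // A typecast selects the new Collection overload, avoiding the warning.
        consumer.subscribe((Collection<String>) topics);
    }
}
{code}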

[jira] [Created] (KAFKA-3645) NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a secure env

2016-04-30 Thread Arun Mahadevan (JIRA)
Arun Mahadevan created KAFKA-3645:
-

 Summary: NPE in ConsumerGroupCommand and ConsumerOffsetChecker 
when running in a secure env
 Key: KAFKA-3645
 URL: https://issues.apache.org/jira/browse/KAFKA-3645
 Project: Kafka
  Issue Type: Bug
Reporter: Arun Mahadevan
Priority: Minor


The host and port entries under /brokers/ids/ get filled only for the 
PLAINTEXT security protocol. For other protocols the host is null and the 
actual endpoint is under "endpoints". This causes an NPE when running the 
consumer group and offset checker scripts in a kerberized env. 
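For context, a non-PLAINTEXT broker registration under /brokers/ids/<id> looks
roughly like {"endpoints":["SASL_PLAINTEXT://broker1:9093"],"host":null,"port":-1,...},
so tools must resolve the address from the "endpoints" array. A minimal sketch
of that parsing (an illustrative helper, not the actual patch):

{code}
public class EndpointParseSketch {
    // "SASL_PLAINTEXT://broker1:9093" -> "broker1"
    static String host(String endpoint) {
        String hostPort = endpoint.substring(endpoint.indexOf("://") + 3);
        return hostPort.substring(0, hostPort.lastIndexOf(':'));
    }

    // "SASL_PLAINTEXT://broker1:9093" -> 9093
    static int port(String endpoint) {
        String hostPort = endpoint.substring(endpoint.indexOf("://") + 3);
        return Integer.parseInt(hostPort.substring(hostPort.lastIndexOf(':') + 1));
    }

    public static void main(String[] args) {
        String endpoint = "SASL_PLAINTEXT://broker1:9093"; // hypothetical registration entry
        System.out.println(host(endpoint) + ":" + port(endpoint)); // broker1:9093
    }
}
{code}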





[GitHub] kafka pull request: KAFKA-3645: Fix NPE in ConsumerGroupCommand an...

2016-04-30 Thread arunmahadevan
GitHub user arunmahadevan opened a pull request:

https://github.com/apache/kafka/pull/1301

KAFKA-3645: Fix NPE in ConsumerGroupCommand and ConsumerOffsetChecker

The host and port entries under /brokers/ids/ get filled only for the 
PLAINTEXT security protocol. For other protocols the host is null and the 
actual endpoint is under "endpoints". This causes an NPE when running the 
consumer group and offset checker scripts in a kerberized env. By always 
reading the host and port values from the "endpoints" entry, a more 
meaningful exception is thrown rather than an NPE.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arunmahadevan/kafka cg_kerb_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1301.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1301


commit c3dd8d506095314f5e124656d994e1f22af5b4ed
Author: Arun Mahadevan 
Date:   2016-04-30T17:44:36Z

Fix NPE in ConsumerGroupCommand and ConsumerOffsetChecker

The host and port entries under /brokers/ids/ get filled only for the
PLAINTEXT security protocol. For other protocols the host is null
and the actual endpoint is under "endpoints". This causes an NPE when running
the consumer group and offset checker scripts in a
kerberized env. By always reading the host and port values from the
"endpoints" entry, a more meaningful exception is thrown rather than an NPE.






[jira] [Commented] (KAFKA-3645) NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a secure env

2016-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265400#comment-15265400
 ] 

ASF GitHub Bot commented on KAFKA-3645:
---

GitHub user arunmahadevan opened a pull request:

https://github.com/apache/kafka/pull/1301

KAFKA-3645: Fix NPE in ConsumerGroupCommand and ConsumerOffsetChecker

The host and port entries under /brokers/ids/ get filled only for the 
PLAINTEXT security protocol. For other protocols the host is null and the 
actual endpoint is under "endpoints". This causes an NPE when running the 
consumer group and offset checker scripts in a kerberized env. By always 
reading the host and port values from the "endpoints" entry, a more 
meaningful exception is thrown rather than an NPE.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arunmahadevan/kafka cg_kerb_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1301.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1301


commit c3dd8d506095314f5e124656d994e1f22af5b4ed
Author: Arun Mahadevan 
Date:   2016-04-30T17:44:36Z

Fix NPE in ConsumerGroupCommand and ConsumerOffsetChecker

The host and port entries under /brokers/ids/ get filled only for the
PLAINTEXT security protocol. For other protocols the host is null
and the actual endpoint is under "endpoints". This causes an NPE when running
the consumer group and offset checker scripts in a
kerberized env. By always reading the host and port values from the
"endpoints" entry, a more meaningful exception is thrown rather than an NPE.




> NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a 
> secure env
> --
>
> Key: KAFKA-3645
> URL: https://issues.apache.org/jira/browse/KAFKA-3645
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Priority: Minor
>
> The host and port entries under /brokers/ids/ get filled only for the 
> PLAINTEXT security protocol. For other protocols the host is null and the 
> actual endpoint is under "endpoints". This causes an NPE when running the 
> consumer group and offset checker scripts in a kerberized env. 





[jira] [Updated] (KAFKA-3645) NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a secure env

2016-04-30 Thread Arun Mahadevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Mahadevan updated KAFKA-3645:
--
Status: Patch Available  (was: Open)

> NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a 
> secure env
> --
>
> Key: KAFKA-3645
> URL: https://issues.apache.org/jira/browse/KAFKA-3645
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Priority: Minor
>
> The host and port entries under /brokers/ids/ get filled only for the 
> PLAINTEXT security protocol. For other protocols the host is null and the 
> actual endpoint is under "endpoints". This causes an NPE when running the 
> consumer group and offset checker scripts in a kerberized env. 





[jira] [Commented] (KAFKA-3155) Transient Failure in kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack

2016-04-30 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265501#comment-15265501
 ] 

Ewen Cheslack-Postava commented on KAFKA-3155:
--

Not sure if it's related, but I'm also seeing this failure (0.10.0 branch 
currently):

{quote}
kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack FAILED
java.lang.AssertionError: Topic metadata is not correctly updated for 
broker kafka.server.KafkaServer@6d2297ef.
Expected ISR: List(BrokerEndPoint(0,localhost,41470), 
BrokerEndPoint(1,localhost,48151))
Actual ISR  : 
{quote}

> Transient Failure in 
> kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack
> 
>
> Key: KAFKA-3155
> URL: https://issues.apache.org/jira/browse/KAFKA-3155
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> java.lang.AssertionError: No request is complete.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.api.BaseProducerSendTest$$anonfun$testFlush$1.apply$mcVI$sp(BaseProducerSendTest.scala:275)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>   at 
> kafka.api.BaseProducerSendTest.testFlush(BaseProducerSendTest.scala:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(R

[jira] [Resolved] (KAFKA-2398) Transient test failure for SocketServerTest - Socket closed.

2016-04-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2398.
--
Resolution: Fixed
  Assignee: Ewen Cheslack-Postava

Resolving, as the exact error that now shows up is different from the one in 
this report. The new issue is addressed in KAFKA-3182.

> Transient test failure for SocketServerTest - Socket closed.
> 
>
> Key: KAFKA-2398
> URL: https://issues.apache.org/jira/browse/KAFKA-2398
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Ewen Cheslack-Postava
>  Labels: transient-unit-test-failure
>
> See the following transient test failure for SocketServerTest.
> kafka.network.SocketServerTest > simpleRequest FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.<init>(Socket.java:425)
> at java.net.Socket.<init>(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.simpleRequest(SocketServerTest.scala:94)
> kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.<init>(Socket.java:425)
> at java.net.Socket.<init>(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.tooBigRequestIsRejected(SocketServerTest.scala:124)
> kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.<init>(Socket.java:425)
> at java.net.Socket.<init>(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:136)
> kafka.network.SocketServerTest > testMaxConnectionsPerIp FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.<init>(Socket.java:425)
> at java.net.Socket.<init>(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest$$anonfun$1.apply(SocketServerTest.scala:170)
> at 
> kafka.network.SocketServerTest$$anonfun$1.apply(SocketServerTest.scala:170)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Range.foreach(Range.scala:141)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collecti

[jira] [Commented] (KAFKA-3182) Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown

2016-04-30 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265526#comment-15265526
 ] 

Ewen Cheslack-Postava commented on KAFKA-3182:
--

Is the assertion made by this test actually safe with respect to the 
underlying TCP implementation details? There's an impedance mismatch between 
TCP's half-close approach and sockets (see 
https://docs.oracle.com/javase/8/docs/technotes/guides/net/articles/connection_release.html),
which at a minimum makes things confusing. In addition, based on the docs for 
Socket/SocketChannel, I'm unclear just how much of the TCP FIN/RST exchange is 
guaranteed to have occurred. It seems like the other side of the connection 
(for which we're asserting that we should see an exception) could possibly not 
have seen the relevant packet yet, in which case we *wouldn't* expect an 
exception.

It seems to me that closing Sockets and SocketChannels is unlikely to 
guarantee any synchronous operation -- with a network partition, you could 
block on the FIN's ACK for a really long time.

Relatedly, I suspect we're probably too aggressive in using close() -- it's 
possible we should be shutting things down in each direction and carefully 
handling the results if we unexpectedly see continued input after 
shutdownOutput() has been invoked...
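A minimal sketch of that shutdown-per-direction pattern (host and port are
placeholders, not SocketServer code): shut down the output side to send our
FIN, then keep reading until the peer closes its half:

{code}
import java.io.InputStream;
import java.net.Socket;

public class HalfCloseSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9092)) {
            socket.shutdownOutput();            // sends FIN; the input side stays open
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {  // drain until the peer closes too
                // handle any trailing bytes the peer sent before seeing our FIN
            }
        }                                        // try-with-resources fully closes the socket
    }
}
{code}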

> Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown
> 
>
> Key: KAFKA-3182
> URL: https://issues.apache.org/jira/browse/KAFKA-3182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> org.scalatest.junit.JUnitTestFailedError: expected exception when writing to 
> closed trace socket
>   at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
>   at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:79)
>   at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
>   at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:79)
>   at 
> kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messag

[jira] [Commented] (KAFKA-3565) Producer's throughput lower with compressed data after KIP-31/32

2016-04-30 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265529#comment-15265529
 ] 

Jiangjie Qin commented on KAFKA-3565:
-

[~gwenshap] I ran a few tests and it seems the performance change but was not 
able to reproduce the problem. It is not clear yet at this point what caused 
the performance gap we saw. [~ijuma], do you have any update?

> Producer's throughput lower with compressed data after KIP-31/32
> 
>
> Key: KAFKA-3565
> URL: https://issues.apache.org/jira/browse/KAFKA-3565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Relative offsets were introduced by KIP-31 so that the broker does not have 
> to recompress data (this was previously required after offsets were 
> assigned). The implicit assumption is that reducing CPU usage required by 
> recompression would mean that producer throughput for compressed data would 
> increase.
> However, this doesn't seem to be the case:
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--012.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   59.030 seconds
> {"records_per_sec": 519418.343653, "mb_per_sec": 49.54}
> {code}
> Full results: https://gist.github.com/ijuma/0afada4ff51ad6a5ac2125714d748292
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--013.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   1 minute 0.243 seconds
> {"records_per_sec": 427308.818848, "mb_per_sec": 40.75}
> {code}
> Full results: https://gist.github.com/ijuma/e49430f0548c4de5691ad47696f5c87d
> The difference for the uncompressed case is smaller (and within what one 
> would expect given the additional size overhead caused by the timestamp 
> field):
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--010.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 4.176 seconds
> {"records_per_sec": 321018.17747, "mb_per_sec": 30.61}
> {code}
> Full results: https://gist.github.com/ijuma/5fec369d686751a2d84debae8f324d4f
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--014.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 5.079 seconds
> {"records_per_sec": 291777.608696, "mb_per_sec": 27.83}
> {code}
> Full results: https://gist.github.com/ijuma/1d35bd831ff9931448b0294bd9b787ed
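As a quick sanity check on the quoted figures (assuming mb_per_sec is computed
as records_per_sec x message_size / 2^20, with the 100-byte messages named in
the test_id), the reported pairs are self-consistent:

{code}
public class ThroughputCheck {
    public static void main(String[] args) {
        // Pre- and post-KIP-31/32 snappy runs from the report above.
        double[] recordsPerSec = {519418.343653, 427308.818848};
        for (double rps : recordsPerSec) {
            // 100-byte messages, 1 MB = 2^20 bytes
            System.out.printf("%.2f MB/s%n", rps * 100 / (1 << 20)); // 49.54 and 40.75
        }
    }
}
{code}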





[jira] [Issue Comment Deleted] (KAFKA-3565) Producer's throughput lower with compressed data after KIP-31/32

2016-04-30 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3565:

Comment: was deleted

(was: [~gwenshap] I ran a few tests and it seems the performance change but was 
not able to reproduce the problem. It is not clear yet at this point what 
caused the performance gap we saw. [~ijuma], do you have any update?)

> Producer's throughput lower with compressed data after KIP-31/32
> 
>
> Key: KAFKA-3565
> URL: https://issues.apache.org/jira/browse/KAFKA-3565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Relative offsets were introduced by KIP-31 so that the broker does not have 
> to recompress data (this was previously required after offsets were 
> assigned). The implicit assumption is that reducing CPU usage required by 
> recompression would mean that producer throughput for compressed data would 
> increase.
> However, this doesn't seem to be the case:
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--012.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   59.030 seconds
> {"records_per_sec": 519418.343653, "mb_per_sec": 49.54}
> {code}
> Full results: https://gist.github.com/ijuma/0afada4ff51ad6a5ac2125714d748292
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--013.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   1 minute 0.243 seconds
> {"records_per_sec": 427308.818848, "mb_per_sec": 40.75}
> {code}
> Full results: https://gist.github.com/ijuma/e49430f0548c4de5691ad47696f5c87d
> The difference for the uncompressed case is smaller (and within what one 
> would expect given the additional size overhead caused by the timestamp 
> field):
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--010.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 4.176 seconds
> {"records_per_sec": 321018.17747, "mb_per_sec": 30.61}
> {code}
> Full results: https://gist.github.com/ijuma/5fec369d686751a2d84debae8f324d4f
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--014.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 5.079 seconds
> {"records_per_sec": 291777.608696, "mb_per_sec": 27.83}
> {code}
> Full results: https://gist.github.com/ijuma/1d35bd831ff9931448b0294bd9b787ed





[jira] [Commented] (KAFKA-3565) Producer's throughput lower with compressed data after KIP-31/32

2016-04-30 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265530#comment-15265530
 ] 

Jiangjie Qin commented on KAFKA-3565:
-

[~gwenshap] I ran a few tests and was not able to reproduce the issue. It is 
not clear yet at this point what caused the performance gap we saw. [~ijuma], 
do you have any update?

> Producer's throughput lower with compressed data after KIP-31/32
> 
>
> Key: KAFKA-3565
> URL: https://issues.apache.org/jira/browse/KAFKA-3565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Relative offsets were introduced by KIP-31 so that the broker does not have 
> to recompress data (this was previously required after offsets were 
> assigned). The implicit assumption is that reducing CPU usage required by 
> recompression would mean that producer throughput for compressed data would 
> increase.
> However, this doesn't seem to be the case:
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--012.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   59.030 seconds
> {"records_per_sec": 519418.343653, "mb_per_sec": 49.54}
> {code}
> Full results: https://gist.github.com/ijuma/0afada4ff51ad6a5ac2125714d748292
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--013.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   1 minute 0.243 seconds
> {"records_per_sec": 427308.818848, "mb_per_sec": 40.75}
> {code}
> Full results: https://gist.github.com/ijuma/e49430f0548c4de5691ad47696f5c87d
> The difference for the uncompressed case is smaller (and within what one 
> would expect given the additional size overhead caused by the timestamp 
> field):
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--010.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 4.176 seconds
> {"records_per_sec": 321018.17747, "mb_per_sec": 30.61}
> {code}
> Full results: https://gist.github.com/ijuma/5fec369d686751a2d84debae8f324d4f
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--014.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 5.079 seconds
> {"records_per_sec": 291777.608696, "mb_per_sec": 27.83}
> {code}
> Full results: https://gist.github.com/ijuma/1d35bd831ff9931448b0294bd9b787ed





[jira] [Commented] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-30 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265561#comment-15265561
 ] 

Dana Powers commented on KAFKA-3615:


This PR has a bug that breaks the classpath setup for bin scripts in the rc2 
release. Should we reopen this and follow up, or open a new issue?

> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0, 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






[GitHub] kafka pull request: Fix main classpath libs glob for release (fixu...

2016-04-30 Thread dpkp
GitHub user dpkp opened a pull request:

https://github.com/apache/kafka/pull/1302

Fix main classpath libs glob for release (fixup KAFKA-3615 regression)

bin/kafka-run-class.sh does not correctly set up the CLASSPATH in release 
rc2.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dpkp/kafka KAFKA-3615-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1302.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1302


commit fb67470da3e8487f30cca4f186df6f80ba7272a6
Author: Dana Powers 
Date:   2016-05-01T01:05:56Z

Fix main classpath libs glob for release (fixup KAFKA-3615 regression)






[jira] [Commented] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265563#comment-15265563
 ] 

ASF GitHub Bot commented on KAFKA-3615:
---

GitHub user dpkp opened a pull request:

https://github.com/apache/kafka/pull/1302

Fix main classpath libs glob for release (fixup KAFKA-3615 regression)

bin/kafka-run-class.sh does not correctly set up the CLASSPATH in release 
rc2.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dpkp/kafka KAFKA-3615-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1302.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1302


commit fb67470da3e8487f30cca4f186df6f80ba7272a6
Author: Dana Powers 
Date:   2016-05-01T01:05:56Z

Fix main classpath libs glob for release (fixup KAFKA-3615 regression)




> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0, 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






Re: [VOTE] 0.10.0.0 RC2

2016-04-30 Thread Gwen Shapira
Thanks for the correction :)

On Sat, Apr 30, 2016 at 2:30 AM, Ben Davison  wrote:
> Hi Gwen,
>
> The release notes lead to a 404, this is the correct url:
> http://home.apache.org/~gwenshap/0.10.0.0-rc2/RELEASE_NOTES.html
>
> Thanks for leading the RC effort.
>
> Regards,
>
> Ben
>
> On Sat, Apr 30, 2016 at 1:01 AM, Gwen Shapira  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the first candidate for release of Apache Kafka 0.10.0.0. This
>> is a major release that includes: (1) New message format including
>> timestamps (2) client interceptor API (3) Kafka Streams. (4)
>> Configurable SASL authentication mechanisms (5) API for retrieving
>> protocol versions supported by the broker.
>>
>> Since this is a major release, we will give people more time to try it
>> out and give feedback.
>>
>> Contributions that are especially welcome are:
>> * Critical bugs found while testing
>> * Especially testing related to the new functionality
>> * More tests
>> * Better docs
>> * Doc reviews related to new functionality and upgrade
>>
>> Release notes for the 0.10.0.0 release:
>> http://home.apache.org/~gwenshap/0.10.0.0-rc2/RELEASE_NOTES.HTML
>>
>> Release plan:
>> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.0
>>
>> *** Please download, test and vote by Monday, May 9, 9am PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~gwenshap/0.10.0.0-rc2/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/
>>
>> * scala-doc
>> http://home.apache.org/~gwenshap/0.10.0.0-rc2/scaladoc
>>
>> * java-doc
>> http://home.apache.org/~gwenshap/0.10.0.0-rc2/javadoc/
>>
>> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0-rc2 tag:
>>
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=da2745e104ba31fc980265ad835d9233652c
>>
>> * Documentation:
>> http://kafka.apache.org/0100/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/0100/protocol.html
>>
>> /**
>>
>> Thanks,
>>
>> Gwen
>>
>


[GitHub] kafka pull request: Fix main classpath libs glob for release (fixu...

2016-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1302




[jira] [Commented] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265575#comment-15265575
 ] 

ASF GitHub Bot commented on KAFKA-3615:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1302


> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0, 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






Re: KIP-57: Interoperable LZ4 Framing

2016-04-30 Thread Dana Powers
On Fri, Apr 29, 2016 at 6:29 PM, Ewen Cheslack-Postava wrote:
> Two questions:
>
> 1. My understanding based on KIP-35 is that this won't be a problem for
> clients that want to support older broker versions, since they will use v0
> produce requests with the broken checksum to send to those, and any broker
> advertising support for v1 produce requests will also support valid
> checksums? In other words, the KIP is structured in terms of Java client
> versions, but I'd like to make sure we have the compatibility path for
> non-Java clients cleanly mapped out. (And I think we do, especially given
> what Dana is proposing, but I'd still like an ack on that.)

Yes, I'm treating these as the same:

broker/client <= 0.9
messages == v0
Fetch api version <= 1
Produce api version <= 1

broker/client >= 0.10
messages >= v1
Fetch api version >= 2
Produce api version >= 2

I don't think there will be any problem for clients that want to
support both encodings.

> 2. We're completely disabling checksumming of the compressed payload on
> consumption. Normally you'd want to validate each level of framing for
> correct end-to-end validation. You could still do this (albeit more weakly)
> by validating the checksum is one of the two potentially valid values
> (correct checksum or old, incorrect checksum). This obviously has
> computational cost. Are we sure the tradeoff we're going with makes sense?

Yes, to be honest, not validating on consumption is mostly because I just
haven't dug into the bowels of the java client compressor / memory records
call chains. It seems non-trivial to switch validation based on the message
version in the consumer code. I did not opt for the weak validation that you
suggest because I think the broker should always validate v1 messages on
produce, and that piece shares the same code path within the lz4 java classes.
I suppose we could make the default be to raise an error on checksums that
fail weak validation, and then switch to strong validation in the broker.
Alternately, if you have suggestions on how to wire up the consumer code to
switch lz4 behavior based on message version, I would be happy to run with
that.

-Dana
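To make the framing discussion concrete, here is a hedged sketch of the
header-checksum difference (using lz4-java's xxhash, which the Java client
already depends on; the FLG/BD byte values are illustrative). The LZ4 frame
spec hashes only the frame descriptor, while the old v0 Kafka framing also
hashed the 4 magic bytes; the "weak validation" mentioned above would accept
either value:

{code}
import net.jpountz.xxhash.XXHash32;
import net.jpountz.xxhash.XXHashFactory;

public class Lz4HeaderChecksumSketch {
    public static void main(String[] args) {
        XXHash32 xxhash = XXHashFactory.fastestInstance().hash32();
        // magic 0x184D2204 (little-endian) + FLG=0x60 (version 01, block
        // independence) + BD=0x40 (64 KB max block size)
        byte[] header = {0x04, 0x22, 0x4D, 0x18, 0x60, 0x40};
        // Spec-compliant HC: second byte of XXH32(frame descriptor, seed=0),
        // i.e. hash bytes 4..5 only, excluding the magic.
        int specHc = (xxhash.hash(header, 4, 2, 0) >> 8) & 0xFF;
        // Broken v0 HC: the magic bytes were included in the hashed range.
        int v0Hc = (xxhash.hash(header, 0, 6, 0) >> 8) & 0xFF;
        System.out.printf("spec HC=0x%02x  v0 HC=0x%02x%n", specHc, v0Hc);
        // Weak validation: accept either of the two potentially valid values.
        int observed = specHc; // placeholder for the byte read off the wire
        boolean weakValid = observed == specHc || observed == v0Hc;
        System.out.println("weakValid=" + weakValid);
    }
}
{code}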


Build failed in Jenkins: kafka-trunk-jdk8 #575

2016-04-30 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: Fix main classpath libs glob for release (fixup KAFKA-3615

--
[...truncated 1691 lines...]

[...all listed tests PASSED: kafka.log.OffsetIndexTest, kafka.log.LogSegmentTest,
kafka.log.CleanerTest, kafka.log.OffsetMapTest,
kafka.coordinator.GroupCoordinatorResponseTest; remainder of output truncated...]

Build failed in Jenkins: kafka-0.10.0-jdk7 #38

2016-04-30 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: Fix main classpath libs glob for release (fixup KAFKA-3615

--
[...truncated 2786 lines...]

[...all listed tests PASSED: kafka.api.AuthorizerIntegrationTest, kafka.api.QuotasTest,
kafka.api.AdminClientTest, kafka.api.ProducerBounceTest,
kafka.api.test.ProducerCompressionTest, kafka.api.SaslPlainSslEndToEndAuthorizationTest,
kafka.api.ConsumerBounceTest, kafka.api.SaslPlaintextConsumerTest,
kafka.api.SaslMultiMechanismConsumerTest, kafka.api.SslProducerSendTest,
kafka.api.PlaintextConsumerTest; remainder of output truncated...]

Build failed in Jenkins: kafka-trunk-jdk7 #1238

2016-04-30 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: Fix main classpath libs glob for release (fixup KAFKA-3615

--
[...truncated 2373 lines...]

[...all listed tests PASSED: kafka.server.KafkaConfigTest, kafka.server.LogRecoveryTest,
kafka.server.SaslPlaintextReplicaFetchTest, kafka.server.ServerStartupTest,
kafka.server.ApiVersionsRequestTest, kafka.server.IsrExpirationTest,
kafka.server.AdvertiseBrokerTest, kafka.server.MetadataRequestTest,
kafka.server.MetadataCacheTest, kafka.server.SaslSslReplicaFetchTest,
kafka.tools.ConsoleProducerTest, kafka.tools.ConsoleConsumerTest,
kafka.api.RequestResponseSerializationTest, kafka.api.RackAwareAutoTopicCreationTest,
kafka.api.AdminClientTest, kafka.api.SslEndToEndAuthorizationTest,
kafka.api.SaslSslConsumerTest, kafka.api.test.ProducerCompressionTest,
kafka.api.SaslPlainSslEndToEndAuthorizationTest; remainder of output truncated...]