[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479337#comment-15479337
 ] 

ASF GitHub Bot commented on KAFKA-4145:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1842


> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have
> little value being tested for both Plaintext and SSL. We can move them to
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The
> following tests seem like possible candidates (see the sketch after this list):
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogAppendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread
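
For context, the pattern being exploited here, sketched below in Java (Kafka's
real test classes are Scala, and everything beyond the class and test names
quoted above is a hypothetical illustration): tests defined in the shared base
class run once per transport subclass, so a transport-agnostic test moved into
the plaintext subclass runs a single time instead of once per transport.

{code}
// Minimal sketch of the refactor, assuming the usual base-class/subclass
// test layout. Method bodies and securityProtocol() are illustrative only.
abstract class BaseProducerSendTest {
    // Each concrete subclass pins the transport its embedded cluster uses.
    protected abstract String securityProtocol();

    // Transport-sensitive: worth exercising over both plaintext and SSL,
    // so it stays here and runs once per subclass.
    public void testSendToPartition() { /* ... */ }
}

class PlaintextProducerSendTest extends BaseProducerSendTest {
    protected String securityProtocol() { return "PLAINTEXT"; }

    // Transport-agnostic (it exercises producer-side logic only), so after
    // the move it runs a single time rather than once per transport.
    public void testFlush() { /* ... */ }
}

class SslProducerSendTest extends BaseProducerSendTest {
    protected String securityProtocol() { return "SSL"; }
    // Inherits only the transport-sensitive tests from the base class.
}
{code}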



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1842: KAFKA-4145: Avoid redundant integration testing in...

2016-09-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1842


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4145:
---
   Resolution: Fixed
 Reviewer: Ismael Juma
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogAppendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4145:
---
Issue Type: Test  (was: Improvement)

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogAppendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Kafka 0.10.1.0 Release Plan

2016-09-10 Thread Ismael Juma
Jason, thanks for putting this together and driving the release. Your
proposal sounds good to me. It would be nice to create a wiki page with the
information in this email. See the following for the one that Gwen put
together for 0.10.0:

https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.0

Also, you merged KIP-70 recently so that can be moved to the completed
section.

Ismael

On Fri, Sep 9, 2016 at 11:45 PM, Jason Gustafson  wrote:

> Hi All,
>
> I've volunteered to be release manager for the upcoming 0.10.1 release and
> would like to propose the following timeline:
>
> Feature Freeze (Sep. 19): The 0.10.1 release branch will be created.
> Code Freeze (Oct. 3): The first RC will go out.
> Final Release (~Oct. 17): Assuming no blocking issues remain, the final
> release will be cut.
>
> The purpose of the time between the feature freeze and code freeze is to
> stabilize the set of release features. We will continue to accept bug fixes
> during this time and new system tests, but no new features will be merged
> into the release branch (they will continue to be accepted in trunk,
> however). After the code freeze, only blocking bug fixes will be accepted.
> Features which cannot be completed in time will have to await the next
> release cycle.
>
> This is the first iteration of the time-based release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan.
> Note that the final release is scheduled for October 17, so we have a little
> more than a month to prepare.
>
> Features which have already been merged to trunk and will be included in
> this release include the following:
>
> KIP-4 (partial): Add request APIs to create and delete topics
> KIP-33: Add time-based index
> KIP-60: Make Java client classloading more flexible
> KIP-62: Allow consumer to send heartbeats from a background thread
> KIP-65: Expose timestamps to Connect
> KIP-67: Queryable state for Kafka Streams
> KIP-71: Enable log compaction and deletion to co-exist
> KIP-75: Add per-connector Converters
>
> Since this is the first time-based release, we propose to also include the
> following KIPs which already have a patch available and have undergone some
> review:
>
> KIP-58: Make log compaction point configurable
> KIP-63: Unify store and downstream caching in streams
> KIP-70: Revise consumer partition assignment semantics
> KIP-73: Replication quotas
> KIP-74: Add fetch response size limit in bytes
> KIP-78: Add clusterId
>
> One of the goals of time-based releases is to avoid the rush to get
> unstable features in before the release deadline. If a feature is not ready
> now, the next release window is never far away. This helps to ensure the
> overall quality of the release. We've drawn the line for this release based
> on feature progress and code review. For features which can't get in this
> time, don't worry since January will be here soon!
>
> Please let me know if you have any feedback on this plan.
>
> Thanks!
> Jason
>


Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-10 Thread Ismael Juma
+1 from me too.

Thanks to everyone that voted and participated in the discussion. The KIP
passed with 7 binding +1s (Sriram, Neha, Jason, Guozhang, Gwen, Jun,
myself) and 2 non-binding +1s (Rajini, Grant).

Ismael

On Fri, Sep 9, 2016 at 1:32 AM, Jun Rao  wrote:

> Thanks for the writeup. +1.
>
> Jun
>
> On Tue, Sep 6, 2016 at 7:46 PM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > I would like to (re)initiate[1] the voting process for KIP-78 Cluster Id:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> >
> > As explained in the KIP and discussion thread, we see this as a good first
> > step that can serve as a foundation for future improvements.
> >
> > Thanks,
> > Ismael
> >
> > [1] Even though I created a new vote thread, Gmail placed the messages in
> > the discuss thread, making it not as visible as required. It's important
> > to mention that two +1s were cast by Gwen and Sriram:
> >
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%40mail.gmail.com%3E
> >
>


[jira] [Updated] (KAFKA-4093) Cluster id

2016-09-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4093:
---
Status: Patch Available  (was: Open)

> Cluster id
> --
>
> Key: KAFKA-4093
> URL: https://issues.apache.org/jira/browse/KAFKA-4093
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Sumit Arrawatia
>
> The details can be found in the Cluster Id KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4093) Cluster id

2016-09-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479361#comment-15479361
 ] 

Ismael Juma commented on KAFKA-4093:


Pull request: https://github.com/apache/kafka/pull/1830

> Cluster id
> --
>
> Key: KAFKA-4093
> URL: https://issues.apache.org/jira/browse/KAFKA-4093
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Sumit Arrawatia
>
> The details can be found in the Cluster Id KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4093) Cluster id

2016-09-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479360#comment-15479360
 ] 

Ismael Juma commented on KAFKA-4093:


Pull request: https://github.com/apache/kafka/pull/1830

> Cluster id
> --
>
> Key: KAFKA-4093
> URL: https://issues.apache.org/jira/browse/KAFKA-4093
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Sumit Arrawatia
>
> The details can be found in the Cluster Id KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4093) Cluster id

2016-09-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4093:
---
Fix Version/s: 0.10.1.0

> Cluster id
> --
>
> Key: KAFKA-4093
> URL: https://issues.apache.org/jira/browse/KAFKA-4093
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Sumit Arrawatia
> Fix For: 0.10.1.0
>
>
> The details can be found in the Cluster Id KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-4093) Cluster id

2016-09-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4093:
---
Comment: was deleted

(was: Pull request: https://github.com/apache/kafka/pull/1830)

> Cluster id
> --
>
> Key: KAFKA-4093
> URL: https://issues.apache.org/jira/browse/KAFKA-4093
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Sumit Arrawatia
> Fix For: 0.10.1.0
>
>
> The details can be found in the Cluster Id KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-10 Thread Ismael Juma
Becket, comments inline.

On Fri, Sep 9, 2016 at 6:55 PM, Becket Qin  wrote:

> Completely agree that we should have a consistent representation of
> something missing/unknown in the code.
>
> My understanding of the convention is that -1 means "not
> available"/"unknown". For example, when a producer request fails, the offset
> we use in the callback is -1. And Message.NoTimestamp is also -1.


ProducerRecord uses a nullable timestamp instead of Record.NO_TIMESTAMP to
indicate a missing value though. That was the example in my original
message.

> Using -1
> instead of null has at least two benefits.
> 1) it works for primitive types as well as classes.
>

This isn't a benefit outside of very specific use-cases (where the cost of
boxing is a problem). The wrapper classes can be used, after all. The
downside of using -1 is that the type system doesn't help you (there's a
reason why they're called magic values). If you see a `java.lang.Long`, you
have a hint that `null` is a valid value. The other problem with a -1 for a
timestamp is that you may do nonsensical comparisons against it without
error. When using `null`, it will fail fast with an NPE so that you can fix
it (even better would be to get compile-time errors, but let's leave that
out of this discussion for now).


> 2) it is easy to send via wire protocols
>

I think it's important to distinguish what we do in the wire protocols
(which may be more low-level) from what we do in user-facing APIs (where
usability and safety are very important).

Ismael
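
To make the trade-off above concrete, here is a minimal, self-contained Java
sketch (the class, variable names, and NO_TIMESTAMP constant are illustrative
stand-ins, not the KIP-79 API): the -1 sentinel lets a nonsensical comparison
run silently, while the nullable java.lang.Long fails fast at the point of
misuse.

{code}
// Hypothetical illustration: -1 sentinel vs. nullable Long for a missing
// timestamp. Not actual Kafka client API.
public class SentinelVsNullable {
    static final long NO_TIMESTAMP = -1L; // magic-value convention

    public static void main(String[] args) {
        long sentinelTs = NO_TIMESTAMP; // "unknown", primitive convention
        Long nullableTs = null;         // "unknown", nullable convention

        // The sentinel version compiles and runs without complaint: -1 sorts
        // before every real timestamp, so the bug propagates silently.
        if (sentinelTs < System.currentTimeMillis())
            System.out.println("unknown silently treated as a very old timestamp");

        // The nullable version throws at the point of misuse: unboxing null
        // raises a NullPointerException, forcing the caller to handle it.
        try {
            if (nullableTs < System.currentTimeMillis())
                System.out.println("unreachable");
        } catch (NullPointerException e) {
            System.out.println("fail-fast: missing value must be handled explicitly");
        }
    }
}
{code}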


Re: [DISCUSS] Kafka 0.10.1.0 Release Plan

2016-09-10 Thread Rajini Sivaram
Would it be possible to include KIP-55: Secure Quotas as well? The KIP was
approved a while ago and the PR was submitted several weeks ago. I was
hoping it would get reviewed in time for the next release. Jun had said he
would take a look.


Thank you,

Rajini

On Sat, Sep 10, 2016 at 8:26 AM, Ismael Juma  wrote:

> Jason, thanks for putting this together and driving the release. Your
> proposal sounds good to me. It would be nice to create a wiki page with the
> information in this email. See the following for the one that Gwen put
> together for 0.10.0:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.0
>
> Also, you merged KIP-70 recently so that can be moved to the completed
> section.
>
> Ismael
>
> On Fri, Sep 9, 2016 at 11:45 PM, Jason Gustafson 
> wrote:
>
> > Hi All,
> >
> > I've volunteered to be release manager for the upcoming 0.10.1 release and
> > would like to propose the following timeline:
> >
> > Feature Freeze (Sep. 19): The 0.10.1 release branch will be created.
> > Code Freeze (Oct. 3): The first RC will go out.
> > Final Release (~Oct. 17): Assuming no blocking issues remain, the final
> > release will be cut.
> >
> > The purpose of the time between the feature freeze and code freeze is to
> > stabilize the set of release features. We will continue to accept bug fixes
> > during this time and new system tests, but no new features will be merged
> > into the release branch (they will continue to be accepted in trunk,
> > however). After the code freeze, only blocking bug fixes will be accepted.
> > Features which cannot be completed in time will have to await the next
> > release cycle.
> >
> > This is the first iteration of the time-based release plan:
> > https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan.
> > Note that the final release is scheduled for October 17, so we have a
> > little more than a month to prepare.
> >
> > Features which have already been merged to trunk and will be included in
> > this release include the following:
> >
> > KIP-4 (partial): Add request APIs to create and delete topics
> > KIP-33: Add time-based index
> > KIP-60: Make Java client classloading more flexible
> > KIP-62: Allow consumer to send heartbeats from a background thread
> > KIP-65: Expose timestamps to Connect
> > KIP-67: Queryable state for Kafka Streams
> > KIP-71: Enable log compaction and deletion to co-exist
> > KIP-75: Add per-connector Converters
> >
> > Since this is the first time-based release, we propose to also include the
> > following KIPs which already have a patch available and have undergone some
> > review:
> >
> > KIP-58: Make log compaction point configurable
> > KIP-63: Unify store and downstream caching in streams
> > KIP-70: Revise consumer partition assignment semantics
> > KIP-73: Replication quotas
> > KIP-74: Add fetch response size limit in bytes
> > KIP-78: Add clusterId
> >
> > One of the goals of time-based releases is to avoid the rush to get
> > unstable features in before the release deadline. If a feature is not ready
> > now, the next release window is never far away. This helps to ensure the
> > overall quality of the release. We've drawn the line for this release based
> > on feature progress and code review. For features which can't get in this
> > time, don't worry since January will be here soon!
> >
> > Please let me know if you have any feedback on this plan.
> >
> > Thanks!
> > Jason
> >
>



-- 
Regards,

Rajini


Build failed in Jenkins: kafka-trunk-jdk7 #1531

2016-09-10 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4145; Avoid redundant integration testing in ProducerSendTests

--
[...truncated 3422 lines...]
kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack FAILED
java.lang.AssertionError: Topic metadata is not correctly updated for 
broker kafka.server.KafkaServer@5b35fc8a.
Expected ISR: List(BrokerEndPoint(0,localhost,36796), 
BrokerEndPoint(1,localhost,50542))
Actual ISR  : 

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBroker

Build failed in Jenkins: kafka-trunk-jdk8 #874

2016-09-10 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4145; Avoid redundant integration testing in ProducerSendTests

--
[...truncated 13777 lines...]
org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus STARTED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus STARTED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus STARTED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus STARTED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeOverridesValueSetBySameWorker STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeOverridesValueSetBySameWorker PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState STARTED

org.apache.kafka.

[jira] [Created] (KAFKA-4149) java.lang.NoSuchMethodError when running streams tests

2016-09-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4149:
--

 Summary: java.lang.NoSuchMethodError when running streams tests
 Key: KAFKA-4149
 URL: https://issues.apache.org/jira/browse/KAFKA-4149
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma


This started happening recently; it may be related to upgrading to Gradle 3:

{code}
java.lang.NoSuchMethodError: 
scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at kafka.utils.MockScheduler.<init>(MockScheduler.scala:38)
at kafka.utils.MockTime.<init>(MockTime.scala:35)
at kafka.utils.MockTime.<init>(MockTime.scala:37)
at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.<init>(EmbeddedKafkaCluster.java:44)
at org.apache.kafka.streams.KafkaStreamsTest.<clinit>(KafkaStreamsTest.java:42)
{code}

https://builds.apache.org/job/kafka-trunk-jdk7/1530/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/classMethod/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4149) java.lang.NoSuchMethodError when running streams tests

2016-09-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479599#comment-15479599
 ] 

Ismael Juma commented on KAFKA-4149:


The problem seems to go away if the `jarAll` task that runs before `testAll` is
removed, which I have done in the Jenkins jobs for trunk (jdk7 and jdk8) and PRs.
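
Concretely, the job change presumably amounts to dropping the extra task from
the Gradle invocation (the exact Jenkins build steps are an assumption; the
`jarAll` and `testAll` task names come from Kafka's own build):

{code}
# Before (hypothetical Jenkins build step): build jars for all Scala
# versions, then run the full test suite.
./gradlew jarAll testAll

# After: run the tests without the preceding jarAll.
./gradlew testAll
{code}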

> java.lang.NoSuchMethodError when running streams tests
> --
>
> Key: KAFKA-4149
> URL: https://issues.apache.org/jira/browse/KAFKA-4149
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>
> This started happening recently; it may be related to upgrading to Gradle 3:
> {code}
> java.lang.NoSuchMethodError: 
> scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
>   at kafka.utils.MockScheduler.<init>(MockScheduler.scala:38)
>   at kafka.utils.MockTime.<init>(MockTime.scala:35)
>   at kafka.utils.MockTime.<init>(MockTime.scala:37)
>   at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.<init>(EmbeddedKafkaCluster.java:44)
>   at org.apache.kafka.streams.KafkaStreamsTest.<clinit>(KafkaStreamsTest.java:42)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk7/1530/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/classMethod/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-10 Thread Becket Qin
Hi Ismael,

Got it. I agree that it is safer to use java.lang.Long instead of primitive
long. Returning null sounds reasonable in our case, where performance is
not a major concern. I will make the change in the KIP wiki.

Regarding consistency, I am not sure how big the impact would be if we got
rid of all the primitive-type fields in the messages (which is a backwards
incompatible change). If it is a problem, we may still end up with an
inconsistent representation of missing values.

Jiangjie (Becket) Qin

On Sat, Sep 10, 2016 at 1:01 AM, Ismael Juma  wrote:

> Becket, comments inline.
>
> On Fri, Sep 9, 2016 at 6:55 PM, Becket Qin  wrote:
>
> > Completely agree that we should have a consistent representation of
> > something missing/unknown in the code.
> >
> > My understanding of the convention is that -1 means "not
> > available"/"unknown". For example, when a producer request fails, the offset
> > we use in the callback is -1. And Message.NoTimestamp is also -1.
>
>
> ProducerRecord uses a nullable timestamp instead of Record.NO_TIMESTAMP to
> indicate a missing value though. That was the example in my original
> message.
>
> Using -1
> > instead of null has at least two benefits.
> > 1) it works for primitive types as well as classes.
> >
>
> This isn't a benefit outside of very specific use-cases (where the cost of
> boxing is a problem). The wrapper classes can be used, after all. The
> downside of using -1 is that the type system doesn't help you (there's a
> reason why they're called magic values). If you see a `java.lang.Long`, you
> have a hint that `null` is a valid value. The other problem with a -1 for a
> timestamp is that you may do nonsensical comparisons against it without
> error. When using `null`, it will fail fast with an NPE so that you can fix
> it (even better would be to get compile-time errors, but let's leave that
> out of this discussion for now).
>
>
> > 2) it is easy to send via wire protocols
> >
>
> I think it's important to distinguish what we do in the wire protocols
> (which may be more low-level) from what we do in user-facing APIs (where
> usability and safety are very important).
>
> Ismael
>