[jira] [Created] (KAFKA-6653) Delayed operations may not be completed when there is lock contention

2018-03-14 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6653:
-

 Summary: Delayed operations may not be completed when there is 
lock contention
 Key: KAFKA-6653
 URL: https://issues.apache.org/jira/browse/KAFKA-6653
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.0.1, 0.11.0.2, 1.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


If there is lock contention while multiple threads check if a delayed operation 
may be completed (e.g. a produce request with acks=-1), only the thread that 
acquires the lock without blocking attempts to complete the operation. This 
change was made to avoid deadlocks under KAFKA-5970. But this leaves a timing 
window when an operation becomes ready to complete after another thread has 
acquired the lock and performed the check for completion, but not yet released 
the lock. In this case, the operation may never be completed and will time out 
unless there are other operations with the same key. The timeout was observed 
in a failed system test where a produce request timed out, causing the test 
failure.
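
A minimal sketch of the pattern in question (illustrative only, not the actual 
broker code; tryComplete() stands in for the operation's completion check):

{code:java}
// Completion is attempted only by the thread that wins tryLock(); a loser
// simply returns, assuming the winner will complete the operation if possible.
boolean maybeTryComplete(java.util.concurrent.locks.ReentrantLock lock) {
    if (!lock.tryLock())
        return false;          // T2: state just became completable, but T2 gives up here
    try {
        return tryComplete();  // T1: checked before T2's state change landed
    } finally {
        lock.unlock();         // T1: releases without re-checking, so the
    }                          // operation may never be completed
}
{code}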

 





Add me to contributor list

2018-03-14 Thread Jimin Hsieh
Hi,

Would it be possible to add me to the contributor list? My jira username:
JiminHsieh

Thanks,
Jimin Hsieh


Re: [VOTE] 1.1.0 RC2

2018-03-14 Thread Damian Guy
Thanks for pointing that out, Satish. Links updated:



Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.1.0.

This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546

A few highlights:

* Significant Controller improvements (much faster and session expiration
edge cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)

Release notes for the 1.1.0 release:
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Friday, March 16, 1pm PDT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/

* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
https://github.com/apache/kafka/tree/1.1.0-rc2


* Documentation:
http://kafka.apache.org/11/documentation.html


* Protocol:
http://kafka.apache.org/11/protocol.html


* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/38/

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546

On Wed, 14 Mar 2018 at 04:41 Satish Duggana 
wrote:

> Hi Damian,
> Thanks for starting vote thread for 1.1.0 release.
>
> There may be a typo on the tag to be voted upon for this release candidate.
> I guess it should be https://github.com/apache/kafka/tree/1.1.0-rc2
> instead
> of https://github.com/apache/kafka/tree/1.1.0-rc.
>
> On Wed, Mar 14, 2018 at 8:27 AM, Satish Duggana 
> wrote:
>
> > Hi Damian,
> > The release plan link given in the earlier mail is about the 1.0 release.
> > You may want to replace that with the 1.1.0 release plan link [1].
> >
> > 1 - https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=75957546
> >
> > Thanks,
> > Satish.
> >
> > On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy 
> wrote:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the third candidate for release of Apache Kafka 1.1.0.
> >>
> >> This is a minor version release of Apache Kafka. It includes 29 new KIPs.
> >> Please see the release plan for more details:
> >>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
> >>
> >> A few highlights:
> >>
> >> * Significant Controller improvements (much faster and session
> expiration
> >> edge cases fixed)
> >> * Data balancing across log directories (JBOD)
> >> * More efficient replication when the number of partitions is large
> >> * Dynamic Broker Configs
> >> * Delegation tokens (KIP-48)
> >> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> >>
> >> Release notes for the 1.1.0 release:
> >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Friday, March 16, 1pm PDT
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> http://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/
> >>
> >> * Javadoc:
> >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
> >>
> >> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> >> https://github.com/apache/kafka/tree/1.1.0-rc
> >> 2
> >>
> >>
> >> * Documentation:
> >> http://kafka.apache.org/11/documentation.html
> >> 
> >>
> >> * Protocol:
> >> http://kafka.apache.org/11/protocol.html
> >> 
> >>
> >> * Successful Jenkins builds for the 1.1 branch:
> >> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
> >> System tests: https://jenkins.confluent.io/j
> >> ob/system-test-kafka/job/1.1/38/
> >>
> >> /**
> >>
> >> Thanks,
> >> Damian
> >>
> >>
> >> *
> >>
> >
> >
>


[jira] [Created] (KAFKA-6654) Customize SSLContext creation

2018-03-14 Thread Robert Wruck (JIRA)
Robert Wruck created KAFKA-6654:
---

 Summary: Customize SSLContext creation
 Key: KAFKA-6654
 URL: https://issues.apache.org/jira/browse/KAFKA-6654
 Project: Kafka
  Issue Type: Improvement
  Components: config
Affects Versions: 1.0.0
Reporter: Robert Wruck


Currently, loading of the SSL keystore and truststore always uses a FileInputStream 
(SslFactory.SecurityStore) and cannot be changed to load keystores from other 
locations such as the classpath, raw byte arrays, etc.

Furthermore, passwords for the key stores have to be provided as plaintext 
configuration properties.

Delegating the creation of an SSLContext to a customizable implementation might 
solve some more issues such as KAFKA-5519, KAFKA-4933, KAFKA-4294, KAFKA-2629 
by enabling Kafka users to implement their own.
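
A rough sketch of the kind of pluggable hook being proposed (the interface name 
and method below are illustrations, not an existing Kafka API):

{code:java}
import java.security.GeneralSecurityException;
import java.util.Map;
import javax.net.ssl.SSLContext;

// Hypothetical extension point: implementations could load key material from
// the classpath, byte arrays, an external secret store, etc.
public interface SslContextProvider {
    SSLContext createSslContext(Map<String, ?> configs) throws GeneralSecurityException;
}
{code}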





Build failed in Jenkins: kafka-1.0-jdk7 #168

2018-03-14 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Remove kafka-consumer-offset-checker.bat for KAFKA-3356 (#4703)

[jason] KAFKA-3978; Ensure high watermark is always positive (#4695)

--
[...truncated 372.02 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange 

Zookeeper dependency

2018-03-14 Thread Tom Bentley
I saw a comment in https://issues.apache.org/jira/browse/KAFKA-6598 about
the possibility of removing Kafka's dependency on Zookeeper.

While I'm sure the promised KIP will have a lot of the interesting
technical details, I was wondering if anyone was able to shed any light on
this in terms of what sort of timeframe any removal might happen in. Or any
idea about when the KIP itself might be published?

Many thanks,

Tom


[jira] [Created] (KAFKA-6655) CleanupThread] Failed to lock the state directory due to an unexpected exception

2018-03-14 Thread Srini (JIRA)
Srini created KAFKA-6655:


 Summary: CleanupThread] Failed to lock the state directory due to 
an unexpected exception
 Key: KAFKA-6655
 URL: https://issues.apache.org/jira/browse/KAFKA-6655
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.1
Reporter: Srini


CleanupThread] Failed to lock the state directory due to an unexpected exception

java.nio.file.DirectoryNotEmptyException: \tmp\kafka-streams\srini-20171208\0_9
 at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
 at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 at java.nio.file.Files.delete(Files.java:1126)
 at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:636)
 at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:619)
 at java.nio.file.Files.walkFileTree(Files.java:2688)
 at java.nio.file.Files.walkFileTree(Files.java:2742)
 at org.apache.kafka.common.utils.Utils.delete(Utils.java:619)
 at 
org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:245)
 at org.apache.kafka.streams.KafkaStreams$3.run(KafkaStreams.java:761)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)





Re: [VOTE] 1.1.0 RC2

2018-03-14 Thread Ted Yu
+1

Ran test suite - passed (apart from testMetricsLeak which is flaky).

On Wed, Mar 14, 2018 at 3:30 AM, Damian Guy  wrote:

> Thanks for pointing that out, Satish. Links updated:
>
> 
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.1.0.
>
> This is a minor version release of Apache Kafka. It includes 29 new KIPs.
> Please see the release plan for more details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
>
> A few highlights:
>
> * Significant Controller improvements (much faster and session expiration
> edge cases fixed)
> * Data balancing across log directories (JBOD)
> * More efficient replication when the number of partitions is large
> * Dynamic Broker Configs
> * Delegation tokens (KIP-48)
> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
>
> Release notes for the 1.1.0 release:
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, March 16, 1pm PDT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> https://github.com/apache/kafka/tree/1.1.0-rc2
>
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
> 
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
> 
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/
> 38/
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
>
> On Wed, 14 Mar 2018 at 04:41 Satish Duggana 
> wrote:
>
> > Hi Damian,
> > Thanks for starting vote thread for 1.1.0 release.
> >
> > There may be a typo on the tag to be voted upon for this release
> candidate.
> > I guess it should be https://github.com/apache/kafka/tree/1.1.0-rc2
> > instead
> > of https://github.com/apache/kafka/tree/1.1.0-rc.
> >
> > On Wed, Mar 14, 2018 at 8:27 AM, Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Hi Damian,
> > > The release plan link given in the earlier mail is about the 1.0 release.
> > > You may want to replace that with the 1.1.0 release plan link [1].
> > >
> > > 1 - https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=75957546
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy 
> > wrote:
> > >
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >> This is the third candidate for release of Apache Kafka 1.1.0.
> > >>
> > >> This is a minor version release of Apache Kafka. It includes 29 new
> > >> KIPs.
> > >> Please see the release plan for more details:
> > >>
> > >>
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=71764913
> > >>
> > >> A few highlights:
> > >>
> > >> * Significant Controller improvements (much faster and session
> > expiration
> > >> edge cases fixed)
> > >> * Data balancing across log directories (JBOD)
> > >> * More efficient replication when the number of partitions is large
> > >> * Dynamic Broker Configs
> > >> * Delegation tokens (KIP-48)
> > >> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> > >>
> > >> Release notes for the 1.1.0 release:
> > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
> > >>
> > >> *** Please download, test and vote by Friday, March 16, 1pm PDT
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >> http://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> > >>
> > >> * Maven artifacts to be voted upon:
> > >> https://repository.apache.org/content/groups/staging/
> > >>
> > >> * Javadoc:
> > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
> > >>
> > >> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> > >> https://github.com/apache/kafka/tree/1.1.0-rc
> > >> 2
> > >>
> > >>
> > >> * Documentation:
> > >> http://kafka.apache.org/11/documentation.html
> > >> 
> > >>
> > >> * Protocol:
> > >> http://kafka.apache.org/11/protocol.html
> > >> 
> > >>
> > >> * Successful Jenkins builds for the 1.1 branch:
> > >> Unit/integration tests: https://builds.apache.org/job/
> kafka-1.1-jdk7/78
> > >> System tests: https://jenki

[jira] [Created] (KAFKA-6656) Use non-zero status code when kafka-configs.sh fails

2018-03-14 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6656:
--

 Summary: Use non-zero status code when kafka-configs.sh fails
 Key: KAFKA-6656
 URL: https://issues.apache.org/jira/browse/KAFKA-6656
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently we return status 0 from kafka-configs.sh even if the command raises 
an error. It would be better to use a non-zero status code so that the tool can 
be scripted more easily.
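
Roughly the desired behaviour (an illustrative sketch; runConfigCommand is a 
placeholder, not the tool's actual entry point):

{code:java}
public static void main(String[] args) {
    try {
        runConfigCommand(args);  // placeholder for the tool's command logic
    } catch (Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);  // non-zero status lets callers do: kafka-configs.sh ... || handle_failure
    }
}
{code}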





Build failed in Jenkins: kafka-trunk-jdk8 #2475

2018-03-14 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Correct "exeption" to "exception" in connect docs (#4709)

--
[...truncated 416.62 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtoc

Re: Add me to contributor list

2018-03-14 Thread Jun Rao
Hi, Jimin,

Thanks for your interest. Added you to the contributor list.

Jun

On Wed, Mar 14, 2018 at 3:17 AM, Jimin Hsieh 
wrote:

> Hi,
>
> Would it be possible to add me to the contributor list? My jira username:
> JiminHsieh
>
> Thanks,
> Jimin Hsieh
>


[jira] [Created] (KAFKA-6657) Add StreamsConfig prefix for different consumers

2018-03-14 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6657:
--

 Summary: Add StreamsConfig prefix for different consumers
 Key: KAFKA-6657
 URL: https://issues.apache.org/jira/browse/KAFKA-6657
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.1.0
Reporter: Matthias J. Sax


Kafka Streams allows users to pass in different configs for different clients by 
prefixing the corresponding parameter with `producer.` or `consumer.`.

However, Kafka Streams internally uses multiple consumers: (1) the main 
consumer, (2) the restore consumer, and (3) the global consumer (which is 
currently a restore consumer as well).

For some use cases, it's required to set different configs for different 
consumers. Thus, we should add two new prefixes, for the restore and global 
consumers. We might also consider extending `KafkaClientSupplier` with a 
`getGlobalConsumer()` method.
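
For illustration, the existing prefixes in use, plus the kind of prefix this 
ticket proposes (the `restore.consumer.` line is the proposal, not an existing 
config):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// existing prefixes: applied to the (main) consumer and the producer
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);
props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), 5);
// proposed (hypothetical): a separate override for the restore consumer only
// props.put("restore.consumer." + ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
{code}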





[jira] [Created] (KAFKA-6658) Fix RoundTripWorkload and make k/v generation configurable

2018-03-14 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-6658:
--

 Summary: Fix RoundTripWorkload and make k/v generation configurable
 Key: KAFKA-6658
 URL: https://issues.apache.org/jira/browse/KAFKA-6658
 Project: Kafka
  Issue Type: Bug
  Components: system tests, unit tests
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Fixes RoundTripWorkload. Currently RoundTripWorkload is unable to get the 
sequence number of the keys that it produced.

Also, make PayloadGenerator an interface with multiple implementations 
(constant, uniform random, sequential), and allow different payload generators 
to be used for keys and values.
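
A sketch of the interface shape this describes (method name and signature are 
assumptions based on the description above, not a committed API):

{code:java}
public interface PayloadGenerator {
    // position is the sequence number of the message being generated, so a
    // sequential implementation can recover the key's sequence number later.
    byte[] generate(long position);
}
{code}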





Add me to the contributor list

2018-03-14 Thread Sirisha Sindiri
Hi

Could you please add me to the contributor list?
My jira name is Sindiri.

Thanks,
Sirisha Sindiri


[jira] [Created] (KAFKA-6659) Improve error message if state store is not found

2018-03-14 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6659:
--

 Summary: Improve error message if state store is not found
 Key: KAFKA-6659
 URL: https://issues.apache.org/jira/browse/KAFKA-6659
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


If a processor tries to access a store but the store is not connected to the 
processor, Streams fails with
{quote}Caused by: org.apache.kafka.streams.errors.TopologyBuilderException: 
Invalid topology building: Processor KSTREAM-TRANSFORM-36 has no access 
to StateStore questions-awaiting-answers-store
{quote}
We should improve this error message and give the user a hint about how to fix 
the issue.
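
For context, the usual fix hinted at above is to connect the store when 
building the topology; a sketch against the 1.0 API (MyTransformer stands in 
for the user's Transformer implementation):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();
// register the store with the topology...
builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("questions-awaiting-answers-store"),
        Serdes.String(), Serdes.String()));
KStream<String, String> stream = builder.stream("input-topic");
// ...and name it in transform() so the processor is granted access
stream.transform(MyTransformer::new, "questions-awaiting-answers-store");
{code}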





Re: Add me to the contributor list

2018-03-14 Thread Jun Rao
Hi, Sirisha,

Thanks for your interest. Just added you to the contributor list.

Jun

On Wed, Mar 14, 2018 at 12:05 PM, Sirisha Sindiri  wrote:

> Hi
>
> Could you please add me to the contributor list?
> My jira name is Sindiri.
>
> Thanks,
> Sirisha Sindiri
>


[jira] [Created] (KAFKA-6660) Monitoring number of messages expired due to retention policy

2018-03-14 Thread Matt Garbis (JIRA)
Matt Garbis created KAFKA-6660:
--

 Summary: Monitoring number of messages expired due to retention 
policy
 Key: KAFKA-6660
 URL: https://issues.apache.org/jira/browse/KAFKA-6660
 Project: Kafka
  Issue Type: Improvement
Reporter: Matt Garbis


I have not been able to find this out, but is there a way to monitor how many 
messages were expired based on the retention policy? If not, JMX metrics like 
this would be very useful for my team. I would like to be able to filter this 
by topic and broker. Something like:
{code:java}
kafka.server:type=BrokerTopicMetrics,name=MessagesExpiredPerSec{code}
 

Additionally, taking this one step further, it would be cool to be able to 
monitor how many messages a consumer group did not consume before they were 
expired.
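
For concreteness, a sketch of how such a metric could be read over JMX if it 
existed (the MBean name is this ticket's proposal, not a metric Kafka exposes 
today):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
    MBeanServerConnection conn = connector.getMBeanServerConnection();
    // proposed metric, filtered by topic as suggested above
    ObjectName metric = new ObjectName(
            "kafka.server:type=BrokerTopicMetrics,name=MessagesExpiredPerSec,topic=my-topic");
    System.out.println("expired/sec: " + conn.getAttribute(metric, "OneMinuteRate"));
}
{code}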





Re: Add me to the contributor list

2018-03-14 Thread Sirisha Sindiri
Thank you Jun !!
I am trying to assign https://issues.apache.org/jira/browse/SPARK-23685 to
myself but I could not.
Will you be able to help me out?

Thanks,
Sirisha

On Wed, Mar 14, 2018 at 3:20 PM, Jun Rao  wrote:

> Hi, Sirisha,
>
> Thanks for your interest. Just added you to the contributor list.
>
> Jun
>
> On Wed, Mar 14, 2018 at 12:05 PM, Sirisha Sindiri <
> sirisha.sind...@gmail.com
> > wrote:
>
> > Hi
> >
> > Could you please add me to the contributor list?
> > My jira name is Sindiri.
> >
> > Thanks,
> > Sirisha Sindiri
> >
>


Re: Add me to the contributor list

2018-03-14 Thread Jun Rao
Hi, Sirisha,

I could only add you to the Kafka contributor list. For Spark jiras, you
would need to ask permission from Spark committers.

Thanks,

Jun

On Wed, Mar 14, 2018 at 1:49 PM, Sirisha Sindiri 
wrote:

> Thank you Jun !!
> I am trying to assign https://issues.apache.org/jira/browse/SPARK-23685 to
> myself but I could not.
> Will you be able to help me out?
>
> Thanks,
> Sirisha
>
> On Wed, Mar 14, 2018 at 3:20 PM, Jun Rao  wrote:
>
> > Hi, Sirisha,
> >
> > Thanks for your interest. Just added you to the contributor list.
> >
> > Jun
> >
> > On Wed, Mar 14, 2018 at 12:05 PM, Sirisha Sindiri <
> > sirisha.sind...@gmail.com
> > > wrote:
> >
> > > Hi
> > >
> > > Could you please add me to the contributor list?
> > > My jira name is Sindiri.
> > >
> > > Thanks,
> > > Sirisha Sindiri
> > >
> >
>


Re: Add me to the contributor list

2018-03-14 Thread Sirisha Sindiri
Sure Thank you!!

On Wed, Mar 14, 2018 at 3:51 PM, Jun Rao  wrote:

> Hi, Sirisha,
>
> I could only add you to the Kafka contributor list. For Spark jiras, you
> would need to ask permission from Spark committers.
>
> Thanks,
>
> Jun
>
> On Wed, Mar 14, 2018 at 1:49 PM, Sirisha Sindiri <
> sirisha.sind...@gmail.com>
> wrote:
>
> > Thank you Jun !!
> > I am trying to assign https://issues.apache.org/jira/browse/SPARK-23685
> to
> > myself but I could not.
> > Will you be able to help me out?
> >
> > Thanks,
> > Sirisha
> >
> > On Wed, Mar 14, 2018 at 3:20 PM, Jun Rao  wrote:
> >
> > > Hi, Sirisha,
> > >
> > > Thanks for your interest. Just added you to the contributor list.
> > >
> > > Jun
> > >
> > > On Wed, Mar 14, 2018 at 12:05 PM, Sirisha Sindiri <
> > > sirisha.sind...@gmail.com
> > > > wrote:
> > >
> > > > Hi
> > > >
> > > > Could you please add me to the contributor list?
> > > > My jira name is Sindiri.
> > > >
> > > > Thanks,
> > > > Sirisha Sindiri
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-6655) CleanupThread] Failed to lock the state directory due to an unexpected exception

2018-03-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6655.

Resolution: Duplicate

> CleanupThread] Failed to lock the state directory due to an unexpected 
> exception
> 
>
> Key: KAFKA-6655
> URL: https://issues.apache.org/jira/browse/KAFKA-6655
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.1
>Reporter: Srini
>Priority: Major
>
> CleanupThread] Failed to lock the state directory due to an unexpected 
> exception
> java.nio.file.DirectoryNotEmptyException: 
> \tmp\kafka-streams\srini-20171208\0_9
>  at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
>  at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>  at java.nio.file.Files.delete(Files.java:1126)
>  at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:636)
>  at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:619)
>  at java.nio.file.Files.walkFileTree(Files.java:2688)
>  at java.nio.file.Files.walkFileTree(Files.java:2742)
>  at org.apache.kafka.common.utils.Utils.delete(Utils.java:619)
>  at 
> org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:245)
>  at org.apache.kafka.streams.KafkaStreams$3.run(KafkaStreams.java:761)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)





Re: [VOTE] 1.1.0 RC2

2018-03-14 Thread Harsha
+1
Ran tests
Ran a 3 node cluster to test basic operations.

Thanks,
Harsha

On Wed, Mar 14, 2018, at 9:04 AM, Ted Yu wrote:
> +1
> 
> Ran test suite - passed (apart from testMetricsLeak which is flaky).
> 
> On Wed, Mar 14, 2018 at 3:30 AM, Damian Guy  wrote:
> 
> > Thanks for pointing that out, Satish. Links updated:
> >
> > 
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 1.1.0.
> >
> > This is a minor version release of Apache Kafka. It includes 29 new KIPs.
> > Please see the release plan for more details:
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
> >
> > A few highlights:
> >
> > * Significant Controller improvements (much faster and session expiration
> > edge cases fixed)
> > * Data balancing across log directories (JBOD)
> > * More efficient replication when the number of partitions is large
> > * Dynamic Broker Configs
> > * Delegation tokens (KIP-48)
> > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> >
> > Release notes for the 1.1.0 release:
> > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Friday, March 16, 1pm PDT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> > https://github.com/apache/kafka/tree/1.1.0-rc2
> >
> >
> > * Documentation:
> > http://kafka.apache.org/11/documentation.html
> > 
> >
> > * Protocol:
> > http://kafka.apache.org/11/protocol.html
> > 
> >
> > * Successful Jenkins builds for the 1.1 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
> > System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/
> > 38/
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
> >
> > On Wed, 14 Mar 2018 at 04:41 Satish Duggana 
> > wrote:
> >
> > > Hi Damian,
> > > Thanks for starting vote thread for 1.1.0 release.
> > >
> > > There may be a typo on the tag to be voted upon for this release
> > candidate.
> > > I guess it should be https://github.com/apache/kafka/tree/1.1.0-rc2
> > > instead
> > > of https://github.com/apache/kafka/tree/1.1.0-rc.
> > >
> > > On Wed, Mar 14, 2018 at 8:27 AM, Satish Duggana <
> > satish.dugg...@gmail.com>
> > > wrote:
> > >
> > > > Hi Damian,
> > > > The release plan link given in the earlier mail is about the 1.0 release.
> > > > You may want to replace that with the 1.1.0 release plan link [1].
> > > >
> > > > 1 - https://cwiki.apache.org/confluence/pages/viewpage.
> > > > action?pageId=75957546
> > > >
> > > > Thanks,
> > > > Satish.
> > > >
> > > > On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy 
> > > wrote:
> > > >
> > > >> Hello Kafka users, developers and client-developers,
> > > >>
> > > >> This is the third candidate for release of Apache Kafka 1.1.0.
> > > >>
> > > >> This is a minor version release of Apache Kafka. It includes 29 new
> > > >> KIPs.
> > > >> Please see the release plan for more details:
> > > >>
> > > >>
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=71764913
> > > >>
> > > >> A few highlights:
> > > >>
> > > >> * Significant Controller improvements (much faster and session
> > > expiration
> > > >> edge cases fixed)
> > > >> * Data balancing across log directories (JBOD)
> > > >> * More efficient replication when the number of partitions is large
> > > >> * Dynamic Broker Configs
> > > >> * Delegation tokens (KIP-48)
> > > >> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> > > >>
> > > >> Release notes for the 1.1.0 release:
> > > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
> > > >>
> > > >> *** Please download, test and vote by Friday, March 16, 1pm PDT
> > > >>
> > > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > > >> http://kafka.apache.org/KEYS
> > > >>
> > > >> * Release artifacts to be voted upon (source and binary):
> > > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> > > >>
> > > >> * Maven artifacts to be voted upon:
> > > >> https://repository.apache.org/content/groups/staging/
> > > >>
> > > >> * Javadoc:
> > > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
> > > >>
> > > >> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> > > >> https://github.com/apache/kafka/tree/1.1.0-rc
> > > >> 2
> > > >>
> > > >>
> > > >> * Documentation:
>

Re: [DISCUSS] KIP-257 - Configurable Quota Management

2018-03-14 Thread Jun Rao
Hi, Rajini,

Thanks for the KIP. Looks good overall. A few comments below.

10. "If  quota config is used, *user* tag is set to user principal of
the session and *client-id* tag is set to empty string. " Could we just
omit such a tag if the value is empty?

11. I think Viktor has a valid point on handling partition removal.
Currently, we use -2 as the leader to signal the deletion of a partition.
Not sure if we want to depend on that in the interface since it's an
internal value.

12. Could you explain a bit more the need for quotaLimit()? This is called
after the updateQuota() call. Could we just let updateQuota do what
quotaLimit() does?

Jun

On Wed, Feb 21, 2018 at 10:57 AM, Rajini Sivaram 
wrote:

> Hi all,
>
> I have submitted KIP-257 to enable customisation of client quota
> computation:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 257+-+Configurable+Quota+Management
>
>
> The KIP proposes to make quota management pluggable to enable group-based
> and partition-based quotas for clients.
>
> Feedback and suggestions are welcome.
>
> Thank you...
>
> Regards,
>
> Rajini
>


[jira] [Resolved] (KAFKA-6640) Improve efficiency of KafkaAdminClient.describeTopics()

2018-03-14 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6640.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Improve efficiency of KafkaAdminClient.describeTopics()
> ---
>
> Key: KAFKA-6640
> URL: https://issues.apache.org/jira/browse/KAFKA-6640
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Major
> Fix For: 1.2.0
>
>
> Currently in KafkaAdminClient.describeTopics(), for each topic in the 
> request, a complete map of cluster and errors will be constructed for every 
> topic and partition. This unnecessarily increases the complexity of 
> describeTopics() to O(n^2).





[jira] [Created] (KAFKA-6661) Sink connectors that explicitly 'resume' topic partitions can resume a paused task

2018-03-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-6661:


 Summary: Sink connectors that explicitly 'resume' topic partitions 
can resume a paused task
 Key: KAFKA-6661
 URL: https://issues.apache.org/jira/browse/KAFKA-6661
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0, 0.11.0.0, 0.10.0.0, 0.9.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch


Sink connectors are allowed to use the {{SinkTaskContext}}'s methods to 
explicitly pause and resume topic partitions. This is useful when connectors 
need additional time processing the records for specific topic partitions 
(e.g., the external system has an outage).

However, when the sink connector has been paused via the REST API, the worker 
for the sink tasks pauses the consumer. When the connector is polled, the poll 
request might time out and return no records. Connect then calls the task's 
{{put(...)}} method (with no records), and this allows the task to optionally 
call any of the {{SinkTaskContext}}'s pause or resume methods. If it calls 
resume, this will unexpectedly resume the paused consumer, causing the consumer 
to return messages and the connector to process those messages -- despite the 
connector still being paused.

This is reported against 1.0, but the affected code has not been changed since 
at least 0.9.0.0.

A workaround is to remove rather than pause a connector. It's inconvenient, but 
it works.
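
To make the scenario concrete, a sketch of the pattern described above 
(externalSystemIsUp(), currentPartitions(), and the stalled set are 
placeholders for task-specific logic):

{code:java}
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class ExampleSinkTask extends SinkTask {
    // partitions this task paused itself while the external system was down
    private final Set<TopicPartition> stalled = new HashSet<>();

    @Override
    public void put(Collection<SinkRecord> records) {
        // put() may be called with an empty batch even while the *connector*
        // is paused via the REST API...
        if (!externalSystemIsUp()) {
            stalled.addAll(currentPartitions());
            context.pause(stalled.toArray(new TopicPartition[0]));
        } else if (!stalled.isEmpty()) {
            // ...in which case this resume() unexpectedly re-enables the
            // worker's paused consumer (the bug reported here).
            context.resume(stalled.toArray(new TopicPartition[0]));
            stalled.clear();
        }
        // ... write records to the external system ...
    }

    protected abstract boolean externalSystemIsUp();
    protected abstract Set<TopicPartition> currentPartitions();
}
{code}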





Re: [DISCUSS] KIP-257 - Configurable Quota Management

2018-03-14 Thread Rajini Sivaram
Hi Jun,

Thank you for reviewing the KIP.

10. This is the current behaviour (this KIP retains the same behaviour for
the default quota callback). We include 'user' and 'client-id' tags in all
the quota metrics, rather than omit tags at the moment.

11. Ah, I hadn't realised that. I wasn't expecting to include deleted
partitions in updatePartitionMetadata. I have updated the Javadoc in the
KIP to reflect that.

12. When quotas are updated as a result of `updateQuota` or
`updatePartitionMetadata`, we may need to update the quota bound for one or
more existing metrics. I didn't want to expose metrics to the callback, so
`quotaLimit` was providing the new quotas corresponding to existing metrics.
But perhaps a neater way to do this is to return updated quotas as the
return value of `updateQuota` and `updatePartitionMetadata` so that the
quota manager can handle metrics updates for those. I have updated the KIP.
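
To make that concrete, a rough sketch of the shape described above (names and
signatures are illustrative, not the final interface in the KIP):

    public interface ClientQuotaCallback {
        // Invoked on quota config changes; returns updated limits keyed by
        // the metric tags of any existing quota metrics that must change.
        Map<Map<String, String>, Double> updateQuota(
                String quotaType, Map<String, String> quotaEntity, double newLimit);

        // Invoked with current partition metadata (deleted partitions are
        // not included); same return convention as updateQuota.
        Map<Map<String, String>, Double> updatePartitionMetadata(
                Map<TopicPartition, Integer> partitionLeaders);
    }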


On Wed, Mar 14, 2018 at 9:57 PM, Jun Rao  wrote:

> Hi, Rajini,
>
> Thanks for the KIP. Looks good overall. A few comments below.
>
> 10. "If  quota config is used, *user* tag is set to user principal of
> the session and *client-id* tag is set to empty string. " Could we just
> omit such a tag if the value is empty?
>
> 11. I think Viktor has a valid point on handling partition removal.
> Currently, we use -2 as the leader to signal the deletion of a partition.
> Not sure if we want to depend on that in the interface since it's an
> internal value.
>
> 12. Could you explain a bit more the need for quotaLimit()? This is called
> after the updateQuota() call. Could we just let updateQuota do what
> quotaLimit() does?
>
> Jun
>
> On Wed, Feb 21, 2018 at 10:57 AM, Rajini Sivaram 
> wrote:
>
> > Hi all,
> >
> > I have submitted KIP-257 to enable customisation of client quota
> > computation:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 257+-+Configurable+Quota+Management
> >
> >
> > The KIP proposes to make quota management pluggable to enable group-based
> > and partition-based quotas for clients.
> >
> > Feedback and suggestions are welcome.
> >
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
>


Re: [DISCUSSION] KIP-266: Add TimeoutException to KafkaConsumer#position()

2018-03-14 Thread Richard Yu
Note to all: I have included bounding commitSync() and committed() in this
KIP.
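
For concreteness, the proposed overloads would be used along these lines (a
sketch of the proposal, not a released API; the exact timeout parameter type
was still under discussion):

    // sketch of the proposed timeout-bounded overloads
    long offset = consumer.position(partition, Duration.ofSeconds(30));
    consumer.commitSync(offsetsToCommit, Duration.ofSeconds(30));
    OffsetAndMetadata committed = consumer.committed(partition, Duration.ofSeconds(30));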

On Sun, Mar 11, 2018 at 5:05 PM, Richard Yu 
wrote:

> Hi all,
>
> I updated the KIP where overloading position() is now the favored approach.
> Bounding position() using requestTimeoutMs has been listed as rejected.
>
> Any thoughts?
>
> On Tue, Mar 6, 2018 at 6:00 PM, Guozhang Wang  wrote:
>
>> I agree that adding the overloads is most flexible. But going for that
>> direction we'd do that for all the blocking call that I've listed above,
>> with this timeout value covering the end-to-end waiting time.
>>
>>
>> Guozhang
>>
>> On Tue, Mar 6, 2018 at 10:02 AM, Ted Yu  wrote:
>>
>> > bq. The most flexible option is to add overloads to the consumer
>> >
>> > This option is flexible.
>> >
>> > Looking at the tail of SPARK-18057, Spark dev voiced the same choice.
>> >
>> > +1 for adding overload with timeout parameter.
>> >
>> > Cheers
>> >
>> > On Mon, Mar 5, 2018 at 2:42 PM, Jason Gustafson 
>> > wrote:
>> >
>> > > @Guozhang I probably have suggested all options at some point or
>> another,
>> > > including most recently, the current KIP! I was thinking that
>> practically
>> > > speaking, the request timeout defines how long the user is willing to
>> > wait
>> > > for a response. The consumer doesn't really have a complex send
>> process
>> > > like the producer for any of these APIs, so I wasn't sure how much
>> > benefit
>> > > there would be from having more granular control over timeouts (in the
>> > end,
>> > > KIP-91 just adds a single timeout to control the whole send). That
>> said,
>> > it
>> > > might indeed be better to avoid overloading the config as you suggest
>> > since
>> > > at least it avoids inconsistency with the producer's usage.
>> > >
>> > > The most flexible option is to add overloads to the consumer so that
>> > users
>> > > can pass the timeout directly. I'm not sure if that is more or less
>> > > annoying than a new config, but I've found config timeouts a little
>> > > constraining in practice. For example, I could imagine users wanting
>> to
>> > > wait longer for an offset commit operation than a position lookup; if
>> the
>> > > latter isn't timely, users can just pause the partition and continue
>> > > fetching on others. If you cannot commit offsets, however, it might be
>> > > safer for an application to wait availability of the coordinator than
>> > > continuing.
>> > >
>> > > -Jason
>> > >
>> > > On Sun, Mar 4, 2018 at 10:14 PM, Guozhang Wang 
>> > wrote:
>> > >
>> > > > Hello Richard,
>> > > >
>> > > > Thanks for the proposed KIP. I have a couple of general comments:
>> > > >
>> > > > 1. I'm not sure if piggy-backing the timeout exception on the
>> > > > existing requestTimeoutMs configured in "request.timeout.ms" is a
>> good
>> > > > idea
>> > > > since a) it is a general config that applies for all types of
>> requests,
>> > > and
>> > > > 2) using it to cover all the phases of an API call, including
>> network
>> > > round
>> > > > trip and potential metadata refresh is shown to not be a good idea,
>> as
>> > > > illustrated in KIP-91:
>> > > >
>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > 91+Provide+Intuitive+User+Timeouts+in+The+Producer
>> > > >
>> > > > In fact, I think in KAFKA-4879 which is aimed for the same issue as
>> > > > KAFKA-6608,
>> > > > Jason has suggested we use a new config for the API. Maybe this
>> would
>> > be
>> > > a
>> > > > more intuitive manner than reusing the request.timeout.ms config.
>> > > >
>> > > >
>> > > > 2. Besides the Consumer.position() call, there are a couple of more
>> > > > blocking calls today that could result in infinite blocking:
>> > > > Consumer.commitSync() and Consumer.committed(), should they be
>> > considered
>> > > > in this KIP as well?
>> > > >
>> > > > 3. There are a few other APIs that are today relying on
>> > > request.timeout.ms
>> > > > already for breaking the infinite blocking, namely
>> > > Consumer.partitionsFor(),
>> > > > Consumer.offsetsForTimes() and Consumer.listTopics(), if we are
>> > making
>> > > > the other blocking calls to be relying a new config as suggested in
>> 1)
>> > > > above, should we also change the semantics of these API functions
>> for
>> > > > consistency?
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Sun, Mar 4, 2018 at 11:13 AM, Richard Yu <
>> > yohan.richard...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Hi all,
>> > > > >
>> > > > > I would like to discuss a potential change which would be made to
>> > > > > KafkaConsumer:
>> > > > > https://cwiki.apache.org/confluence/pages/viewpage.
>> > > > action?pageId=75974886
>> > > > >
>> > > > > Thanks,
>> > > > > Richard Yu
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > > >
>> > >
>> >
>>
>>
>>
>> --
>> -- Guozhang
>>
>
>


[jira] [Reopened] (KAFKA-6592) NullPointerException thrown when executing ConsoleCosumer with deserializer set to `WindowedDeserializer`

2018-03-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-6592:
--

> NullPointerException thrown when executing ConsoleCosumer with deserializer 
> set to `WindowedDeserializer`
> -
>
> Key: KAFKA-6592
> URL: https://issues.apache.org/jira/browse/KAFKA-6592
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 1.0.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Minor
>
> When reading a streams app's output topic with the WindowedDeserializer 
> using kafka-console-consumer.sh, a NullPointerException was thrown because 
> the inner deserializer was not initialized, since there is no place 
> in ConsoleConsumer to set this class.
> Complete stack trace is shown below:
> {code:java}
> [2018-02-26 14:56:04,736] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:89)
> at 
> org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:35)
> at 
> kafka.tools.DefaultMessageFormatter.$anonfun$writeTo$2(ConsoleConsumer.scala:544)
> at scala.Option.map(Option.scala:146)
> at kafka.tools.DefaultMessageFormatter.write$1(ConsoleConsumer.scala:545)
> at kafka.tools.DefaultMessageFormatter.writeTo(ConsoleConsumer.scala:560)
> at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:147)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:84)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}





Jenkins build is back to normal : kafka-trunk-jdk9 #475

2018-03-14 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-1.1-jdk7 #80

2018-03-14 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6640; Improve efficiency of KafkaAdminClient.describeTopics()

[matthias] MINOR: fixing streams test-util compilation errors in Eclipse (#4631)

--
[...truncated 1.52 MB...]
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR: Could not install GRADLE_3_5_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integrati

Build failed in Jenkins: kafka-trunk-jdk7 #3255

2018-03-14 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6640; Improve efficiency of KafkaAdminClient.describeTopics()

[mjsax] MINOR: fixing streams test-util compilation errors in Eclipse (#4631)

--
[...truncated 1.53 MB...]
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PA

Re: [VOTE] 1.1.0 RC2

2018-03-14 Thread Satish Duggana
+1 (non-binding)

- Ran testAll/releaseTarGzAll on 1.1.0-RC2 tag
- Ran through quickstart of core/streams
- Ran a cluster from binaries built from sources
  - with multiple operations.

Thanks,
Satish.


On Thu, Mar 15, 2018 at 3:25 AM, Harsha  wrote:

> +1
> Ran tests
> Ran a 3 node cluster to test basic operations.
>
> Thanks,
> Harsha
>
> On Wed, Mar 14, 2018, at 9:04 AM, Ted Yu wrote:
> > +1
> >
> > Ran test suite - passed (apart from testMetricsLeak which is flaky).
> >
> > On Wed, Mar 14, 2018 at 3:30 AM, Damian Guy 
> wrote:
> >
> > > Thanks for pointing out Satish. Links updated:
> > >
> > > 
> > >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the third candidate for release of Apache Kafka 1.1.0.
> > >
> > > This is minor version release of Apache Kakfa. It Includes 29 new KIPs.
> > > Please see the release plan for more details:
> > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=75957546
> > >
> > > A few highlights:
> > >
> > > * Significant Controller improvements (much faster and session
> expiration
> > > edge cases fixed)
> > > * Data balancing across log directories (JBOD)
> > > * More efficient replication when the number of partitions is large
> > > * Dynamic Broker Configs
> > > * Delegation tokens (KIP-48)
> > > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> > >
> > > Release notes for the 1.1.0 release:
> > > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Friday, March 16, 1pm PDT>
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> > > https://github.com/apache/kafka/tree/1.1.0-rc2
> > >
> > >
> > > * Documentation:
> > > http://kafka.apache.org/11/documentation.html
> > > 
> > >
> > > * Protocol:
> > > http://kafka.apache.org/11/protocol.html
> > > 
> > >
> > > * Successful Jenkins builds for the 1.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/
> kafka-1.1-jdk7/78
> > > System tests: https://jenkins.confluent.io/
> job/system-test-kafka/job/1.1/
> > > 38/
> > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=75957546
> > >
> > > On Wed, 14 Mar 2018 at 04:41 Satish Duggana 
> > > wrote:
> > >
> > > > Hi Damian,
> > > > Thanks for starting vote thread for 1.1.0 release.
> > > >
> > > > There may be a typo on the tag to be voted upon for this release
> > > candidate.
> > > > I guess it should be https://github.com/apache/kafka/tree/1.1.0-rc2
> > > > instead
> > > > of https://github.com/apache/kafka/tree/1.1.0-rc.
> > > >
> > > > On Wed, Mar 14, 2018 at 8:27 AM, Satish Duggana <
> > > satish.dugg...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Damian,
> > > > > Given release plan link in earlier mail is about 1.0 release. You
> may
> > > > want
> > > > > to replace that with 1.1.0 release plan link[1].
> > > > >
> > > > > 1 - https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > action?pageId=75957546
> > > > >
> > > > > Thanks,
> > > > > Satish.
> > > > >
> > > > > On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy  >
> > > > wrote:
> > > > >
> > > > >> Hello Kafka users, developers and client-developers,
> > > > >>
> > > > >> This is the third candidate for release of Apache Kafka 1.1.0.
> > > > >>
> > > > >> This is minor version release of Apache Kakfa. It Includes 29 new
> > > KIPs.
> > > > >> Please see the release plan for more details:
> > > > >>
> > > > >>
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=71764913
> > > > >>
> > > > >> A few highlights:
> > > > >>
> > > > >> * Significant Controller improvements (much faster and session
> > > > expiration
> > > > >> edge cases fixed)
> > > > >> * Data balancing across log directories (JBOD)
> > > > >> * More efficient replication when the number of partitions is
> large
> > > > >> * Dynamic Broker Configs
> > > > >> * Delegation tokens (KIP-48)
> > > > >> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> > > > >>
> > > > >> Release notes for the 1.1.0 release:
> > > > >> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
> RELEASE_NOTES.html
> > > > >>
> > > > >> *** Please download, test and vote by Friday, March 16, 1pm PDT>
> > > > >>
> > > > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > >> http://kafka.apache.org/KEYS
> > > > >>
> > > > >> * Release artifacts 

Jenkins build is back to normal : kafka-trunk-jdk8 #2476

2018-03-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3256

2018-03-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-4831: add documentation for KIP-265 (#4686)

--
[...truncated 419.46 KB...]

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSe

Re: [DISCUSS] KIP-258: Allow to Store Record Timestamps in RocksDB

2018-03-14 Thread Matthias J. Sax
After some more thought, I want to follow John's suggestion and split
the rebalance metadata upgrade from the store upgrade.

I extracted the metadata upgrade into its own KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade

I'll update this KIP accordingly shortly. I also want to consider
making the store format upgrade more flexible/generic. Atm, the KIP is
too tailored to the DSL IMHO and does not account for PAPI users, whom we
should not force to upgrade their stores. I need to figure out the details
and follow up later.

Please give feedback for the new KIP-268 on the corresponding discussion
thread.

@James: unfortunately, for upgrading to 1.2 I couldn't figure out a way
to do a single-rolling-bounce upgrade. But KIP-268 proposes a fix for
future upgrades. Please share your thoughts.
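
For illustration, a minimal sketch of what the two-bounce upgrade path
discussed in this thread could look like in application code. The config
name `upgrade.from` and its value are taken from the proposal, not from a
released API, and the application id, bootstrap servers, and topology are
placeholders:

{code:java}
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class UpgradeFromSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "upgrade-demo");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Proposed config: during the first rolling bounce, tell the new
        // binaries which (old) metadata version the not-yet-bounced
        // instances still speak; remove it again for the second bounce.
        props.put("upgrade.from", "0.10.1"); // proposed name/value per the KIP discussion

        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
{code}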

Thanks for all your feedback!

-Matthias

On 3/12/18 11:56 PM, Matthias J. Sax wrote:
> @John: yes, we would throw if configs are missing (it's an
> implementation detail IMHO and thus I did not include it in the KIP)
> 
> @Guozhang:
> 
> 1) I understand now what you mean. We can certainly allow all values
> "0.10.0.x", "0.10.1.x", "0.10.2.x", ... "1.1.x" for the `upgrade.from`
> parameter. I had a similar thought once but decided to collapse them into
> one -- will update the KIP accordingly.
> 
> 2) The idea to avoid any config would be to always send both requests.
> If we add a config to eventually disable the old request, we don't gain
> anything with this approach. The question is really, if we are willing
> to pay this overhead from 1.2 on -- note, it would be limited to 2
> versions and not grow further in future releases. More details in (3)
> 
> 3) Yes, this approach subsumes (2) for later releases and allows us to
> stay with 2 "assignment strategies" we need to register, as the new
> assignment strategy will allow to "upgrade itself" via "version
> probing". Thus, (2) would only be a workaround to avoid a config if
> people upgrade from pre-1.2 releases.
> 
> Thus, I don't think we need to register new "assignment strategies" and
> send empty subscriptions for older version.
> 
> 4) I agree that this is a tricky thing to get right with a single
> rebalance. I share the concern that an application might never catch up
> and thus the hot standby will never be ready.
> 
> Maybe it's better to go with 2 rebalances for store upgrades. If we do
> this, we also don't need to go with (2) and can get (3) in place for
> future upgrades. I also think that changes to the metadata are more
> likely and thus allowing for single rolling bounce for this case is more
> important anyway. If we assume that store upgrade a rare, it might be ok
> to sacrifice two rolling bounced for this case. It was just an idea I
> wanted to share (even if I see the issues).
> 
> 
> -Matthias
> 
> 
> 
> On 3/12/18 11:45 AM, Guozhang Wang wrote:
>> Hello Matthias, thanks for your replies.
>>
>>
>> 1) About the config names: actually I was trying to not expose
>> implementation details :) My main concern was that in your proposal the
>> values need to cover the span of all the versions that are actually using
>> the same version, i.e. "0.10.1.x-1.1.x". So if I (as a user) am upgrading
>> from any versions within this range I need to remember to use the value
>> "0.10.1.x-1.1.x" than just specifying my old version. In my suggestion I
>> "0.10.1.x-1.1.x" rather than just specifying my old version. In my suggestion I
>> was trying to argue the benefit of just letting users specify the actual
>> versions. I was not suggesting to use "v1, v2, v3" etc as the values, but
>> still using Kafka versions like broker's `internal.version` config. But if
>> you were suggesting the same thing, i.e. by "0.10.1.x-1.1.x" you meant to
>> say users can just specify "0.10.1" or "0.10.2" or "0.11.0" or "1.1" which
>> are all recognizable config values then I think we are actually on the same
>> page.
>>
>> 2) About the "multi-assignment" idea: yes it would increase the network
>> footprint, but not the message size, IF I'm not misunderstanding your idea
>> of registering multiple assignments. More details:
>>
>> In the JoinGroupRequest, in the protocols field we can encode multiple
>> protocols each with their different metadata. The coordinator will pick the
>> common one that everyone supports (if there is no common one, it will send
>> an error back; if there are multiple ones, it will pick the one with most
>> votes, i.e. the one which was earlier in the encoded list). Since our
>> current Streams rebalance protocol is still based on the consumer
>> coordinator, it means our protocol_type would be "consumer", but instead
>> the protocol type we can have multiple protocols like "streams",
>> "streams_v2", "streams_v3" etc. The downside is that we need to implement a
>> different assignor class for each version and register all of them in
>> consumer's PARTITION_ASSIGNMENT_STRATEGY_CONFIG. In the future i
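
As a concrete illustration of the registration mechanism described above, a
minimal sketch that registers more than one assignment strategy on a plain
consumer. The built-in StickyAssignor/RangeAssignor stand in for the
hypothetical versioned "streams" assignors, and the broker address and group
id are placeholders:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiAssignorSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "multi-assignor-demo");     // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Every registered strategy is advertised in the JoinGroupRequest;
        // the coordinator then picks one that all group members support.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.StickyAssignor,"
                        + "org.apache.kafka.clients.consumer.RangeAssignor");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // The selected strategy only matters once the consumer joins a group.
        }
    }
}
{code}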

[DISCUSS] KIP-268: Simplify Kafka Streams Rebalance Metadata Upgrade

2018-03-14 Thread Matthias J. Sax
Hi,

I want to propose KIP-268 to allow rebalance metadata version upgrades
in Kafka Streams:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade

Looking forward to your feedback.


-Matthias





[jira] [Created] (KAFKA-6662) Consumer use offsetsForTimes() get offset return None.

2018-03-14 Thread Matt Wang (JIRA)
Matt Wang created KAFKA-6662:


 Summary: Consumer use offsetsForTimes() get offset return None.
 Key: KAFKA-6662
 URL: https://issues.apache.org/jira/browse/KAFKA-6662
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.2.0
Reporter: Matt Wang


When we use the Consumer's offsetsForTimes() method to get the
topic-partition offset, it sometimes returns null. The client log shows:
{code:java}
[2018-03-15 11:54:05,239] DEBUG Collector TraceCollector dispatcher loop 
interval 256 upload 0 retry 0 fail 0 
(com.meituan.mtrace.collector.sg.AbstractCollector)
[2018-03-15 11:54:05,241] DEBUG Set SASL client state to INITIAL 
(org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2018-03-15 11:54:05,241] DEBUG Set SASL client state to INTERMEDIATE 
(org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2018-03-15 11:54:05,247] DEBUG Set SASL client state to COMPLETE 
(org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2018-03-15 11:54:05,247] DEBUG Initiating API versions fetch from node 53. 
(org.apache.kafka.clients.NetworkClient)
[2018-03-15 11:54:05,253] DEBUG Recorded API versions for node 53: (Produce(0): 
0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 
1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], 
StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], 
ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], 
OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], 
JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], 
LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], 
DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], 
SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], 
CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0]) 
(org.apache.kafka.clients.NetworkClient)
[2018-03-15 11:54:05,315] DEBUG Handling ListOffsetResponse response for 
org.matt_test2-0. Fetched offset -1, timestamp -1 
(org.apache.kafka.clients.consumer.internals.Fetcher){code}
From the log, we find that the broker returns the offset, but its value is -1;
this value will be removed in Fetcher.handleListOffsetResponse():
{code:java}
// Handle v1 and later response
log.debug("Handling ListOffsetResponse response for {}. Fetched offset {}, timestamp {}",
        topicPartition, partitionData.offset, partitionData.timestamp);
if (partitionData.offset != ListOffsetResponse.UNKNOWN_OFFSET) {
    OffsetData offsetData = new OffsetData(partitionData.offset, partitionData.timestamp);
    timestampOffsetMap.put(topicPartition, offsetData);
}{code}
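
On the client side, the symptom is that offsetsForTimes() then yields no
entry for the partition. A minimal defensive sketch of the lookup with a
fallback to the end offset; the broker address, group id, and topic name are
placeholders:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetsForTimesSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "offsets-for-times-demo");  // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            final TopicPartition tp = new TopicPartition("test-topic", 0); // placeholder
            final long targetTime = System.currentTimeMillis(); // may exceed largestTimestamp
            final Map<TopicPartition, OffsetAndTimestamp> offsets =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, targetTime));
            final OffsetAndTimestamp found = offsets.get(tp);
            if (found == null) {
                // Hits the two cases described below: an empty partition, or a
                // target time newer than the newest record. Fall back to the
                // end offset instead of failing.
                final long endOffset = consumer.endOffsets(Collections.singleton(tp)).get(tp);
                System.out.println("No offset for timestamp; using end offset " + endOffset);
            } else {
                System.out.println("Offset " + found.offset() + " at timestamp " + found.timestamp());
            }
        }
    }
}
{code}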
We tested several situations and found that in the following two cases it
returns None:
 # The topic-partition contains no messages; when we use offsetsForTimes() to
get the offset, the broker returns -1.
 # The targetTime used to find the offset is larger than the largestTimestamp
of the partition's active segment; the broker returns -1.

If the offset is -1, it is not returned to the consumer client. I think in
these situations it should return the latest offset, which is also what the
doc comment in kafka/core describes:
{code:java}
/**
 * Search the message offset based on timestamp.
 * This method returns an option of TimestampOffset. The offset is the offset 
of the first message whose timestamp is
 * greater than or equal to the target timestamp.
 *
 * If all the messages in the segment have smaller timestamps, the returned 
offset will be last offset + 1 and the
 * timestamp will be max timestamp in the segment.
 *
 * If all the messages in the segment have larger timestamps, or no message in 
the segment has a timestamp,
 * the returned offset will be the base offset of the segment and the 
timestamp will be Message.NoTimestamp.
 *
 * This method only returns None when the log is not empty but we did not see 
any messages when scanning the log
 * from the indexed position. This could happen if the log is truncated after 
we get the indexed position but
 * before we scan the log from there. In this case we simply return None and 
the caller will need to check on
 * the truncated log and maybe retry or even do the search on another log 
segment.
 *
 * @param timestamp The timestamp to search for.
 * @return the timestamp and offset of the first message whose timestamp is 
larger than or equal to the
 * target timestamp. None will be returned if there is no such message.
 */
def findOffsetByTimestamp(timestamp: Long): Option[TimestampOffset] = {
  // Get the index entry with a timestamp less than or equal to the target 
timestamp
  val timestampOffset = timeIndex.lookup(timestamp)
  val position = index.lookup(timestampOffset.offset).position
  // Search the timestamp
  log.searchForTimestamp(timestamp, positi