Jenkins build is back to normal : kafka-trunk-jdk8 #2948

2018-09-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7333) Protocol changes for KIP-320

2018-09-09 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7333.
-
Resolution: Fixed

> Protocol changes for KIP-320
> 
>
> Key: KAFKA-7333
> URL: https://issues.apache.org/jira/browse/KAFKA-7333
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.1.0
>
>
> Implement protocol changes for 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-320%3A+Allow+fetchers+to+detect+and+handle+log+truncation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7286) Loading offsets and group metadata hangs with large group metadata records

2018-09-09 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7286.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Loading offsets and group metadata hangs with large group metadata records
> --
>
> Key: KAFKA-7286
> URL: https://issues.apache.org/jira/browse/KAFKA-7286
> Project: Kafka
>  Issue Type: Bug
>Reporter: Flavien Raynaud
>Assignee: Flavien Raynaud
>Priority: Minor
> Fix For: 2.1.0
>
>
> When a (Kafka-based) consumer group contains many members, group metadata 
> records (in the {{__consumer_offsets}} topic) may happen to be quite large.
> Increasing the {{message.max.bytes}} setting makes storing these records possible.
>  Loading them when a broker restarts is done via 
> [doLoadGroupsAndOffsets|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L504].
>  However, this method relies on the {{offsets.load.buffer.size}} 
> configuration to create a 
> [buffer|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L513]
>  that will contain the records being loaded.
> If a group metadata record is too large for this buffer, the loading method 
> will get stuck trying to load records (in a tight loop) into a buffer that 
> cannot accommodate a single record.
> 
> For example, if the {{__consumer_offsets-9}} partition contains a record 
> smaller than {{message.max.bytes}} but larger than 
> {{offsets.load.buffer.size}}, logs would indicate the following:
> {noformat}
> ...
> [2018-08-13 21:00:21,073] INFO [GroupMetadataManager brokerId=0] Scheduling 
> loading of offsets and group metadata from __consumer_offsets-9 
> (kafka.coordinator.group.GroupMetadataManager)
> ...
> {noformat}
> But logs will never contain the expected {{Finished loading offsets and group 
> metadata from ...}} line.
> Consumers whose group is assigned to this partition will see {{Marking the 
> coordinator dead}} and will never be able to stabilize and make progress.
> 
> From what I could gather in the code, it seems that:
>  - 
> [fetchDataInfo|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L522]
>  returns at least one record (even if larger than 
> {{offsets.load.buffer.size}}, thanks to {{minOneMessage = true}})
>  - No fully-readable record is stored in the buffer with 
> [fileRecords.readInto(buffer, 
> 0)|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L528]
>  (too large to fit in the buffer)
>  - 
> [memRecords.batches|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L532]
>  returns an empty iterator
>  - 
> [currOffset|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L590]
>  never advances, hence loading the partition hangs forever.
> 
> It would be great to let the partition load even if a record is larger than 
> the configured {{offsets.load.buffer.size}} limit. The fact that 
> {{minOneMessage = true}} is set when reading records suggests the buffer 
> should be able to accommodate at least one record.
> If you think the limit should stay a hard limit, then it would help to at 
> least add a log line indicating that {{offsets.load.buffer.size}} is not 
> large enough and should be increased. Otherwise, one can only guess and dig 
> through the code to figure out what is happening :)
> I will try to open a PR with the first idea (allowing large records to be 
> read when needed) soon, but any feedback from anyone who also had the same 
> issue in the past would be appreciated :)
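
A minimal, self-contained Java sketch of the workaround the description proposes -- growing the load buffer when a single batch exceeds {{offsets.load.buffer.size}} so that {{currOffset}} can advance. The real loader lives in GroupMetadataManager.scala; the helper names below are illustrative stand-ins, not Kafka code:

{code}
import java.nio.ByteBuffer;

public class OffsetsLoadBufferSketch {

    // Stand-in for looking up the size of the record batch at currOffset;
    // in the real loader this information comes from the fetched FileRecords.
    interface BatchSizeLookup {
        int sizeAt(long offset);
    }

    // If the next batch is larger than the configured buffer, allocate a
    // bigger one instead of re-reading forever into a buffer that can never
    // hold a single record (the tight loop described above).
    static ByteBuffer maybeGrow(ByteBuffer buffer, BatchSizeLookup log, long currOffset) {
        int batchSize = log.sizeAt(currOffset);
        return batchSize > buffer.capacity() ? ByteBuffer.allocate(batchSize) : buffer;
    }
}
{code}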



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #460

2018-09-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7286; Avoid getting stuck loading large metadata records (#5500)

--
[...truncated 2.17 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system

Build failed in Jenkins: kafka-trunk-jdk10 #461

2018-09-09 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-7333; Protocol changes for KIP-320

--
[...truncated 2.21 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.

Re: [DISCUSS] KIP-367 Introduce close(Duration) to Producer instead of close(long, TimeUnit)

2018-09-09 Thread Chia-Ping Tsai
Thanks for your comments!

>  - showing the signatures of methods is sufficient (no need to have
> implementation details in the KIP).
> 
>  - `KafkaAdminClient#close(long duration, TimeUnit unit)` should be
> added as deprecated in the KIP, too
> 
>  - `AdminClient#close(long duration, TimeUnit unit)` is `abstract` and
> the KIP should contain this.
I have addressed all comments. Please take a look.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
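
For anyone skimming the archive, here is a minimal sketch of the shape of the change (signatures only; the wiki page above is the authoritative list, and the interface name is just a placeholder):

{code}
import java.time.Duration;
import java.util.concurrent.TimeUnit;

// Placeholder interface illustrating the KIP-367 surface: the
// (long, TimeUnit) overload gets deprecated in favor of Duration.
public interface CloseableClient {
    @Deprecated
    void close(long duration, TimeUnit unit);

    void close(Duration timeout);
}
{code}

A caller would then write, for example, producer.close(Duration.ofSeconds(30)) instead of producer.close(30, TimeUnit.SECONDS).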

--
Chia-Ping

On 2018/09/08 19:52:38, "Matthias J. Sax"  wrote: 
> Thanks for updating the KIP!
> 
> Some more nits:
> 
>  - showing the signatures of methods is sufficient (no need to have
> implementation details in the KIP).
> 
>  - `KafkaAdminClient#close(long duration, TimeUnit unit)` should be
> added as deprecated in the KIP, too
> 
>  - `AdminClient#close(long duration, TimeUnit unit)` is `abstract` and
> the KIP should contain this.
> 
> 
> 
> -Matthias
> 
> 
> 
> On 9/8/18 11:22 AM, Chia-Ping Tsai wrote:
> >> It's a little hard to read -- it's easier if you just list the methods
> >> (without JavaDocs) and indicate if they get deprecated or added. Please
> >> don't show a diff as in a patch :)
> >>
> >> Is there already a JIRA for this? If not, please create one and link it
> >> in the KIP.
> >>
> >> Besides this, I think you can start a VOTE.
> > 
> > Thanks for your suggestions! I have updated KIP-367.
> > 
> > On 2018/09/07 04:27:26, "Matthias J. Sax"  wrote: 
> >> Thanks for the KIP.
> >>
> >> It's a little hard to read -- it's easier if you just list the methods
> >> (without JavaDocs) and indicate if they get deprecated or added. Please
> >> don't show a diff as in a patch :)
> >>
> >> Is there already a JIRA for this? If not, please create one and link it
> >> in the KIP.
> >>
> >> Besides this, I think you can start a VOTE.
> >>
> >>
> >>
> >> -Matthias
> >>
> >> On 9/3/18 11:28 PM, Chia-Ping Tsai wrote:
> >>> hi Jason
> >>>
>  Thanks for the KIP. Makes sense to me. Should we make a similar change to
>  AdminClient?
> >>>
> >>> I have updated KIP-367 to address your comment. Could you please take a 
> >>> look?
> >>>
> >>> On 2018/08/28 20:13:19, Jason Gustafson  wrote: 
>  Thanks for the KIP. Makes sense to me. Should we make a similar change to
>  AdminClient?
> 
>  -Jason
> 
>  On Tue, Aug 28, 2018 at 2:32 AM, Chia-Ping Tsai  
>  wrote:
> 
> > (re-start the thread for KIP-367 because I entered the incorrect topic in
> > the first post)
> >
> > hi all
> >
> > I would like to start a discussion of KIP-367 [1]. It is similar to
> > KIP-358 and KIP-266, which are trying to substitute Duration for (long,
> > TimeUnit).
> >
> > [1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
> >
> > --
> > Chia-Ping
> >
> 
> >>
> >>
> 
> 


[jira] [Created] (KAFKA-7392) Allow to specify subnet for Docker containers using standard CIDR notation

2018-09-09 Thread Attila Sasvari (JIRA)
Attila Sasvari created KAFKA-7392:
-

 Summary: Allow to specify subnet for Docker containers using 
standard CIDR notation
 Key: KAFKA-7392
 URL: https://issues.apache.org/jira/browse/KAFKA-7392
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Attila Sasvari


During Kafka system test execution, the IP range of the Docker subnet, 
'ducknet', is allocated by Docker.
{code}
docker network inspect ducknet
[
{
"Name": "ducknet",
"Id": 
"f4325c524feee777817b9cc57b91634e20de96127409c1906c2c156bfeb4beeb",
"Created": "2018-09-09T11:53:40.4332613Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.23.0.0/16",
"Gateway": "172.23.0.1"
}
]
},
{code}

The default bridge (docker0) can be controlled 
[externally|https://success.docker.com/article/how-do-i-configure-the-default-bridge-docker0-network-for-docker-engine-to-a-different-subnet]
 through /etc/docker/daemon.json; however, the subnet created for ducknet is 
not. This can be a problem, as many businesses make extensive use of the 
[RFC1918|https://tools.ietf.org/html/rfc1918] private address space (such as 
172.16.0.0/12, i.e. 172.16.0.0 - 172.31.255.255) for internal networks (e.g. VPNs).

h4. Proposed changes:
- Introduce a new {{--subnet}} argument for {{ducker-ak up}} to specify the IP 
range using standard CIDR notation, and extend the help message with the 
following:
{code}
If --subnet is specified, the default Docker subnet is overridden by the given 
IP address and netmask,
using standard CIDR notation. For example: 192.168.1.5/24.
{code}
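
For reference, this would presumably map onto Docker's existing flag for user-defined networks, roughly as follows (shown only as an assumed target of the {{ducker-ak}} change, not the actual script code):
{code}
docker network create --driver bridge --subnet 192.168.1.0/24 ducknet
{code}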





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2950

2018-09-09 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-7333; Protocol changes for KIP-320

--
[...truncated 2.68 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForC

Re: [DISCUSS] KIP-367 Introduce close(Duration) to Producer instead of close(long, TimeUnit)

2018-09-09 Thread Matthias J. Sax
Thank you!

On 9/9/18 4:54 AM, Chia-Ping Tsai wrote:
> Thanks for your comments!
> 
>>  - showing the signatures of methods is sufficient (no need to have
>> implementation details in the KIP).
>>
>>  - `KafkaAdminClient#close(long duration, TimeUnit unit)` should be
>> added as deprecated in the KIP, too
>>
>>  - `AdminClient#close(long duration, TimeUnit unit)` is `abstract` and
>> the KIP should contain this.
> I have addressed all comments. Please take a look.
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
> 
> --
> Chia-Ping
> 
> On 2018/09/08 19:52:38, "Matthias J. Sax"  wrote: 
>> Thanks for updating the KIP!
>>
>> Some more nits:
>>
>>  - showing the signatures of methods is sufficient (no need to have
>> implementation details in the KIP).
>>
>>  - `KafkaAdminClient#close(long duration, TimeUnit unit)` should be
>> added as deprecated in the KIP, too
>>
>>  - `AdminClient#close(long duration, TimeUnit unit)` is `abstract` and
>> the KIP should contain this.
>>
>>
>>
>> -Matthias
>>
>>
>>
>> On 9/8/18 11:22 AM, Chia-Ping Tsai wrote:
 It's a little hard to read -- it's easier if you just list the methods
 (without JavaDocs) and indicate if they get deprecated or added. Please
 don't show a diff as in a patch :)

 Is there already a JIRA for this? If not, please create one and link it
 in the KIP.

 Besides this, I think you can start a VOTE.
>>>
>>> Thanks for your suggestions! I have updated KIP-367.
>>>
>>> On 2018/09/07 04:27:26, "Matthias J. Sax"  wrote: 
 Thanks for the KIP.

 It's a little hard to read -- it's easier if you just list the methods
 (without JavaDocs) and indicate if they get deprecated or added. Please
 don't show a diff as in a patch :)

 Is there already a JIRA for this? If not, please create one and link it
 in the KIP.

 Besides this, I think you can start a VOTE.



 -Matthias

 On 9/3/18 11:28 PM, Chia-Ping Tsai wrote:
> hi Jason
>
>> Thanks for the KIP. Makes sense to me. Should we make a similar change to
>> AdminClient?
>
> I have updated KIP-367 to address your comment. Could you please take a 
> look?
>
> On 2018/08/28 20:13:19, Jason Gustafson  wrote: 
>> Thanks for the KIP. Makes sense to me. Should we make a similar change to
>> AdminClient?
>>
>> -Jason
>>
>> On Tue, Aug 28, 2018 at 2:32 AM, Chia-Ping Tsai  
>> wrote:
>>
>>> (re-start the thread for KIP-367 because I entered the incorrect topic in
>>> the first post)
>>>
>>> hi all
>>>
>>> I would like to start a discussion of KIP-367 [1]. It is similar to
>>> KIP-358 and KIP-266, which are trying to substitute Duration for (long,
>>> TimeUnit).
>>>
>>> [1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
>>>
>>> --
>>> Chia-Ping
>>>
>>


>>
>>





Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-09 Thread Matthias J. Sax
Thanks a lot for the KIP.

+1 (binding)


-Matthias


On 9/8/18 11:27 AM, Chia-Ping Tsai wrote:
> Hi All,
> 
> I'd like to put KIP-367 to the vote.
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
> 
> --
> Chia-Ping
> 





[jira] [Resolved] (KAFKA-4932) Add UUID Serde

2018-09-09 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4932.
--
Resolution: Fixed

Issue resolved by pull request 4438
[https://github.com/apache/kafka/pull/4438]

> Add UUID Serde
> --
>
> Key: KAFKA-4932
> URL: https://issues.apache.org/jira/browse/KAFKA-4932
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Jeff Klukas
>Assignee: Brandon Kirchner
>Priority: Minor
>  Labels: needs-kip, newbie
> Fix For: 2.1.0
>
>
> I propose adding serializers and deserializers for the java.util.UUID class.
> I have many use cases where I want to set the key of a Kafka message to be a 
> UUID. Currently, I need to turn UUIDs into strings or byte arrays and use 
> their associated Serdes, but it would be more convenient to serialize and 
> deserialize UUIDs directly.
> I'd propose that the serializer and deserializer use the 36-byte string 
> representation, calling UUID.toString and UUID.fromString, and then using the 
> existing StringSerializer / StringDeserializer to finish the job. We would 
> also wrap these in a Serde and modify the streams Serdes class to include 
> this in the list of supported types.
> Optionally, we could have the deserializer support a 16-byte representation 
> and it would check the size of the input byte array to determine whether it's 
> a binary or string representation of the UUID. It's not well defined whether 
> the most significant bits or least significant go first, so this deserializer 
> would have to support only one or the other.
> Similarly, if the deserializer supported a 16-byte representation, there could 
> be two variants of the serializer, a UUIDStringSerializer and a 
> UUIDBytesSerializer.
> I would be willing to write this PR, but am looking for feedback about 
> whether there are significant concerns here around ambiguity of what the byte 
> representation of a UUID should be, or if there's a desire to keep the list of 
> built-in Serdes minimal such that a PR would be unlikely to be accepted.
> KIP Link: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-206%3A+Add+support+for+UUID+serialization+and+deserialization
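
A minimal sketch of the string-based approach described above, delegating to the existing String serdes (class names here are illustrative; see the merged PR for the actual implementation):

{code}
import java.util.Map;
import java.util.UUID;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: delegate to the String serdes via UUID.toString / UUID.fromString,
// as the issue description suggests.
public class UuidSerdeSketch {

    public static class UuidSerializer implements Serializer<UUID> {
        private final StringSerializer inner = new StringSerializer();

        @Override public void configure(Map<String, ?> configs, boolean isKey) { inner.configure(configs, isKey); }

        @Override public byte[] serialize(String topic, UUID data) {
            return data == null ? null : inner.serialize(topic, data.toString());
        }

        @Override public void close() { inner.close(); }
    }

    public static class UuidDeserializer implements Deserializer<UUID> {
        private final StringDeserializer inner = new StringDeserializer();

        @Override public void configure(Map<String, ?> configs, boolean isKey) { inner.configure(configs, isKey); }

        @Override public UUID deserialize(String topic, byte[] data) {
            String s = inner.deserialize(topic, data);
            return s == null ? null : UUID.fromString(s);
        }

        @Override public void close() { inner.close(); }
    }
}
{code}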



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-09-09 Thread Dong Lin
Hi all,

I would like to be the release manager for our next time-based feature
release 2.1.0.

The recent Kafka release history can be found at
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan. The
release plan (with open issues and planned KIPs) for 2.1.0 can be found at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044.

Here are the dates we have planned for Apache Kafka 2.1.0 release:

1) KIP Freeze: Sep 24, 2018.
A KIP must be accepted by this date in order to be considered for this
release.

2) Feature Freeze: Oct 1, 2018
Major features merged & working on stabilization, minor features have PR,
release branch cut; anything not in this state will be automatically moved
to the next release in JIRA.

3) Code Freeze: Oct 15, 2018 (Tentatively)

The KIP and feature freeze dates are about 3-4 weeks from now. Please plan
accordingly for the features you want to push into the Apache Kafka 2.1.0 release.


Cheers,
Dong


Jenkins build is back to normal : kafka-trunk-jdk10 #462

2018-09-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-09-09 Thread Ismael Juma
Thanks for volunteering, Dong!

Ismael

On Sun, 9 Sep 2018, 17:32 Dong Lin,  wrote:

> Hi all,
>
> I would like to be the release manager for our next time-based feature
> release 2.1.0.
>
> The recent Kafka release history can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan. The
> release plan (with open issues and planned KIPs) for 2.1.0 can be found at
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044.
>
> Here are the dates we have planned for Apache Kafka 2.1.0 release:
>
> 1) KIP Freeze: Sep 24, 2018.
> A KIP must be accepted by this date in order to be considered for this
> release.
>
> 2) Feature Freeze: Oct 1, 2018
> Major features merged & working on stabilization, minor features have PR,
> release branch cut; anything not in this state will be automatically moved
> to the next release in JIRA.
>
> 3) Code Freeze: Oct 15, 2018 (Tentatively)
>
> The KIP and feature freeze dates are about 3-4 weeks from now. Please plan
> accordingly for the features you want to push into the Apache Kafka 2.1.0 release.
>
>
> Cheers,
> Dong
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2951

2018-09-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-09-09 Thread Guozhang Wang
Dong, thanks for driving the release!

On Sun, Sep 9, 2018 at 5:57 PM, Ismael Juma  wrote:

> Thanks for volunteering, Dong!
>
> Ismael
>
> On Sun, 9 Sep 2018, 17:32 Dong Lin,  wrote:
>
> > Hi all,
> >
> > I would like to be the release manager for our next time-based feature
> > release 2.1.0.
> >
> > The recent Kafka release history can be found at
> > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan. The
> > release plan (with open issues and planned KIPs) for 2.1.0 can be found at
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044.
> >
> > Here are the dates we have planned for Apache Kafka 2.1.0 release:
> >
> > 1) KIP Freeze: Sep 24, 2018.
> > A KIP must be accepted by this date in order to be considered for this
> > release.
> >
> > 2) Feature Freeze: Oct 1, 2018
> > Major features merged & working on stabilization, minor features have PR,
> > release branch cut; anything not in this state will be automatically moved
> > to the next release in JIRA.
> >
> > 3) Code Freeze: Oct 15, 2018 (Tentatively)
> >
> > The KIP and feature freeze dates are about 3-4 weeks from now. Please plan
> > accordingly for the features you want to push into the Apache Kafka 2.1.0
> > release.
> >
> >
> > Cheers,
> > Dong
> >
>



-- 
-- Guozhang


Re: [VOTE] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-09 Thread Gwen Shapira
+1
Useful improvement -- thanks, Randall.


On Fri, Sep 7, 2018, 3:28 PM Randall Hauch  wrote:

> I believe the feedback on KIP-158 has been addressed. I'd like to start a
> vote.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics
>
> Discussion Thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg73775.html
>
> Thanks!
>
> Randall
>


Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-09-09 Thread Matthias J. Sax
Thanks a lot! You are on a roll after the 1.1.1 release.

I see something coming up for myself in 4 months. :)

On 9/9/18 6:15 PM, Guozhang Wang wrote:
> Dong, thanks for driving the release!
> 
> On Sun, Sep 9, 2018 at 5:57 PM, Ismael Juma  wrote:
> 
>> Thanks for volunteering, Dong!
>>
>> Ismael
>>
>> On Sun, 9 Sep 2018, 17:32 Dong Lin,  wrote:
>>
>>> Hi all,
>>>
>>> I would like to be the release manager for our next time-based feature
>>> release 2.1.0.
>>>
>>> The recent Kafka release history can be found at
>>> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan. The
>>> release plan (with open issues and planned KIPs) for 2.1.0 can be found at
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044.
>>>
>>> Here are the dates we have planned for Apache Kafka 2.1.0 release:
>>>
>>> 1) KIP Freeze: Sep 24, 2018.
>>> A KIP must be accepted by this date in order to be considered for this
>>> release.
>>>
>>> 2) Feature Freeze: Oct 1, 2018
>>> Major features merged & working on stabilization, minor features have PR,
>>> release branch cut; anything not in this state will be automatically moved
>>> to the next release in JIRA.
>>>
>>> 3) Code Freeze: Oct 15, 2018 (Tentatively)
>>>
>>> The KIP and feature freeze dates are about 3-4 weeks from now. Please plan
>>> accordingly for the features you want to push into the Apache Kafka 2.1.0
>>> release.
>>>
>>>
>>> Cheers,
>>> Dong
>>>
>>
> 
> 
> 





Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression (Updated)

2018-09-09 Thread Dongjin Lee
Hi Jason,

You are right. Explicit statements are always better. I updated the
document following your suggestion.

@Magnus

Thanks for the inspection. It seems like a new error code is not a problem.

Thanks,
Dongjin
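
For illustration, a hedged Java sketch of the broker-side check implied by the two compatibility points Jason lists below (the constant value and method name are assumptions for this sketch, not the actual broker code; the real bumped versions are defined by the KIP):

{code}
public class ZstdVersionGateSketch {

    // Assumed value for this sketch; the KIP defines the real bumped version.
    static final short MIN_PRODUCE_VERSION_FOR_ZSTD = 7;

    // True when the broker should answer with UNSUPPORTED_COMPRESSION_TYPE
    // instead of accepting zstd-compressed data on an old produce API version.
    static boolean mustRejectZstd(String compressionType, short apiVersion) {
        return "zstd".equals(compressionType) && apiVersion < MIN_PRODUCE_VERSION_FOR_ZSTD;
    }
}
{code}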

On Fri, Sep 7, 2018 at 3:23 AM Jason Gustafson  wrote:

> Hi Dongjin,
>
> The KIP looks good to me. I'd suggest starting a vote. A couple minor
> points that might be worth calling out explicitly in the compatibility
> section:
>
> 1. Zstd will only be allowed for the bumped produce API. For older
> versions, we return UNSUPPORTED_COMPRESSION_TYPE regardless of the message
> format.
> 2. Down-conversion of zstd-compressed records will not be supported.
> Instead we will return UNSUPPORTED_COMPRESSION_TYPE.
>
> Does that sound right?
>
> Thanks,
> Jason
>
> On Thu, Sep 6, 2018 at 1:45 AM, Magnus Edenhill 
> wrote:
>
> > > Ismael wrote:
> > > Jason, that's an interesting point regarding the Java client. Do we
> know
> > > what clients in other languages do in these cases?
> >
> > librdkafka (and its bindings) passes unknown/future errors through to the
> > application: the error code remains intact, while the error string will be
> > set to something like "Err-123?", which isn't very helpful to the user but
> > at least preserves the original error code for further troubleshooting.
> > For the producer, any unknown error returned in the ProduceResponse will be
> > considered a permanent delivery failure (no retries), and for the consumer,
> > any unknown FetchResponse errors will propagate directly to the
> > application, trigger a fetch backoff, and then continue fetching past that
> > offset.
> >
> > So, from the client's perspective it is not really a problem if new error
> > codes are added to older API versions.
> >
> > /Magnus
> >
> >
On Thu, 6 Sep 2018 at 09:45, Dongjin Lee  wrote:
> >
> > > I updated the KIP page
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-110%3A+Add+Codec+for+ZStandard+Compression>
> > > following the discussion here. Please take a look when you are free.
> > > If you have any opinion, don't hesitate to send me a message.
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Fri, Aug 31, 2018 at 11:35 PM Dongjin Lee  wrote:
> > >
> > > > I just updated the draft implementation[^1], rebasing against the
> > > > latest trunk and implementing the error routine (i.e., error code 74 for
> > > > UnsupportedCompressionTypeException). Since we decided to disallow all
> > > > fetch requests below version 2.1.0 for topics specifying ZStandard, I
> > > > added only the error logic.
> > > >
> > > > Please have a look when you are free.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > [^1]: Please check the last commit here:
> > > > https://github.com/apache/kafka/pull/2267
> > > >
> > > > On Thu, Aug 23, 2018, 8:55 AM Dongjin Lee  wrote:
> > > >
> > > >> Jason,
> > > >>
> > > >> Great. +1 for UNSUPPORTED_COMPRESSION_TYPE.
> > > >>
> > > >> Best,
> > > >> Dongjin
> > > >>
> > > >> On Thu, Aug 23, 2018 at 8:19 AM Jason Gustafson  wrote:
> > > >>
> > > >>> Hey Dongjin,
> > > >>>
> > > >>> Yeah that's right. For what it's worth, librdkafka also appears to
> > > >>> handle unexpected error codes. I expect that most client
> > > >>> implementations would either pass through the raw type or convert to
> > > >>> an enum using something like what the java client does. Since we're
> > > >>> expecting the client to fail anyway, I'm probably in favor of using
> > > >>> the UNSUPPORTED_COMPRESSION_TYPE error code.
> > > >>>
> > > >>> -Jason
> > > >>>
> > > >>> On Wed, Aug 22, 2018 at 1:46 AM, Dongjin Lee  wrote:
> > > >>>
> > > >>> > Jason and Ismael,
> > > >>> >
> > > >>> > It seems like the only thing we need to regard if we define a new
> > > >>> > error code (i.e., UNSUPPORTED_COMPRESSION_TYPE) would be the
> > > >>> > implementation of the other language clients, right? At least, this
> > > >>> > strategy doesn't cause any problem for the Java client. Do I
> > > >>> > understand correctly?
> > > >>> >
> > > >>> > Thanks,
> > > >>> > Dongjin
> > > >>> >
> > > >>> > On Wed, Aug 22, 2018 at 5:43 PM Dongjin Lee 
> > > >>> wrote:
> > > >>> >
> > > >>> > > Jason,
> > > >>> > >
> > > >>> > > > I think we would only use this error code when we /know/ that
> > > >>> > > > zstd was in use and the client doesn't support it? This is true
> > > >>> > > > if either 1) the message needs down-conversion and we encounter
> > > >>> > > > a zstd compressed message, or 2) if the topic is explicitly
> > > >>> > > > configured to use zstd.
> > > >>> > >
> > > >>> > > Yes, that is right. And you know, case 1 includes 1.a) old clients
> > > >>> > > requesting v0/v1 records, or 1.b) implicit zstd, i.e. the
> > > >>> > > compression type of "producer" with Zstd-compressed data.
> > > >>> > >
> > > >>> > > > However, if the compression type is set to "producer," then