[jira] [Created] (KAFKA-9290) Update IQ related JavaDocs

2019-12-11 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9290:
--

 Summary: Update IQ related JavaDocs
 Key: KAFKA-9290
 URL: https://issues.apache.org/jira/browse/KAFKA-9290
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


In Kafka 2.1.0 we deprecated a couple of methods (KAFKA-7277) so that 
timestamps can be passed into the IQ API via Duration/Instant parameters instead of plain longs.

In Kafka 2.3.0 we introduced TimestampedXxxStores (KAFKA-3522) and allowed IQ to 
return the stored timestamp.

However, we never updated our JavaDocs that contain code snippets illustrating 
how a local store can be queried. For example, 
`KGroupedStream#count(Materialized)` 
([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java#L116-L122]):

 
{code:java}
* {@code
* KafkaStreams streams = ... // counting words
* String queryableStoreName = "storeName"; // the store name should be the name of the store as defined by the Materialized instance
* ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
* String key = "some-word";
* Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
* }
{code}
We should update all JavaDocs to use `TimestampedXxxStore` and the new 
Duration/Instant methods in all of those code snippets.
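
For illustration, a sketch of what an updated snippet could look like (assuming we settle on `timestampedKeyValueStore()`; exact generics and wording to be finalized in the PR):
{code:java}
* KafkaStreams streams = ... // counting words
* String queryableStoreName = "storeName"; // the store name should be the name of the store as defined by the Materialized instance
* ReadOnlyKeyValueStore<String, ValueAndTimestamp<Long>> localStore =
*     streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>timestampedKeyValueStore());
* String key = "some-word";
* ValueAndTimestamp<Long> countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
* Long count = countForWord.value(); // the count itself
* long timestamp = countForWord.timestamp(); // timestamp of the last update to the count
{code}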

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1016

2019-12-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Simplify the timeout logic to handle  protocol in Connect

[github] HOTFIX: Add comment to remind ordering restrictions on

[github] KAFKA-9288: Do not allow the same object to be inserted multiple times


--
[...truncated 4.66 MB...]
kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls STARTED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete PASSED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.SaslSslAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.SaslSslAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction 
STARTED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction PASSED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions STARTED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testCommitTransactionTimeout STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

Minimum replication for Exactly Once semantics

2019-12-11 Thread Rajesh Kalyanasundaram
Hi all,

For a Kafka Streams application with exactly-once processing semantics, the 
default replication factor of the broker's internal transaction topic is 3, i.e. 
transaction.state.log.replication.factor.
The docs mention that at least 3 brokers are required for exactly once. What 
is the minimum replication factor required for exactly-once processing?

processing.guarantee
The processing guarantee that should be used. Possible values are 
"at_least_once" (default) and "exactly_once". Note that if exactly-once 
processing is enabled, the default for parameter commit.interval.ms changes to 
100ms. Additionally, consumers are configured with 
isolation.level="read_committed" and producers are configured with 
retries=Integer.MAX_VALUE and enable.idempotence=true per default. Note that 
"exactly_once" processing requires a cluster of at least three brokers by 
default, which is the recommended setting for production. For development, you 
can change this by adjusting the broker settings in both 
transaction.state.log.replication.factor and transaction.state.log.min.isr to 
the number of brokers you want to use. To learn more, see Processing 
Guarantees.

Thanks
Regards,
Rajesh


Build failed in Jenkins: kafka-trunk-jdk8 #4098

2019-12-11 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Add comment to remind ordering restrictions on

[github] KAFKA-9288: Do not allow the same object to be inserted multiple times


--
[...truncated 5.59 MB...]
org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.s

[jira] [Created] (KAFKA-9291) remove the code of KAFKA-9288 which brings a fatal bug

2019-12-11 Thread dengziming (Jira)
dengziming created KAFKA-9291:
-

 Summary: remove the code of KAFKA-9288 which brings a fatal bug
 Key: KAFKA-9291
 URL: https://issues.apache.org/jira/browse/KAFKA-9291
 Project: Kafka
  Issue Type: Improvement
Reporter: dengziming
Assignee: dengziming
 Attachments: image-2019-12-11-19-23-04-499.png, 
image-2019-12-11-19-28-29-559.png, image-2019-12-11-19-29-28-156.png, 
image-2019-12-11-19-29-53-353.png

KAFKA-9288 added some code which I don't yet fully understand, but it 
introduces a serious bug. I debugged the process:

1. when the consumer client starts, an `ApiVersionsRequest` is sent

2. KafkaApis.handleApiVersionsRequest(request) is invoked

3. which calls ApiVersionsResponse.createApiVersionsResponse()

4. which adds all `ApiVersionsResponseKey` entries to the 
`ApiVersionsResponseKeyCollection`

5. every call to add an element returns false! (I didn't find the reason; see 
the sketch after these steps)

!image-2019-12-11-19-23-04-499.png!

 

6. after the for loop, the `ApiVersionsResponseKeyCollection` is EMPTY!

!image-2019-12-11-19-28-29-559.png!

 

7. when the client receives the response, an ERROR occurs.

!image-2019-12-11-19-29-28-156.png!

 

8. and my application was terminated

!image-2019-12-11-19-29-53-353.png!
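
For reference, a minimal self-contained illustration of the symptom in steps 4-6 (hypothetical code, not the actual Kafka classes): if `add()` rejects every element and the caller ignores the boolean return value, the loop finishes without error but the collection stays empty.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for ApiVersionsResponseKeyCollection: an add() that
// rejects every element, e.g. a duplicate-insertion guard that misfires.
class RejectingCollection<E> extends ArrayList<E> {
    @Override
    public boolean add(E element) {
        return false; // element rejected, nothing is stored (step 5)
    }
}

public class EmptyCollectionDemo {
    public static void main(String[] args) {
        List<String> keys = new RejectingCollection<>();
        for (String apiKey : new String[] {"Produce", "Fetch", "Metadata"}) {
            keys.add(apiKey); // boolean return value silently ignored
        }
        // after the loop the collection is EMPTY (step 6)
        System.out.println(keys.isEmpty()); // prints true
    }
}
{code}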



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Minimum replication for Exactly Once semantics

2019-12-11 Thread Colin McCabe
Hi Rajesh,

Three brokers are recommended for EOS because of the default replication 
factor of some of the internal topics used to implement it.  If you tweak 
those configurations, you could run with fewer (although that's not 
necessarily a good idea).
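
As a concrete sketch of those tweaks for a single-broker development cluster (the app id and bootstrap address below are placeholders; these settings are for development only, not production):
{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosDevConfig {
    public static Properties streamsProps() {
        // Streams side: turn on exactly-once processing.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-eos-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
    // Broker side (server.properties), so the transaction log can be
    // created with a single replica:
    //   transaction.state.log.replication.factor=1
    //   transaction.state.log.min.isr=1
}
{code}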

best,
Colin


On Wed, Dec 11, 2019, at 02:26, Rajesh Kalyanasundaram wrote:
> Hi all,
> 
> For a Kafka Streams application with exactly-once processing semantics, 
> the default replication factor of the broker's internal transaction topic 
> is 3, i.e. transaction.state.log.replication.factor.
> The docs mention that at least 3 brokers are required for exactly 
> once. What is the minimum replication factor required for exactly-once 
> processing?
> 
> processing.guarantee
> The processing guarantee that should be used. Possible values are 
> "at_least_once" (default) and "exactly_once". Note that if exactly-once 
> processing is enabled, the default for parameter commit.interval.ms 
> changes to 100ms. Additionally, consumers are configured with 
> isolation.level="read_committed" and producers are configured with 
> retries=Integer.MAX_VALUE and enable.idempotence=true per default. Note 
> that "exactly_once" processing requires a cluster of at least three 
> brokers by default, which is the recommended setting for production. 
> For development, you can change this by adjusting the broker settings 
> in both transaction.state.log.replication.factor and 
> transaction.state.log.min.isr to the number of brokers you want to use. 
> To learn more, see Processing 
> Guarantees.
> 
> Thanks
> Regards,
> Rajesh
>


Re: [DISCUSS] KIP-489 Kafka Consumer Record Latency Metric

2019-12-11 Thread Sean Glover
Hello again,

There has been some interest in this KIP recently.  I'm bumping the thread
to encourage feedback on the design.

Regards,
Sean

On Mon, Jul 15, 2019 at 9:01 AM Sean Glover 
wrote:

> To hopefully spark some discussion I've copied the motivation section from
> the KIP:
>
> Consumer lag is a useful metric to monitor how many records are queued to
> be processed.  We can look at individual lag per partition or we may
> aggregate metrics. For example, we may want to monitor what the maximum lag
> of any particular partition in our consumer subscription is, so we can identify
> hot partitions caused by an insufficient producer partitioning strategy.
> We may want to monitor a sum of lag across all partitions so we have a
> sense as to our total backlog of messages to consume. Lag in offsets is
> useful when you have a good understanding of your messages and processing
> characteristics, but it doesn’t tell us how far behind *in time* we are.
> This is known as wait time in queueing theory, or more informally it’s
> referred to as latency.
>
> The latency of a message can be defined as the difference between when
> that message was first produced and when the message is received by a
> consumer.  The latency of records in a partition correlates with lag, but a
> larger lag doesn’t necessarily mean a larger latency. For example, a topic
> consumed by two separate application consumer groups A and B may have
> similar lag, but different latency per partition.  Application A is a
> consumer which performs CPU intensive business logic on each message it
> receives. It’s distributed across many consumer group members to handle the
> load quickly enough, but since its processing time is slower, it takes
> longer to process each message per partition.  Meanwhile, Application B is
> a consumer which performs a simple ETL operation to land streaming data in
> another system, such as HDFS. It may have similar lag to Application A, but
> because it has a faster processing time its latency per partition is
> significantly less.
>
> If the Kafka Consumer reported a latency metric it would be easier to
> build Service Level Agreements (SLAs) based on non-functional requirements
> of the streaming system.  For example, the system must never have a latency
> of greater than 10 minutes. This SLA could be used in monitoring alerts or
> as input to automatic scaling solutions.
>
> On Thu, Jul 11, 2019 at 12:36 PM Sean Glover 
> wrote:
>
>> Hi kafka-dev,
>>
>> I've created KIP-489 as a proposal for adding latency metrics to the
>> Kafka Consumer in a similar way as record-lag metrics are implemented.
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/489%3A+Kafka+Consumer+Record+Latency+Metric
>>
>> Regards,
>> Sean
>>
>> --
>> Principal Engineer, Lightbend, Inc.
>>
>> 
>>
>> @seg1o , in/seanaglover
>> 
>>
>
>
> --
> Principal Engineer, Lightbend, Inc.
>
> 
>
> @seg1o , in/seanaglover
> 
>
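
To make the latency definition above concrete, here is a rough sketch (my own illustration, not part of the KIP's design) of per-record latency as a consumer could compute it, assuming record timestamps are producer-assigned CreateTime:
{code:java}
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class RecordLatency {
    // Latency = now - record timestamp. Clamped at zero to tolerate small
    // clock skew between producer and consumer hosts.
    public static Duration latencyOf(ConsumerRecord<?, ?> record) {
        long nowMs = System.currentTimeMillis();
        return Duration.ofMillis(Math.max(0L, nowMs - record.timestamp()));
    }
}
{code}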


[jira] [Created] (KAFKA-9292) KIP-551: Expose disk read and write metrics

2019-12-11 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-9292:
---

 Summary: KIP-551: Expose disk read and write metrics
 Key: KAFKA-9292
 URL: https://issues.apache.org/jira/browse/KAFKA-9292
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe


It's often helpful to know how many bytes Kafka is reading from and writing to 
the disk, because when disk access is required, there may be some impact on 
latency and bandwidth.  We currently don't have a metric that measures this 
directly.  It would be useful to add one.

See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-551%3A+Expose+disk+read+and+write+metrics
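
For context, one possible way to source such numbers on Linux is the per-process I/O counters in procfs; a minimal sketch (an illustration only, not necessarily the KIP's mechanism):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcessDiskIo {
    // /proc/self/io exposes cumulative "read_bytes"/"write_bytes" for this
    // process -- bytes that actually hit the storage layer, as opposed to
    // rchar/wchar, which count all read/write syscalls.
    public static long ioField(String field) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/io"))) {
            if (line.startsWith(field + ":")) {
                return Long.parseLong(line.substring(field.length() + 1).trim());
            }
        }
        throw new IOException("field not found in /proc/self/io: " + field);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("disk read bytes:  " + ioField("read_bytes"));
        System.out.println("disk write bytes: " + ioField("write_bytes"));
    }
}
{code}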




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9138) Add system test covering Foreign Key joins (KIP-213)

2019-12-11 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-9138.
-
Resolution: Fixed

Added the system test in 
https://github.com/apache/kafka/commit/717ce42a6d68d3ac8d9478451a31423b5d43234e 
via https://github.com/apache/kafka/pull/7664

> Add system test covering Foreign Key joins (KIP-213)
> 
>
> Key: KAFKA-9138
> URL: https://issues.apache.org/jira/browse/KAFKA-9138
> Project: Kafka
>  Issue Type: Test
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> There are unit and integration tests, but we should really have a system test 
> as well.
> I plan to create a new test, since this feature is pretty different from the 
> existing topology/data set of the smoke test. Although, it might be possible for 
> the new test to subsume the smoke test. I'd give the new test a few releases to 
> burn in before considering a merge, though.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9293) NPE in DumpLogSegments with --offsets-decoder

2019-12-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9293:
--

 Summary: NPE in DumpLogSegments with --offsets-decoder
 Key: KAFKA-9293
 URL: https://issues.apache.org/jira/browse/KAFKA-9293
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
at org.apache.kafka.common.utils.Utils.toArray(Utils.java:230)
at 
kafka.tools.DumpLogSegments$OffsetsMessageParser.$anonfun$parseGroupMetadata$2(DumpLogSegments.scala:287)
at 
kafka.tools.DumpLogSegments$OffsetsMessageParser.parseGroupMetadata(DumpLogSegments.scala:284)
at 
kafka.tools.DumpLogSegments$OffsetsMessageParser.parse(DumpLogSegments.scala:317)
at 
kafka.tools.DumpLogSegments$.$anonfun$dumpLog$2(DumpLogSegments.scala:372)
at 
kafka.tools.DumpLogSegments$.$anonfun$dumpLog$2$adapted(DumpLogSegments.scala:343)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at 
kafka.tools.DumpLogSegments$.$anonfun$dumpLog$1(DumpLogSegments.scala:343)
at 
kafka.tools.DumpLogSegments$.$anonfun$dumpLog$1$adapted(DumpLogSegments.scala:340)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at kafka.tools.DumpLogSegments$.dumpLog(DumpLogSegments.scala:340)
at 
kafka.tools.DumpLogSegments$.$anonfun$main$1(DumpLogSegments.scala:60)
at kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:51)
at kafka.tools.DumpLogSegments.main(DumpLogSegments.scala)
{code}

The problem is that "userData" is nullable, but the dump log tool doesn't check 
for null.
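
A minimal sketch of the kind of guard needed (illustrative Java; the actual fix belongs in the Scala tool):
{code:java}
import java.nio.ByteBuffer;
import java.util.Optional;

public class NullableUserData {
    // Stand-in for org.apache.kafka.common.utils.Utils.toArray(ByteBuffer).
    static byte[] toArray(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.duplicate().get(bytes);
        return bytes;
    }

    // Guard the nullable userData before converting, instead of calling
    // toArray unconditionally (which throws an NPE on a null buffer).
    static Optional<byte[]> safeUserData(ByteBuffer userData) {
        return Optional.ofNullable(userData).map(NullableUserData::toArray);
    }
}
{code}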



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2019-12-11 Thread Brian Byrne
Hello all,

I'm reviving the discussion for adding a quotas API to the admin client by
submitting a new proposal. There are some notable changes from previous
attempts, namely a way to deduce the effective quota for a client (entity),
a way to query for configured quotas, and the concept of "units" on quotas,
among other minor updates.
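
For readers skimming the thread, a purely hypothetical sketch of the kind of calls being proposed (the names below are my own placeholders, not the KIP's API; see the KIP page for the actual proposal):
{code:java}
import java.util.Map;

// Hypothetical shape only -- illustrates "configured" vs. "effective" quotas.
interface ClientQuotasApi {
    // Quotas explicitly configured for an entity (e.g. user or client-id).
    Map<String, Double> describeConfiguredQuotas(String entityType, String entityName);

    // The quota actually in effect after resolving defaults and overrides.
    Map<String, Double> resolveEffectiveQuota(String entityType, String entityName);
}
{code}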

Please take a look, and I'll be happy to make clarifications and
modifications based on your feedback.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-546%3A+Add+quota-specific+APIs+to+the+Admin+Client%2C+redux

Thanks,
Brian


Build failed in Jenkins: kafka-trunk-jdk8 #4099

2019-12-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9138: Add system test for relational joins (#7664)


--
[...truncated 5.58 MB...]

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PA

Jenkins build is back to normal : kafka-trunk-jdk11 #1017

2019-12-11 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9291) error starting consumer

2019-12-11 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-9291.
---
Resolution: Abandoned

I made a mistake while debugging.

> error starting consumer
> ---
>
> Key: KAFKA-9291
> URL: https://issues.apache.org/jira/browse/KAFKA-9291
> Project: Kafka
>  Issue Type: Bug
>Reporter: dengziming
>Assignee: dengziming
>Priority: Blocker
> Attachments: image-2019-12-11-19-23-04-499.png, 
> image-2019-12-11-19-28-29-559.png, image-2019-12-11-19-29-28-156.png, 
> image-2019-12-11-19-29-53-353.png
>
>
> KAFKA-9288 added some code which I don't yet fully understand, but it 
> introduces a serious bug. I debugged the process:
> 1. when the consumer client starts, an `ApiVersionsRequest` is sent
> 2. KafkaApis.handleApiVersionsRequest(request) is invoked
> 3. which calls ApiVersionsResponse.createApiVersionsResponse()
> 4. which adds all `ApiVersionsResponseKey` entries to the 
> `ApiVersionsResponseKeyCollection`
> 5. every call to add an element returns false! (*this is where the bug is*, 
> but I didn't find the reason)
> !image-2019-12-11-19-23-04-499.png!
>  
> 6. after the for loop, the `ApiVersionsResponseKeyCollection` is EMPTY!
> !image-2019-12-11-19-28-29-559.png!
>  
> 7. when the client receives the response, an ERROR occurs.
> !image-2019-12-11-19-29-28-156.png!
>  
> 8. and my application was terminated
> !image-2019-12-11-19-29-53-353.png!
>  
> So we can conclude that the cause is the new code of KAFKA-9288 in 
> ApiVersionsResponseKeyCollection



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


OAuth Library - Open Source Contribution to Kafka

2019-12-11 Thread Sam Russak
Hello Apache Kafka Dev Team,

My team at the company I work for recently implemented an OAuth library for
access control on Kafka brokers, producers, consumers, streams applications,
and topics. We are looking to open-source this software with Apache Kafka.
What would be the process going forward?

Thank you,
Sam Russak


Re: OAuth Library - Open Source Contribution to Kafka

2019-12-11 Thread Christopher X Bogan

Christopher X Bogan sent a reply via Gmail confidential mode on Dec 11, 2019 at 
11:36:55 PM PST; the message body is only accessible through the Gmail link for 
dev@kafka.apache.org and is not preserved in this archive.