Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #302

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] throw corresponding invalid producer epoch (#9700)


--
[...truncated 3.49 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2e42c2a6, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7c39efcb, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7c39efcb, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4446e88d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4446e88d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d7a27c9, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d7a27c9, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15959732, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15959732, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@266165fc, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@266165fc, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5510f33d, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5510f33d, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2928c239, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2928c239, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@10bdfc29, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@10bdfc29, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3d5e3547, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3d5e3547, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6f85b0ca, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.

Re: [DISCUSS] KIP-690: Add additional configuration to control MirrorMaker 2 internal topics naming convention

2020-12-10 Thread Omnia Ibrahim
Thanks, Ryanne, for the feedback. Could you please vote on the voting thread:
https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html Thanks!

Omnia

On Fri, Dec 4, 2020 at 4:53 PM Ryanne Dolan  wrote:

> Thanks Omnia, this looks great. I like the approach of introducing another
> policy class.
>
> Ryanne
>
> On Tue, Dec 1, 2020, 9:36 AM Omnia Ibrahim 
> wrote:
>
> > Hi everyone
> > I want to start the discussion of KIP-690; the proposal is here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention
> >
> > Thanks for your time and feedback.
> >
> > Omnia
> >
>
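The policy-class approach endorsed above could look roughly like the sketch below. This is purely illustrative (the interface and method names are hypothetical, not taken from the KIP); the default behavior mirrors MM2's current internal-topic naming conventions:

```java
// Hypothetical sketch of a pluggable naming policy for MirrorMaker 2's
// internal topics, in the spirit of KIP-690. Interface and method names
// are illustrative only, not the KIP's actual API.
public class Main {
    interface InternalTopicNamingPolicy {
        String heartbeatsTopic();
        String checkpointsTopic(String clusterAlias);
        String offsetSyncsTopic(String clusterAlias);
        boolean isInternalTopic(String topic);
    }

    // Defaults mirroring MM2's current conventions, e.g.
    // "primary.checkpoints.internal" and "mm2-offset-syncs.backup.internal".
    static class DefaultPolicy implements InternalTopicNamingPolicy {
        private static final String INTERNAL_SUFFIX = ".internal";

        public String heartbeatsTopic() {
            return "heartbeats";
        }

        public String checkpointsTopic(String clusterAlias) {
            return clusterAlias + ".checkpoints" + INTERNAL_SUFFIX;
        }

        public String offsetSyncsTopic(String clusterAlias) {
            return "mm2-offset-syncs." + clusterAlias + INTERNAL_SUFFIX;
        }

        public boolean isInternalTopic(String topic) {
            return topic.endsWith(INTERNAL_SUFFIX);
        }
    }

    public static void main(String[] args) {
        InternalTopicNamingPolicy policy = new DefaultPolicy();
        System.out.println(policy.checkpointsTopic("primary")); // primary.checkpoints.internal
        System.out.println(policy.isInternalTopic("orders"));   // false
    }
}
```

A deployment could then swap in a custom implementation via a single config line per herder (the thread later suggests a setting along the lines of internal.topics.policy.class), instead of one naming config per topic per cluster pair.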


[jira] [Resolved] (KAFKA-10747) Implement ClientQuota APIs for altering and describing IP entity quotas

2020-12-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-10747.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Implement ClientQuota APIs for altering and describing IP entity quotas 
> 
>
> Key: KAFKA-10747
> URL: https://issues.apache.org/jira/browse/KAFKA-10747
> Project: Kafka
>  Issue Type: Sub-task
>  Components: config, core
>Affects Versions: 2.8.0
>Reporter: David Mao
>Assignee: David Mao
>Priority: Major
> Fix For: 2.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-696: Update Streams FSM to clarify ERROR state meaning

2020-12-10 Thread Bruno Cadonna

Thanks for the KIP, Walker!

The KIP looks good to me. I have just a minor comment about the KIP 
document.


You talk about SHUTDOWN_CLIENT in the KIP, but never explain that it is 
a possible action that can be taken in the Streams uncaught exception 
handler. Could you please clarify that?


Best,
Bruno

On 09.12.20 19:04, Walker Carlson wrote:

Thanks for the comments. If there are no further concerns I would like to
call for a vote on KIP-696 to clarify and clean up the Streams State
Machine.

walker

On Wed, Dec 9, 2020 at 8:50 AM John Roesler  wrote:


Thanks, Walker!

Your proposal looks good to me.

-John

On Tue, 2020-12-08 at 18:29 -0800, Walker Carlson wrote:

Thanks for the feedback Guozhang!

I clarified some of the points in the Proposed Changes section so hopefully it will be more clear what is going on now. I also agree with your suggestion about the possible call to close() on ERROR so I added this line.
"Close() called on ERROR will be idempotent and not throw an exception, but we will log a warning."

I have linked those tickets and I will leave a comment trying to explain how these changes will affect their issue.

walker

On Tue, Dec 8, 2020 at 4:57 PM Guozhang Wang  wrote:

Hello Walker,

Thanks for the KIP! Overall it looks reasonable to me. Just a few minor comments for the wiki page itself:

1) Could you clarify the conditions when RUNNING / REBALANCING -> PENDING_ERROR will happen; and when PENDING_ERROR -> ERROR will happen. E.g. when I read "Streams will only reach ERROR state in the event of an exceptional failure in which the `StreamsUncaughtExceptionHandler` chose to either shutdown the application or the client." I thought the first transition would happen before the handler, and the second transition would happen immediately after the handler returns "shutdown client" or "shutdown application", until I read the last statement regarding "SHUTDOWN_CLIENT".

2) A compatibility issue: today it is possible that users would call Streams APIs like shutdown in the global state transition listener. And it's common to try shutting down the application automatically when transiting to ERROR (assuming it was not a terminating state). I think we could consider making this call a no-op and log a warning.

3) Could you link the following JIRAs in the "JIRA" field?

https://issues.apache.org/jira/browse/KAFKA-10555
https://issues.apache.org/jira/browse/KAFKA-9638
https://issues.apache.org/jira/browse/KAFKA-6520

And maybe we can also leave a comment on those tickets explaining what would happen to tackle the issues after this KIP.

Guozhang

On Tue, Dec 8, 2020 at 12:16 PM Walker Carlson  wrote:

Hello all,

I'd like to propose KIP-696 to clarify the meaning of ERROR state in the KafkaStreams Client State Machine. This will update the States to be consistent with changes in KIP-671 and KIP-663.

Here are the details: https://cwiki.apache.org/confluence/x/lCvZCQ

Thanks,
Walker




--
-- Guozhang
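The transitions discussed in this thread can be modeled with a small self-contained sketch. This is hypothetical illustration only, not the actual KafkaStreams state machine (which has more states); the state names and the idempotent close()-on-ERROR behavior follow the KIP-696 discussion above:

```java
import java.util.EnumSet;
import java.util.Map;

// Minimal model of the KIP-696 transitions discussed above: RUNNING or
// REBALANCING moves to PENDING_ERROR when the uncaught exception handler
// chooses to shut down, and PENDING_ERROR moves to the terminal ERROR
// state once shutdown completes. Illustrative sketch only.
public class Main {
    enum State { RUNNING, REBALANCING, PENDING_ERROR, ERROR }

    static final Map<State, EnumSet<State>> VALID = Map.of(
        State.RUNNING, EnumSet.of(State.REBALANCING, State.PENDING_ERROR),
        State.REBALANCING, EnumSet.of(State.RUNNING, State.PENDING_ERROR),
        State.PENDING_ERROR, EnumSet.of(State.ERROR),
        State.ERROR, EnumSet.noneOf(State.class)  // terminal: no exits
    );

    static State state = State.RUNNING;

    static void transitionTo(State target) {
        if (!VALID.get(state).contains(target)) {
            throw new IllegalStateException(state + " -> " + target);
        }
        state = target;
    }

    // Per the thread: close() called on ERROR is idempotent, does not
    // throw, and only logs a warning.
    static void close() {
        if (state == State.ERROR) {
            System.out.println("WARN: close() called in ERROR state; ignoring");
            return;
        }
        // real shutdown logic elided in this sketch
    }

    public static void main(String[] args) {
        transitionTo(State.REBALANCING);
        transitionTo(State.PENDING_ERROR); // handler chose to shut down
        transitionTo(State.ERROR);
        close(); // no-op, just warns
        System.out.println("final state: " + state); // final state: ERROR
    }
}
```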









Re: [VOTE] KIP-696: Update Streams FSM to clarify ERROR state meaning

2020-12-10 Thread Bruno Cadonna

Thanks, Walker!

+1 (non-binding)

Best,
Bruno

On 09.12.20 20:07, Leah Thomas wrote:

Looks good, thanks Walker! +1 (non-binding)

Leah

On Wed, Dec 9, 2020 at 1:04 PM John Roesler  wrote:


Thanks, Walker!

I'm also +1 (binding)

-John

On Wed, 2020-12-09 at 11:03 -0800, Guozhang Wang wrote:

+1. Thanks Walker.

On Wed, Dec 9, 2020 at 10:58 AM Walker Carlson 
wrote:


Sorry I forgot to change the subject line to vote.

Thanks for the comments. If there are no further concerns I would like

to

call for a vote on KIP-696 to clarify and clean up the Streams State
Machine.

On Wed, Dec 9, 2020 at 10:04 AM Walker Carlson 
wrote:


Thanks for the comments. If there are no further concerns I would

like to

call for a vote on KIP-696 to clarify and clean up the Streams State
Machine.

walker

On Wed, Dec 9, 2020 at 8:50 AM John Roesler 

wrote:



Thanks, Walker!

Your proposal looks good to me.

-John

On Tue, 2020-12-08 at 18:29 -0800, Walker Carlson wrote:

Thanks for the feedback Guozhang!

I clarified some of the points in the Proposed Changes section so hopefully it will be more clear what is going on now. I also agree with your suggestion about the possible call to close() on ERROR so I added this line.
"Close() called on ERROR will be idempotent and not throw an exception, but we will log a warning."

I have linked those tickets and I will leave a comment trying to explain how these changes will affect their issue.

walker

On Tue, Dec 8, 2020 at 4:57 PM Guozhang Wang  wrote:

Hello Walker,

Thanks for the KIP! Overall it looks reasonable to me. Just a few minor comments for the wiki page itself:

1) Could you clarify the conditions when RUNNING / REBALANCING -> PENDING_ERROR will happen; and when PENDING_ERROR -> ERROR will happen. E.g. when I read "Streams will only reach ERROR state in the event of an exceptional failure in which the `StreamsUncaughtExceptionHandler` chose to either shutdown the application or the client." I thought the first transition would happen before the handler, and the second transition would happen immediately after the handler returns "shutdown client" or "shutdown application", until I read the last statement regarding "SHUTDOWN_CLIENT".

2) A compatibility issue: today it is possible that users would call Streams APIs like shutdown in the global state transition listener. And it's common to try shutting down the application automatically when transiting to ERROR (assuming it was not a terminating state). I think we could consider making this call a no-op and log a warning.

3) Could you link the following JIRAs in the "JIRA" field?

https://issues.apache.org/jira/browse/KAFKA-10555
https://issues.apache.org/jira/browse/KAFKA-9638
https://issues.apache.org/jira/browse/KAFKA-6520

And maybe we can also leave a comment on those tickets explaining what would happen to tackle the issues after this KIP.

Guozhang

On Tue, Dec 8, 2020 at 12:16 PM Walker Carlson <wcarl...@confluent.io> wrote:

Hello all,

I'd like to propose KIP-696 to clarify the meaning of ERROR state in the KafkaStreams Client State Machine. This will update the States to be consistent with changes in KIP-671 and KIP-663.

Here are the details: https://cwiki.apache.org/confluence/x/lCvZCQ

Thanks,
Walker




--
-- Guozhang


















Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #277

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] throw corresponding invalid producer epoch (#9700)


--
[...truncated 3.46 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] S

[jira] [Resolved] (KAFKA-10772) java.lang.IllegalStateException: There are insufficient bytes available to read assignment from the sync-group response (actual byte size 0)

2020-12-10 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-10772.
---
Resolution: Duplicate

> java.lang.IllegalStateException: There are insufficient bytes available to 
> read assignment from the sync-group response (actual byte size 0)
> 
>
> Key: KAFKA-10772
> URL: https://issues.apache.org/jira/browse/KAFKA-10772
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Levani Kokhreidze
>Assignee: Bruno Cadonna
>Priority: Blocker
> Attachments: KAFKA-10772.log
>
>
> From time to time we encounter the following exception that results in Kafka 
> Streams threads dying.
> Broker version 2.4.1, Client version 2.6.0
> {code:java}
> Nov 27 00:59:53.681 streaming-app service: prod | streaming-app-2 | 
> stream-client [cluster1-profile-stats-pipeline-client-id] State transition 
> from REBALANCING to ERROR Nov 27 00:59:53.681 streaming-app service: prod | 
> streaming-app-2 | stream-client [cluster1-profile-stats-pipeline-client-id] 
> State transition from REBALANCING to ERROR Nov 27 00:59:53.682 streaming-app 
> service: prod | streaming-app-2 | 2020-11-27 00:59:53.681 ERROR 105 --- 
> [-StreamThread-1] .KafkaStreamsBasedStreamProcessingEngine : Stream 
> processing pipeline: [profile-stats] encountered unrecoverable exception. 
> Thread: [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is 
> completely dead. If all worker threads die, Kafka Streams will be moved to 
> permanent ERROR state. Nov 27 00:59:53.682 streaming-app service: prod | 
> streaming-app-2 | Stream processing pipeline: [profile-stats] encountered 
> unrecoverable exception. Thread: 
> [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is completely 
> dead. If all worker threads die, Kafka Streams will be moved to permanent 
> ERROR state. java.lang.IllegalStateException: There are insufficient bytes 
> available to read assignment from the sync-group response (actual byte size 
> 0) , this is not expected; it is possible that the leader's assign function 
> is buggy and did not return any assignment for this member, or because static 
> member is configured and the protocol is buggy hence did not get the 
> assignment for this member at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:367)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:440)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:513)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1268)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230) 
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:766)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:624)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2020-12-10 Thread Bruno Cadonna

Hi,

I'd like to start the discussion on KIP-698 that proposes an explicit 
user initialization of broker-side state for Kafka Streams instead of 
letting Kafka Streams setting up the broker-side state automatically 
during rebalance. Such an explicit initialization avoids possible data 
loss issues due to automatic initialization.


https://cwiki.apache.org/confluence/x/7CnZCQ

Best,
Bruno


Re: [VOTE] KIP-661: Expose task configurations in Connect REST API

2020-12-10 Thread Mickael Maison
I'm +1 (binding) too

The vote passes. Here is the summary:
+3 binding votes: Gwen, Bill, Mickael
+3 non-binding votes: Brandon, Ning, Tom

I'll get a PR ready, thanks!

On Thu, Dec 3, 2020 at 10:26 PM Bill Bejeck  wrote:
>
> Hi Mickael,
>
> This KIP looks like it will be useful.
>
> +1 (binding)
>
> Thanks,
> Bill
>
> On Thu, Nov 19, 2020 at 7:04 AM Mickael Maison 
> wrote:
>
> > Bumping this thread
> >
> > So far we have 1 binding and 3 non binding votes.
> > Let me know if you have any feedback.
> >
> > Thanks
> >
> > On Mon, Nov 2, 2020 at 12:16 PM Tom Bentley  wrote:
> > >
> > > +1 non-binding
> > >
> > > Thanks Mickael.
> > >
> > > Tom
> > >
> > > On Fri, Oct 16, 2020 at 8:54 PM Gwen Shapira  wrote:
> > >
> > > > I definitely needed this capability a few times before. Thank you.
> > > >
> > > > +1 (binding)
> > > >
> > > > On Thu, Sep 24, 2020 at 7:54 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to start a vote on KIP-661:
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-661%3A+Expose+task+configurations+in+Connect+REST+API
> > > > >
> > > > > Thanks
> > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > > >
> >


[jira] [Created] (KAFKA-10833) KIP-661: Expose task configurations in Connect REST API

2020-12-10 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-10833:
--

 Summary: KIP-661: Expose task configurations in Connect REST API
 Key: KAFKA-10833
 URL: https://issues.apache.org/jira/browse/KAFKA-10833
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #323

2020-12-10 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10748) Add IP connection rate throttling metric

2020-12-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-10748.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Add IP connection rate throttling metric
> 
>
> Key: KAFKA-10748
> URL: https://issues.apache.org/jira/browse/KAFKA-10748
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, network
>Affects Versions: 2.8.0
>Reporter: David Mao
>Assignee: David Mao
>Priority: Minor
> Fix For: 2.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #278

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10747: Extend DescribeClientQuotas and AlterClientQuotas APIs to 
support IP connection rate quota (KIP-612) (#9628)


--
[...truncated 3.46 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.Topolo

Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2020-12-10 Thread Mickael Maison
Hi Omnia,

Thank you for the reply, it makes sense.

A couple more comments:

1) I'm assuming the new interface and default implementation will be
in the mirror-client project, as the names of some of these topics are
needed by RemoteClusterUtils on the client side?

2) I'm about to open a KIP to specify where the offset-syncs topic is
created by MM2. In restricted environments, we'd prefer MM2 to only
have read access to the source cluster and have the offset-syncs on
the target cluster. I think allowing to specify the cluster where to
create that topic would be a natural extension of the interface you
propose here.

So I wonder if your interface could be named InternalTopicsPolicy?
That's a bit more generic than InternalTopicNamingPolicy. That would
also match the configuration setting, internal.topics.policy.class,
you're proposing.

Thanks

On Thu, Dec 3, 2020 at 10:15 PM Omnia Ibrahim  wrote:
>
> Hi Mickael,
> Thanks for your feedback!
> Regarding your question about having more configurations, I considered
> adding a configuration per topic. However, this meant adding more
> configurations to MM2, which already has so many; also, the more
> complicated and advanced the replication pattern between your clusters,
> the more configuration lines are added to your MM2 config, which isn't
> going to be pretty if you don't have the same topic names across your
> clusters.
>
> It also added more complexity to the implementation, as MM2 needs to:
> 1. Identify whether a topic is a checkpoints topic, so we can list the
> checkpoints topics in the MirrorMaker 2 utils; one cluster could have X
> checkpoints topics if it's connected to X clusters. This is done right
> now by listing any topic with the suffix `.checkpoints.internal`. It
> could instead be done by adding a `checkpoints.topic.suffix` config, but
> that assumes checkpoints topics will always have a suffix, and having a
> suffix means we may also need a separator to concatenate it with a
> prefix identifying the source cluster name.
> 2. Identify whether a topic is internal, so it isn't replicated and
> checkpoints aren't tracked for it. Right now this relies on disallowing
> topics with the `.internal` suffix from being replicated or tracked in
> checkpoints, but once topic names are configurable we need another way
> to define what an internal topic is. This could be done with a list of
> all internal topics entered in the configuration.
>
> So having an interface seemed easier and also gives users more
> flexibility to define their own topic names, define what an internal
> topic means, and specify how to find checkpoints topics, and it's one
> config line per herder. It's also more consistent with the MM2 code, as
> the MM2 config has TopicFilter, ReplicationPolicy, GroupFilter, etc. as
> interfaces, which can be overridden by providing a custom implementation
> or by configs that change their default implementations.
>
> Hope this answers your question. I also updated the KIP to add this to
> the rejected solutions.
>
>
> On Thu, Dec 3, 2020 at 3:19 PM Mickael Maison 
> wrote:
>
> > Hi Omnia,
> >
> > Thanks for the KIP. Indeed being able to configure MM2's internal
> > topic names would be a nice improvement.
> >
> > Looking at the KIP, I was surprised you propose an interface to allow
> > users to specify names. Have you considered making names changeable
> > via configurations? If so, we should definitely mention it in the
> > rejected alternatives as it's the first method that comes to mind.
> >
> > I understand an interface gives a lot of flexibility but I'd expect
> > topic names to be relatively simple and known in advance in most
> > cases.
> >
> > I've not checked all use cases but something like below felt appropriate:
> > clusters = primary,backup
> > primary->backup.offsets-sync.topic=backup.mytopic-offsets-sync
> >
> > On Tue, Dec 1, 2020 at 3:36 PM Omnia Ibrahim 
> > wrote:
> > >
> > > Hey everyone,
> > > Please take a look at KIP-690:
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention
> > >
> > > Thanks for your feedback and support.
> > >
> > > Omnia
> > >
> >


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #303

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10747: Extend DescribeClientQuotas and AlterClientQuotas APIs to 
support IP connection rate quota (KIP-612) (#9628)


--
[...truncated 6.98 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7f748e97, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7f748e97, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16379fe4, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16379fe4, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e72a001, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e72a001, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3300e874, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3300e874, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@685bc8b5, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@685bc8b5, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4239a2e5, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4239a2e5, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1990700b, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1990700b, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d676805, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d676805, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b4f137c, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b4f137c, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@14eeb47a, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@14eeb47a, 
timestamped = false

Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2020-12-10 Thread Omnia Ibrahim
Hi Mickael,
1) That's right, the interface and the default implementation will be in
mirror-connect.
2) Renaming the interface should be fine too, especially if you're
planning to move other creation-related functionality there; I can edit
this.

If you are okay with that, please vote for the KIP here:
https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html


Thanks
Omnia
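For illustration only, the policy interface discussed in this thread might be sketched roughly as below. The method names and the default naming conventions they encode are assumptions made for the example, not the KIP's actual API:

```java
// Hypothetical sketch of an InternalTopicsPolicy-style interface; the
// method names and defaults below are illustrative assumptions, not the
// API proposed in KIP-690.
interface InternalTopicsPolicy {
    // Name of the offset-syncs topic for a given replication flow.
    String offsetSyncsTopic(String sourceCluster, String targetCluster);

    // Name of the checkpoints topic for a given source cluster.
    String checkpointsTopic(String sourceCluster);

    // Whether a topic is an MM2 internal topic (and so must not be
    // replicated or tracked in checkpoints).
    boolean isInternalTopic(String topic);
}

// A default implementation encoding today's hard-coded conventions.
class DefaultInternalTopicsPolicy implements InternalTopicsPolicy {
    @Override
    public String offsetSyncsTopic(String sourceCluster, String targetCluster) {
        return "mm2-offset-syncs." + targetCluster + ".internal";
    }

    @Override
    public String checkpointsTopic(String sourceCluster) {
        return sourceCluster + ".checkpoints.internal";
    }

    @Override
    public boolean isInternalTopic(String topic) {
        return topic.endsWith(".internal");
    }
}
```

A custom implementation could then be plugged in through a single config such as the proposed internal.topics.policy.class, mirroring how ReplicationPolicy is overridden today.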


Re: [DISCUSS] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2020-12-10 Thread John Roesler
Hi Bruno,

Thanks for the KIP!

This seems like a nice data integrity improvement, and the KIP looks good to me.

I’m wondering if we should plan to transition to manual init only in the 
future. I.e. maybe we log a warning, then later on we switch the default config 
to manual, and then ultimately drop the config completely. What do you think?

Thanks,
John

On Thu, Dec 10, 2020, at 04:36, Bruno Cadonna wrote:
> Hi,
> 
> I'd like to start the discussion on KIP-698 that proposes an explicit 
> user initialization of broker-side state for Kafka Streams instead of 
> letting Kafka Streams set up the broker-side state automatically 
> during rebalance. Such an explicit initialization avoids possible data 
> loss issues due to automatic initialization.
> 
> https://cwiki.apache.org/confluence/x/7CnZCQ
> 
> Best,
> Bruno
>
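As a rough sketch of the first phase John describes (warn on the old default before later flipping it to manual and eventually dropping the config), under the assumption of a hypothetical config key that is not part of KIP-698:

```java
import java.util.Map;

// Sketch of phase one of a possible deprecation path: keep "automatic" as
// the default but log a warning, so that a later release can flip the
// default to "manual" and ultimately drop the config entirely. The key
// name "state.init.mode" and its values are hypothetical.
class StateInitModeSketch {
    static final String STATE_INIT_MODE = "state.init.mode";

    // Resolve the effective init mode, warning when the deprecated
    // automatic default is in effect.
    static String effectiveMode(Map<String, String> props) {
        String mode = props.getOrDefault(STATE_INIT_MODE, "automatic");
        if ("automatic".equals(mode)) {
            System.err.println("WARN: automatic broker-side state "
                    + "initialization may be removed in a future release; "
                    + "consider setting " + STATE_INIT_MODE + "=manual");
        }
        return mode;
    }
}
```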


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #279

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10748: Add IP connection rate throttling metric (KIP-612) (#9685)


--
[...truncated 3.46 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[

Re: [DISCUSS] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2020-12-10 Thread Bruno Cadonna

Hi John,

Thank you for the feedback!

I am undecided, because while manual init only makes Kafka Streams safer 
regarding data loss, it makes first toy apps with Kafka Streams a little 
bit more complicated. I am a bit more inclined to manual init only, though.


Best,
Bruno


On 10.12.20 15:20, John Roesler wrote:

Hi Bruno,

Thanks for the KIP!

This seems like a nice data integrity improvement, and the KIP looks good to me.

I’m wondering if we should plan to transition to manual init only in the 
future. I.e. maybe we log a warning, then later on we switch the default config 
to manual, and then ultimately drop the config completely. What do you think?

Thanks,
John

On Thu, Dec 10, 2020, at 04:36, Bruno Cadonna wrote:

Hi,

I'd like to start the discussion on KIP-698 that proposes an explicit
user initialization of broker-side state for Kafka Streams instead of
letting Kafka Streams set up the broker-side state automatically
during rebalance. Such an explicit initialization avoids possible data
loss issues due to automatic initialization.

https://cwiki.apache.org/confluence/x/7CnZCQ

Best,
Bruno



[jira] [Resolved] (KAFKA-10813) StreamsProducer should catch InvalidProducerEpoch and throw TaskMigrated in all cases

2020-12-10 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-10813.
-
Resolution: Fixed

> StreamsProducer should catch InvalidProducerEpoch and throw TaskMigrated in 
> all cases
> -
>
> Key: KAFKA-10813
> URL: https://issues.apache.org/jira/browse/KAFKA-10813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Blocker
> Fix For: 2.7.0
>
>
> We fixed the error code handling on the producer in 
> https://issues.apache.org/jira/browse/KAFKA-10687; however, the newly thrown 
> `InvalidProducerEpoch` exception was not properly handled on the Streams side 
> in all cases. We should catch it and rethrow it as TaskMigrated to trigger the 
> exception handling, similar to ProducerFenced.
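A minimal sketch of the handling this ticket describes, using local stand-in exception classes since the real org.apache.kafka types are not assumed here:

```java
// Stand-ins for org.apache.kafka.common.errors.* and
// org.apache.kafka.streams.errors.TaskMigratedException; defined locally
// so the sketch is self-contained.
class InvalidProducerEpochException extends RuntimeException {
    InvalidProducerEpochException(String message) { super(message); }
}

class ProducerFencedException extends RuntimeException {
    ProducerFencedException(String message) { super(message); }
}

class TaskMigratedException extends RuntimeException {
    TaskMigratedException(String message, Throwable cause) { super(message, cause); }
}

class StreamsProducerSketch {
    // Sketch of the fix: treat InvalidProducerEpoch exactly like
    // ProducerFenced, rethrowing as TaskMigrated so the stream thread
    // rejoins the group instead of dying.
    static void send(Runnable producerCall, String taskId) {
        try {
            producerCall.run();
        } catch (ProducerFencedException | InvalidProducerEpochException e) {
            throw new TaskMigratedException(
                    "Producer for task " + taskId + " got fenced or its epoch "
                            + "is no longer valid; task has likely migrated", e);
        }
    }
}
```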



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] guozhangwang commented on pull request #313: MINOR: remove quickstart-*.html

2020-12-10 Thread GitBox


guozhangwang commented on pull request #313:
URL: https://github.com/apache/kafka-site/pull/313#issuecomment-742667050


   LGTM.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-10677) Complete fetches in purgatory immediately after raft leader resigns

2020-12-10 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10677.
-
Resolution: Fixed

> Complete fetches in purgatory immediately after raft leader resigns
> ---
>
> Key: KAFKA-10677
> URL: https://issues.apache.org/jira/browse/KAFKA-10677
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: dengziming
>Priority: Major
>
> The current logic does not complete fetches in purgatory immediately after 
> the leader has resigned. The idea was that there was no point in doing so 
> until the election had completed because clients would just have to retry. 
> However, the fetches in purgatory might correspond to requests from other 
> voters, so the concern is that this might delay a leader election. For 
> example, the voter might be trying to send a Vote request on the same socket 
> that is blocking on a pending Fetch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9552) Stream should handle OutOfSequence exception thrown from Producer

2020-12-10 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9552.

Resolution: Not A Problem

> Stream should handle OutOfSequence exception thrown from Producer
> -
>
> Key: KAFKA-9552
> URL: https://issues.apache.org/jira/browse/KAFKA-9552
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Boyang Chen
>Priority: Major
>
> As of today the stream thread could die from OutOfSequence error:
> {code:java}
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) [2020-02-12 
> 15:14:35,185] ERROR 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] 
> stream-thread 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] Failed 
> to commit stream task 3_2 due to the following error: 
> (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.streams.errors.StreamsException: task [3_2] Abort sending 
> since an error caught with a previous record (timestamp 1581484094825) to 
> topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-49-changelog due 
> to org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:154)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:214)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1353)
> {code}
>  Although this is a fatal exception for the producer, Streams should treat it 
> as an opportunity to reinitialize by doing a rebalance, instead of killing the 
> computation resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2020-12-10 Thread John Roesler
Thanks, Bruno,

I think my feelings are the same as yours. It seems like
either call is just a matter of some forward looking
statement in the KIP and maybe a warning log if we're
leaning toward changing the default in the future.

I'm happy with whatever you prefer.

Thanks again,
-John

On Thu, 2020-12-10 at 16:37 +0100, Bruno Cadonna wrote:
> Hi John,
> 
> Thank you for the feedback!
> 
> I am undecided, because while manual init only makes Kafka Streams safer 
> regarding data loss, it makes first toy apps with Kafka Streams a little 
> bit more complicated. I am a bit more inclined to manual init only, though.
> 
> Best,
> Bruno
> 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #304

2020-12-10 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-9552) Stream should handle OutOfSequence exception thrown from Producer

2020-12-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-9552:


> Stream should handle OutOfSequence exception thrown from Producer
> -
>
> Key: KAFKA-9552
> URL: https://issues.apache.org/jira/browse/KAFKA-9552
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Boyang Chen
>Priority: Major
>
> As of today the stream thread could die from OutOfSequence error:
> {code:java}
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) [2020-02-12 
> 15:14:35,185] ERROR 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] 
> stream-thread 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] Failed 
> to commit stream task 3_2 due to the following error: 
> (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.streams.errors.StreamsException: task [3_2] Abort sending 
> since an error caught with a previous record (timestamp 1581484094825) to 
> topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-49-changelog due 
> to org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:154)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:214)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1353)
> {code}
>  Although this is a fatal exception for the producer, Streams should treat it 
> as an opportunity to reinitialize by doing a rebalance, instead of killing the 
> computation resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9552) Stream should handle OutOfSequence exception thrown from Producer

2020-12-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9552.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Stream should handle OutOfSequence exception thrown from Producer
> -
>
> Key: KAFKA-9552
> URL: https://issues.apache.org/jira/browse/KAFKA-9552
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>
> As of today the stream thread could die from OutOfSequence error:
> {code:java}
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) [2020-02-12 
> 15:14:35,185] ERROR 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] 
> stream-thread 
> [stream-soak-test-546f8754-5991-4d62-8565-dbe98d51638e-StreamThread-1] Failed 
> to commit stream task 3_2 due to the following error: 
> (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
>  [2020-02-12T07:14:35-08:00] 
> (streams-soak-2-5-eos_soak_i-03f89b1e566ac95cc_streamslog) 
> org.apache.kafka.streams.errors.StreamsException: task [3_2] Abort sending 
> since an error caught with a previous record (timestamp 1581484094825) to 
> topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-49-changelog due 
> to org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker 
> received an out of order sequence number.
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:154)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
>  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:214)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1353)
> {code}
>  Although this is a fatal exception for the producer, Streams should treat it 
> as an opportunity to reinitialize by doing a rebalance, instead of killing the 
> computation resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-696: Update Streams FSM to clarify ERROR state meaning

2020-12-10 Thread Sophie Blee-Goldman
KIP looks good to me, thanks Walker!

+1 (binding)

-Sophie

On Thu, Dec 10, 2020 at 1:53 AM Bruno Cadonna  wrote:

> Thanks, Walker!
>
> +1 (non-binding)
>
> Best,
> Bruno
>
> On 09.12.20 20:07, Leah Thomas wrote:
> > Looks good, thanks Walker! +1 (non-binding)
> >
> > Leah
> >
> > On Wed, Dec 9, 2020 at 1:04 PM John Roesler  wrote:
> >
> >> Thanks, Walker!
> >>
> >> I'm also +1 (binding)
> >>
> >> -John
> >>
> >> On Wed, 2020-12-09 at 11:03 -0800, Guozhang Wang wrote:
> >>> +1. Thanks Walker.
> >>>
> >>> On Wed, Dec 9, 2020 at 10:58 AM Walker Carlson 
> >>> wrote:
> >>>
>  Sorry I forgot to change the subject line to vote.
> 
>  Thanks for the comments. If there are no further concerns I would like
>  to call for a vote on KIP-696 to clarify and clean up the Streams State
>  Machine.
> 
>  On Wed, Dec 9, 2020 at 10:04 AM Walker Carlson  >
>  wrote:
> 
> > Thanks for the comments. If there are no further concerns I would
> >> like to
> > call for a vote on KIP-696 to clarify and clean up the Streams State
> > Machine.
> >
> > walker
> >
> > On Wed, Dec 9, 2020 at 8:50 AM John Roesler 
> >> wrote:
> >
> >> Thanks, Walker!
> >>
> >> Your proposal looks good to me.
> >>
> >> -John
> >>
> >> On Tue, 2020-12-08 at 18:29 -0800, Walker Carlson wrote:
> >>> Thanks for the feedback Guozhang!
> >>>
> >>> I clarified some of the points in the Proposed Changes section so
> >>> hopefully it will be clearer what is going on now. I also agree with
> >>> your suggestion about the possible call to close() on ERROR, so I
> >>> added this line:
> >>> "Close() called on ERROR will be idempotent and not throw an
> >>> exception, but we will log a warning."
> >>>
> >>> I have linked those tickets and I will leave a comment trying to
> >>> explain how these changes will affect their issue.
> >>>
> >>> walker
> >>>
> >>> On Tue, Dec 8, 2020 at 4:57 PM Guozhang Wang  >>>
> >> wrote:
> >>>
>  Hello Walker,
> 
>  Thanks for the KIP! Overall it looks reasonable to me. Just a few minor
>  comments for the wiki page itself:
> 
>  1) Could you clarify the conditions when RUNNING / REBALANCING ->
>  PENDING_ERROR will happen; and when PENDING_ERROR -> ERROR will happen.
>  E.g. when I read "Streams will only reach ERROR state in the event of an
>  exceptional failure in which the `StreamsUncaughtExceptionHandler` chose
>  to either shutdown the application or the client." I thought the first
>  transition would happen before the handler, and the second transition
>  would happen immediately after the handler returns "shutdown client" or
>  "shutdown application", until I read the last statement regarding
>  "SHUTDOWN_CLIENT".
> 
>  2) A compatibility issue: today it is possible that users would call
>  Streams APIs like shutdown in the global state transition listener. And
>  it's common to try shutting down the application automatically when
>  transitioning to ERROR (assuming it was not a terminating state). I
>  think we could consider making this call a no-op and log a warning.
> 
>  3) Could you link the following JIRAs in the "JIRA" field?
> 
>  https://issues.apache.org/jira/browse/KAFKA-10555
>  https://issues.apache.org/jira/browse/KAFKA-9638
>  https://issues.apache.org/jira/browse/KAFKA-6520
> 
>  And maybe we can also leave a comment on those tickets explaining what
>  would happen to tackle the issues after this KIP.
> 
> 
>  Guozhang
> 
> 
>  On Tue, Dec 8, 2020 at 12:16 PM Walker Carlson <
>  wcarl...@confluent.io
> >>>
>  wrote:
> 
> > Hello all,
> >
> > I'd like to propose KIP-696 to clarify the meaning of ERROR
> >> state
> >> in the
> > KafkaStreams Client State Machine. This will update the
> >> States to
>  be
> > consistent with changes in KIP-671 and KIP-663.
> >
> > Here are the details:
>  https://cwiki.apache.org/confluence/x/lCvZCQ
> >
> > Thanks,
> > Walker
> >
> 
> 
>  --
>  -- Guozhang
> 
> >>
> >>
> >>
> 
> >>>
> >>>
> >>
> >>
> >>
> >
>
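
The transitions and the close()-on-ERROR semantics voted on above can be modeled with a small state machine. This is a partial sketch of the discussion, not Kafka's actual KafkaStreams.State implementation; the KIP-696 wiki page is authoritative:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Partial model of the clarified Streams FSM: failures route through
// PENDING_ERROR into the terminal ERROR state, and close() on ERROR
// is an idempotent no-op that only logs a warning.
class StreamsStateModel {

    enum State { RUNNING, REBALANCING, PENDING_ERROR, ERROR, PENDING_SHUTDOWN, NOT_RUNNING }

    private static final Map<State, Set<State>> VALID = new EnumMap<>(State.class);
    static {
        VALID.put(State.RUNNING, EnumSet.of(State.REBALANCING, State.PENDING_ERROR, State.PENDING_SHUTDOWN));
        VALID.put(State.REBALANCING, EnumSet.of(State.RUNNING, State.PENDING_ERROR, State.PENDING_SHUTDOWN));
        VALID.put(State.PENDING_ERROR, EnumSet.of(State.ERROR));
        VALID.put(State.PENDING_SHUTDOWN, EnumSet.of(State.NOT_RUNNING));
        VALID.put(State.ERROR, EnumSet.noneOf(State.class));       // terminal
        VALID.put(State.NOT_RUNNING, EnumSet.noneOf(State.class)); // terminal
    }

    private State state = State.RUNNING;

    // Returns false (instead of throwing) on an invalid transition.
    boolean setState(State next) {
        if (!VALID.get(state).contains(next)) return false;
        state = next;
        return true;
    }

    // Per the thread: close() called on ERROR does not throw, it warns.
    void close() {
        if (state == State.ERROR) {
            System.err.println("WARN: close() called in ERROR state; ignoring");
            return;
        }
        setState(State.PENDING_SHUTDOWN);
        setState(State.NOT_RUNNING);
    }

    State state() { return state; }
}
```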


[jira] [Resolved] (KAFKA-10802) Spurious log message when starting consumers

2020-12-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10802.

Fix Version/s: 2.8.0, 2.6.1
   Resolution: Fixed

> Spurious log message when starting consumers
> 
>
> Key: KAFKA-10802
> URL: https://issues.apache.org/jira/browse/KAFKA-10802
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Priority: Major
> Fix For: 2.6.1, 2.8.0
>
>
> Reported by Gary Russell in the [2.6.1 RC3 vote 
> thread|https://lists.apache.org/thread.html/r13d2c687b2fafbe9907fceb3d4f3cc6d5b34f9f36a7fcc985c38b506%40%3Cdev.kafka.apache.org%3E]
> I am seeing this on every consumer start:
> 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1] 
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
> clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
> needs to have a valid member id before actually entering a consumer group.
> Due to this change 
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> I understand what a MemberIdRequiredException is, but the previous (2.6.0) 
> log (with exception.getMessage()) didn't stand out like the new one does 
> because it was all on one line.





Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #325

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10748: Add IP connection rate throttling metric (KIP-612) (#9685)


--
[...truncated 6.99 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureOutputRecords STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureOutputRecords PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCapturePunctuator STARTED

org.apache.kafka.streams.test.MockProcessorContextAPITest > 
shouldCapturePunctuator PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
should

Re: [VOTE] 2.6.1 RC2

2020-12-10 Thread Mickael Maison
Hi,

I'm including the fix for
https://issues.apache.org/jira/browse/KAFKA-10802 in 2.6.1, so closing
this vote.
I'll start a new vote once I've built RC3

Thanks

On Thu, Dec 3, 2020 at 10:07 AM Mickael Maison  wrote:
>
> Thanks for the report Gary.
>
> Guozhang, I've opened
> https://issues.apache.org/jira/browse/KAFKA-10802. It's currently not
> marked as a blocker for 2.6.1.
>
> On Tue, Dec 1, 2020 at 6:40 PM Guozhang Wang  wrote:
> >
> > Hello Gary,
> >
> > Thanks for pointing this out; that change was for making other exceptions
> > easier to debug, but I think MemberIdRequiredException was overlooked. I
> > can provide a hotfix to separate this exception from the others in this
> > log entry, to be included in a future release.
> >
> >
> > Guozhang
> >
> > On Sat, Nov 28, 2020 at 11:54 AM Ismael Juma  wrote:
> >
> > > Thoughts Guozhang?
> > >
> > > On Wed, Nov 25, 2020, 11:16 AM Gary Russell  wrote:
> > >
> > > > I am seeing this on every consumer start:
> > > >
> > > > 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1]
> > > > o.a.k.c.c.internals.AbstractCoordinator  : [Consumer
> > > > clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> > > >
> > > > org.apache.kafka.common.errors.MemberIdRequiredException: The group
> > > > member needs to have a valid member id before actually entering a
> > > > consumer group.
> > > >
> > > >
> > > > Due to this change [1].
> > > >
> > > > I understand what a MemberIdRequiredException is, but the previous
> > > > (2.6.0) log (with exception.getMessage()) didn't stand out like the
> > > > new one does because it was all on one line.
> > > >
> > > > Probably not a blocker, but I suspect it will cause some angst when 
> > > > users
> > > > start seeing it since it stands out so much. It will be worse if/when 
> > > > the
> > > > lack of stack trace for ApiExceptions is ever fixed.
> > > >
> > > > I am not sure I understand why it's logged at INFO at all, since it's a
> > > > normal state during initial handshaking.
> > > >
> > > >
> > > >
> > > > [1]:
> > > >
> > > https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> > > >
> > > >
> > > > 
> > > > From: Mickael Maison 
> > > > Sent: Wednesday, November 25, 2020 1:41 PM
> > > > To: dev ; Users ;
> > > > kafka-clients 
> > > > Subject: [VOTE] 2.6.1 RC2
> > > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the third candidate for release of Apache Kafka 2.6.1.
> > > >
> > > > Since RC1, the following JIRAs have been fixed: KAFKA-10758
> > > >
> > > > Release notes for the 2.6.1 release:
> > > >
> > > >
> > > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/RELEASE_NOTES.html
> > > >
> > > > *** Please download, test and vote by Wednesday, December 2, 5PM PT
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > >
> > > >
> > > https://kafka.apache.org/KEYS
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > >
> > > >
> > > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/
> > > >
> > > > * Maven artifacts to be voted upon:
> > > >
> > > >
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > >
> > > > * Javadoc:
> > > >
> > > >
> > > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/javadoc/

Re: [DISCUSS] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2020-12-10 Thread Sophie Blee-Goldman
Hey John,

I think we should avoid logging a warning that implies we've committed
to changing a default unless we've absolutely committed to it, which it
sounds like we have not (fwiw I'm also on the fence, but leaning towards
leaving it automatic -- just think of how many people already forget to
create their source topics before startup and struggle with that). This is
probably part of a larger discussion on whether to default to OOTB-friendly
or production-ready settings, which should probably be considered
holistically rather than on a case-by-case basis.

That said, I'm totally down with logging a warning that data loss is
possible when using automatic initialization, if that's what you meant.

Bruno,

Thanks for the KIP, it looks good in general but I'm wondering if we can
make the InitParameters API a bit more aligned to the config/parameter
classes used throughout Streams (e.g. Materialized).

For example something like

public class Initialized {

    public static Initialized withSetupInternalTopicsIfIncompleteEnabled();
    public static Initialized withSetupInternalTopicsIfIncompleteDisabled();
    // we also don't tend to have getters for these kinds of classes,
    // but maybe we should start :)
}
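
Fleshed out, that Materialized-style pattern compiles to something like the following sketch; the class and method names are Sophie's suggestions, not a finalized API:

```java
// Config-object pattern used across Streams (cf. Materialized): a
// private constructor plus named static factory methods, with no
// public getters. Names follow the suggestion in the thread and are
// not a finalized API.
class Initialized {

    private final boolean setupInternalTopicsIfIncomplete;

    private Initialized(boolean setupInternalTopicsIfIncomplete) {
        this.setupInternalTopicsIfIncomplete = setupInternalTopicsIfIncomplete;
    }

    public static Initialized withSetupInternalTopicsIfIncompleteEnabled() {
        return new Initialized(true);
    }

    public static Initialized withSetupInternalTopicsIfIncompleteDisabled() {
        return new Initialized(false);
    }

    // Package-private accessor for internal use only; such config
    // classes usually expose no public getters.
    boolean setupInternalTopicsIfIncomplete() {
        return setupInternalTopicsIfIncomplete;
    }
}
```

Call sites would then read `Initialized.withSetupInternalTopicsIfIncompleteDisabled()`, mirroring how `Materialized.with(...)` is used today.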


On Thu, Dec 10, 2020 at 9:33 AM John Roesler  wrote:

> Thanks, Bruno,
>
> I think my feelings are the same as yours. It seems like
> either call is just a matter of some forward looking
> statement in the KIP and maybe a warning log if we're
> leaning toward changing the default in the future.
>
> I'm happy with whatever you prefer.
>
> Thanks again,
> -John
>
> On Thu, 2020-12-10 at 16:37 +0100, Bruno Cadonna wrote:
> > Hi John,
> >
> > Thank you for the feedback!
> >
> > I am undecided, because while manual init only makes Kafka Streams safer
> > regarding data loss, it makes first toy apps with Kafka Streams a little
> > bit more complicated. I am a bit more inclined to manual init only,
> > though.
> >
> > Best,
> > Bruno
> >
> >
> > On 10.12.20 15:20, John Roesler wrote:
> > > Hi Bruno,
> > >
> > > Thanks for the KIP!
> > >
> > > This seems like a nice data integrity improvement, and the KIP looks
> good to me.
> > >
> > > I'm wondering if we should plan to transition to manual init only in
> > > the future. I.e. maybe we log a warning, then later on we switch the
> > > default config to manual, and then ultimately drop the config
> > > completely. What do you think?
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Dec 10, 2020, at 04:36, Bruno Cadonna wrote:
> > > > Hi,
> > > >
> > > > I'd like to start the discussion on KIP-698 that proposes an explicit
> > > > user initialization of broker-side state for Kafka Streams instead of
> > > > letting Kafka Streams setting up the broker-side state automatically
> > > > during rebalance. Such an explicit initialization avoids possible
> data
> > > > loss issues due to automatic initialization.
> > > >
> > > > https://cwiki.apache.org/confluence/x/7CnZCQ
> > > >
> > > > Best,
> > > > Bruno
> > > >
>
>
>


Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2020-12-10 Thread Boyang Chen
After some offline discussions, we believe the right direction is a hybrid
approach that includes both a file-watch trigger and interval-based
reloading. The former guarantees a swift change 99% of the time, while the
latter provides a time-based guarantee in the worst case, when the
file-watch does not take effect. The current default reloading interval is
set to 5 minutes. I have updated the KIP and ticket; feel free to check
them out and see if it makes sense.

Best,
Boyang

On Tue, Dec 8, 2020 at 8:58 PM Boyang Chen 
wrote:

> Hey Gwen, thanks for the feedback.
>
> On Sun, Dec 6, 2020 at 10:06 PM Gwen Shapira  wrote:
>
>> Agree with Igor. IIRC, we also encountered cases where the file watch was
>> not triggered as expected. An interval will give us a better worst-case
>> scenario that is easily controlled by the Kafka admin.
>>
>> Are the cases you were referring to happening in the cloud environment?
> Should we investigate instead of simply assuming the standard API won't
> work? I checked around and found a similar complaint here
> .
>
> I partially agree that we want a reliable approach across different
> operating systems in general, but it would be great if we could reach a
> quantitative measure of the file-watch success rate, if possible, before
> making the call. Ultimately, the benefit of the file watch is a more
> prompt reaction time and less configuration for the broker.
>
>> Gwen
>>
>> On Sun, Dec 6, 2020 at 8:17 AM Igor Soarez  wrote:
>> >
>> >
>> > > > The proposed change relies on a file watch, why not also have a
>> > > > polling interval to check the file for changes?
>> > > >
>> > > The periodical check could work; the slight downside is that we need
>> > > additional configuration to schedule the interval. Do you think the
>> > > file-watch approach has any extra overhead compared to the
>> > > interval-based solution?
>> >
>> > I don't think so. The reason I'm asking this is the KIP currently
>> includes:
>> >
>> >   "When the file watch does not work for unknown reason, user could
>> still try to change the store path in an explicit AlterConfig call in the
>> worst case."
>> >
>> > Having the interval in addition to the file watch could result in a
>> > better worst-case scenario.
>> > I understand it would require introducing at least one new
>> > configuration for the interval, so maybe this doesn't have to be solved
>> > in this KIP.
>> >
>> > --
>> > Igor
>> >
>> > On Fri, Dec 4, 2020, at 5:14 PM, Boyang Chen wrote:
>> > > Hey Igor, thanks for the feedback.
>> > >
>> > > On Fri, Dec 4, 2020 at 5:24 AM Igor Soarez  wrote:
>> > >
>> > > > Hi Boyang,
>> > > >
>> > >
>> > >
>> > > > What happens if the file is changed into an invalid store? Does the
>> > > > previous store stay in use?
>> > > >
>> > > If the reload fails, the previous store should stay effective. I will
>> > > state that in the KIP.
>> > >
>> > >
>> > > > Thanks,
>> > > >
>> > > > --
>> > > > Igor
>> > > >
>> > > > On Fri, Dec 4, 2020, at 1:28 AM, Boyang Chen wrote:
>> > > > > Hey there,
>> > > > >
>> > > > > I would like to start the discussion thread for KIP-687:
>> > > > >
>> > > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-687%3A+Automatic+Reloading+of+Security+Store
>> > > > >
>> > > > > This KIP is trying to deprecate the AlterConfigs API support of
>> updating
>> > > > > the security store by reloading path in-place, and replace with a
>> > > > > file-watch mechanism inside the broker. Let me know what you
>> think.
>> > > > >
>> > > > > Best,
>> > > > > Boyang
>> > > > >
>> > > >
>> > >
>>
>>
>>
>> --
>> Gwen Shapira
>> Engineering Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter | blog
>>
>
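
The hybrid scheme described at the top of this message can be sketched with JDK primitives. This is a minimal, self-contained sketch: the class and method names are illustrative, the real broker-side implementation differs, and the file-watch half (a java.nio.file.WatchService on the store's parent directory) is noted in a comment rather than shown, to keep the example deterministic:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hybrid store reloading: a file watch would give a prompt reload on
// most changes, while a periodic modification-time check provides the
// worst-case time bound. Names are illustrative, not Kafka's code.
class SecurityStoreReloader implements AutoCloseable {

    private final Path storePath;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger reloads = new AtomicInteger();
    private volatile FileTime lastSeen;

    SecurityStoreReloader(Path storePath, Duration interval) throws IOException {
        this.storePath = storePath;
        this.lastSeen = Files.getLastModifiedTime(storePath);
        // Fallback: interval-based check (the KIP's default is 5 minutes).
        scheduler.scheduleAtFixedRate(this::checkForChange,
                interval.toMillis(), interval.toMillis(), TimeUnit.MILLISECONDS);
        // Primary path (elided): register storePath's parent with a
        // java.nio.file.WatchService and call checkForChange() on each
        // ENTRY_MODIFY event, for a swift reaction in the common case.
    }

    // Reload only when the modification time has actually moved.
    void checkForChange() {
        try {
            FileTime current = Files.getLastModifiedTime(storePath);
            if (!current.equals(lastSeen)) {
                lastSeen = current;
                reload();
            }
        } catch (IOException e) {
            // On failure, keep the previously loaded store effective.
        }
    }

    private void reload() {
        // A real implementation re-reads the key/trust store here and
        // keeps the old store if the new file is invalid.
        reloads.incrementAndGet();
    }

    int reloadCount() { return reloads.get(); }

    @Override
    public void close() { scheduler.shutdownNow(); }
}
```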


Request to be added as a contributor in Kafka JIRA board

2020-12-10 Thread Lev Zemlyanov
Hi,

Please add me as a contributor in Kafka JIRA board and Confluence Wiki. My
JIRA Id is levzemlyanov. My Confluence ID is lev.

Thanks, Lev

-
Lev Zemlyanov
SWE @ Connect Team
Confluent
Mountain View, CA
-


Re: Request to be added as a contributor in Kafka JIRA board

2020-12-10 Thread Konstantine Karantasis
Added to both.

Thanks for joining Lev!

-Konstantine

On Thu, Dec 10, 2020 at 11:57 AM Lev Zemlyanov  wrote:

> Hi,
>
> Please add me as a contributor in Kafka JIRA board and Confluence Wiki. My
> JIRA Id is levzemlyanov. My Confluence ID is lev.
>
> Thanks, Lev
>
> -
> Lev Zemlyanov
> SWE @ Connect Team
> Confluent
> Mountain View, CA
> -
>


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #59

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[Mickael Maison] MINOR: Do not print log4j for memberId required (#9667)


--
[...truncated 3.17 MB...]
org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upg

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #305

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10677; Complete fetches in purgatory immediately after resigning 
(#9639)


--
[...truncated 3.49 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1653cdfe,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@dafeef8, 
timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@dafeef8, 
timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@221e84e, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@221e84e, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c3feed8, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c3feed8, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@40bfe9c, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@40bfe9c, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@76a118cb,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@76a118cb,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7bcffdc9,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7bcffdc9,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@8f58546, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@8f58546, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@53d175da, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@53d175da, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d48582b, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d48582b, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThro

[GitHub] [kafka-site] bbejeck merged pull request #313: MINOR: remove quickstart-*.html

2020-12-10 Thread GitBox


bbejeck merged pull request #313:
URL: https://github.com/apache/kafka-site/pull/313


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] bbejeck commented on pull request #313: MINOR: remove quickstart-*.html

2020-12-10 Thread GitBox


bbejeck commented on pull request #313:
URL: https://github.com/apache/kafka-site/pull/313#issuecomment-742814988


   \cc @mimaison 







Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #280

2020-12-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10834) Remove redundant type casts in Connect

2020-12-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-10834:
--

 Summary: Remove redundant type casts in Connect
 Key: KAFKA-10834
 URL: https://issues.apache.org/jira/browse/KAFKA-10834
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Konstantine Karantasis
Assignee: Lev Zemlyanov


Some type casts in the code base are not required any more and can be removed. 
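As an illustration of the kind of cleanup meant here (a generic sketch, not code from Connect), a cast becomes redundant once the declared generic type already carries the information:

```java
import java.util.ArrayList;
import java.util.List;

public class RedundantCastExample {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("worker-1");

        // Before: a cast left over from when the variable had a raw type.
        String first = (String) names.get(0);

        // After: the generic return type is already String, so no cast is needed.
        String firstClean = names.get(0);

        System.out.println(first.equals(firstClean)); // prints "true"
    }
}
```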



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10835) Replace Runnable and Callable overrides with lambdas in Connect

2020-12-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-10835:
--

 Summary: Replace Runnable and Callable overrides with lambdas in 
Connect
 Key: KAFKA-10835
 URL: https://issues.apache.org/jira/browse/KAFKA-10835
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Konstantine Karantasis
Assignee: Lev Zemlyanov


We've been using Java 8 for some time now. Replacing the overrides 
from the pre-Java 8 era will simplify some parts of the code and reduce 
verbosity. 
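As a sketch of the kind of change described (illustrative, not actual Connect code), an anonymous Runnable or Callable override collapses into a lambda:

```java
import java.util.concurrent.Callable;

public class LambdaCleanupExample {
    public static void main(String[] args) throws Exception {
        // Before: pre-Java 8 anonymous class override.
        Runnable before = new Runnable() {
            @Override
            public void run() {
                System.out.println("task ran");
            }
        };

        // After: the same behavior expressed as lambdas.
        Runnable after = () -> System.out.println("task ran");
        Callable<Integer> answer = () -> 42;

        before.run();
        after.run();
        System.out.println(answer.call()); // prints "42"
    }
}
```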





[jira] [Created] (KAFKA-10837) Fix javadoc issues and warnings in Connect

2020-12-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-10837:
--

 Summary: Fix javadoc issues and warnings in Connect
 Key: KAFKA-10837
 URL: https://issues.apache.org/jira/browse/KAFKA-10837
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Konstantine Karantasis
Assignee: Daniel Osvath


A few old and new issues remain in the javadoc declarations in Connect.





[jira] [Created] (KAFKA-10836) Use type inference and cleanup generic type declarations in Connect

2020-12-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-10836:
--

 Summary: Use type inference and cleanup generic type declarations 
in Connect
 Key: KAFKA-10836
 URL: https://issues.apache.org/jira/browse/KAFKA-10836
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Konstantine Karantasis
Assignee: Daniel Osvath


Even though the diamond operator has been available since Java 7, there are 
still a few places in Connect where we list types in declarations of generic 
collections. It'd be nice to clean those up. 

Optionally, we should fix places where raw types are used instead of the 
equivalent generic types. 
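For example (illustrative only, not taken from Connect), the diamond operator removes the repeated type arguments:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondExample {
    public static void main(String[] args) {
        // Before: type arguments repeated on the right-hand side.
        Map<String, List<Integer>> verbose = new HashMap<String, List<Integer>>();

        // After: the compiler infers the type arguments from the declaration.
        Map<String, List<Integer>> concise = new HashMap<>();

        verbose.put("offsets", new ArrayList<Integer>());
        concise.put("offsets", new ArrayList<>());
        System.out.println(verbose.equals(concise)); // prints "true"
    }
}
```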





[jira] [Created] (KAFKA-10838) Make member fields final if applicable in Connect

2020-12-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-10838:
--

 Summary: Make member fields final if applicable in Connect
 Key: KAFKA-10838
 URL: https://issues.apache.org/jira/browse/KAFKA-10838
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Konstantine Karantasis
Assignee: Daniel Osvath


A reasonable optimization with low risk and some added safety. It seems it 
could potentially be applied in several classes in Connect. 
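A minimal sketch of the pattern (hypothetical class, not from Connect): a field assigned only in the constructor can be declared final, which documents the intent and lets the compiler reject accidental reassignment:

```java
public class FinalFieldExample {
    // Declared final: assigned exactly once, in the constructor.
    private final String connectorName;

    FinalFieldExample(String connectorName) {
        this.connectorName = connectorName;
    }

    String connectorName() {
        return connectorName;
    }

    public static void main(String[] args) {
        FinalFieldExample example = new FinalFieldExample("file-sink");
        // example.connectorName = "other"; // would now be a compile-time error
        System.out.println(example.connectorName()); // prints "file-sink"
    }
}
```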





[jira] [Created] (KAFKA-10839) Improve consumer group coordinator unavailable message

2020-12-10 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-10839:


 Summary: Improve consumer group coordinator unavailable message
 Key: KAFKA-10839
 URL: https://issues.apache.org/jira/browse/KAFKA-10839
 Project: Kafka
  Issue Type: Improvement
Reporter: Lucas Bradstreet


When a consumer encounters an issue that triggers marking a coordinator as 
unknown, the error message it prints does not give context about the error that 
triggered it.
{noformat}
log.info("Group coordinator {} is unavailable or invalid, will attempt 
rediscovery", this.coordinator);{noformat}
These may be triggered by response errors or the coordinator becoming 
disconnected. We should improve this error message to make the cause clear.
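One possible shape for the improvement (hypothetical wording and helper, not the actual patch) is to thread the triggering cause into the message:

```java
public class CoordinatorLogMessageExample {
    // Hypothetical helper that formats the improved log line with the cause included.
    static String coordinatorUnavailableMessage(String coordinator, String cause) {
        return String.format(
                "Group coordinator %s is unavailable or invalid due to cause: %s. "
                        + "Rediscovery will be attempted.",
                coordinator, cause);
    }

    public static void main(String[] args) {
        System.out.println(coordinatorUnavailableMessage(
                "broker-1:9092 (id: 2147483646 rack: null)",
                "error response NOT_COORDINATOR"));
    }
}
```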





Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #76

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] throw corresponding invalid producer epoch (#9700) (#9723)


--
[...truncated 6.87 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = f

[VOTE] 2.7.0 RC5

2020-12-10 Thread Bill Bejeck
Hello Kafka users, developers and client-developers,

This is the sixth candidate for release of Apache Kafka 2.7.0.

* Configurable TCP connection timeout and improve the initial metadata fetch
* Enforce broker-wide and per-listener connection creation rate (KIP-612,
part 1)
* Throttle Create Topic, Create Partition and Delete Topic Operations
* Add TRACE-level end-to-end latency metrics to Streams
* Add Broker-side SCRAM Config API
* Support PEM format for SSL certificates and private key
* Add RocksDB Memory Consumption to RocksDB Metrics
* Add Sliding-Window support for Aggregations

This release also includes a few other features, 53 improvements, and 84
bug fixes.

Release notes for the 2.7.0 release:
https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/RELEASE_NOTES.html

*** Please download, test and vote by Friday, December 18, 12 PM ET ***

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/javadoc/

* Tag to be voted upon (off 2.7 branch) is the 2.7.0 tag:
https://github.com/apache/kafka/releases/tag/2.7.0-rc5

* Documentation:
https://kafka.apache.org/27/documentation.html

* Protocol:
https://kafka.apache.org/27/protocol.html

* Successful Jenkins builds for the 2.7 branch:
Unit/integration tests: Link to follow

Thanks,
Bill


Request to be added as a contributor in Kafka JIRA board

2020-12-10 Thread Daniel Osvath
Hi,

Please add me as a contributor to the Kafka JIRA board and Confluence
Wiki. My JIRA ID is dosvath. My Confluence ID is dosvath.

Thanks,

Daniel

-
Daniel Osvath
SWE @ Connect Team
Confluent
Mountain View, CA
-


[jira] [Created] (KAFKA-10840) Need way to catch auth issues in poll method

2020-12-10 Thread Devin G. Bost (Jira)
Devin G. Bost created KAFKA-10840:
-

 Summary: Need way to catch auth issues in poll method
 Key: KAFKA-10840
 URL: https://issues.apache.org/jira/browse/KAFKA-10840
 Project: Kafka
  Issue Type: Improvement
Reporter: Devin G. Bost


We recently implemented SSL authentication at our company, and when certs 
expire, the Kafka client poll method silently fails without throwing any kind 
of exception. This is a problem because the data flow could stop at any time 
(due to certificate expiration) without us being able to handle it. The auth 
issue shows up in Kafka broker logs, but we don't see any indication on the 
client-side that there was an auth issue. As a consequence, the auth failure 
happens 10 times a second forever. 

We need a way to know on the client-side if an auth issue is blocking the 
connection to Kafka so we can handle the exception and refresh the certs 
(keystore/truststore) when the certs expire. 





Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #326

2020-12-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #60

2020-12-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.17 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache

Re: Request to be added as a contributor in Kafka JIRA board

2020-12-10 Thread Konstantine Karantasis
Thanks for your interest Daniel.

Added.

-Konstantine

On Thu, Dec 10, 2020 at 2:34 PM Daniel Osvath  wrote:

> Hi,
>
> Please add me as a contributor to the Kafka JIRA board and Confluence
> Wiki. My JIRA ID is dosvath. My Confluence ID is dosvath.
>
> Thanks,
>
> Daniel
>
> -
> Daniel Osvath
> SWE @ Connect Team
> Confluent
> Mountain View, CA
> -
>


[jira] [Resolved] (KAFKA-7036) Complete the docs of KafkaConsumer#poll

2020-12-10 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-7036.
---
Resolution: Won't Fix

> Complete the docs of KafkaConsumer#poll
> ---
>
> Key: KAFKA-7036
> URL: https://issues.apache.org/jira/browse/KAFKA-7036
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> KafkaConsumer#poll has nice docs about the expected exceptions. However, it 
> lacks a description of SerializationException. Another minor issue is that 
> KafkaConsumer doesn't catch all types of exceptions which may be thrown by 
> the deserializer (see below). We should use Throwable instead of 
> RuntimeException so as to catch all exceptions and then wrap them in 
> SerializationException.
> {code:java}
> private ConsumerRecord parseRecord(TopicPartition partition,
>  RecordBatch batch,
>  Record record) {
> try {
> long offset = record.offset();
> long timestamp = record.timestamp();
> TimestampType timestampType = batch.timestampType();
> Headers headers = new RecordHeaders(record.headers());
> ByteBuffer keyBytes = record.key();
> byte[] keyByteArray = keyBytes == null ? null : 
> Utils.toArray(keyBytes);
> K key = keyBytes == null ? null : 
> this.keyDeserializer.deserialize(partition.topic(), headers, keyByteArray);
> ByteBuffer valueBytes = record.value();
> byte[] valueByteArray = valueBytes == null ? null : 
> Utils.toArray(valueBytes);
> V value = valueBytes == null ? null : 
> this.valueDeserializer.deserialize(partition.topic(), headers, 
> valueByteArray);
> return new ConsumerRecord<>(partition.topic(), partition.partition(), 
> offset,
> timestamp, timestampType, 
> record.checksumOrNull(),
> keyByteArray == null ? 
> ConsumerRecord.NULL_SIZE : keyByteArray.length,
> valueByteArray == null ? 
> ConsumerRecord.NULL_SIZE : valueByteArray.length,
> key, value, headers);
> } catch (RuntimeException e) {
> throw new SerializationException("Error deserializing key/value for 
> partition " + partition +
> " at offset " + record.offset() + ". If needed, please seek 
> past the record to continue consumption.", e);
> }
> }{code}





[jira] [Resolved] (KAFKA-10463) the necessary utilities in Dockerfile should include git

2020-12-10 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10463.

Fix Version/s: 2.7.0
   Resolution: Fixed

> the necessary utilities in Dockerfile should include git
> 
>
> Key: KAFKA-10463
> URL: https://issues.apache.org/jira/browse/KAFKA-10463
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.7.0
>
>
> The default image in the Dockerfile is openjdk:8, which has git pre-installed, 
> so it is fine that the necessary utilities do not include git. However, later 
> versions of the openjdk image do not include git by default, and the error 
> message "git: command not found" ensues.





[jira] [Resolved] (KAFKA-9786) fix flaky MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered

2020-12-10 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-9786.
---
Resolution: Won't Fix

> fix flaky MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered
> 
>
> Key: KAFKA-9786
> URL: https://issues.apache.org/jira/browse/KAFKA-9786
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>  Labels: flaky-test
>
> {code:java}
> java.lang.AssertionError: expected:<18> but was:<23>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:633)
>   at 
> kafka.metrics.MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered(MetricsTest.scala:108)
> {code}
> As Gradle may use the same JVM to run multiple tests (see 
> https://docs.gradle.org/current/dsl/org.gradle.api.tasks.testing.Test.html#org.gradle.api.tasks.testing.Test:forkEvery),
> the metrics from other tests can break 
> MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered. 
> {code:scala}
>   private def topicMetrics(topic: Option[String]): Set[String] = {
> val metricNames = 
> KafkaYammerMetrics.defaultRegistry.allMetrics().keySet.asScala.map(_.getMBeanName)
> filterByTopicMetricRegex(metricNames, topic)
>   }
> {code}
> MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered is not 
> captured by QA since the test which leaves orphan metrics in the JVM is 
> ReplicaManagerTest, and it belongs to integrationTest. By contrast, 
> MetricsTest is part of unitTest. Hence, they are NOT executed by the same JVM 
> (since they are NOT in the same task). 
> MetricsTest.testGeneralBrokerTopicMetricsAreGreedilyRegistered fails 
> frequently on my Jenkins because my Jenkins verifies Kafka by running 
> "./gradlew clean core:test".





Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #281

2020-12-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10017: fix flaky EOS-beta upgrade test (#9688)

[github] MINOR: Update jmh to 1.27 for async profiler support (#9129)


--
[...truncated 3.46 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = fal

[jira] [Created] (KAFKA-10841) LogReadResult should be able to converted to FetchPartitionData

2020-12-10 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10841:
--

 Summary: LogReadResult should be able to converted to 
FetchPartitionData 
 Key: KAFKA-10841
 URL: https://issues.apache.org/jira/browse/KAFKA-10841
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


There is duplicate code that tries to convert LogReadResult to 
FetchPartitionData. It seems to me the duplication can be eliminated by 
moving the conversion to LogReadResult.

occurrence 1: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L1076

occurrence 2:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedFetch.scala#L189
  





[jira] [Created] (KAFKA-10842) Refactor raft outbound request channel

2020-12-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10842:
---

 Summary: Refactor raft outbound request channel
 Key: KAFKA-10842
 URL: https://issues.apache.org/jira/browse/KAFKA-10842
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


There are a few bugs in the current `KafkaNetworkChannel` implementation. Most 
of the problems are just features which haven't been implemented yet, such as 
timing out requests that cannot be sent. Most of these issues are 
already addressed by `InterBrokerSendThread`. Since this is the class we are 
standardizing on, we should change the implementation here.

We also want to address some shortcomings with the current network api used by 
the raft layer. Specifically we want to allow outbound IO to be done on a 
separate thread so that it does not block request handling. Additionally, we 
would like to leave the door more open for concurrent handling of inbound 
requests, which is not currently possible because all requests (inbound and 
outbound) are piped through the same `receive()` API.





Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #306

2020-12-10 Thread Apache Jenkins Server
See