[Discuss] KIP-581: Value of optional null field which has default value

2020-03-18 Thread Cheng Pan
Hi all,

I'd like to use this thread to discuss KIP-581: Value of optional null field 
which has default value. Please see the details at: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-581%3A+Value+of+optional+null+field+which+has+default+value


There is some previous discussion at: https://github.com/apache/kafka/pull/7112


I'm a beginner in the Apache project; please let me know if I did anything wrong.


Best regards,
Cheng Pan

Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-18 Thread David Jacot
+1 (non-binding), thanks for the KIP!

David

On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth  wrote:

> Hello all,
>
> Thanks to the folks who have given feedback. I've incorporated the
> suggestions, and think that this is now ready for a vote:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
>
> Thank you,
> Aneel
>


Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients

2020-03-18 Thread Sanjana Kaundinya
Thanks for the feedback Boyang.

If there’s anyone else who has feedback regarding this KIP, I would really
appreciate hearing it!

Thanks,
Sanjana

On Tue, Mar 17, 2020 at 11:38 PM Boyang Chen  wrote:

> Sounds great!
>
> Get Outlook for iOS
> 
> From: Sanjana Kaundinya 
> Sent: Tuesday, March 17, 2020 5:54:35 PM
> To: dev@kafka.apache.org 
> Subject: Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients
>
> Thanks for the explanation Boyang. One of the most common problems that we
> have in Kafka is with respect to metadata fetches. For example, if there is
> a broker failure, all clients start to fetch metadata at the same time and
> it often takes a while for the metadata to converge. In a high load
> cluster, there are also issues where the volume of metadata has made
> convergence of metadata slower.
>
> For this case, exponential backoff helps as it reduces the retry rate and
> spaces out how often clients will retry, thereby bringing down the time for
> convergence. Something that Jason mentioned that would be a great addition
> here would be if the backoff should be “jittered” as it was in KIP-144 with
> respect to exponential reconnect backoff. This would help prevent the
> clients from being synchronized on when they retry, thereby spacing out the
> number of requests being sent to the broker at the same time.
>
> I’ll add this example to the KIP and flesh out more of the details so
> it’s clearer.
>
> On Mar 17, 2020, 1:24 PM -0700, Boyang Chen ,
> wrote:
> > Thanks for the reply Sanjana. I guess I would like to rephrase my
> > questions 2 and 3, as my previous response was a little bit unactionable.
> >
> > My specific point is that exponential backoff is not a silver bullet and we
> > should consider using it to solve known problems, instead of making
> > holistic changes to all clients in the Kafka ecosystem. I do like the
> > exponential backoff idea and believe this would be of great value, but
> > maybe we should focus on the existing modules that are suffering
> > from static retry, and only change those in this first KIP. If in the
> > future, users of other components believe they are also suffering, we
> > could file more minor KIPs to change the behavior as well.
> >
> > Boyang
> >
> > On Sun, Mar 15, 2020 at 12:07 AM Sanjana Kaundinya  >
> > wrote:
> >
> > > Thanks for the feedback Boyang, I will revise the KIP with the
> > > mathematical relations as per your suggestion. To address your feedback:
> > >
> > > 1. Currently, with the default of 100 ms per retry backoff, in 1 second
> > > we would have 10 retries. In the case of using an exponential backoff, we
> > > would have a total of 4 retries in 1 second. Thus we have less than half
> > > of the number of retries in the same timeframe and can lessen broker pressure.
> > > This calculation is done as follows (using the formula laid out in the KIP):
> > >
> > > Try 1 at time 0 ms, failures = 0, next retry in 100 ms (default retry ms
> > > is initially 100 ms)
> > > Try 2 at time 100 ms, failures = 1, next retry in 200 ms
> > > Try 3 at time 300 ms, failures = 2, next retry in 400 ms
> > > Try 4 at time 700 ms, failures = 3, next retry in 800 ms
> > > Try 5 at time 1500 ms, failures = 4, next retry in 1000 ms (default max
> > > retry ms is 1000 ms)
> > >
> > > For 2 and 3, could you elaborate more about what you mean with respect to
> > > client timeouts? I’m not very familiar with the Streams framework, so I
> > > would love to get more insight into how that currently works with respect
> > > to producer transactions, so I can appropriately update the KIP to address
> > > these scenarios.
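
For illustration, here is a minimal sketch of the bounded exponential backoff schedule 
described above, with the jitter Jason suggested. The class, config names, and jitter 
factor are illustrative assumptions, not the final contract of KIP-580; only the 
starting value (today's 100 ms retry backoff default) and the 1000 ms cap come from 
the calculation quoted above.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

// Sketch only: doubles the backoff per failed attempt, caps it, and jitters it so
// that clients which fail at the same time do not retry in lockstep.
final class ExponentialBackoffSketch {
    private final long initialMs;  // e.g. 100 ms, today's retry.backoff.ms default
    private final long maxMs;      // e.g. 1000 ms cap from the example above
    private final double jitter;   // e.g. 0.2 => +/- 20% randomization (assumed value)

    ExponentialBackoffSketch(long initialMs, long maxMs, double jitter) {
        this.initialMs = initialMs;
        this.maxMs = maxMs;
        this.jitter = jitter;
    }

    long backoffMs(int failures) {
        // failures = 0, 1, 2, 3, 4 -> 100, 200, 400, 800, 1000 ms (before jitter),
        // matching the Try 1..5 schedule above.
        double exp = Math.min(maxMs, initialMs * Math.pow(2, failures));
        double rand = 1 + jitter * (2 * ThreadLocalRandom.current().nextDouble() - 1);
        return (long) (exp * rand);
    }
}
{code}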
> > > On Mar 13, 2020, 7:15 PM -0700, Boyang Chen <
> reluctanthero...@gmail.com>,
> > > wrote:
> > > > Thanks for the KIP Sanjana. I think the motivation is good, but it lacks
> > > > quantitative analysis. For instance:
> > > >
> > > > 1. How many retries are we saving by applying the exponential retry vs
> > > > static retry? There should be some mathematical relation between the
> > > > static retry ms, the initial exponential retry ms, and the max exponential
> > > > retry ms in a given time interval.
> > > > 2. How does this affect the client timeout? With exponential retry, the
> > > > client could time out more easily at a parent-level caller; for
> > > > instance, Streams attempts to retry initializing producer transactions
> > > > within a given 5-minute interval. With exponential retry this mechanism
> > > > could experience more frequent timeouts, which we should be careful about.
> > > > 3. With regard to #2, we should have a more detailed checklist of all the
> > > > existing static retry scenarios, and adjust the initial exponential retry
> > > > ms to make sure we won't easily time out at a higher level due to too few
> > > > attempts.
> > > >
> > > > Boyang
> > > >
> > > > On Fri, Mar 13, 2020 at 4:38 PM Sanjana Kau

Build failed in Jenkins: kafka-trunk-jdk11 #1253

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: fix flaky

[github] MINOR: clean up required setup for StreamsPartitionAssignorTest (#8306)


--
[...truncated 2.92 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTes

Build failed in Jenkins: kafka-trunk-jdk8 #4333

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: fix flaky

[github] MINOR: clean up required setup for StreamsPartitionAssignorTest (#8306)

[github] KAFKA-9625: Fix altering and describing dynamic broker configurations


--
[...truncated 2.91 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:

Regarding segment size config

2020-03-18 Thread 张祥
Hi community,

I understand that there are two configs regarding segment file size,
log.segment.bytes for the broker and segment.bytes for a topic. The default
values are both 1G, and they are required to be an integer so they cannot
be larger than 2G. My question is, assuming I am not making any mistakes,
what is the reason that the log segment size is limited to below 2G? Thanks.
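
For context, the arithmetic behind the 2G ceiling is simply the range of a signed 
32-bit integer: as noted above the configs are integer-typed, and a Java int cannot 
represent more than 2,147,483,647 bytes (just under 2 GiB). The usual explanation for 
why the type is an int in the first place (segment-relative byte positions being 
stored as 4-byte values, e.g. in the offset index and memory-mapped index files) is 
an assumption here, not something confirmed in this thread.

{code:java}
// The ~2G limit expressed as plain arithmetic on a signed 32-bit int.
public class SegmentSizeLimit {
    public static void main(String[] args) {
        int maxSegmentBytes = Integer.MAX_VALUE;                       // 2147483647 bytes
        System.out.println(maxSegmentBytes);                           // 2147483647
        System.out.println(maxSegmentBytes / (1024.0 * 1024 * 1024));  // ~2.0 GiB
    }
}
{code}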


Build failed in Jenkins: kafka-trunk-jdk11 #1254

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9625: Fix altering and describing dynamic broker configurations


--
[...truncated 2.93 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTe

Need for histogram type for Kafka Connect JMX Metrics

2020-03-18 Thread Kanupriya Batra
Hi,



We are using the Kafka Connect framework in our use case, and would also like to 
track metrics for connectors/tasks. However, the Kafka Connect JMX metrics do not 
currently support histogram types, so we are not able to plot percentiles for a 
lot of important metrics such as 
kafka_connect_sink_task_metrics_put_batch_avg_time_ms or 
kafka_connect_sink_task_metrics_sink_record_read_rate.
I see there is a KIP 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework)
tracking the same, but it’s an old one from 2017 and there hasn’t been any 
progress since then.
Could you let me know if there are plans to support percentile metrics soon? If 
not, how can I achieve this from my end?

Thanks,
Kanupriya
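
For reference, Kafka's common metrics library (org.apache.kafka.common.metrics) already 
ships a Percentiles compound stat, so percentile values can be computed client-side; what 
the framework does not do today is register such stats on the Connect task sensors, which 
is what the question above is really about. A rough sketch, with made-up sensor and metric 
names, of how a connector or plugin could record its own percentile metrics:

{code:java}
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Percentile;
import org.apache.kafka.common.metrics.stats.Percentiles;
import org.apache.kafka.common.metrics.stats.Percentiles.BucketSizing;

// Sketch only: names and bucket parameters are illustrative, and wiring the Metrics
// instance into a JMX reporter is omitted.
public class PutBatchPercentilesSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor sensor = metrics.sensor("put-batch-time");
        sensor.add(new Percentiles(
                4096,                 // bytes of histogram state
                10_000.0,             // max expected value in ms
                BucketSizing.LINEAR,
                new Percentile(metrics.metricName("put-batch-time-ms-p50", "my-connector-metrics"), 50),
                new Percentile(metrics.metricName("put-batch-time-ms-p99", "my-connector-metrics"), 99)));
        sensor.record(42.0);          // one observed put() batch duration in ms
    }
}
{code}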


Jenkins build is back to normal : kafka-2.5-jdk8 #66

2020-03-18 Thread Apache Jenkins Server
See 




Conditionally applying SMTs

2020-03-18 Thread Tom Bentley
Hi,

I opened https://issues.apache.org/jira/browse/KAFKA-9673 a week or so ago
sketching an idea for conditionally applying SMTs, which would improve the
experience for users of connectors, such as Debezium, that send records to
multiple different topics where SMTs need to be applied to some topics but
not others. For example, in Debezium's case there's the desire to
differentiate between schema change events and normal data modification
events. I'm interested in any feedback people may have on this idea.

Many thanks,

Tom
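
As a concrete illustration of the idea (and emphatically not the design proposed in 
KAFKA-9673), one way to apply an SMT conditionally today is to wrap a delegate 
transformation and only invoke it for topics matching a pattern. The class name, the 
injected delegate, and the omitted config surface are all assumptions for the sketch:

{code:java}
import java.util.Map;
import java.util.regex.Pattern;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

// Sketch only: delegates to the wrapped SMT when the record topic matches the regex,
// and passes all other records through untouched.
public class TopicGuardedTransformation<R extends ConnectRecord<R>> implements Transformation<R> {

    private final Pattern topicPattern;
    private final Transformation<R> delegate;

    // In a real SMT these would come from configure(Map); injected here for brevity.
    public TopicGuardedTransformation(String topicRegex, Transformation<R> delegate) {
        this.topicPattern = Pattern.compile(topicRegex);
        this.delegate = delegate;
    }

    @Override
    public R apply(R record) {
        if (record.topic() != null && topicPattern.matcher(record.topic()).matches()) {
            return delegate.apply(record);
        }
        return record;
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // config surface intentionally omitted in this sketch
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // No-op here; see the constructor note above.
    }

    @Override
    public void close() {
        delegate.close();
    }
}
{code}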


[jira] [Resolved] (KAFKA-9404) Use ArrayList instead of LinkedList in Sensor Class

2020-03-18 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-9404.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Use ArrayList instead of LinkedList in Sensor Class
> ---
>
> Key: KAFKA-9404
> URL: https://issues.apache.org/jira/browse/KAFKA-9404
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Mollitor
>Priority: Trivial
> Fix For: 2.6.0
>
>
> Since this collection is going to be bulk-built once and not modified, it is 
> better to use an {{ArrayList}}: its memory requirements are smaller and it is 
> much faster here... just instantiate the internal array once and fill it... 
> as opposed to a {{LinkedList}}, which must build a node for each item 
> inserted. {{ArrayList}} is also faster for iteration.
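
A tiny sketch of the pattern being described (not the actual Sensor code): when a list is 
built once and then only iterated, an ArrayList avoids the per-element node objects of a 
LinkedList and iterates over a contiguous backing array.

{code:java}
import java.util.ArrayList;
import java.util.List;

class StatsHolderSketch {
    // before: private final List<Runnable> stats = new java.util.LinkedList<>();
    private final List<Runnable> stats = new ArrayList<>(); // after: bulk-built, then iterated

    void add(Runnable stat) {
        stats.add(stat);
    }

    void recordAll() {
        for (Runnable s : stats) {
            s.run(); // stand-in for stat.record(...)
        }
    }
}
{code}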



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9732) Kafka Foreign-Key Joiner has unexpected default value used when a table is created via a stream+groupByKey+reduce

2020-03-18 Thread Adam Bellemare (Jira)
Adam Bellemare created KAFKA-9732:
-

 Summary: Kafka Foreign-Key Joiner has unexpected default value 
used when a table is created via a stream+groupByKey+reduce
 Key: KAFKA-9732
 URL: https://issues.apache.org/jira/browse/KAFKA-9732
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.1, 2.4.0
Reporter: Adam Bellemare


I'm upgrading some internal business code that used to use a prototype version 
of the FKJoiner, migrating to the 2.4.1 Kafka release. I am running into an 
issue where the joiner is using the default Serde, despite me clearly 
specifying NOT to use the default serde (unless I am missing something!). 
Currently, this is how I generate the left KTable, used in the 
_*leftTable.join(rightTable, ...)*_ FKJoin.

Let's call this process 1:
{code:scala}
val externalMyKeySerde = ... //Confluent Kafka S.R. serde.
val externalMyValueSerde = ... //Confluent Kafka S.R. value serde
val myConsumer = Consumed.`with`(externalMyKeySerde, externalMyValueSerde)

//For wrapping nulls in mapValues below
case class OptionalDeletable[T](elem: Option[T])

//Internal Serdes that do NOT use the SR
//Same serde logic as externalMyKeySerde, but doesn't register schemas to schema registry.
val internalMyKeySerde = ... 
//Same serde logic as externalMyValueSerde, but doesn't register schemas to schema registry.
val internalOptionalDeletableMyValueSerde: Serde[OptionalDeletable[MyValue]] = 
... 

val myLeftTable: KTable[MyKey, MyValue] =
  streamBuilder.stream[MyKey, MyValue]("inputTopic")(myConsumer)
.mapValues(
  v => {
//We need the nulls to propagate deletes.
//Wrap this in a simple case-class because we can't groupByKey+reduce null values as they otherwise get filtered out.
OptionalDeletable(Some(v))
  }
)
.groupByKey(Grouped.`with`(internalMyKeySerde, 
internalOptionalDeletableMyValueSerde))
.reduce((_,x) => x)(
Materialized.as("myLeftTable")(internalMyKeySerde, 
internalOptionalDeletableMyValueSerde))
.mapValues(v => v.elem.get) //Unwrap the element
{code}

Next, we create the right table and specify the FKjoining logic
{code:scala}
//This is created in an identical way to Process 1... I won't show it here for brevity.
val rightTable: KTable[RightTableKey, RightTableValue] = 
streamBuilder.table(...)

//Not showing previous definitions because I don't think they're relevant to this issue...
val itemMaterialized =
Materialized.as[MyKey, JoinedOutput, KeyValueStore[Bytes, 
Array[Byte]]]("materializedOutputTable")(
  internalMyKeySerde, internalJoinedOutputSerde)

val joinedTable = myLeftTable.join[JoinedOutput, RightTableKey, 
RightTableValue](
  rightTable, foreignKeyExtractor, joinerFunction, materializedOutputTable)

//Force evaluation to output some data
joinedTable.toStream.to("outputStream")
{code}

When I execute this with leftTable generated via process 1, I end up somehow 
losing the leftTable serde along the way and end up falling back onto the 
default serde. This results in a runtime exception as follows:
{code:java}

Caused by: java.lang.ClassCastException: com.bellemare.sample.MyValue cannot be 
cast to [B
at 
org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:19)
at 
org.apache.kafka.streams.kstream.internals.foreignkeyjoin.ForeignJoinSubscriptionSendProcessorSupplier$UnbindChangeProcessor.process(ForeignJoinSubscriptionSendProcessorSupplier.java:94)
at 
org.apache.kafka.streams.kstream.internals.foreignkeyjoin.ForeignJoinSubscriptionSendProcessorSupplier$UnbindChangeProcessor.process(ForeignJoinSubscriptionSendProcessorSupplier.java:71)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:118)
... 30 more
{code}
Now, if I change process 1 to the following:

Process 2:
{code:scala}
val externalMyKeySerde = ... //Confluent Kafka S.R. serde.
val externalMyValueSerde = ... //Confluent Kafka S.R. value serde
val myConsumer = Consumed.`with`(externalMyKeySerde, externalMyValueSerde)

val myLeftTable: KTable[MyKey, MyValue] =
  streamBuilder.table[MyKey, MyValue]("inputTopic")(myConsumer)
//The downside of this approach is that we end up registering a bunch of internal topics to the schema registry (S.R.), significantly increasing the clutter in our lookup UI.
{code}
Everything works as expected, and the expected `_*externalMyValueSerde*_` is 
used to serialize the events (though I don't want this, as it registers to the 
SR and clutters it up).

I don't think I'm missing any Serdes inputs anywhere in the DSL, but I'm having 
a hard time figuring out *if this is normal existing behaviour for how a KTable 
is created via* *Process 1* or if I'm stumbling upon a bug somewhere. When I 
try to debug my way through this, the FKJoiner appears to use `

[jira] [Created] (KAFKA-9733) Consider addition to Kafka's replication model

2020-03-18 Thread Richard Yu (Jira)
Richard Yu created KAFKA-9733:
-

 Summary: Consider addition to Kafka's replication model
 Key: KAFKA-9733
 URL: https://issues.apache.org/jira/browse/KAFKA-9733
 Project: Kafka
  Issue Type: New Feature
  Components: clients, core
Reporter: Richard Yu


Note: Description still not finished.

This feature I'm proposing might not offer too much of a performance boost, but 
I think it is still worth considering. In our current replication model, we 
have a single leader and several followers (with our ISR included). However, 
the current bottleneck is that once the leader goes down, it will take a 
while to get the next leader online, which is a serious pain (and also leads to 
a considerable write/read delay).

In order to help alleviate this issue, we can consider multiple clusters 
independent of each other, i.e. each of them is its own leader/follower group 
for the _same partition set_. The difference here is that these clusters can 
_communicate_ with one another. 

At first, this might seem redundant, but there is a reasoning to this:
 # Let's say we have two leader/follower groups for the same replicated 
partition.
 # One leader goes down, and that means for the respective followers, they 
would under normal circumstances be unable to receive new write updates.
 # However, in this situation, we can have those followers poll their 
write/read requests from the other group whose leader has _not gone down._ It 
doesn't necessarily have to be the leader either; it can be other members of 
that group's ISR. 
 # The idea here is that if the members of these two groups detect that they 
are lagging behind one another, they would be able to poll one another for updates.

So what is the difference here from just having multiple leaders in a single 
cluster?

The answer is that the leader is responsible for making sure that there is 
consistency within _its own cluster_, not the other cluster it is in 
communication with.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-581: add tag "partition" to BrokerTopicMetrics so as to observe the partition metrics on the same broker

2020-03-18 Thread Chia-Ping Tsai
hi

This ticket is about recording partition metrics rather than topic metrics. It 
helps us observe more precise metrics for a specific partition. The downside is 
that the broker needs more space to keep the metrics, and the original metrics 
interface (tags) is broken since this ticket adds a new tag "partition=xxx" to 
it. The alternative is a new config flag used to enable the new metrics (false 
by default, of course). Also, we ought to provide enough documentation to 
explain the benefit and cost of the new metrics.

KIP-581 
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=148642648)

JIRA (https://issues.apache.org/jira/browse/KAFKA-9730)

Please take a look :)

Best Regards,
Chia-Ping
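
To make the proposal concrete: today the broker exposes per-topic MBeans such as the one 
below, and the change would add a "partition" tag. The partition-tagged name shown here is 
hypothetical; the exact naming (and whether it lives under BrokerTopicMetrics at all) would 
be defined by the KIP.

{code:java}
import javax.management.ObjectName;

public class PartitionMetricNameSketch {
    public static void main(String[] args) throws Exception {
        // Existing per-topic metric (real naming scheme).
        ObjectName perTopic = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=orders");
        // Hypothetical per-partition variant with the proposed extra tag.
        ObjectName perPartition = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=orders,partition=0");
        System.out.println(perTopic);
        System.out.println(perPartition);
    }
}
{code}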


Re: [DISCUSS] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-03-18 Thread Randall Hauch
Thanks for the proposal, Jeff. I can see how this proposal will add value.

I have a few comments, most of which are asking for more detail in the KIP.

   1. I do think the KIP needs to be much more explicit about what the new
   configuration properties will be and to use them consistently. For example,
   one of the examples shows using the "header.config" property, but the first
   paragraph in the proposal talks about using the "response.http.headers."
   prefix on all such properties. If that's the case, then all examples should
   include this prefix. We know from experience that users will copy examples
   from KIPs and paste directly into their worker configs.
   2. The KIP should be very clear about the types of each new property and
   the validation rules that will be implemented. The more validation we
   perform, the less opportunity for users to mis-configure these properties.
   The test plan should also list all of the cases that the tests should
   cover, including positive cases to verify they work correctly and negative
   cases to verify that invalid configurations are caught properly.
   3. I infer that the purpose of the include and exclude paths is to
   control which headers are applied to which of the Connect REST API's
   resources. But it'd be good to be more explicit and to have the examples be
   more meaningful. Also, the motivation section should clearly spell out why
   different headers on different resources, and should maybe provide a
   concrete example use case.
   4. The "header.config" example has a comma outside of the quotes, and
   this seems to not align with the statement before about having to use
   commas. How will the example even be parsed? Seems like we need more
   explanation here, and possibly more examples.
   5. Although the proposal seems to follow the pattern used for SMTs, this
   still is a complicated pattern. Is there any way to simplify the proposal?

Best regards,

Randall

On Fri, Mar 13, 2020 at 4:04 PM Jeff Huang  wrote:

> Hi Aneel,
>
> It is a great idea. I will update the KIP based on your suggestion.
>
> Jeff.
>
> On 2020/03/13 18:58:23, Aneel Nazareth  wrote:
> > If we're including and excluding paths, it seems like it might make
> > sense to allow for the configuration of multiple filters.
> >
> > We could do this with a pattern similar to how Kafka listeners are
> > configured. Something like:
> >
> > response.http.header.filters = myfilter1,myfilter2
> > response.http.header.myfilter1.included.paths = ...
> > response.http.header.myfilter1.included.mime.types = ...
> > response.http.header.myfilter1.config = set X-Frame-Options: DENY,"add
> > Cache-Control: no-cache, no-store, must-revalidate", ...
> >
> > response.http.header.myfilter2.included.paths = ...
> > response.http.header.myfilter2.included.mime.types = ...
> > response.http.header.myfilter2.config = setDate Expires: 3154000 ...
> >
> > But before we go down that road: are people going to want to be able
> > to set multiple different header filters? Or is one header filter for
> > all of the responses good enough?
> >
> > On Fri, Mar 13, 2020 at 10:56 AM Jeff Huang 
> wrote:
> > >
> > > Hi Aneel,
> > >
> > > That is a really great point. I will update the KIP. We need to add the
> > > following properties in combination with the header configs:
> > > includedPaths - CSV of path specs to include
> > > excludedPaths - CSV of path specs to exclude
> > > includedMimeTypes - CSV of mime types to include
> > > excludedMimeTypes - CSV of mime types to exclude
> > > includedHttpMethods - CSV of http methods to include
> > > excludedHttpMethods - CSV of http methods to exclude
> > >
> > > Jeff.
> > >
> > > On 2020/03/13 14:28:11, Aneel Nazareth  wrote:
> > > > Hi Jeff,
> > > >
> > > > Thanks for the KIP.
> > > >
> > > > Will users always want to set identical headers on all responses?
> Does
> > > > it make sense to also allow configuration of the HeaderFilter init
> > > > parameters like "includedPaths", "excludedHttpMethods", etc.? Does it
> > > > make sense to allow multiple configurations (so that eg. different
> > > > paths have different headers?)
> > > >
> > > > Cheers,
> > > > Aneel
> > > >
> > > > On Thu, Mar 12, 2020 at 7:05 PM Zhiguo Huang <
> jeff.hu...@confluent.io> wrote:
> > > > >
> > > > >
> > > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4334

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9404: Use ArrayList instead of LinkedList in Sensor (#7936)


--
[...truncated 5.88 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTes

Jenkins build is back to normal : kafka-trunk-jdk11 #1255

2020-03-18 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-8122) Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState

2020-03-18 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen reopened KAFKA-8122:

  Assignee: (was: Matthias J. Sax)

> Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState
> 
>
> Key: KAFKA-8122
> URL: https://issues.apache.org/jira/browse/KAFKA-8122
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0, 2.4.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3285/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState/]
> {quote}java.lang.AssertionError: Expected: <[KeyValue(0, 0), KeyValue(0, 1), 
> KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), KeyValue(0, 15), KeyValue(0, 
> 21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 45)]> but: was 
> <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105)]> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:212)
>  at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:414){quote}
> STDOUT
> {quote}[2019-03-17 01:19:51,971] INFO Created server with tickTime 800 
> minSessionTimeout 1600 maxSessionTimeout 16000 datadir 
> /tmp/kafka-10997967593034298484/version-2 snapdir 
> /tmp/kafka-5184295822696533708/version-2 
> (org.apache.zookeeper.server.ZooKeeperServer:174) [2019-03-17 01:19:51,971] 
> INFO binding to port /127.0.0.1:0 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:89) [2019-03-17 
> 01:19:51,973] INFO KafkaConfig values: advertised.host.name = null 
> advertised.listeners = null advertised.port = null 
> alter.config.policy.class.name = null 
> alter.log.dirs.replication.quota.window.num = 11 
> alter.log.dirs.replication.quota.window.size.seconds = 1 
> authorizer.class.name = auto.create.topics.enable = false 
> auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 
> broker.id.generation.enable = true broker.rack = null 
> client.quota.callback.class = null compression.type = producer 
> connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 
> 60 connections.max.reauth.ms = 0 control.plane.listener.name = null 
> controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 
> controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 
> 3 create.topic.policy.class.name = null default.replication.factor = 1 
> delegation.token.expiry.check.interval.ms = 360 
> delegation.token.expiry.time.ms = 8640 delegation.token.master.key = null 
> delegation.token.max.lifetime.ms = 60480 
> delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = 
> true fetch.purgatory.purge.interval.requests = 1000 
> group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 30 
> group.max.size = 2147483647 group.min.session.timeout.ms = 0 host.name = 
> localhost inter.broker.listener.name = null inter.broker.protocol.version = 
> 2.2-IV1 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] 
> leader.imbalance.check.interval.seconds = 300 
> leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 
> listeners = null log.cleaner.backoff.ms = 15000 
> log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 
> 8640 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 
> log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 
> 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 
> log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 
> log.cleanup.policy = [delete] log.dir = 
> /tmp/junit16020146621422955757/junit17406374597406011269 log.dirs = null 
> log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = 
> null log.flush.offset.checkpoint.interval.ms = 6 
> log.flush.scheduler.interval.ms = 9223372036854775807 
> log.flush.start.offset.checkpoint.interval.ms = 6 
> log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 
> log.message.downconversion.enable = true log.message.format.version = 2.2-IV1 
> log.message.timestamp.difference.max.ms = 9223372036854

[jira] [Resolved] (KAFKA-9656) TxnOffsetCommit should not return COORDINATOR_LOADING error for old request versions

2020-03-18 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9656.

Fix Version/s: 2.6.0
   Resolution: Fixed

> TxnOffsetCommit should not return COORDINATOR_LOADING error for old request 
> versions
> 
>
> Key: KAFKA-9656
> URL: https://issues.apache.org/jira/browse/KAFKA-9656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>
> In KAFKA-7296, we fixed a bug which causes the producer to enter a fatal 
> state when the COORDINATOR_LOADING_IN_PROGRESS error is received. The impact 
> of this bug in streams was that the application would crash. Generally we 
> want users to upgrade to a later client version, but in some cases, this 
> takes a long time. I am suggesting here that we revert the behavior change on 
> the broker for older versions of TxnOffsetCommit. For versions older than 2 
> (which was introduced in 2.3), rather than returning 
> COORDINATOR_LOADING_IN_PROGRESS, we can return COORDINATOR_NOT_AVAILABLE.
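
A minimal sketch of the version-gated mapping described above (not the actual broker 
patch, and the helper name is made up); the real change lives in the broker's handling of 
TxnOffsetCommit while the group coordinator is loading:

{code:java}
import org.apache.kafka.common.protocol.Errors;

final class TxnOffsetCommitLoadingErrorSketch {
    // requestVersion is the negotiated TxnOffsetCommit request version.
    static Errors loadingError(short requestVersion) {
        // Clients older than v2 (pre-2.3) treat COORDINATOR_LOAD_IN_PROGRESS as fatal,
        // so hand them the retriable COORDINATOR_NOT_AVAILABLE instead.
        return requestVersion < 2
                ? Errors.COORDINATOR_NOT_AVAILABLE
                : Errors.COORDINATOR_LOAD_IN_PROGRESS;
    }
}
{code}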



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[Discuss] KIP-582 Add a "continue" option for Kafka Connect error handling

2020-03-18 Thread Zihan Li
Hi all, 

I'd like to use this thread to discuss KIP-582: Add a "continue" option for 
Kafka Connect error handling. Please see the details at:
https://cwiki.apache.org/confluence/x/XRvcC

Best,
Zihan Li

Build failed in Jenkins: kafka-trunk-jdk8 #4335

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Scala to 2.12.11 (#8308)


--
[...truncated 2.91 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-syste

Build failed in Jenkins: kafka-trunk-jdk11 #1256

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Scala to 2.12.11 (#8308)

[wangguoz] HOTFIX: do not depend on file modified time in StateDirectoryTest


--
[...truncated 2.92 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.strea

Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-18 Thread Colin McCabe
+1 (binding).

Thanks, Aneel.

best,
Colin

On Wed, Mar 18, 2020, at 00:04, David Jacot wrote:
> +1 (non-binding), thanks for the KIP!
> 
> David
> 
> On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth  wrote:
> 
> > Hello all,
> >
> > Thanks to the folks who have given feedback. I've incorporated the
> > suggestions, and think that this is now ready for a vote:
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> >
> > Thank you,
> > Aneel
> >
>


[jira] [Resolved] (KAFKA-5604) All producer methods should raise `ProducerFencedException` after the first time.

2020-03-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5604.
--
Fix Version/s: 2.3.0
 Assignee: Guozhang Wang  (was: Apurva Mehta)
   Resolution: Fixed

> All producer methods should raise `ProducerFencedException` after the first 
> time.
> -
>
> Key: KAFKA-5604
> URL: https://issues.apache.org/jira/browse/KAFKA-5604
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.3.0
>
>
> Currently, when a `ProducerFencedException` is raised from a transactional 
> producer, the expectation is that the application should call `close` 
> immediately. However, if the application calls other producer methods, they 
> would get a `KafkaException`. This is a bit confusing, and results in tickets 
> like : https://issues.apache.org/jira/browse/KAFKA-5603. 
> We should update the producer so that calls to any method other than `close` 
> should raise a `ProducerFencedException` after the first time it is raised.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
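
For readers landing here from tickets like KAFKA-5603, a minimal sketch of the
application-side handling this change is meant to support; the bootstrap server,
topic, and transactional.id below are placeholders, and after the fix any
producer call made after the first fence should also surface
ProducerFencedException instead of a generic KafkaException:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.ProducerFencedException;

    public class FencedProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");     // placeholder
            props.put("transactional.id", "my-transactional-id"); // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
                producer.commitTransaction();
                producer.close();
            } catch (ProducerFencedException e) {
                // Another producer with the same transactional.id has taken over;
                // the only valid follow-up call is close().
                producer.close();
            }
        }
    }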


Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients

2020-03-18 Thread Guozhang Wang
Hello Sanjana,

Thanks for the proposed KIP, I think that makes a lot of sense -- as you 
mentioned in the motivation, we've indeed seen many issues with regard to 
frequent retries. With bounded exponential backoff, in the scenario where 
there's a long connectivity issue, we would effectively reduce the request 
load by a factor of 10 given the default configs.

For the higher-level Streams and Connect frameworks, today we also have retry 
logic, but it is used in a slightly different way. For example, in Streams we 
tend to handle the retry logic at the thread level, and hence very likely we'd 
like to change that mechanism in KIP-572 anyway. For the producer / consumer / 
admin clients, I think just applying this behavioral change across these 
clients makes a lot of sense. So I think we can just leave Streams / Connect 
out of the scope of this KIP, to be addressed in separate discussions.

I do not have further comments about this KIP :) LGTM.

Guozhang
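
To make the numbers in this thread concrete, here is a minimal sketch of a
bounded exponential backoff with jitter. The constants (100 ms initial backoff,
1000 ms cap, +/-20% jitter) mirror the defaults quoted further down in the
thread and are only illustrative, not the final KIP-580 configuration names or
values:

    import java.util.concurrent.ThreadLocalRandom;

    public class ExponentialBackoffSketch {
        private static final long RETRY_BACKOFF_MS = 100L;      // illustrative initial backoff
        private static final long RETRY_BACKOFF_MAX_MS = 1000L; // illustrative upper bound
        private static final double JITTER = 0.2;               // +/-20% to de-synchronize clients

        // Backoff before the (failures + 1)-th retry: min(initial * 2^failures, max), jittered.
        static long backoffMs(int failures) {
            double exp = RETRY_BACKOFF_MS * Math.pow(2, failures);
            long bounded = (long) Math.min(exp, RETRY_BACKOFF_MAX_MS);
            double jitter = ThreadLocalRandom.current().nextDouble(1 - JITTER, 1 + JITTER);
            return (long) (bounded * jitter);
        }

        public static void main(String[] args) {
            for (int failures = 0; failures < 5; failures++) {
                System.out.println("failures=" + failures + " -> backoff ~" + backoffMs(failures) + " ms");
            }
        }
    }

Without the jitter term this reproduces the 100 / 200 / 400 / 800 / 1000 ms
schedule described in the quoted messages below.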


On Wed, Mar 18, 2020 at 12:09 AM Sanjana Kaundinya 
wrote:

> Thanks for the feedback Boyang.
>
> If there’s anyone else who has feedback regarding this KIP, would really
> appreciate it hearing it!
>
> Thanks,
> Sanjana
>
> On Tue, Mar 17, 2020 at 11:38 PM Boyang Chen  wrote:
>
> > Sounds great!
> >
> > Get Outlook for iOS
> > 
> > From: Sanjana Kaundinya 
> > Sent: Tuesday, March 17, 2020 5:54:35 PM
> > To: dev@kafka.apache.org 
> > Subject: Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients
> >
> > Thanks for the explanation Boyang. One of the most common problems that
> we
> > have in Kafka is with respect to metadata fetches. For example, if there
> is
> > a broker failure, all clients start to fetch metadata at the same time
> and
> > it often takes a while for the metadata to converge. In a high load
> > cluster, there are also issues where the volume of metadata has made
> > convergence of metadata slower.
> >
> > For this case, exponential backoff helps as it reduces the retry rate and
> > spaces out how often clients will retry, thereby bringing down the time
> for
> > convergence. Something that Jason mentioned that would be a great
> addition
> > here would be if the backoff should be “jittered” as it was in KIP-144
> with
> > respect to exponential reconnect backoff. This would help prevent the
> > clients from being synchronized on when they retry, thereby spacing out
> the
> > number of requests being sent to the broker at the same time.
> >
> > I’ll add this example to the KIP and flush out more of the details - so
> > it’s more clear.
> >
> > On Mar 17, 2020, 1:24 PM -0700, Boyang Chen  >,
> > wrote:
> > > Thanks for the reply Sanjana. I guess I would like to rephrase my
> > question
> > > 2 and 3 as my previous response is a little bit unactionable.
> > >
> > > My specific point is that exponential backoff is not a silver bullet
> and
> > we
> > > should consider using it to solve known problems, instead of making the
> > > holistic changes to all clients in Kafka ecosystem. I do like the
> > > exponential backoff idea and believe this would be of great value, but
> > > maybe we should focus on proposing some existing modules that are
> > suffering
> > > from static retry, and only change them in this first KIP. If in the
> > > future, some other component users believe they are also suffering, we
> > > could get more minor KIPs to change the behavior as well.
> > >
> > > Boyang
> > >
> > > On Sun, Mar 15, 2020 at 12:07 AM Sanjana Kaundinya <
> skaundi...@gmail.com
> > >
> > > wrote:
> > >
> > > > Thanks for the feedback Boyang, I will revise the KIP with the
> > > > mathematical relations as per your suggestion. To address your
> > feedback:
> > > >
> > > > 1. Currently, with the default of 100 ms per retry backoff, in 1
> second
> > > > we would have 10 retries. In the case of using an exponential
> backoff,
> > we
> > > > would have a total of 4 retries in 1 second. Thus we have less than
> > half of
> > > > the amount of retries in the same timeframe and can lessen broker
> > pressure.
> > > > This calculation is done as following (using the formula laid out in
> > the
> > > > KIP:
> > > >
> > > > Try 1 at time 0 ms, failures = 0, next retry in 100 ms (default retry
> > ms
> > > > is initially 100 ms)
> > > > Try 2 at time 100 ms, failures = 1, next retry in 200 ms
> > > > Try 3 at time 300 ms, failures = 2, next retry in 400 ms
> > > > Try 4 at time 700 ms, failures = 3, next retry in 800 ms
> > > > Try 5 at time 1500 ms, failures = 4, next retry in 1000 ms (default
> max
> > > > retry ms is 1000 ms)
> > > >
> > > > For 2 and 3, could you elaborate more about what you mean with
> respect
> > to
> > > > client timeouts? I’m not very familiar with the Streams framework, so
> > would
> > > > love to get more insight to how that currently works, with respect to
> > > > producer transactions, so I can appropriately update the KIP to
> address
> > > > these scenarios.
> > > > On Mar 13, 20

Re: Add a customized logo for Kafka Streams

2020-03-18 Thread Guozhang Wang
I like the drawings too :)

On Fri, Mar 13, 2020 at 3:35 AM Patrik Kleindl  wrote:

> Great idea, I would definitely buy the book with that on the cover :-)
> best regards
> Patrik
>
> On Fri, 13 Mar 2020 at 09:57, Becket Qin  wrote:
>
> > I also like this one!
> >
> > Jiangjie (Becket) Qin
> >
> > On Fri, Mar 13, 2020 at 9:07 AM Matthias J. Sax 
> wrote:
> >
> > > I personally love it!
> > >
> > > -Matthias
> > >
> > > On 3/12/20 11:31 AM, Sophie Blee-Goldman wrote:
> > > > How reasonable of it. Let's try this again:
> > > > Streams Logo option 2
> > > >
> > > <
> > >
> >
> https://docs.google.com/drawings/d/1WoWn0kF3E7dbL1FGYT8_-bIeT2IjfYORAm6gg8Ils4k/edit?usp=sharing
> > > >
> > > >
> > > > On Thu, Mar 12, 2020 at 9:34 AM Guozhang Wang 
> > > wrote:
> > > >
> > > >> Hi Sophie,
> > > >>
> > > >> I cannot find the attachment from your previous email --- in fact,
> ASF
> > > >> mailing list usually blocks all attachments for security reasons. If
> > you
> > > >> can share a link to the image (google drawings etc) in your email
> that
> > > >> would be great.
> > > >>
> > > >> Guozhang
> > > >>
> > > >> On Wed, Mar 11, 2020 at 1:02 PM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > >> wrote:
> > > >>
> > > >>> Just to throw another proposal out there and inspire some debate,
> > > here's
> > > >>> a similar-but-different
> > > >>> idea (inspired by John + some sketches I found on google):
> > > >>>
> > > >>> *~~ See attachment, inlined image is too large for the mailing list
> > ~~*
> > > >>>
> > > >>> This one's definitely more fun, my only concern is that it doesn't
> > > really
> > > >>> scale well. At the lower end
> > > >>> of sizes the otters will be pretty difficult to see; and I had to
> > > stretch
> > > >>> out the Kafka circles even at
> > > >>> the larger end just to fit them through.
> > > >>>
> > > >>> But maybe with a cleaner drawing and some color it'll still look
> > > good and
> > > >>> be recognizable + distinct
> > > >>> enough when small.
> > > >>>
> > > >>> Any thoughts? Any binding and/or non-binding votes?
> > > >>>
> > > >>> On Sun, Mar 8, 2020 at 1:00 AM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > >>> wrote:
> > > >>>
> > >  Seems the mailing list may have filtered the inlined prototype
> logo,
> > >  attaching it here instead
> > > 
> > >  On Sat, Mar 7, 2020 at 11:54 PM Sophie Blee-Goldman
> > > 
> > >  wrote:
> > > 
> > > > Matthias makes a good point about being careful not to position
> > > Streams
> > > > as
> > > > outside of Apache Kafka. One obvious thing we could do it just
> > > include
> > > > the
> > > > Kafka logo as-is in the Streams logo, somehow.
> > > >
> > > > I have some unqualified opinions on what that might look like:
> > > > A good logo is simple and clean, so incorporating the Kafka logo
> > as a
> > > > minor
> > > > detail within a more complicated image is probably not the best
> way
> > > to
> > > > get
> > > > the quick and easy comprehension/recognition that we're going
> for.
> > > >
> > > > That said I'd throw out the idea of just attaching something to
> the
> > > > Kafka logo,
> > > > perhaps a stream-dwelling animal, perhaps a (river) otter? It
> could
> > > be
> > > > "swimming" left of the Kafka logo, with its head touching the
> upper
> > > > circle and
> > > > its tail touching the bottom one. Like Streams, it starts with
> > Kafka
> > > > and ends
> > > > with Kafka (ie reading input topics and writing to output
> topics).
> > > >
> > > > Without further ado, here's my very rough prototype for the Kafka
> > > > Streams logo:
> > > >
> > > > [image: image.png]
> > > > Obviously the real thing would be colored and presumably done by
> > > someone
> > > > with actual artist talent/experience (or at least photoshop
> > ability).
> > > >
> > > > Thoughts?
> > > >
> > > > On Sat, Mar 7, 2020, 1:08 PM Matthias J. Sax 
> > > wrote:
> > > >
> > > > Boyang,
> > > >
> > > > thanks for starting this discussion. I like the idea in general
> > > > however we need to be a little careful IMHO -- as you mentioned Kafka
> > > > is one project and thus we should avoid the impression that Kafka
> > > > Streams is not part of Apache Kafka.
> > > >
> > > > Besides this, many projects use animals that are often very adorable.
> > > > Maybe we could find a cute Streams related mascot? :)
> > > >
> > > > I would love to hear opinions especially from the PMC if having a
> logo
> > > > for Kafka Streams is a viable thing to do.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 3/3/20 1:01 AM, Patrik Kleindl wrote:
> > >  Hi Boyang Great idea, that would help in some discussions. To
> > > throw
> > >  in a first idea: https://imgur.com/a/UowXaMk best regards
> > Patrik
> > > 
> > >  On Mon, 2 Mar 2020 at 18:23, Boyang Chen
> > >   wrote:
> > > 
> >

Build failed in Jenkins: kafka-trunk-jdk11 #1257

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9047; AdminClient group operations should respect retries and

[github] KAFKA-9656; Return COORDINATOR_NOT_AVAILABLE for older producer clients


--
[...truncated 2.93 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false

Build failed in Jenkins: kafka-trunk-jdk8 #4336

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: do not depend on file modified time in StateDirectoryTest

[github] KAFKA-9047; AdminClient group operations should respect retries and

[github] KAFKA-9656; Return COORDINATOR_NOT_AVAILABLE for older producer clients


--
[...truncated 5.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0

[jira] [Resolved] (KAFKA-9636) Simple join of two KTables fails

2020-03-18 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-9636.
-
Resolution: Not A Bug

Closing this ticket as "not a bug" because the test failure turned out to be 
caused by the test itself.

> Simple join of two KTables fails
> 
>
> Key: KAFKA-9636
> URL: https://issues.apache.org/jira/browse/KAFKA-9636
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.1
>Reporter: Paul Snively
>Assignee: John Roesler
>Priority: Major
> Attachments: merge_issue.zip
>
>
> Attempting to join two KTables yields a `Topology` that, when tested with 
> `TopologyTestDriver` by adding records to the two `TestInputTopic`s, results 
> in an empty `TestOutputTopic`.
> I'm attaching a very small reproduction. The code is in Scala. The project is 
> therefore an "sbt" project. You can reproduce the results from your shell 
> with `sbt test`. The failure output will include the `describe` of the 
> `Topology` in question.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
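
For anyone wanting to reproduce a working setup, a minimal sketch of a
KTable-KTable join driven by TopologyTestDriver that does emit output; the
topic names, serdes, and join logic are placeholders rather than the code from
the attached reproduction project:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.TopologyTestDriver;
    import org.apache.kafka.streams.kstream.KTable;

    public class KTableJoinExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KTable<String, String> left = builder.table("left-topic");
            KTable<String, String> right = builder.table("right-topic");
            left.join(right, (l, r) -> l + "|" + r).toStream().to("output-topic");
            Topology topology = builder.build();

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ktable-join-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
                TestInputTopic<String, String> leftIn = driver.createInputTopic(
                    "left-topic", new StringSerializer(), new StringSerializer());
                TestInputTopic<String, String> rightIn = driver.createInputTopic(
                    "right-topic", new StringSerializer(), new StringSerializer());
                TestOutputTopic<String, String> out = driver.createOutputTopic(
                    "output-topic", new StringDeserializer(), new StringDeserializer());

                leftIn.pipeInput("k", "left-value");
                rightIn.pipeInput("k", "right-value");
                System.out.println(out.readKeyValuesToMap()); // {k=left-value|right-value}
            }
        }
    }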


Re: [DISCUSS] KIP-581: add tag "partition" to BrokerTopicMetrics so as to observe the partition metrics on the same broker

2020-03-18 Thread Cheng Pan
Hi Chia-Ping,


I'm afraid the KIP number conflicts. I created (KIP-581: Value of optional 
null field which has default value) earlier yesterday, and inserted it into 
the table [KIPs under discussion].
https://cwiki.apache.org/confluence/display/KAFKA/KIP-581%3A+Value+of+optional+null+field+which+has+default+value


Would you please choose another KIP number, maybe 583 (I also see a KIP-582 
discussion thread in the mailing list), and insert it into the table [KIPs 
under discussion]?


Best Regards,
Cheng Pan
-- Original --
From:  "Chia-Ping Tsai";;
Send time: Thursday, Mar 19, 2020 1:48 AM
To: "dev"; 

Subject:   [DISCUSS] KIP-581: add tag "partition" to BrokerTopicMetrics so as 
to observe the partition metrics on the same broker



hi

This ticket is about recording partition metrics rather than topic metrics. It 
helps us observe more precise metrics for a specific partition. The downside is 
that the broker needs more space to keep the metrics, and the original metrics 
interface (tags) is broken since this ticket adds the new tag "partition=xxx" 
to it. The alternative is a new config flag used to enable the new metrics (of 
course, it is false by default). Also, we ought to provide enough documentation 
to explain the benefits and costs of the new metrics.

KIP-581 
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=148642648)

JIRA (https://issues.apache.org/jira/browse/KAFKA-9730)

Please take a look :)

Best Regards,
Chia-Ping
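
To make the proposed change concrete, a minimal sketch of today's per-topic
mbean name next to a hypothetical per-partition name; the exact object-name
layout with "partition=0" is my reading of the proposal, not a committed
interface:

    import javax.management.ObjectName;

    public class PartitionTagExample {
        public static void main(String[] args) throws Exception {
            // Today: one meter per topic on the broker.
            ObjectName perTopic = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=my-topic");
            // With the proposed tag (hypothetical): one meter per topic-partition,
            // only registered when the new config flag is enabled.
            ObjectName perPartition = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=my-topic,partition=0");
            System.out.println(perTopic);
            System.out.println(perPartition);
        }
    }

The memory and registration cost discussed above scales with topics x
partitions in this form, which is why the opt-in flag matters.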

Build failed in Jenkins: kafka-trunk-jdk11 #1258

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-5604: Remove the redundant TODO marker on the Streams side 
(#8313)


--
[...truncated 5.92 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.Window

Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-18 Thread feyman2009
Hi, team
Before going too far on the KIP update, I would like to hear your opinions 
about how we would change the AdminClient interface; the two alternatives I 
can think of are:
1) Extend adminClient.removeMembersFromConsumerGroup to support removing all 
members
As Guozhang suggested, we could add a flag param in 
RemoveMembersFromConsumerGroupOptions to indicate the "remove all" logic.
2) Add a new API like 
adminClient.removeAllMembersFromConsumerGroup(groupId, options)

    I think 1) will be more compact from the API perspective, but looking at 
the code, I found that if we are going to remove all, then 
RemoveMembersFromConsumerGroupResult#memberInfos/memberResult()/all() would 
need to change accordingly, and they do not seem that meaningful under the 
"remove all" scenario (a rough sketch of the call site follows below).

    A minor thought about the adminClient.removeMembersFromConsumerGroup API:
Looking at some other deleteXX APIs, like deleteTopics and deleteRecords, the 
results contain only a Map>, which I think is enough to describe the 
related results. Does it make sense to remove memberInfos from 
RemoveMembersFromConsumerGroupResult? This KIP has no dependency on this if we 
choose alternative 2)
 
Could you advise? Thanks!

Feyman
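
A rough sketch of what the call site for alternative 1) could look like; the
removal of specific static members via MemberToRemove is the API that exists
today, while the removeAll-style flag is purely hypothetical wording for the
option being discussed (bootstrap server, group id, and instance id are
placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.MemberToRemove;
    import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

    public class RemoveMembersExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            try (Admin admin = Admin.create(props)) {
                // Existing API: remove one static member identified by its group.instance.id.
                RemoveMembersFromConsumerGroupOptions specific =
                    new RemoveMembersFromConsumerGroupOptions(
                        Collections.singleton(new MemberToRemove("instance-1")));
                admin.removeMembersFromConsumerGroup("my-group", specific).all().get();

                // Alternative 1) as discussed above (hypothetical flag, not in the current API):
                // RemoveMembersFromConsumerGroupOptions all = new RemoveMembersFromConsumerGroupOptions();
                // all.removeAll(true);
                // admin.removeMembersFromConsumerGroup("my-group", all).all().get();
            }
        }
    }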
 

Sent: Sunday, March 15, 2020 10:11
To: dev 
Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

Hi, all
Thanks a lot for your feedback!
According to the discussion, it seems we don't have valid use cases 
for removing specific dynamic members, so I think it makes sense to encapsulate 
the "get and delete" logic in the adminClient. I will update the KIP shortly!

Thanks!

Feyman


--
From: Boyang Chen 
Sent: Saturday, March 14, 2020 00:39
To: dev 
Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

Thanks Matthias and Guozhang for the feedback. I'm not worrying too much
about the member.id exposure as we have done so in a couple of areas. As
for the recommended admin client change, I think it makes sense in an
encapsulation perspective. Maybe I'm still a bit hesitant as we are losing
the flexibility of closing only a subset of `dynamic members` potentially,
but we could always get back and address it if some user feels necessary to
have it.

My short answer would be, LGTM :)

Boyang

On Thu, Mar 12, 2020 at 5:26 PM Guozhang Wang  wrote:

> Hi Matthias,
>
> About the AdminClient param API: that's a great point here. I think overall
> if users want to just "remove all members" they should not need to first
> get all the member.ids themselves, but instead internally the admin client
> can first issue a describe-group request to get all the member.ids, and
> then use them in the next issued leave-group request, all abstracted away
> from the users. With that in mind, maybe in
> RemoveMembersFromConsumerGroupOptions we can just introduce an overloaded
> flag param besides "members" that indicate "remove all"?
>
> Guozhang
>
> On Thu, Mar 12, 2020 at 2:59 PM Matthias J. Sax  wrote:
>
> > Feyman,
> >
> > some more comments/questions:
> >
> > The description of `LeaveGroupRequest` is clear but it's unclear how
> > `MemberToRemove` should behave. Which parameter is required? Which is
> > optional? What is the relationship between both.
> >
> > The `LeaveGroupRequest` description clearly states that specifying a
> > `memberId` is optional if the `groupInstanceId` is provided. If
> > `MemberToRemove` applies the same pattern, it must be explicitly defined
> > in the KIP (and explained in the JavaDocs of `MemberToRemove`) because
> > we cannot expect that an admin-client users knows that internally a
> > `LeaveGroupRequest` is used nor what the semantics of a
> > `LeaveGroupRequest` are.
> >
> >
> > About Admin API:
> >
> > In general, I am also confused that we allow so specify a `memberId` at
> > all, because the `memberId` is an internal id that is not really exposed
> > to the user. Hence, from a AdminClient point of view, accepting a
> > `memberId` as input seems questionable to me? Of course, `memberId` can
> > be collected via `describeConsumerGroups()` but it will return the
> > `memberId` of _all_ consumer in the group and thus how would a user know
> > which member should be removed for a dynamic group (if an individual
> > member should be removed)?
> >
> > Hence, how can any user get to know the `memberId` of an individual
> > client in a programtic way?
> >
> > Also I am wondering in general, why the removal of single dynamic member
> > is important? In general, I would expect a short `session.timeout` for
> > dynamic groups and thus removing a specific member from the group seems
> > not to be an important feature -- for static groups we expect a long
> > `session.timeout` and a user can also identify individual clients via
> > `groupInstandId`, hence the feature makes sense for this case and is
> > straight forward to use.
> >
> >
> > About St

Re: [DISCUSS] KIP-581: add tag "partition" to BrokerTopicMetrics so as to observe the partition metrics on the same broker

2020-03-18 Thread Chia-Ping Tsai
hi Cheng

seems the "Next KIP Number" is not synced :(

I have updated my KIP number and corrected the "Next KIP Number"

thanks for the reminder.

On 2020/03/19 02:32:55, "Cheng Pan" <379377...@qq.com> wrote: 
> Hi Chia-Ping,
> 
> 
> I'm afraid the KIP number is conflict. I created (KIP-581: Value of optional 
> null field which has default value) yesterday early, and insert it into table 
> [KIPs under discussion].
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-581%3A+Value+of+optional+null+field+which+has+default+value
> 
> 
> Would you please choose another KIP number, maybe 583, I also see KIP-582 
> discussion thread appears in the mail list, and insert into table [KIPs under 
> discussion].
> 
> 
> Best Regards,
> Cheng Pan
> -- Original --
> From:  "Chia-Ping Tsai";;
> Send time: Thursday, Mar 19, 2020 1:48 AM
> To: "dev"; 
> 
> Subject:   [DISCUSS] KIP-581: add tag "partition" to BrokerTopicMetrics so as 
> to observe the partition metrics on the same broker
> 
> 
> 
> hi
> 
> this ticket is about to records partition metrics rather than topic metrics. 
> It helps us to observe more precis metrics for specify partition. The 
> downside is that broker needs more space to keep metrics and the origin 
> metrics interface (tags) is broken since this ticket adds new tag 
> "partition=xxx" to it. The alternative is an new config flag used to enable 
> new metrics (of course, it is false by default). Also, we ought to provide 
> enough document to explain benefit and cost of new metrics.
> 
> KIP-581 
> (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=148642648)
> 
> JIRA (https://issues.apache.org/jira/browse/KAFKA-9730)
> 
> Please take a look :)
> 
> Best Regards,
> Chia-Ping


Build failed in Jenkins: kafka-trunk-jdk11 #1259

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9568: enforce rebalance if client endpoint has changed (#8299)


--
[...truncated 2.93 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTest

Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-18 Thread Boyang Chen
Thanks for the insight, Feyman. I personally feel adding another admin
client command is redundant, so I would pick option 1). The memberInfos struct
is internal and just used for result reference purposes. I think it could
still work even if we overload it with a `removeAll` option, if that makes sense.

Boyang

On Wed, Mar 18, 2020 at 8:51 PM feyman2009 
wrote:

> Hi, team
> Before going too far on the KIP update, I would like to hear your
> opinions about how we would change the interface of AdminClient, the two
> alternatives I could think of are:
> 1) Extend adminClient.removeMembersFromConsumerGroup to support remove
> all
> As Guochang suggested, we could add some flag param in
> RemoveMembersFromConsumerGroupOptions to indicated the "remove all" logic.
> 2) Add a new API like
> adminClient.removeAllMembersFromConsumerGroup(groupId, options)
>
> I think 1) will be more compact from the API perspective, but looking
> at the code, I found that the if we are going to remove all, then the
> RemoveMembersFromConsumerGroupResult#memberInfos/memberResult()/all()
> should be changed accordingly, and they seem not that meaningful under the
> "remove all" scenario.
>
> A minor thought about the adminClient.removeMembersFromConsumerGroup
> API is:
> Looking at some other deleteXX APIs, like deleteTopics, deleteRecords,
> the results contains only a Map>, I think it's enough to
> describe the related results, is it make sense that we may remove
> memberInfos in RemoveMembersFromConsumerGroupResult ? This KIP has no
> dependency on this if we choose alternative 2)
>
> Could you advise? Thanks!
>
> Feyman
>
>
> Sent: Sunday, March 15, 2020 10:11
> To: dev 
> Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in
> StreamsResetter
>
> Hi, all
> Thanks a lot for your feedback!
> According to the discussion, it seems we don't have some valid use
> cases for removing specific dynamic members, I think it makes sense to
> encapsulate the "get and delete" logic in adminClient. I will update the
> KIP shortly!
>
> Thanks!
>
> Feyman
>
>
> --
> From: Boyang Chen 
> Sent: Saturday, March 14, 2020 00:39
> To: dev 
> Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in
> StreamsResetter
>
> Thanks Matthias and Guozhang for the feedback. I'm not worrying too much
> about the member.id exposure as we have done so in a couple of areas. As
> for the recommended admin client change, I think it makes sense in an
> encapsulation perspective. Maybe I'm still a bit hesitant as we are losing
> the flexibility of closing only a subset of `dynamic members` potentially,
> but we could always get back and address it if some user feels necessary to
> have it.
>
> My short answer would be, LGTM :)
>
> Boyang
>
> On Thu, Mar 12, 2020 at 5:26 PM Guozhang Wang  wrote:
>
> > Hi Matthias,
> >
> > About the AdminClient param API: that's a great point here. I think
> overall
> > if users want to just "remove all members" they should not need to first
> > get all the member.ids themselves, but instead internally the admin
> client
> > can first issue a describe-group request to get all the member.ids, and
> > then use them in the next issued leave-group request, all abstracted away
> > from the users. With that in mind, maybe in
> > RemoveMembersFromConsumerGroupOptions we can just introduce an overloaded
> > flag param besides "members" that indicate "remove all"?
> >
> > Guozhang
> >
> > On Thu, Mar 12, 2020 at 2:59 PM Matthias J. Sax 
> wrote:
> >
> > > Feyman,
> > >
> > > some more comments/questions:
> > >
> > > The description of `LeaveGroupRequest` is clear but it's unclear how
> > > `MemberToRemove` should behave. Which parameter is required? Which is
> > > optional? What is the relationship between both.
> > >
> > > The `LeaveGroupRequest` description clearly states that specifying a
> > > `memberId` is optional if the `groupInstanceId` is provided. If
> > > `MemberToRemove` applies the same pattern, it must be explicitly
> defined
> > > in the KIP (and explained in the JavaDocs of `MemberToRemove`) because
> > > we cannot expect that an admin-client users knows that internally a
> > > `LeaveGroupRequest` is used nor what the semantics of a
> > > `LeaveGroupRequest` are.
> > >
> > >
> > > About Admin API:
> > >
> > > In general, I am also confused that we allow so specify a `memberId` at
> > > all, because the `memberId` is an internal id that is not really
> exposed
> > > to the user. Hence, from a AdminClient point of view, accepting a
> > > `memberId` as input seems questionable to me? Of course, `memberId` can
> > > be collected via `describeConsumerGroups()` but it will return the
> > > `memberId` of _all_ consumer in the group and thus how would a user
> know
> > > which member should be removed for a dynamic group (if an individual
> > > member should be removed)?
> > >
> > > Hence, how can any user get to know the `memberId

Build failed in Jenkins: kafka-2.5-jdk8 #67

2020-03-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: do not rely on file modified time in StateDirectoryTest


--
[...truncated 1.55 MB...]

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsBootstrapServer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsBootstrapServer PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsBrokerList STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsBrokerList PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testBootstrapServerOverride STARTED

kafka.tools.ConsoleProducerTest > testBootstrapServerOverride PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testBrokerListOverride STARTED

kafka.tools.ConsumerPerformanceTest > testBrokerListOverride PASSED

kafka.tools.ConsumerPerformanceTest > testConfigBootStrapServer STARTED

kafka.tools.ConsumerPerformanceTest > testConfigBootStrapServer PASSED

kafka.tools.ConsumerPerformanceTest > testConfigBrokerList STARTED

kafka.tools.ConsumerPerformanceTest > testConfigBrokerList PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit STARTED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning 
STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod STARTED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithNoOffsetReset STARTED

kafka.tools.ConsoleConsum

Kafka JMX monitoring

2020-03-18 Thread 张祥
Hi,

I want to know what the best practice is for collecting Kafka JMX metrics. I
haven't found a decent way to collect and parse the JMX metrics in Java (because
there are so many of them), and I have learned that there are tools like
jmxtrans to do this. I wonder if there are others. Thanks. Regards.
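
A minimal sketch, assuming the broker was started with JMX enabled (e.g.
JMX_PORT=9999) and localhost:9999 is reachable, of connecting from Java and
enumerating the broker topic metrics with a pattern query instead of
hard-coding every mbean; host, port, and the pattern are placeholders:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class KafkaJmxCollector {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"); // placeholder
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Pattern query: all broker-level topic meters (MessagesInPerSec, BytesInPerSec, ...).
                Set<ObjectName> names = mbsc.queryNames(
                    new ObjectName("kafka.server:type=BrokerTopicMetrics,*"), null);
                for (ObjectName name : names) {
                    // These are Yammer meters, so they expose Count and OneMinuteRate attributes.
                    Object count = mbsc.getAttribute(name, "Count");
                    System.out.println(name + " Count=" + count);
                }
            }
        }
    }

Tools like jmxtrans or the Prometheus JMX exporter essentially do this polling
for you and add the parsing/export layer, which is usually why they are
preferred over hand-rolled collectors.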