Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #246

2020-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Factor out common response parsing logic (#9617)


--
[...truncated 6.95 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@274917fd, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@274917fd, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57286c26, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57286c26, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1f34dcbc, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1f34dcbc, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@52d58bcb, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@52d58bcb, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3dcc8893, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3dcc8893, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@29918a50, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@29918a50, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@107972e2, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@107972e2, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6e0c38f5, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6e0c38f5, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@375dadb7, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@375dadb7, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@12bb7206, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@12bb7206, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kaf

Re: Identifying JIRA's for contribution

2020-11-20 Thread Loknath Priyatham Teja Singamsetty
Quick follow-up: is there any outstanding work that needs contributions?

On Thu, Nov 19, 2020 at 4:21 PM Loknath Priyatham Teja Singamsetty <
lsingamse...@salesforce.com> wrote:

> Hi Kafka Team,
>
> We are a team of interested people from the Salesforce company and want to
> contribute to open source and want to pick up some JIRA's and start
> contributing back to the community. Could you please share the JIRA's which
> we can start working on.
>
> --
> Thanks,
> Loknath,
> Engineer Manager,
> Salesforce
>


-- 
Thanks,
Loknath.


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #271

2020-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10607: Consistent behaviour for response errorCounts() (#9433)

[github] MINOR: remove unnecessary semicolon from Agent.java and 
AgentClient.java (#9625)


--
[...truncated 3.48 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTim

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #247

2020-11-20 Thread Apache Jenkins Server
See 




Re: Identifying JIRA's for contribution

2020-11-20 Thread John Roesler
Hi Loknath,

A couple of people replied to you already. Here’s the thread: 
https://lists.apache.org/x/thread.html/rb93185752163fa71acfce5d11820a48f6c4551863aa3d1d98b21c0ea@%3Cdev.kafka.apache.org%3E

I’m not sure what might have been wrong. Did you subscribe by following the 
instructions at https://kafka.apache.org/contact.html?

Otherwise, the replies might have gotten caught in your spam filter or 
something. I’m including your address directly in the To field to hopefully get 
this message through to you. 

Thanks,
John

On Fri, Nov 20, 2020, at 03:26, Loknath Priyatham Teja Singamsetty wrote:
> Quick follow up. Are there any outstanding works which require
> contributions ?
> 
> On Thu, Nov 19, 2020 at 4:21 PM Loknath Priyatham Teja Singamsetty <
> lsingamse...@salesforce.com> wrote:
> 
> > Hi Kafka Team,
> >
> > We are a team of interested people from the Salesforce company and want to
> > contribute to open source and want to pick up some JIRA's and start
> > contributing back to the community. Could you please share the JIRA's which
> > we can start working on.
> >
> > --
> > Thanks,
> > Loknath,
> > Engineer Manager,
> > Salesforce
> >
> 
> 
> -- 
> Thanks,
> Loknath.
>


Re: [VOTE] KIP-684 - Support mutual TLS authentication on SASL_SSL listeners

2020-11-20 Thread Rajini Sivaram
The vote has passed with 4 binding votes (David, Ismael, Manikumar, me) and
one non-binding vote (Ron). Many thanks to everyone who reviewed and voted.

I will update the KIP page and submit a PR.

Thank you,

Rajini



On Tue, Nov 17, 2020 at 9:41 AM Manikumar  wrote:

> +1 (binding), Thanks for the KIP.
>
> Thanks,
>
> On Mon, Nov 16, 2020 at 9:11 PM Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > Ismael
> >
> > On Mon, Nov 16, 2020 at 3:33 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi all,
> > >
> > > I would like to start vote on KIP-684 to support TLS client
> > authentication
> > > (mTLS) on SASL_SSL listeners:
> > >
> > >-
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-684+-+Support+mutual+TLS+authentication+on+SASL_SSL+listeners
> > >
> > >
> > > Thank you...
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> >
>


Re: Identifying JIRA's for contribution

2020-11-20 Thread Loknath Priyatham Teja Singamsetty
Thanks John. This helps. Will redirect my team towards the list.

On Fri, Nov 20, 2020 at 8:03 PM John Roesler  wrote:

> Hi Loknath,
>
> A couple of people replied to you already. Here’s the thread:
> https://urldefense.com/v3/__https://lists.apache.org/x/thread.html/rb93185752163fa71acfce5d11820a48f6c4551863aa3d1d98b21c0ea@*3Cdev.kafka.apache.org*3E__;JSU!!DCbAVzZNrAf4!VHdK5yld6ksakYw_nn6xZOfil0KZ1zz54iStTNd0iWXItSTOrLEpPOpAU6lr1n-xOmO4$
>
> I’m not sure what might have been wrong. Did you subscribe by following
> the instructions at
> https://urldefense.com/v3/__https://kafka.apache.org/contact.html?__;!!DCbAVzZNrAf4!VHdK5yld6ksakYw_nn6xZOfil0KZ1zz54iStTNd0iWXItSTOrLEpPOpAU6lr1ojNE2VB$
>
> Otherwise, the replies might have gotten caught in your spam filter or
> something. I’m including your address directly in the To field to hopefully
> get this message through to you.
>
> Thanks,
> John
>
> On Fri, Nov 20, 2020, at 03:26, Loknath Priyatham Teja Singamsetty wrote:
> > Quick follow up. Are there any outstanding works which require
> > contributions ?
> >
> > On Thu, Nov 19, 2020 at 4:21 PM Loknath Priyatham Teja Singamsetty <
> > lsingamse...@salesforce.com> wrote:
> >
> > > Hi Kafka Team,
> > >
> > > We are a team of interested people from the Salesforce company and
> want to
> > > contribute to open source and want to pick up some JIRA's and start
> > > contributing back to the community. Could you please share the JIRA's
> which
> > > we can start working on.
> > >
> > > --
> > > Thanks,
> > > Loknath,
> > > Engineer Manager,
> > > Salesforce
> > >
> >
> >
> > --
> > Thanks,
> > Loknath.
> >
>


-- 
Thanks,
Loknath.


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-20 Thread Kowshik Prakasam
Hi Harsha/Satish,

Hope you are doing well. Would you be able to please update the meeting
notes section for the most recent 2 meetings (from 10/13 and 11/10)? It
will be useful to share the context with the community.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-MeetingNotes


Cheers,
Kowshik


On Tue, Nov 10, 2020 at 11:39 PM Kowshik Prakasam 
wrote:

> Hi Harsha,
>
> The goal we discussed is to aim for preview in AK 3.0. In order to get us
> there, it will be useful to think about the order in which the code changes
> will be implemented, reviewed and merged. Since you are driving the
> development, do you want to layout the order of things? For example, do you
> eventually want to break up the PR into multiple smaller ones? If so, you
> could list the milestones there. Another perspective is that this can be
> helpful to budget time suitably and to understand the progress.
> Let us know how we can help.
>
>
> Cheers,
> Kowshik
>
> On Tue, Nov 10, 2020 at 3:26 PM Harsha Chintalapani 
> wrote:
>
>> Thanks Kowshik for the link. Seems reasonable; as we discussed on the
>> call, code and completion of this KIP will be taken up by us.
>> Regarding Milestone 2, what do you think needs to be clarified there?
>> I believe what we are promising in the KIP, along with unit and system
>> tests, will be delivered, and we can call that the preview. We will be
>> running this in our production environment and continue to provide data
>> and metrics to push this feature to GA.
>>
>>
>>
>> On Tue, Nov 10, 2020 at 10:07 AM, Kowshik Prakasam <
>> kpraka...@confluent.io>
>> wrote:
>>
>> > Hi Harsha/Satish,
>> >
>> > Thanks for the discussion today. Here is a link to the KIP-405
>> > development milestones google doc we discussed in the meeting today:
>> > https://docs.google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
>> > I have shared it with you. Please have a look and share your
>> > feedback/improvements. As we discussed, things are clear until
>> > milestone 1. Beyond that, we can discuss it again (perhaps in the next
>> > sync or later), once you have thought through the implementation
>> > plan/milestones and the release into preview in 3.0.
>> >
>> > Cheers,
>> > Kowshik
>> >
>> > On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana <
>> satish.dugg...@gmail.com>
>> > wrote:
>> >
>> > Hi Jun,
>> > Thanks for your comments. Please find the inline replies below.
>> >
>> > 605.2 "Build the local leader epoch cache by cutting the leader epoch
>> > sequence received from remote storage to [LSO, ELO]." I mentioned an
>> > issue earlier. Suppose the leader's local start offset is 100. The
>> > follower finds a remote segment covering offset range [80, 120). The
>> > producerState with this remote segment is up to offset 120. To trim the
>> > producerState to offset 100 requires more work, since one needs to
>> > download the previous producerState up to offset 80 and then replay the
>> > messages from 80 to 100. It seems that it's simpler in this case for
>> > the follower just to take the remote segment as it is and start
>> > fetching from offset 120.
>> >
>> > We chose that approach to avoid any edge cases here. It may be possible
>> > that the remote log segment that is received does not have the same
>> > leader epoch sequence from 100-120 as it does on the leader (this can
>> > happen due to unclean leader election). It is safe to start from what
>> > the leader returns here. Another way is to find the remote log segment
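A toy illustration of the "cut the leader epoch sequence to [LSO, ELO]" step discussed in item 605.2 above. This is not Kafka's implementation; it assumes the epoch cache is a sorted list of (epoch, startOffset) entries:

```python
# Toy illustration of trimming a leader epoch cache to [lso, elo);
# NOT Kafka's actual implementation.
def cut_epoch_sequence(entries, lso, elo):
    """entries: sorted list of (epoch, start_offset) pairs."""
    out = []
    for i, (epoch, start) in enumerate(entries):
        # An entry covers [start, next_start); the last one is open-ended.
        end = entries[i + 1][1] if i + 1 < len(entries) else float("inf")
        if end <= lso or start >= elo:
            continue  # entirely outside the retained offset range
        out.append((epoch, max(start, lso)))
    return out

# Leader's local start offset 100, remote segment end offset 120:
print(cut_epoch_sequence([(1, 0), (2, 80), (3, 110)], 100, 120))
# -> [(2, 100), (3, 110)]
```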
>> >
>> > 5016. Just to echo what Kowshik was saying. It seems that
>> > RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
>> > partition, not on the replicas for the __remote_log_segment_metadata
>> > partition. It's not clear how the leader of __remote_log_segment_metadata
>> > obtains the metadata for remote segments for deletion.
>> >
>> > RLMM will always receive the callback for the remote log metadata topic
>> > partitions hosted on the local broker, and these will be subscribed. I
>> > will make this clear in the KIP.
>> >
>> > 5100. KIP-516 has been accepted and is being implemented now. Could you
>> > update the KIP based on topicID?
>> >
>> > We mentioned KIP-516 and how it helps. We will update this KIP with all
>> > the changes it brings with KIP-516.
>> >
>> > 5101. RLMM: It would be useful to clarify how the following two APIs
>> > are used. According to the wiki, the former is used for topic deletion
>> > and the latter is used for retention. It seems that retention should
>> > use the former, since remote segments without a matching epoch in the
>> > leader (potentially due to unclean leader election) also need to be
>> > garbage collected. The latter seems to be used 

[jira] [Created] (KAFKA-10755) Should consider commit latency when computing next commit timestamp

2020-11-20 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-10755:
---

 Summary: Should consider commit latency when computing next commit 
timestamp
 Key: KAFKA-10755
 URL: https://issues.apache.org/jira/browse/KAFKA-10755
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Matthias J. Sax


In 2.6, we reworked the main processing/commit loop in `StreamThread` and 
introduced a regression, by _not_ updating the current time after committing. 
This implies that we compute the next commit timestamp too low (ie, too early).

For small commit intervals and high commit latency (like with EOS), this bug may 
lead to an increased commit frequency and fewer processed records between two 
commits, and thus to reduced throughput.

For example, assume that the commit interval is 100ms and the commit latency is 
50ms, and we start the commit at timestamp 10000. The commit finishes at 10050, 
and the next commit should happen at 10150. However, if we don't update the 
current timestamp, we incorrectly compute the next commit time as 10100, ie, 
50ms too early, and we have only 50ms to process data instead of the intended 
100ms.

In the worst case, if the commit latency is larger than the commit interval, it 
would imply that we commit after processing a single record per task.
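The arithmetic in the example above can be sketched as follows (a toy model of the scheduling logic, not the actual `StreamThread` code):

```python
# Toy model of the commit scheduling described in KAFKA-10755;
# NOT the actual StreamThread code.
COMMIT_INTERVAL_MS = 100
COMMIT_LATENCY_MS = 50

def next_commit_buggy(now_before_commit: int) -> int:
    # Bug: schedule from the (stale) time captured before the commit ran,
    # so the commit latency eats into the next processing window.
    return now_before_commit + COMMIT_INTERVAL_MS

def next_commit_fixed(now_before_commit: int) -> int:
    # Fix: refresh the current time after the commit completes.
    now_after_commit = now_before_commit + COMMIT_LATENCY_MS
    return now_after_commit + COMMIT_INTERVAL_MS

start = 10_000
print(next_commit_buggy(start))  # 10100 -> only 50 ms left to process
print(next_commit_fixed(start))  # 10150 -> the intended 100 ms
```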



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10745) Please let me know how I check the time which Source connector receive the data from source table.

2020-11-20 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10745.
-
Resolution: Invalid

[~nayusik] – we use the Jira board for bug reports, not to answer questions.

If you have questions, please sign up to the user mailing list as described on 
the webpage ([https://kafka.apache.org/contact]) and ask your question there. 
Thanks.

> Please let me know how I check the time which Source connector receive the 
> data from source table.
> --
>
> Key: KAFKA-10745
> URL: https://issues.apache.org/jira/browse/KAFKA-10745
> Project: Kafka
>  Issue Type: Improvement
>Reporter: NAYUSIK
>Priority: Major
>
> Please let me know how I can check the time at which the Source connector 
> receives the data from the source table.
> I want to check the time by section.
> We are currently using the JDBC Connector.
> The times we can see are: the time when the data is created on the source 
> table, the time when the data is entered into Kafka, and the time when the 
> data is generated on the target table.
> But I also want to know the time when the Source connector receives the data 
> from the source table.
> Please tell me what settings I need to set up on the Source connector.
> Thank you for your support.
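This is not an official answer from the thread, but one common approach worth noting (a hedged sketch; the field name `connect_ingest_ts` is illustrative) is the `InsertField` single message transform, which stamps each source record with the timestamp at which the connector produced it:

```properties
# Hypothetical connector-config fragment; field name is illustrative.
transforms=addIngestTs
transforms.addIngestTs.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.addIngestTs.timestamp.field=connect_ingest_ts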



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10752) One topic partition multiple consumer

2020-11-20 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10752.
-
Resolution: Invalid

[~dnamicro] – we use the Jira board for bug reports, not to answer questions.

If you have questions, please sign up to the user mailing list as described on 
the webpage ([https://kafka.apache.org/contact]) and ask your question there. 
Thanks.

> One topic partition multiple consumer
> -
>
> Key: KAFKA-10752
> URL: https://issues.apache.org/jira/browse/KAFKA-10752
> Project: Kafka
>  Issue Type: Task
>Reporter: AaronTrazona
>Priority: Minor
>
> # Does this mean that a single partition cannot be consumed by multiple 
> consumers? Can't we have a single partition and a consumer group with more 
> than one consumer, and make them all consume from that single partition?
>  # If a single partition can be consumed by only a single consumer, I was 
> wondering why this is the design decision?
>  # What if I need total order over records and still need them to be 
> consumed in parallel? Is that undoable in Kafka? Or does such a scenario not 
> make sense?
> I need clarification on whether this is doable in Kafka: 1 topic partition 
> with multiple consumers (round-robin strategy)
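For context, a toy model of the semantics the questions above ask about (this is not the real Kafka group assignor): within one consumer group, each partition is owned by exactly one consumer, while independent groups each receive the full partition:

```python
# Toy model of consumer-group assignment semantics; NOT the real assignor.
from itertools import cycle

def assign(partitions, consumers):
    """Round-robin the topic's partitions over the consumers of ONE group."""
    assignment = {c: [] for c in consumers}
    ring = cycle(consumers)
    for p in partitions:
        assignment[next(ring)].append(p)
    return assignment

# One partition, one group with two consumers: only one consumer gets data,
# the other sits idle -- a single partition is not parallelized within a group.
print(assign([0], ["c1", "c2"]))  # {'c1': [0], 'c2': []}

# Two independent groups: each group is assigned every partition, so both
# applications independently read the single partition in full.
print({group: assign([0], ["c1"]) for group in ["group-a", "group-b"]})
```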



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.6.1 RC0

2020-11-20 Thread Matthias J. Sax
Mickael,

we discovered a regression bug in Kafka Streams today that was
introduced in the 2.6.0 release. It affects EOS users, which is a growing
percentage of Kafka Streams users.

I marked it as critical for now, but would propose it as a blocker for
2.6.1. What do you think? Can we roll a new RC to include a fix in 2.6.1?

A PR is already under review.

https://issues.apache.org/jira/browse/KAFKA-10755


-Matthias

On 11/12/20 4:17 PM, Mickael Maison wrote:
> Hello Kafka users, developers and client-developers,
> 
> This is the first candidate for release of Apache Kafka 2.6.1.
> 
> Apache Kafka 2.6.1 is a bugfix release and fixes 35 issues since the
> 2.6.0 release. Please see the release notes for more information.
> 
> Release notes for the 2.6.1 release:
> https://home.apache.org/~mimaison/kafka-2.6.1-rc0/RELEASE_NOTES.html
> 
> *** Please download, test and vote by Thursday, November 19, 5pm PT
> 
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
> 
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~mimaison/kafka-2.6.1-rc0/
> 
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> 
> * Javadoc:
> https://home.apache.org/~mimaison/kafka-2.6.1-rc0/javadoc/
> 
> * Tag to be voted upon (off 2.6 branch) is the 2.6.1 tag:
> https://github.com/apache/kafka/releases/tag/2.6.1-rc0
> 
> * Documentation:
> https://kafka.apache.org/26/documentation.html
> 
> * Protocol:
> https://kafka.apache.org/26/protocol.html
> 
> * Successful Jenkins builds for the 2.6 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/51/
> 
> Thanks,
> Mickael
> 


Re: [VOTE] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-20 Thread Rohit Deshpande
Thanks Guozhang.
Waiting for binding votes.
Thanks,
Rohit

On Tue, Nov 17, 2020 at 10:13 AM Guozhang Wang  wrote:

> +1, thanks Rohit.
>
>
> Guozhang
>
> On Sun, Nov 15, 2020 at 11:53 AM Rohit Deshpande 
> wrote:
>
> > Hello all,
> > I would like to start voting on KIP-680: TopologyTestDriver should not
> > require a Properties argument.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> >
> > Discuss thread:
> >
> >
> https://lists.apache.org/thread.html/r5d3d0afc6feb5e18ade47aefbd88534f1b19b2f550a14d33cbc7a0dd%40%3Cdev.kafka.apache.org%3E
> >
> > Jira for the KIP:
> > https://issues.apache.org/jira/browse/KAFKA-10629
> >
> > If we end up making changes, they will look like this:
> > https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> >
> > Thanks,
> > Rohit
> >
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-10756) Add missing unit test for `UnattachedState`

2020-11-20 Thread dengziming (Jira)
dengziming created KAFKA-10756:
--

 Summary: Add missing unit test for `UnattachedState`
 Key: KAFKA-10756
 URL: https://issues.apache.org/jira/browse/KAFKA-10756
 Project: Kafka
  Issue Type: Sub-task
Reporter: dengziming
Assignee: dengziming


Add unit test for UnattachedState, similar to KAFKA-10519



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-20 Thread John Roesler
Thanks again for the KIP, Rohit. 

I’m +1 (binding)

Sorry, I missed your vote thread. 

-John

On Fri, Nov 20, 2020, at 21:35, Rohit Deshpande wrote:
> Thanks Guozhang.
> Waiting for binding votes.
> Thanks,
> Rohit
> 
> On Tue, Nov 17, 2020 at 10:13 AM Guozhang Wang  wrote:
> 
> > +1, thanks Rohit.
> >
> >
> > Guozhang
> >
> > On Sun, Nov 15, 2020 at 11:53 AM Rohit Deshpande 
> > wrote:
> >
> > > Hello all,
> > > I would like to start voting on KIP-680: TopologyTestDriver should not
> > > require a Properties argument.
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> > >
> > > Discuss thread:
> > >
> > >
> > https://lists.apache.org/thread.html/r5d3d0afc6feb5e18ade47aefbd88534f1b19b2f550a14d33cbc7a0dd%40%3Cdev.kafka.apache.org%3E
> > >
> > > Jira for the KIP:
> > > https://issues.apache.org/jira/browse/KAFKA-10629
> > >
> > > If we end up making changes, they will look like this:
> > > https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> > >
> > > Thanks,
> > > Rohit
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


[jira] [Created] (KAFKA-10757) KAFKA-10755 brings a compile error

2020-11-20 Thread dengziming (Jira)
dengziming created KAFKA-10757:
--

 Summary: KAFKA-10755 brings a compile error 
 Key: KAFKA-10757
 URL: https://issues.apache.org/jira/browse/KAFKA-10757
 Project: Kafka
  Issue Type: Bug
Reporter: dengziming
Assignee: dengziming


The `TaskManager` constructor has 10 params, but StreamThreadTest calls `new 
TaskManager` with 9 params.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #248

2020-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10755: Should consider commit latency when computing next commit 
timestamp (#9634)


--
[...truncated 2.39 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73e36fc0,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@41ac4190,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@41ac4190,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@72f13ff, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@72f13ff, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@249278f1, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@249278f1, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4c679ecc, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4c679ecc, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6fe4e611, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6fe4e611, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@102ba0f9, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@102ba0f9, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@274917fd, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@274917fd, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57286c26, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57286c26, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1f34dcbc, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1f34dcbc, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@52d58bcb, 
t

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #272

2020-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10755: Should consider commit latency when computing next commit 
timestamp (#9634)


--
[...truncated 2.39 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
s

Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #53

2020-11-20 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6181) Examining log messages with {{--deep-iteration}} should show superset of fields

2020-11-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-6181.
---
Fix Version/s: 2.8.0
   Resolution: Fixed

> Examining log messages with {{--deep-iteration}} should show superset of 
> fields
> ---
>
> Key: KAFKA-6181
> URL: https://issues.apache.org/jira/browse/KAFKA-6181
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Assignee: Prithvi
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
>
> When printing log data on Kafka brokers with {{kafka.tools.DumpLogSegments}}, 
> {{--deep-iteration}} should show a superset of the fields in each message 
> compared to running without this parameter; however, some fields are missing. 
> Impact: users must run both commands to get the full set of fields.
> {noformat}
> kafka-run-class kafka.tools.DumpLogSegments \
> --print-data-log \
> --files .log
> Dumping .log
> Starting offset: 0
> baseOffset: 0 lastOffset: 35 baseSequence: -1 lastSequence: -1 producerId: -1 
> producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 0 
> CreateTime: 1509987569448 isvalid: true size: 3985 magic: 2 compresscodec: 
> NONE crc: 4227905507
> {noformat}
> {noformat}
> kafka-run-class kafka.tools.DumpLogSegments \
> --print-data-log \
> --files .log \
> --deep-iteration
> Dumping .log
> Starting offset: 0
> offset: 0 position: 0 CreateTime: 1509987569420 isvalid: true keysize: -1 
> valuesize: 100
> magic: 2 compresscodec: NONE producerId: -1 sequence: -1 isTransactional: 
> false headerKeys: [] payload: 
> SSXVNJHPDQDXVCRASTVYBCWVMGNYKRXVZXKGXTSPSJDGYLUEGQFLAQLOCFLJBEPOWFNSOMYARHAOPUFOJHHDXEHXJBHW
> {noformat}
> Notice, for example, that {{partitionLeaderEpoch}} and {{crc}} are missing. 
> Print these and all missing fields.
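As a sanity check on the report above, here is a small sketch (not part of the original issue) that diffs the field names appearing in the two example DumpLogSegments output lines; the sample values are copied from the {noformat} blocks quoted in the issue, and the parsing regex is an assumption about the `key: value` layout of the tool's output:

```python
import re

def field_names(dump_line):
    """Extract the 'key:' field names from one DumpLogSegments output line."""
    return set(re.findall(r'(\w+):', dump_line))

# Output line without --deep-iteration (batch-level view), from the issue.
shallow = ("baseOffset: 0 lastOffset: 35 baseSequence: -1 lastSequence: -1 "
           "producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 "
           "isTransactional: false position: 0 CreateTime: 1509987569448 "
           "isvalid: true size: 3985 magic: 2 compresscodec: NONE crc: 4227905507")

# Output line with --deep-iteration (per-record view), from the issue.
deep = ("offset: 0 position: 0 CreateTime: 1509987569420 isvalid: true "
        "keysize: -1 valuesize: 100 magic: 2 compresscodec: NONE "
        "producerId: -1 sequence: -1 isTransactional: false headerKeys: []")

# Fields the deep output drops relative to the shallow one.
missing = field_names(shallow) - field_names(deep)
print(sorted(missing))
```

Running this confirms the issue's observation: `partitionLeaderEpoch` and `crc` (along with the batch-level fields like `baseOffset` and `producerEpoch`) are absent from the deep-iteration output.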



--
This message was sent by Atlassian Jira
(v8.3.4#803005)