[jira] [Created] (KAFKA-10704) Mirror maker with TLS at target

2020-11-10 Thread Tushar Bhasme (Jira)
Tushar Bhasme created KAFKA-10704:
-

 Summary: Mirror maker with TLS at target
 Key: KAFKA-10704
 URL: https://issues.apache.org/jira/browse/KAFKA-10704
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.6.0
Reporter: Tushar Bhasme
 Fix For: 2.7.0


We need to set up mirror maker from a single-node Kafka cluster to a three-node 
Strimzi cluster. There is no SSL at the source; however, the target cluster is 
configured with mTLS.

With the config below, commands run from the source (such as listing topics) work:
{code:java}
cat client-ssl.properties
security.protocol=SSL
ssl.truststore.location=my.truststore
ssl.truststore.password=123456
ssl.keystore.location=my.keystore
ssl.keystore.password=123456
ssl.key.password=password{code}

However, we are not able to get mirror maker working with similar configs:
{code:java}
source.security.protocol=PLAINTEXT
target.security.protocol=SSL
target.ssl.truststore.location=my.truststore
target.ssl.truststore.password=123456
target.ssl.keystore.location=my.keystore
target.ssl.keystore.password=123456
target.ssl.key.password=password{code}
Errors while running mirror maker:
{code:java}
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, 
deadlineMs=1605011994642, tries=1, nextAllowedTryMs=1605011994743) timed out at 
1605011994643 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
for a node assignment. Call: fetchMetadata

[2020-11-10 12:40:24,642] INFO App info kafka.admin.client for adminclient-8 
unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2020-11-10 12:40:24,643] INFO [AdminClient clientId=adminclient-8] Metadata 
update failed 
(org.apache.kafka.clients.admin.internals.AdminMetadataManager:235)
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, 
deadlineMs=1605012024643, tries=1, nextAllowedTryMs=-9223372036854775709) timed 
out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient 
thread has exited. Call: fetchMetadata

[2020-11-10 12:40:24,644] INFO Metrics scheduler closed 
(org.apache.kafka.common.metrics.Metrics:668)
[2020-11-10 12:40:24,644] INFO Closing reporter 
org.apache.kafka.common.metrics.JmxReporter 
(org.apache.kafka.common.metrics.Metrics:672)
[2020-11-10 12:40:24,644] INFO Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:678)
[2020-11-10 12:40:24,645] ERROR Stopping due to error 
(org.apache.kafka.connect.mirror.MirrorMaker:304)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and 
describe Kafka cluster. Check worker's broker connection and security 
properties.
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
    at org.apache.kafka.connect.mirror.MirrorMaker.addHerder(MirrorMaker.java:235)
    at org.apache.kafka.connect.mirror.MirrorMaker.lambda$new$1(MirrorMaker.java:136)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at org.apache.kafka.connect.mirror.MirrorMaker.<init>(MirrorMaker.java:136)
    at org.apache.kafka.connect.mirror.MirrorMaker.<init>(MirrorMaker.java:148)
    at org.apache.kafka.connect.mirror.MirrorMaker.main(MirrorMaker.java:291)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, 
deadlineMs=1605012024641, tries=1, nextAllowedTryMs=1605012024742) timed out at 
1605012024642 after 1 attempt(s)
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
    ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: 
Call(callName=listNodes, deadlineMs=1605012024641, tries=1, 
nextAllowedTryMs=1605012024742) timed out at 1605012024642 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
for a node assignment. Call: listNodes
{code}
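
As a sanity check (a sketch, not part of the failing setup; the bootstrap 
address is a placeholder), the same mTLS settings can be verified against the 
target cluster with a bare AdminClient, since the failing call above is 
essentially a describeCluster():
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;

public class TargetClusterTlsCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address for the target (Strimzi) cluster.
        props.put("bootstrap.servers", "target-broker:9093");
        // Same mTLS settings as the target.* entries in the MirrorMaker config.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "my.truststore");
        props.put("ssl.truststore.password", "123456");
        props.put("ssl.keystore.location", "my.keystore");
        props.put("ssl.keystore.password", "123456");
        props.put("ssl.key.password", "password");

        try (Admin admin = Admin.create(props)) {
            // MirrorMaker's ConnectUtils.lookupKafkaClusterId() does essentially this.
            String clusterId = admin.describeCluster().clusterId().get();
            System.out.println("Connected to target cluster: " + clusterId);
        }
    }
}
{code}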



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Support of virtual threads by Kafka

2020-11-10 Thread Ismael Juma
Hi Leonid,

Thanks for looking into this. I think the main question is how
virtual threads would help here. Kafka tends to use a small number of
threads combined with non-blocking IO, but there are some exceptions like
disk IO. My understanding is that the latter is still blocking even with
the latest builds of Loom. Is that right?

Ismael

On Mon, Nov 9, 2020 at 10:26 AM Leonid Mesnik 
wrote:

> Hi
>
>
> Currently, I am working on Loom project which enables virtual threads
> for Java.
>
> https://wiki.openjdk.java.net/display/loom/Main
>
> As one of the real-life tests, it would be interesting to run Kafka using
> virtual threads. However, it might require some support in Kafka for
> this: we would need the ability to start some threads as "virtual". Do
> you know if anyone is interested and could help me with this?
>
> Here are more details:
>
> Basically, a virtual thread is a sub-class of java.lang.Thread. So the
> code needs to be refactored to avoid subclassing Thread and to factor out
> thread creation. I put together an "example" fix of what should be done to add
> the ability to run KafkaThread as a virtual thread. It is just to demonstrate
> the overall idea of the changes.
>
>
> https://github.com/lmesnik/kafka/commit/872a2d5fd57b0c76878eece6c54c783897ccbf5e
>
>
> I want to check with you whether this is a good approach for Kafka and
> whether there are other places to be updated. There is no plan to push such
> support into mainline yet. Also, there are no plans to make any significant
> changes, but if they are wanted we could do them.
>
> What do you think about this?
>
> Leonid
>
>
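
A minimal sketch of the kind of factory Leonid describes, assuming a
Loom-enabled JDK where Thread.ofVirtual()/Thread.ofPlatform() are available;
the class and parameter names are illustrative, not the actual patch:

import java.util.concurrent.ThreadFactory;

// One factory that can hand out either platform or virtual threads, so
// callers such as KafkaThread no longer need to subclass Thread directly.
public final class KafkaThreadFactory implements ThreadFactory {
    private final String name;
    private final boolean daemon;
    private final boolean virtual;

    public KafkaThreadFactory(String name, boolean daemon, boolean virtual) {
        this.name = name;
        this.daemon = daemon;
        this.virtual = virtual;
    }

    @Override
    public Thread newThread(Runnable runnable) {
        // Virtual threads are always daemon, so the daemon flag only applies
        // to platform threads.
        Thread.Builder builder = virtual
            ? Thread.ofVirtual().name(name)
            : Thread.ofPlatform().name(name).daemon(daemon);
        Thread thread = builder.unstarted(runnable);
        thread.setUncaughtExceptionHandler(
            (t, e) -> System.err.println("Uncaught exception in " + t.getName() + ": " + e));
        return thread;
    }
}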


[jira] [Resolved] (KAFKA-10469) Respect logging hierarchy (KIP-676)

2020-11-10 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-10469.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Respect logging hierarchy (KIP-676)
> ---
>
> Key: KAFKA-10469
> URL: https://issues.apache.org/jira/browse/KAFKA-10469
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 2.8.0
>
>
> {{Log4jController#loggers}} incorrectly uses the root logger's log level for 
> any loggers which lack a configured log level of their own. This is incorrect 
> because loggers without an explicit level inherit their level from their 
> parent logger, and this resolved level might be different from the root 
> logger's level. This means that the levels reported from 
> {{Admin.describeConfigs}}, which uses {{Log4jController#loggers}}, are 
> incorrect. This can be shown by using the default {{log4j.properties}} and 
> describing a broker's loggers: it reports
> {noformat}
> kafka.controller=TRACE
> kafka.controller.ControllerChannelManager=INFO
> kafka.controller.ControllerEventManager$ControllerEventThread=INFO
> kafka.controller.KafkaController=INFO
> kafka.controller.RequestSendThread=INFO
> kafka.controller.TopicDeletionManager=INFO
> kafka.controller.ZkPartitionStateMachine=INFO
> kafka.controller.ZkReplicaStateMachine=INFO
> {noformat}
> The default {{log4j.properties}} does indeed set {{kafka.controller}} to 
> {{TRACE}}, but it does not configure the others, so they're actually at 
> {{TRACE}} not {{INFO}} as reported.
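
For illustration, the effective level can be obtained from log4j 1.x directly 
via {{Logger#getEffectiveLevel()}} rather than falling back to the root 
logger's level; a small sketch (not the actual patch):
{code:java}
import java.util.Enumeration;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

public class EffectiveLevels {
    public static void main(String[] args) {
        // List every known logger with its *effective* level, i.e. the level
        // inherited from the nearest configured ancestor, not the root level.
        Enumeration<?> loggers = LogManager.getCurrentLoggers();
        while (loggers.hasMoreElements()) {
            Logger logger = (Logger) loggers.nextElement();
            Level effective = logger.getEffectiveLevel();
            System.out.println(logger.getName() + "=" + effective);
        }
    }
}
{code}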



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Satish Duggana
Hi Jun,
Thanks for your comments. Please find the inline replies below.

605.2 "Build the local leader epoch cache by cutting the leader epoch
sequence received from remote storage to [LSO, ELO]." I mentioned an issue
earlier. Suppose the leader's local start offset is 100. The follower finds
a remote segment covering offset range [80, 120). The producerState with
this remote segment is up to offset 120. To trim the producerState to
offset 100 requires more work since one needs to download the previous
producerState up to offset 80 and then replay the messages from 80 to 100.
It seems that it's simpler in this case for the follower just to take the
remote segment as it is and start fetching from offset 120.

We chose that approach to avoid any edge cases here. It is
possible that the remote log segment that is received does not have the
same leader epoch sequence for 100-120 as the one on the
leader (this can happen due to an unclean leader election). It is safe to start
from what the leader returns here. Another way is to find the remote
log segment

5016. Just to echo what Kowshik was saying. It seems that
RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
partition, not on the replicas for the __remote_log_segment_metadata
partition. It's not clear how the leader of __remote_log_segment_metadata
obtains the metadata for remote segments for deletion.

RLMM will always receive the callback for the remote log metadata
topic partitions hosted on the local broker and these will be
subscribed. I will make this clear in the KIP.

5100. KIP-516 has been accepted and is being implemented now. Could you
update the KIP based on topicID?

We mentioned KIP-516 and how it helps. We will update this KIP with
all the changes it brings with KIP-516.

5101. RLMM: It would be useful to clarify how the following two APIs are
used. According to the wiki, the former is used for topic deletion and the
latter is used for retention. It seems that retention should use the former
since remote segments without a matching epoch in the leader (potentially
due to unclean leader election) also need to be garbage collected. The
latter seems to be used for the new leader to determine the last tiered
segment.
default Iterator<RemoteLogSegmentMetadata>
listRemoteLogSegments(TopicPartition topicPartition)
Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition
topicPartition, long leaderEpoch);

Right, that is what we are currently doing. We will update the
javadocs and wiki with that. Earlier, we did not want to remove the
segments which are not matched with leader epochs from the leader
partition, as they may be used later by a replica which can become a
leader (unclean leader election) and refer to those segments. But that
may leak these segments in remote storage for the topic's lifetime. We
decided to clean up the segments, starting with the oldest, in the case
of size-based retention as well.
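
For reference, a rough sketch of the two RLMM methods as described above
(signatures as they read on the wiki at this point and may still change;
RemoteLogSegmentMetadata is the segment metadata class defined in the KIP):

import java.util.Iterator;
import org.apache.kafka.common.TopicPartition;

// Sketch only, not the final API.
public interface RemoteLogMetadataManager {

    // Lists all remote segments for the partition regardless of leader epoch.
    // Per the discussion above, retention (and topic deletion) should use this
    // so that segments from unclean-leader epochs are also garbage collected.
    Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition topicPartition);

    // Lists remote segments for the given leader epoch; a new leader can use
    // this to determine the last tiered segment in its epoch lineage.
    Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition topicPartition,
                                                             long leaderEpoch);
}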

5102. RSM:
5102.1 For methods like fetchLogSegmentData(), it seems that they can
use RemoteLogSegmentId instead of RemoteLogSegmentMetadata.

It will be useful for the RSM to have the metadata to fetch a log segment. It
may create the location/path using the id along with other metadata.

5102.2 In fetchLogSegmentData(), should we use long instead of Long?

We wanted to keep endPosition optional, to read till the end of the
segment and avoid sentinels.

5102.3 Why only some of the methods have default implementation and others
Don't?

Actually,  RSM will not have any default implementations. Those 3
methods were made default earlier for tests etc. Updated the wiki.

5102.4. Could we define RemoteLogSegmentMetadataUpdate
and DeletePartitionUpdate?

Sure, they will be added.


5102.5 LogSegmentData: It seems that it's easier to pass
in leaderEpochIndex as a ByteBuffer or byte array than a file since it will
be generated in memory.

Right, this is in plan.

5102.6 RemoteLogSegmentMetadata: It seems that it needs both baseOffset and
startOffset. For example, deleteRecords() could move the startOffset to the
middle of a segment. If we copy the full segment to remote storage, the
baseOffset and the startOffset will be different.

Good point. startOffset is baseOffset by default, if not set explicitly.

5102.7 Could we define all the public methods for RemoteLogSegmentMetadata
and LogSegmentData?

Sure, updated the wiki.

5102.8 Could we document whether endOffset in RemoteLogSegmentMetadata is
inclusive/exclusive?

It is inclusive, will update.

5103. configs:
5103.1 Could we define the default value of non-required configs (e.g the
size of new thread pools)?

Sure, that makes sense.

5103.2 It seems that local.log.retention.ms should default to retention.ms,
instead of remote.log.retention.minutes. Similarly, it seems
that local.log.retention.bytes should default to segment.bytes.

Right, we do not have  remote.log.retention as we discussed earlier.
Thanks for catching the typo.

5103.3 remote.log.manager.thread.pool.size: The description says "used in
scheduling tasks to copy segments, fetch remote log i

[DISCUSS] KIP-685: Loosen permission for listing reassignments

2020-11-10 Thread David Jacot
Hi all,

I have submitted a small KIP to address KAFKA-10216. This KIP proposes to
change
the permission of the ListPartitionReassignments API to Topic Describe
instead of Cluster Describe.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-685%3A+Loosen+permission+for+listing+reassignments

Please, let me know what you think.

Best,
David
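
For context, here is roughly what the call in question looks like through the
Java Admin client (a sketch with a placeholder bootstrap address); under the
proposal, Describe on the topics involved, rather than Describe on the
cluster, would be sufficient to make it:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.PartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class ListReassignments {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // The API whose required ACL KIP-685 proposes to loosen.
            Map<TopicPartition, PartitionReassignment> reassignments =
                admin.listPartitionReassignments().reassignments().get();
            reassignments.forEach((tp, r) -> System.out.println(
                tp + " adding=" + r.addingReplicas() + " removing=" + r.removingReplicas()));
        }
    }
}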


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #243

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Log resource pattern of ACL updates at INFO level (#9578)


--
[...truncated 6.92 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@662a0991, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4ab4e990, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4ab4e990, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1f4c0548, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1f4c0548, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllo

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #219

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Log resource pattern of ACL updates at INFO level (#9578)


--
[...truncated 6.92 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@3fb94f5d,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@3fb94f5d,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1347e8b9,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1347e8b9,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@276bcb7f,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@276bcb7f,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6a19322e,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6a19322e,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1faf5b97,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1faf5b97,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@33017afb,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@33017afb,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4a486fd7,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4a486fd7,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6963bcf2,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6963bcf2,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c7521aa, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c7521aa, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@341b547d,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStore

Re: [DISCUSS] KIP-685: Loosen permission for listing reassignments

2020-11-10 Thread Gwen Shapira
Makes sense to me. Thanks David.

On Tue, Nov 10, 2020 at 6:57 AM David Jacot  wrote:
>
> Hi all,
>
> I have submitted a small KIP to address KAFKA-10216. This KIP proposes to
> change
> the permission of the ListPartitionReassignments API to Topic Describe
> instead of Cluster Describe.
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-685%3A+Loosen+permission+for+listing+reassignments
>
> Please, let me know what you think.
>
> Best,
> David



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #244

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] KAFKA-10469: Respect logging hierarchy (KIP-676) (#9266)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16d9f998, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16d9f998, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@38ea6179, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@38ea6179, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31affa4e, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31affa4e, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e21a38d, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e21a38d, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@576d8ee8, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@576d8ee8, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3ce7ccd5, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3ce7ccd5, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@72d43c8d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@72d43c8d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71eac491, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71eac491, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4848f262, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4848f262, 
timestamped = false, caching = true, logging = true] PASSED

org.apache

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Kowshik Prakasam
Hi Harsha/Satish,

Thanks for the discussion today. Here is a link to the KIP-405 development
milestones google doc we discussed in the meeting today:
https://docs.google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
. I have shared it with you. Please have a look and share your
feedback/improvements. As we discussed, things are clear until milestone 1.
Beyond that, we can discuss it again (perhaps in the next sync or later), once
you have thought through the implementation plan/milestones and the release
into preview in 3.0.


Cheers,
Kowshik


On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana 
wrote:

> Hi Jun,
> Thanks for your comments. Please find the inline replies below.
>
> 605.2 "Build the local leader epoch cache by cutting the leader epoch
> sequence received from remote storage to [LSO, ELO]." I mentioned an issue
> earlier. Suppose the leader's local start offset is 100. The follower finds
> a remote segment covering offset range [80, 120). The producerState with
> this remote segment is up to offset 120. To trim the producerState to
> offset 100 requires more work since one needs to download the previous
> producerState up to offset 80 and then replay the messages from 80 to 100.
> It seems that it's simpler in this case for the follower just to take the
> remote segment as it is and start fetching from offset 120.
>
> We chose that approach to avoid any edge cases here. It may be
> possible that the remote log segment that is received may not have the
> same leader epoch sequence from 100-120 as it contains on the
> leader(this can happen due to unclean leader). It is safe to start
> from what the leader returns here.Another way is to find the remote
> log segment
>
> 5016. Just to echo what Kowshik was saying. It seems that
> RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
> partition, not on the replicas for the __remote_log_segment_metadata
> partition. It's not clear how the leader of __remote_log_segment_metadata
> obtains the metadata for remote segments for deletion.
>
> RLMM will always receive the callback for the remote log metadata
> topic partitions hosted on the local broker and these will be
> subscribed. I will make this clear in the KIP.
>
> 5100. KIP-516 has been accepted and is being implemented now. Could you
> update the KIP based on topicID?
>
> We mentioned KIP-516 and how it helps. We will update this KIP with
> all the changes it brings with KIP-516.
>
> 5101. RLMM: It would be useful to clarify how the following two APIs are
> used. According to the wiki, the former is used for topic deletion and the
> latter is used for retention. It seems that retention should use the former
> since remote segments without a matching epoch in the leader (potentially
> due to unclean leader election) also need to be garbage collected. The
> latter seems to be used for the new leader to determine the last tiered
> segment.
> default Iterator
> listRemoteLogSegments(TopicPartition topicPartition)
> Iterator listRemoteLogSegments(TopicPartition
> topicPartition, long leaderEpoch);
>
> Right,.that is what we are currently doing. We will update the
> javadocs and wiki with that. Earlier, we did not want to remove the
> segments which are not matched with leader epochs from the ladder
> partition as they may be used later by a replica which can become a
> leader (unclean leader election) and refer those segments. But that
> may leak these segments in remote storage until the topic lifetime. We
> decided to cleanup the segments with the oldest incase of size based
> retention also.
>
> 5102. RSM:
> 5102.1 For methods like fetchLogSegmentData(), it seems that they can
> use RemoteLogSegmentId instead of RemoteLogSegmentMetadata.
>
> It will be useful to have metadata for RSM to fetch log segment. It
> may create location/path using id with other metadata too.
>
> 5102.2 In fetchLogSegmentData(), should we use long instead of Long?
>
> Wanted to keep endPosition as optional to read till the end of the
> segment and avoid sentinels.
>
> 5102.3 Why only some of the methods have default implementation and others
> Don't?
>
> Actually,  RSM will not have any default implementations. Those 3
> methods were made default earlier for tests etc. Updated the wiki.
>
> 5102.4. Could we define RemoteLogSegmentMetadataUpdate
> and DeletePartitionUpdate?
>
> Sure, they will be added.
>
>
> 5102.5 LogSegmentData: It seems that it's easier to pass
> in leaderEpochIndex as a ByteBuffer or byte array than a file since it will
> be generated in memory.
>
> Right, this is in plan.
>
> 5102.6 RemoteLogSegmentMetadata: It seems that it needs both baseOffset and
> startOffset. For example, deleteRecords() could move the startOffset to the
> middle of a segment. If we copy the full segment to remote storage, the
> baseOffset and the startOffset will be different.
>
> Good point. startOffset is baseOffset by default, if not set explicitly.
>
> 5102.7 Could we define all the p

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #220

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] KAFKA-10469: Respect logging hierarchy (KIP-676) (#9266)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@10485cab, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@10485cab, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@47431d16, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@47431d16, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@22ce717c, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@22ce717c, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@196ddbc, 
timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@196ddbc, 
timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@53696421,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@53696421,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c9d78f7, 
timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c9d78f7, 
timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5d89bd89,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5d89bd89,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6ac5e0b9,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6ac5e0b9,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7215d9ac,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7215d9ac,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@15284dd9,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apac

[GitHub] [kafka-site] scott-confluent opened a new pull request #309: adding tencent and bytedance to powered by page

2020-11-10 Thread GitBox


scott-confluent opened a new pull request #309:
URL: https://github.com/apache/kafka-site/pull/309


   I also optimized all the logos using imageOptim.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-11-10 Thread Ron Dagostino
Hi Colin.  Ignore my previous question about ConfigRecord.ResourceType
having to be a string -- I now see that
org.apache.kafka.common.config.ConfigResource defines the types of
configs for an int8.

I do have a question about how the broker will connect to the
controller.  The KIP says that controller.listener.names is "required
if this process is a KIP-500 controller" and "Despite the similar
name, note that this is different from the "control plane listener"
introduced by KIP-291.  The "control plane listener" is used on
brokers, not on controllers."  This leads me to believe that
controller.listener.names is not used on brokers.  But I believe it
will be required, otherwise the only information that the broker will
have is a list of hosts and ports (from controller.connect).  I
believe the broker will require a value in controller.listener.names
and then the broker will take that value (the first in the list?) and
convert that to a security protocol via
listener.security.protocol.map.  Also, if the security protocol is
SASL_{PLAINTEXT,SSL}, what config will define the SASL mechanism that
the broker should use?

Ron

On Wed, Oct 28, 2020 at 1:29 PM Ron Dagostino  wrote:
>
> HI again, Colin.  I just noticed that both ConfigRecord and
> AccessControlRecord have a ResourceType of type int8.  I thought that
> config resources are in the set {topics, clients, users, brokers} and
> ACL resource types are a different set as defined by the
> org.apache.kafka.common.resource.ResourceType enum.  Does
> ConfigRecord.ResourceType need to be a String?
>
> Ron
>
> On Sun, Oct 25, 2020 at 6:04 AM Ron Dagostino  wrote:
> >
> > Hi Colin and Jun.
> >
> > Regarding these issues:
> >
> > 83.1 It seems that the broker can transition from FENCED to RUNNING
> > without registering for a new broker epoch. I am not sure how this
> > works. Once the controller fences a broker, there is no need for the
> > controller to keep the broker epoch around. So, the fenced broker's
> > heartbeat request with the existing broker epoch will be rejected,
> > leading the broker back to the FENCED state again. 104.
> > REGISTERING(1) : It says "Otherwise, the broker moves into the FENCED
> > state.". It seems this should be RUNNING?
> >
> > When would/could a broker re-register -- i.e. send
> > BrokerRegistrationRequest again once it receives a
> > BrokerRegistrationResponse containing no error and its broker epoch?
> > The text states that "Once the period has elapsed, if the broker has
> > not renewed its registration via a heartbeat, it must re-register."
> > But the broker state machine only mentions any type of
> > registration-related event in the REGISTERING state ("While in this
> > state, the broker tries to register with the active controller");
> > there is no other broker state in the text that mentions the
> > possibility of re-registering, and the broker state machine has no
> > transition back to the REGISTERING state.
> >
> > Also, the text now states that there are "three broker registration
> > states: unregistered, registered but fenced, and registered and
> > active." It would be good to map these onto the formal broker state
> > machine so we know which "registration states" a broker can be in for
> > each state within its broker state machine.  It is not clear if there
> > is a way for a broker to go backwards into the "unregistered" broker
> > registration state.  I suspect it can only flip-flop between
> > registered but fenced/registered and active as the broker flip-flops
> > between ACTIVE and FENCED, and this would imply that a broker is never
> > strictly required to re-register -- though the option isn't precluded.
> >
> > Does a broker JVM keep its assigned broker epoch throughout the life
> > of the JVM?  The BrokerRegistrationRequest includes a place for the
> > broker to specify its current broker epoch, but that would only be
> > useful if the broker is re-registering.  If a broker were to
> > re-register, the data in the request might seem to imply that it could
> > do so to specify dynamic changes to its features or endpoints, but
> > those dynamic changes happen centrally, so that doesn't seem to be a
> > valid reason to re-register.  So I do not yet see a reason for
> > re-registering despite the text "if the broker has not renewed its
> > registration via a heartbeat, it must re-register."
> >
> > It feels to me that a broker would keep its epoch throughout the life
> > of its JVM and it would never re-register, and the controller would
> > remember/maintain the broker epoch when it fences a broker; the broker
> > would continue to try sending heartbeat requests while it is fenced,
> > and it would continue to do so until the process is killed via an
> > external signal.  If the controller eventually does respond with the
> > broker's next state then that next state will either be ACTIVE
> > (meaning communication has been restored; the return broker epoch will
> > be the same one that the broker JVM h

[GitHub] [kafka-site] guozhangwang merged pull request #309: adding tencent and bytedance to powered by page

2020-11-10 Thread GitBox


guozhangwang merged pull request #309:
URL: https://github.com/apache/kafka-site/pull/309


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10705) Avoid World Readable RocksDB

2020-11-10 Thread Walker Carlson (Jira)
Walker Carlson created KAFKA-10705:
--

 Summary: Avoid World Readable RocksDB
 Key: KAFKA-10705
 URL: https://issues.apache.org/jira/browse/KAFKA-10705
 Project: Kafka
  Issue Type: Bug
Reporter: Walker Carlson


The state directory could be protected more restrictively by preventing access to 
the state directory for group and others. At the very least, others should have no 
read access.
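
One possible way to restrict it on POSIX file systems, sketched with java.nio 
(a sketch with a placeholder path, not the actual fix):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class RestrictStateDir {
    public static void main(String[] args) throws Exception {
        // Placeholder path; the actual Streams state directory comes from state.dir.
        Path stateDir = Paths.get("/tmp/kafka-streams");
        // rwx for the owner only: removes group/other read access entirely.
        Set<PosixFilePermission> ownerOnly = PosixFilePermissions.fromString("rwx------");
        Files.setPosixFilePermissions(stateDir, ownerOnly);
    }
}
{code}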



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #245

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] migrate remaining RPCs (#9558)


--
[...truncated 6.93 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.k

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Harsha Chintalapani
Thanks, Kowshik, for the link. Seems reasonable; as we discussed on the
call, the code and completion of this KIP will be taken up by us.
Regarding Milestone 2, what do you think needs to be clarified there?
I believe what we are promising in the KIP, along with unit tests and system
tests, will be delivered, and we can call that the preview. We will be
running this in our production and will continue to provide data and metrics
to push this feature to GA.



On Tue, Nov 10, 2020 at 10:07 AM, Kowshik Prakasam 
wrote:

> Hi Harsha/Satish,
>
> Thanks for the discussion today. Here is a link to the KIP-405 development
> milestones google doc we discussed in the meeting today: https://docs.
> google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
> . I have shared it with you. Please have a look and share your
> feedback/improvements. As we discussed, things are clear until milestone 1.
> Beyond that, we can discuss it again (perhaps in next sync or later), once
> you have thought through the implementation plan/milestones and release
> into preview in 3.0.
>
> Cheers,
> Kowshik
>
> On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana 
> wrote:
>
> Hi Jun,
> Thanks for your comments. Please find the inline replies below.
>
> 605.2 "Build the local leader epoch cache by cutting the leader epoch
> sequence received from remote storage to [LSO, ELO]." I mentioned an issue
> earlier. Suppose the leader's local start offset is 100. The follower finds
> a remote segment covering offset range [80, 120). The producerState with
> this remote segment is up to offset 120. To trim the producerState to
> offset 100 requires more work since one needs to download the previous
> producerState up to offset 80 and then replay the messages from 80 to 100.
> It seems that it's simpler in this case for the follower just to take the
> remote segment as it is and start fetching from offset 120.
>
> We chose that approach to avoid any edge cases here. It may be possible
> that the remote log segment that is received may not have the same leader
> epoch sequence from 100-120 as it contains on the leader(this can happen
> due to unclean leader). It is safe to start from what the leader returns
> here.Another way is to find the remote log segment
>
> 5016. Just to echo what Kowshik was saying. It seems that
> RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
> partition, not on the replicas for the __remote_log_segment_metadata
> partition. It's not clear how the leader of __remote_log_segment_metadata
> obtains the metadata for remote segments for deletion.
>
> RLMM will always receive the callback for the remote log metadata topic
> partitions hosted on the local broker and these will be subscribed. I will
> make this clear in the KIP.
>
> 5100. KIP-516 has been accepted and is being implemented now. Could you
> update the KIP based on topicID?
>
> We mentioned KIP-516 and how it helps. We will update this KIP with all
> the changes it brings with KIP-516.
>
> 5101. RLMM: It would be useful to clarify how the following two APIs are
> used. According to the wiki, the former is used for topic deletion and the
> latter is used for retention. It seems that retention should use the former
> since remote segments without a matching epoch in the leader (potentially
> due to unclean leader election) also need to be garbage collected. The
> latter seems to be used for the new leader to determine the last tiered
> segment.
> default Iterator
> listRemoteLogSegments(TopicPartition topicPartition)
> Iterator listRemoteLogSegments(TopicPartition
> topicPartition, long leaderEpoch);
>
> Right,.that is what we are currently doing. We will update the javadocs
> and wiki with that. Earlier, we did not want to remove the segments which
> are not matched with leader epochs from the ladder partition as they may be
> used later by a replica which can become a leader (unclean leader election)
> and refer those segments. But that may leak these segments in remote
> storage until the topic lifetime. We decided to cleanup the segments with
> the oldest incase of size based retention also.
>
> 5102. RSM:
> 5102.1 For methods like fetchLogSegmentData(), it seems that they can use
> RemoteLogSegmentId instead of RemoteLogSegmentMetadata.
>
> It will be useful to have metadata for RSM to fetch log segment. It may
> create location/path using id with other metadata too.
>
> 5102.2 In fetchLogSegmentData(), should we use long instead of Long?
>
> Wanted to keep endPosition as optional to read till the end of the segment
> and avoid sentinels.
>
> 5102.3 Why only some of the methods have default implementation and others
> Don't?
>
> Actually, RSM will not have any default implementations. Those 3 methods
> were made default earlier for tests etc. Updated the wiki.
>
> 5102.4. Could we define RemoteLogSegmentMetadataUpdate and
> DeletePartitionUpdate?
>
> Sure, they will be added.
>
> 5102.5 LogSegmentData: It seems that it's easier to pass i

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #212

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] migrate remaining RPCs (#9558)


--
[...truncated 3.44 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache

[jira] [Resolved] (KAFKA-10470) zstd decompression with small batches is slow and causes excessive GC

2020-11-10 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10470.

Fix Version/s: 2.8.0
   Resolution: Fixed

> zstd decompression with small batches is slow and causes excessive GC
> -
>
> Key: KAFKA-10470
> URL: https://issues.apache.org/jira/browse/KAFKA-10470
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Robert Wagner
>Assignee: James Yuzawa
>Priority: Major
> Fix For: 2.8.0
>
>
> Similar to KAFKA-5150, but for zstd instead of LZ4, it appears that the large 
> decompression buffer (128 KB) created by zstd-jni per batch is causing a 
> significant performance bottleneck.
> The next upcoming version of zstd-jni (1.4.5-7) will have a new constructor 
> for ZstdInputStream that allows the client to pass in its own buffer. A fix 
> similar to [PR #2967|https://github.com/apache/kafka/pull/2967] could be used 
> to have the ZstdInputStream use a BufferSupplier to re-use the decompression 
> buffer.
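>
> For illustration, the reuse pattern could look roughly like the sketch
> below; the factory interface stands in for the new zstd-jni constructor
> described above and is hypothetical:
> {code:java}
> import org.apache.kafka.common.utils.BufferSupplier;
>
> import java.io.IOException;
> import java.io.InputStream;
> import java.nio.ByteBuffer;
>
> class ZstdBufferReuseSketch {
>     // Stand-in for the upcoming zstd-jni constructor that accepts a
>     // caller-supplied buffer (hypothetical; the real API may differ).
>     interface BufferedZstdStreamFactory {
>         InputStream wrap(InputStream compressed, ByteBuffer buffer) throws IOException;
>     }
>
>     // Reuse one pooled 128 KB decompression buffer instead of letting
>     // zstd-jni allocate a fresh one for every batch.
>     static void readBatch(InputStream compressed, BufferSupplier supplier,
>                           BufferedZstdStreamFactory factory) throws IOException {
>         ByteBuffer buffer = supplier.get(128 * 1024); // pooled, reused across batches
>         try (InputStream in = factory.wrap(compressed, buffer)) {
>             // ... read and parse the decompressed records from 'in' ...
>         } finally {
>             supplier.release(buffer); // return the buffer to the pool
>         }
>     }
> }
> {code}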



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #246

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10470: Zstd upgrade and buffering (#9499)


--
[...truncated 3.46 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71eac491, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71eac491, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4848f262, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4848f262, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bab9fb2, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bab9fb2, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3c420e1a, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3c420e1a, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e6f6856, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e6f6856, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@21bb441b, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@21bb441b, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@662a0991, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@662a0991, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4ab4e990, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4ab4e990, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1f4c0548, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1f4c0548, 
timestamped = false, caching = false, logging = false] PASSED

org.apa

[GitHub] [kafka-site] harshach opened a new pull request #310: Add Sriharsha as PMC

2020-11-10 Thread GitBox


harshach opened a new pull request #310:
URL: https://github.com/apache/kafka-site/pull/310


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #222

2020-11-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10470: Zstd upgrade and buffering (#9499)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@1bd54af2, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@1bd54af2, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@10ba6c96, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@10ba6c96, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@482f3bfe, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@482f3bfe, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@231b27a0, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@231b27a0, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@12287beb, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.KeyValueStoreBuilder@12287beb, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@51356be4,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@51356be4,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@60745af7,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@60745af7,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1aa8477e,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1aa8477e,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4b8e90e3,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4b8e90e3,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@311ea83d,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.Timestampe

[jira] [Created] (KAFKA-10706) Liveness bug in truncation protocol can lead to indefinite URP

2020-11-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10706:
---

 Summary: Liveness bug in truncation protocol can lead to 
indefinite URP
 Key: KAFKA-10706
 URL: https://issues.apache.org/jira/browse/KAFKA-10706
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


We hit an interesting liveness condition in the truncation protocol. Broker A 
was leader in epoch 7, broker B was leader in epoch 8, and then broker A was 
leader in epoch 9 again.

On broker A, we had the following state in the epoch cache:
{code}
epoch 4, start offset 3953
epoch 7, start offset 3983
epoch 9, start offset 3988
{code}

On broker B, we had the following:
{code}
epoch 4, start offset 3953
epoch 8, start offset 3983
{code}

After A was elected, broker B sent epoch 8 in OffsetsForLeaderEpoch. Since 
epoch 8 is not in broker A's cache, A correctly responded with the largest 
epoch at or below it, epoch 7, ending at offset 3988 (the start offset of 
epoch 9). The end offset on broker B was in fact 3983, so this truncation had 
no effect. Broker B then retried with epoch 8 again and replication was stuck.

When a replica becomes leader, it first inserts an entry into the epoch cache 
with the current log end offset. This ensures that it has a larger epoch in 
the cache than any epoch that could be requested by a valid replica. However, 
I think it is incorrect to turn around and use this epoch when becoming a 
follower. It seems like we need symmetric logic after becoming a follower to 
remove this epoch entry.
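
A minimal sketch of what that symmetric logic could look like (illustrative 
only; the cache is modeled here as a sorted map of epoch to start offset, and 
this is not the actual fix):
{code:java}
import java.util.TreeMap;

class EpochCacheSketch {
    // On becoming follower, drop the entry that was stamped at election time if
    // no records were appended under that epoch (its start offset still equals
    // the log end offset), so OffsetsForLeaderEpoch lookups resolve against the
    // last epoch that actually contains data.
    static void onBecomeFollower(TreeMap<Integer, Long> epochToStartOffset, long logEndOffset) {
        if (!epochToStartOffset.isEmpty()
                && epochToStartOffset.lastEntry().getValue() == logEndOffset) {
            epochToStartOffset.remove(epochToStartOffset.lastKey());
        }
    }
}
{code}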



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] showuon opened a new pull request #311: MINOR: Fix the wrong HTML format in powered by page

2020-11-10 Thread GitBox


showuon opened a new pull request #311:
URL: https://github.com/apache/kafka-site/pull/311


   When visiting the powered by page, I noticed that the hyperlink text in the 
SemaText entry looks weird.
   
![image](https://user-images.githubusercontent.com/43372967/98776447-e63dc900-2429-11eb-9e85-1bb0929d0807.png)
   
   After investigation, I found it's because we didn't close the link tag 
content properly. We missed some right angle brackets (>) and some single 
quotes ('), so some text disappeared from the page. Ex:
   
![image](https://user-images.githubusercontent.com/43372967/98776683-65cb9800-242a-11eb-815a-9427d52fb39b.png)
   
   The sentence should end with the hyperlinked text `here`.
   
   This PR fixes those issues!
   Example after the fix:
   
![image](https://user-images.githubusercontent.com/43372967/98776817-a4615280-242a-11eb-8a74-87912f51af8b.png)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] showuon commented on pull request #311: MINOR: Fix the wrong HTML format in powered by page

2020-11-10 Thread GitBox


showuon commented on pull request #311:
URL: https://github.com/apache/kafka-site/pull/311#issuecomment-725231532


   @guozhangwang @scott-confluent , please help review. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #213

2020-11-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Kowshik Prakasam
Hi Harsha,

The goal we discussed is to aim for a preview in AK 3.0. In order to get us
there, it will be useful to think about the order in which the code changes
will be implemented, reviewed and merged. Since you are driving the
development, do you want to lay out the order of things? For example, do you
eventually want to break up the PR into multiple smaller ones? If so, you
could list the milestones there. This would also help with budgeting time
suitably and understanding the progress.
Let us know how we can help.


Cheers,
Kowshik

On Tue, Nov 10, 2020 at 3:26 PM Harsha Chintalapani  wrote:

> Thanks Kowshik for the link. Seems reasonable; as we discussed on the
> call, the code and completion of this KIP will be taken up by us.
> Regarding Milestone 2, what do you think needs to be clarified there?
> I believe what we are promising in the KIP, along with unit tests and
> system tests, will be delivered, and we can call that the preview. We will
> be running this in our production and will continue to provide the data
> and metrics to push this feature to GA.
>
>
>
> On Tue, Nov 10, 2020 at 10:07 AM, Kowshik Prakasam  >
> wrote:
>
> > Hi Harsha/Satish,
> >
> > Thanks for the discussion today. Here is a link to the KIP-405
>  development
> > milestones google doc we discussed in the meeting today:
> > https://docs.google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
> > I have shared it with you. Please have a look and share your
> > feedback/improvements. As we discussed, things are clear until milestone
> 1.
> > Beyond that, we can discuss it again (perhaps in next sync or later),
> once
> > you have thought through the implementation plan/milestones and release
> > into preview in 3.0.
> >
> > Cheers,
> > Kowshik
> >
> > On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana  >
> > wrote:
> >
> > Hi Jun,
> > Thanks for your comments. Please find the inline replies below.
> >
> > 605.2 "Build the local leader epoch cache by cutting the leader epoch
> > sequence received from remote storage to [LSO, ELO]." I mentioned an
> issue
> > earlier. Suppose the leader's local start offset is 100. The follower
> finds
> > a remote segment covering offset range [80, 120). The producerState with
> > this remote segment is up to offset 120. To trim the producerState to
> > offset 100 requires more work since one needs to download the previous
> > producerState up to offset 80 and then replay the messages from 80 to
> 100.
> > It seems that it's simpler in this case for the follower just to take the
> > remote segment as it is and start fetching from offset 120.
> >
> > We chose that approach to avoid any edge cases here. It is possible that
> > the received remote log segment may not have the same leader epoch
> > sequence from 100-120 as the leader contains (this can happen due to an
> > unclean leader election). It is safe to start from what the leader
> > returns here. Another way is to find the remote log segment
> >
> > 5016. Just to echo what Kowshik was saying. It seems that
> > RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
> > partition, not on the replicas for the __remote_log_segment_metadata
> > partition. It's not clear how the leader of __remote_log_segment_metadata
> > obtains the metadata for remote segments for deletion.
> >
> > RLMM will always receive the callback for the remote log metadata topic
> > partitions hosted on the local broker and these will be subscribed. I
> will
> > make this clear in the KIP.
> >
> > 5100. KIP-516  has been
> accepted and is being implemented now. Could you
> > update the KIP based on topicID?
> >
> > We mentioned KIP-516 
> and how it helps. We will update this KIP with all
> > the changes it brings with KIP-516
> .
> >
> > 5101. RLMM: It would be useful to clarify how the following two APIs are
> > used. According to the wiki, the former is used for topic deletion and
> the
> > latter is used for retention. It seems that retention should use the
> former
> > since remote segments without a matching epoch in the leader (potentially
> > due to unclean leader election) also need to be garbage collected. The
> > latter seems to be used for the new leader to determine the last tiered
> > segment.
> > default Iterator<RemoteLogSegmentMetadata>
> > listRemoteLogSegments(TopicPartition topicPartition)
> > Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition
> > topicPartition, long leaderEpoch);
> >
> > Right, that is what we are currently doing. We will update the javadocs
> > and wiki with that. Earlier, we did not want to remove the segments which
> > are not matched with leader epochs from the leader partition as they may
> be
> > used later by a replica which can become a leader (unclean leader
> election)
> > and refer those segments. But that may leak thes