[jira] [Resolved] (KAFKA-13287) Upgrade RocksDB to 6.22.1.1

2021-09-13 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-13287.
---
Resolution: Fixed

> Upgrade RocksDB to 6.22.1.1
> ---
>
> Key: KAFKA-13287
> URL: https://issues.apache.org/jira/browse/KAFKA-13287
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: compat_report.html
>
>
> RocksDB 6.22.1.1 is source compatible with RocksDB 6.19.3, which Streams 
> currently uses (see the attached compatibility report).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13293) Support client reload of PEM certificates

2021-09-13 Thread Elliot West (Jira)
Elliot West created KAFKA-13293:
---

 Summary: Support client reload of PEM certificates
 Key: KAFKA-13293
 URL: https://issues.apache.org/jira/browse/KAFKA-13293
 Project: Kafka
  Issue Type: Improvement
  Components: clients, security
Affects Versions: 2.7.1, 2.8.0, 2.7.0
Reporter: Elliot West


Since Kafka 2.7.0, clients are able to authenticate using PEM certificates as 
client configuration properties in addition to JKS file based key stores 
(KAFKA-10338). With PEM, certificate chains are passed into clients as simple 
string-based key-value properties, alongside existing client configuration. 
This offers a number of benefits: it provides a JVM-agnostic security mechanism 
from the perspective of clients, removes the client's dependency on the local 
filesystem, and allows the encapsulation of the entire client configuration 
into a single payload.

However, the current client PEM implementation has a feature regression when 
compared with the JKS implementation. With the JKS approach, clients would 
automatically reload certificates when the key stores were modified on disk. 
This enables a seamless approach for the replacement of certificates when they 
are due to expire; no further configuration or explicit interference with the 
client lifecycle is needed for the client to migrate to renewed certificates.

Such a capability does not currently exist for PEM. One supplies key chains 
when instantiating clients only - there is no mechanism available to either 
directly reconfigure the client, or for the client to observe changes to the 
original properties set reference used in construction. Additionally, no 
work-arounds are documented that might give users alternative strategies for 
dealing with expiring certificates. Given that expiration and renewal of 
certificates is an industry standard practice, it could be argued that the 
current PEM client implementation is not fit for purpose.

In summary, a mechanism should be provided such that clients can automatically 
detect, load, and use updated PEM key chains from some non-file based source 
(object ref, method invocation, listener, etc.)

Finally, it is suggested that in the short term the Kafka documentation be 
updated to describe any viable mechanism for updating client PEM certs (perhaps 
closing the existing client and then recreating it?).
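The close-and-recreate workaround mentioned above can be sketched with plain configuration properties. This is a minimal illustration, not an official API: the `ssl.keystore.*` property names follow Kafka's PEM support from KAFKA-10338, the certificate strings are placeholders, and the "recreate" step stands in for closing an actual `KafkaProducer`/`KafkaConsumer` and constructing a new one, since PEM values are read only at client construction time.

```java
import java.util.Properties;

public class PemReloadSketch {
    // Build a client security config carrying PEM material inline as string
    // properties instead of pointing at JKS files on disk.
    static Properties pemConfig(String certChainPem, String privateKeyPem) {
        Properties props = new Properties();
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.keystore.type", "PEM");
        props.setProperty("ssl.keystore.certificate.chain", certChainPem);
        props.setProperty("ssl.keystore.key", privateKeyPem);
        return props;
    }

    public static void main(String[] args) {
        Properties initial =
            pemConfig("-----BEGIN CERTIFICATE-----...old...", "-----BEGIN PRIVATE KEY-----...old...");
        // Today's workaround for an expiring cert: build fresh properties with
        // the renewed PEM material, close the old client, and construct a new
        // client from these properties. No in-place reload exists for PEM.
        Properties renewed =
            pemConfig("-----BEGIN CERTIFICATE-----...new...", "-----BEGIN PRIVATE KEY-----...new...");
        System.out.println(renewed.getProperty("ssl.keystore.type"));
    }
}
```

The contrast with JKS is that the file-based key store is re-read on modification, whereas the string properties above are captured once; hence the ticket's request for a non-file reload hook.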



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9569) RemoteStorageManager implementation for HDFS storage.

2021-09-13 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved KAFKA-9569.
---
Resolution: Fixed

> RemoteStorageManager implementation for HDFS storage.
> -
>
> Key: KAFKA-9569
> URL: https://issues.apache.org/jira/browse/KAFKA-9569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Satish Duggana
>Assignee: Ying Zheng
>Priority: Major
>
> This is about implementing `RemoteStorageManager` for HDFS to verify the 
> proposed SPIs are sufficient. It looks like the existing RSM interface should 
> be sufficient. If needed, we will discuss any required changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #466

2021-09-13 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 492602 lines...]
[2021-09-13T09:43:12.813Z] PlaintextConsumerTest > testAutoOffsetReset() STARTED
[2021-09-13T09:43:17.422Z] 
[2021-09-13T09:43:17.422Z] PlaintextConsumerTest > testAutoOffsetReset() PASSED
[2021-09-13T09:43:17.422Z] 
[2021-09-13T09:43:17.422Z] PlaintextConsumerTest > 
testPerPartitionLagWithMaxPollRecords() STARTED
[2021-09-13T09:43:21.844Z] 
[2021-09-13T09:43:21.844Z] PlaintextConsumerTest > 
testPerPartitionLagWithMaxPollRecords() PASSED
[2021-09-13T09:43:21.844Z] 
[2021-09-13T09:43:21.844Z] PlaintextConsumerTest > testFetchInvalidOffset() 
STARTED
[2021-09-13T09:43:26.925Z] 
[2021-09-13T09:43:26.925Z] PlaintextConsumerTest > testFetchInvalidOffset() 
PASSED
[2021-09-13T09:43:26.925Z] 
[2021-09-13T09:43:26.925Z] PlaintextConsumerTest > testAutoCommitIntercept() 
STARTED
[2021-09-13T09:43:27.099Z] 
[2021-09-13T09:43:27.099Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignor() PASSED
[2021-09-13T09:43:27.099Z] 
[2021-09-13T09:43:27.099Z] PlaintextConsumerTest > testInterceptors() STARTED
[2021-09-13T09:43:32.074Z] 
[2021-09-13T09:43:32.074Z] PlaintextConsumerTest > testInterceptors() PASSED
[2021-09-13T09:43:32.074Z] 
[2021-09-13T09:43:32.074Z] PlaintextConsumerTest > 
testConsumingWithEmptyGroupId() STARTED
[2021-09-13T09:43:33.368Z] 
[2021-09-13T09:43:33.368Z] PlaintextConsumerTest > testAutoCommitIntercept() 
PASSED
[2021-09-13T09:43:33.368Z] 
[2021-09-13T09:43:33.368Z] PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst() STARTED
[2021-09-13T09:43:37.674Z] 
[2021-09-13T09:43:37.674Z] PlaintextConsumerTest > 
testConsumingWithEmptyGroupId() PASSED
[2021-09-13T09:43:37.674Z] 
[2021-09-13T09:43:37.674Z] PlaintextConsumerTest > testPatternUnsubscription() 
STARTED
[2021-09-13T09:43:38.021Z] 
[2021-09-13T09:43:38.022Z] PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst() PASSED
[2021-09-13T09:43:38.022Z] 
[2021-09-13T09:43:38.022Z] PlaintextConsumerTest > testCommitSpecifiedOffsets() 
STARTED
[2021-09-13T09:43:43.847Z] 
[2021-09-13T09:43:43.848Z] PlaintextConsumerTest > testCommitSpecifiedOffsets() 
PASSED
[2021-09-13T09:43:43.848Z] 
[2021-09-13T09:43:43.848Z] PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe() STARTED
[2021-09-13T09:43:44.816Z] 
[2021-09-13T09:43:44.816Z] PlaintextConsumerTest > testPatternUnsubscription() 
PASSED
[2021-09-13T09:43:44.816Z] 
[2021-09-13T09:43:44.816Z] PlaintextConsumerTest > testGroupConsumption() 
STARTED
[2021-09-13T09:43:49.947Z] 
[2021-09-13T09:43:49.947Z] PlaintextConsumerTest > testGroupConsumption() PASSED
[2021-09-13T09:43:49.947Z] 
[2021-09-13T09:43:49.947Z] PlaintextConsumerTest > testPartitionsFor() STARTED
[2021-09-13T09:43:50.643Z] 
[2021-09-13T09:43:50.643Z] PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe() PASSED
[2021-09-13T09:43:50.643Z] 
[2021-09-13T09:43:50.643Z] PlaintextConsumerTest > testCommitMetadata() STARTED
[2021-09-13T09:43:54.304Z] 
[2021-09-13T09:43:54.304Z] PlaintextConsumerTest > testPartitionsFor() PASSED
[2021-09-13T09:43:54.304Z] 
[2021-09-13T09:43:54.304Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignorAndVerifyAssignment() STARTED
[2021-09-13T09:43:54.828Z] 
[2021-09-13T09:43:54.828Z] PlaintextConsumerTest > testCommitMetadata() PASSED
[2021-09-13T09:43:54.828Z] 
[2021-09-13T09:43:54.828Z] PlaintextConsumerTest > testRoundRobinAssignment() 
STARTED
[2021-09-13T09:43:58.489Z] 
[2021-09-13T09:43:58.489Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignorAndVerifyAssignment() PASSED
[2021-09-13T09:43:58.489Z] 
[2021-09-13T09:43:58.489Z] PlaintextConsumerTest > testAutoCommitOnRebalance() 
STARTED
[2021-09-13T09:44:03.455Z] 
[2021-09-13T09:44:03.455Z] PlaintextConsumerTest > testRoundRobinAssignment() 
PASSED
[2021-09-13T09:44:03.455Z] 
[2021-09-13T09:44:03.455Z] PlaintextConsumerTest > testPatternSubscription() 
STARTED
[2021-09-13T09:44:05.645Z] 
[2021-09-13T09:44:05.645Z] PlaintextConsumerTest > testAutoCommitOnRebalance() 
PASSED
[2021-09-13T09:44:05.645Z] 
[2021-09-13T09:44:05.645Z] PlaintextConsumerTest > 
testInterceptorsWithWrongKeyValue() STARTED
[2021-09-13T09:44:11.050Z] 
[2021-09-13T09:44:11.050Z] PlaintextConsumerTest > 
testInterceptorsWithWrongKeyValue() PASSED
[2021-09-13T09:44:11.050Z] 
[2021-09-13T09:44:11.050Z] PlaintextConsumerTest > 
testPerPartitionLeadWithMaxPollRecords() STARTED
[2021-09-13T09:44:15.494Z] 
[2021-09-13T09:44:15.494Z] PlaintextConsumerTest > 
testPerPartitionLeadWithMaxPollRecords() PASSED
[2021-09-13T09:44:15.494Z] 
[2021-09-13T09:44:15.494Z] PlaintextConsumerTest > testHeaders() STARTED
[2021-09-13T09:44:15.668Z] 
[2021-09-13T09:44:15.668Z] PlaintextConsumerTest > testPatternSubscription() 
PASSED
[2021-09-13T09:44:17.958Z] 
[2021-09-13T09:44:17.958Z] FAILURE: Build failed with an exception.

[jira] [Created] (KAFKA-13294) Upgrade Netty to 4.1.68

2021-09-13 Thread Utkarsh Khare (Jira)
Utkarsh Khare created KAFKA-13294:
-

 Summary: Upgrade Netty to 4.1.68
 Key: KAFKA-13294
 URL: https://issues.apache.org/jira/browse/KAFKA-13294
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.8.0
Reporter: Utkarsh Khare


Netty has reported a couple of CVEs regarding the usage of Bzip2Decoder and 
SnappyFrameDecoder.

Reference :

[CVE-2021-37136 - 
https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv|https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv]

[CVE-2021-37137 - 
https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363|https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363]

 

Can we upgrade Netty to version 4.1.68.Final to fix these?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Contributor Permission Request

2021-09-13 Thread utkarsh . khare
Hi,

Can someone grant me contributor permission so that I can self-assign
some JIRA tickets?

My JIRA username : 51n15t9r

Thanks,
Utkarsh


Re: Contributor Permission Request

2021-09-13 Thread Bill Bejeck
Hi Utkarsh,

Done.  Thanks for your interest in Apache Kafka!

-Bill

On Mon, Sep 13, 2021 at 8:56 AM  wrote:

> Hi,
>
> Can someone grant me contributor permission so that I can self-assign
> some JIRA tickets?
>
> My JIRA username : 51n15t9r
>
> Thanks,
> Utkarsh
>


[jira] [Resolved] (KAFKA-13292) InvalidPidMappingException: The producer attempted to use a producer id which is not currently assigned to its transactional id

2021-09-13 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-13292.
-
Resolution: Invalid

Closing this as "invalid" as it seems to be a question, not a bug report. 
Please use the mailing lists to ask questions. Thanks.

> InvalidPidMappingException: The producer attempted to use a producer id which 
> is not currently assigned to its transactional id
> ---
>
> Key: KAFKA-13292
> URL: https://issues.apache.org/jira/browse/KAFKA-13292
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: NEERAJ VAIDYA
>Priority: Major
>
> I have a KafkaStreams application which consumes from a topic which has 12 
> partitions. The incoming message rate into this topic is very low, perhaps 
> 3-4 per minute. Also, some partitions will not receive messages for more than 
> 7 days.
>  
> Exactly after 7 days of starting this application, I seem to be getting the 
> following exception, and the application shuts down without processing any 
> more messages:
>  
> {code:java}
> 2021-09-10T12:21:59.636 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> INFO  o.a.k.c.p.i.TransactionManager - MSG=[Producer 
> clientId=mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer,
>  transactionalId=mtx-caf-0_2] Transiting to abortable error state due to 
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> 2021-09-10T12:21:59.642 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> ERROR o.a.k.s.p.i.RecordCollectorImpl - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] task [0_2] 
> Error encountered sending record to topic 
> mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] ERROR 
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] Encountered the 
> following exception during processing and the thread is going to shut down:
> org.apache.kafka.streams.errors.StreamsException: Error encountered sending 
> record to topic mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:214)
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:186)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>         at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortUndrainedBatches(RecordAccumulator.java:781)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: org.apache.kafka.common.errors.InvalidPidMappingException: The 
> producer attempted to use a producer id which is not currently assigned to 
> its transactional id.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] INFO  
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] State 
> transition from RUNNING to PENDING_SHUTDOWN
> {code}
>  
> After this, I can see that all 12 tasks (because there are 12 partitions for 
> all topics) get shutdown and this brings down the whole application.
>  
> I understand that the transactional.id.expi

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #467

2021-09-13 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 489866 lines...]
[2021-09-13T19:11:06.916Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-09-13T19:11:06.916Z] 
[2021-09-13T19:11:06.916Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-09-13T19:11:06.916Z] 
[2021-09-13T19:11:06.916Z] Execution optimizations have been disabled for 4 
invalid unit(s) of work during this build to ensure correctness.
[2021-09-13T19:11:06.916Z] Please consult deprecation warnings for more details.
[2021-09-13T19:11:06.916Z] 
[2021-09-13T19:11:06.916Z] BUILD SUCCESSFUL in 3m 30s
[2021-09-13T19:11:06.916Z] 77 actionable tasks: 39 executed, 38 up-to-date
[2021-09-13T19:11:07.085Z] 
[2021-09-13T19:11:07.085Z] LogCleanerParameterizedIntegrationTest > 
cleanerTest(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(CompressionType)[5]
 PASSED
[2021-09-13T19:11:07.085Z] 
[2021-09-13T19:11:07.085Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleanerWithMessageFormatV0(CompressionType)[1]
 STARTED
[Pipeline] sh
[2021-09-13T19:11:10.152Z] + grep ^version= gradle.properties
[2021-09-13T19:11:10.152Z] + cut -d= -f 2
[Pipeline] dir
[2021-09-13T19:11:11.005Z] Running in 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
[2021-09-13T19:11:13.734Z] + mvn clean install -Dgpg.skip
[2021-09-13T19:11:14.923Z] [INFO] Scanning for projects...
[2021-09-13T19:11:14.923Z] [INFO] 

[2021-09-13T19:11:14.923Z] [INFO] Reactor Build Order:
[2021-09-13T19:11:14.923Z] [INFO] 
[2021-09-13T19:11:14.923Z] [INFO] Kafka Streams :: Quickstart   
 [pom]
[2021-09-13T19:11:14.923Z] [INFO] streams-quickstart-java   
 [maven-archetype]
[2021-09-13T19:11:14.923Z] [INFO] 
[2021-09-13T19:11:14.923Z] [INFO] < 
org.apache.kafka:streams-quickstart >-
[2021-09-13T19:11:14.923Z] [INFO] Building Kafka Streams :: Quickstart 
3.1.0-SNAPSHOT[1/2]
[2021-09-13T19:11:14.923Z] [INFO] [ pom 
]-
[2021-09-13T19:11:14.923Z] [INFO] 
[2021-09-13T19:11:14.923Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart ---
[2021-09-13T19:11:14.923Z] [INFO] 
[2021-09-13T19:11:14.923Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart ---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart ---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart ---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart ---
[2021-09-13T19:11:16.178Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.1.0-SNAPSHOT/streams-quickstart-3.1.0-SNAPSHOT.pom
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --< 
org.apache.kafka:streams-quickstart-java >--
[2021-09-13T19:11:16.178Z] [INFO] Building streams-quickstart-java 
3.1.0-SNAPSHOT[2/2]
[2021-09-13T19:11:16.178Z] [INFO] --[ maven-archetype 
]---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart-java ---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart-java ---
[2021-09-13T19:11:16.178Z] [INFO] 
[2021-09-13T19:11:16.178Z] [INFO] --- maven-resources-plugin:2.7:resources 
(default-resources) @ streams-quickstart-java ---
[2021-09-13T19:11:16.178Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-09-13T19:11:16.178Z] [INFO] Copying 6 resources
[2021-09-13T19:11:17.111Z] [INFO] Copying 3 resources
[2021-09-13T19:11:17.111Z] [INFO] 
[2021-09-13T19:11:17.111Z] [INFO] --- maven-resources-plugin:2.7:testResources 
(default-testResources) @ streams-quickstart-java ---
[2021-09-13T19:11:17.111Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-09-13T19:11:17.111Z] [INFO] Copying 2 resources
[2021-09-13T19:11:17.111Z] [INFO] Copying 3 resources
[2021-09-13T19:11:17.111Z] [INFO] 
[2021-09-13T19:11:17.111Z] [INFO

Re: [VOTE] 2.8.1 RC0

2021-09-13 Thread Randall Hauch
Thanks, David.

I was able to successfully complete the following:

- Build release archive from the tag, installed locally, and ran a portion
of quickstart
- Installed 2.8.1 RC0 and performed quickstart for broker and Connect
- Verified signatures and checksums
- Verified the tag
- Compared the release notes to JIRA
- Manually spot-checked the Javadocs

However, generated docs at
https://kafka.apache.org/28/documentation.html incorrectly
reference the 2.8.0 version in the following sections:

* https://kafka.apache.org/28/documentation.html#quickstart
* https://kafka.apache.org/28/documentation.html#producerapi
* https://kafka.apache.org/28/documentation.html#consumerapi
* https://kafka.apache.org/28/documentation.html#streamsapi
* https://kafka.apache.org/28/documentation.html#adminapi

The way these are updated during the release process has changed recently.
IIUC they are generated as part of the release build via the
https://github.com/apache/kafka/blob/2.8.1-rc0/docs/js/templateData.js
file, which appears to have not been updated in the tag:
https://github.com/apache/kafka/blob/2.8.1-rc0/docs/js/templateData.js#L22.
Maybe
the https://cwiki.apache.org/confluence/display/KAFKA/Release+Process needs
to be tweaked to update this file even when cutting a patch release RC (it
currently says to update this file in the section for major and minor
releases).

Also, the https://github.com/apache/kafka-site/tree/asf-site/28 history
shows no updates since July. I guess that's possible, but might be worth
double checking.

Thanks!

Randall

On Fri, Sep 10, 2021 at 5:15 AM David Jacot 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.8.1.
>
> Apache Kafka 2.8.1 is a bugfix release and fixes 49 issues since the 2.8.0
> release. Please see the release notes for more information.
>
> Release notes for the 2.8.1 release:
> https://home.apache.org/~dajac/kafka-2.8.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, September 17, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~dajac/kafka-2.8.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~dajac/kafka-2.8.1-rc0/javadoc/
>
> * Tag to be voted upon (off 2.8 branch) is the 2.8.1 tag:
> https://github.com/apache/kafka/releases/tag/2.8.1-rc0
>
> * Documentation:
> https://kafka.apache.org/28/documentation.html
>
> * Protocol:
> https://kafka.apache.org/28/protocol.html
>
> * Successful Jenkins builds for the 2.8 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/job/Kafka/job/kafka/job/2.8/80/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/2.8/214/
>
> /**
>
> Thanks,
> David
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #468

2021-09-13 Thread Apache Jenkins Server
See 




Re: [kafka-clients] Re: [VOTE] 3.0.0 RC2

2021-09-13 Thread Colin McCabe
Hi Konstantine,

I validated the RC by doing the following:

* Downloading the source and building using Java 11
* Running unit tests
* Setting up a three-node KRaft cluster in combined mode
* Testing creating a topic, producing and consuming from it, then restarting 
the kraft brokers
* Testing Kafka metadata shell

+1 (binding)

best,
Colin

On Fri, Sep 10, 2021, at 11:12, Bill Bejeck wrote:
> Hi Konstantine,
> 
> Thanks for that; I can get to the docs now.
> 
> 
> 
> I've validated the release by doing the following
> 
>  * built from source
>  * ran all unit tests
>  * verified all checksums and signatures
>  * spot-checked the Javadoc
>  * worked through the quick start
>  * worked through the Kafka Streams quick start application
>  * ran KRaft in preview mode
>* created a topic
>* produced and consumed from the topic
>* ran metadata shell
> 
> I did find some minor errors in the docs (all in quickstart)
> 
>  * The beginning of the quickstart still references version 2.8
>  * The command presented to create a topic in the quickstart is missing the 
> --partitions and --replication-factor params
>  * The link for "Kafka Streams demo" and "app development tutorial" points to 
> version 2.5
> 
> But considering we can update the documentation directly and, more 
> importantly, independently of the code, IMHO, I don't think these should 
> block the release.
> 
> 
> 
> So it's a +1(binding) for me.
> 
> 
> 
> Thanks for running the release!
> 
> Bill
> 
> 
> 
> On Fri, Sep 10, 2021 at 2:36 AM Konstantine Karantasis 
>  wrote:
>> Hi Bill,
>> 
>> I just added folder 30 to the kafka-site repo. Hadn't realized that this
>> separate manual step was part of the RC process and not the official
>> release (even though, strangely enough, I was expecting myself to be able
>> to read the docs online). I guess I needed a second nudge after Gary's
>> first comment on RC1 to see what was missing. I'll update the release doc
>> to make this more clear.
>> 
>> Should be accessible now. Please take another look.
>> 
>> Konstantine
>> 
>> 
>> 
>> On Fri, Sep 10, 2021 at 12:50 AM Bill Bejeck  wrote:
>> 
>> > Hi Konstantine,
>> >
>> > I've started to do the validation for the release and the link for docs
>> > doesn't work.
>> >
>> > Thanks,
>> > Bill
>> >
>> > On Wed, Sep 8, 2021 at 5:59 PM Konstantine Karantasis <
>> > kkaranta...@apache.org> wrote:
>> >
>> > > Hello again Kafka users, developers and client-developers,
>> > >
>> > > This is the third candidate for release of Apache Kafka 3.0.0.
>> > > It is a major release that includes many new features, including:
>> > >
>> > > * The deprecation of support for Java 8 and Scala 2.12.
>> > > * Kafka Raft support for snapshots of the metadata topic and other
>> > > improvements in the self-managed quorum.
>> > > * Deprecation of message formats v0 and v1.
>> > > * Stronger delivery guarantees for the Kafka producer enabled by default.
>> > > * Optimizations in OffsetFetch and FindCoordinator requests.
>> > > * More flexible Mirror Maker 2 configuration and deprecation of Mirror
>> > > Maker 1.
>> > > * Ability to restart a connector's tasks on a single call in Kafka
>> > Connect.
>> > > * Connector log contexts and connector client overrides are now enabled
>> > by
>> > > default.
>> > > * Enhanced semantics for timestamp synchronization in Kafka Streams.
>> > > * Revamped public API for Stream's TaskId.
>> > > * Default serde becomes null in Kafka Streams and several other
>> > > configuration changes.
>> > >
>> > > You may read and review a more detailed list of changes in the 3.0.0 blog
>> > > post draft here:
>> > >
>> > https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6
>> > >
>> > > Release notes for the 3.0.0 release:
>> > > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html
>> > >
>> > > *** Please download, test and vote by Tuesday, September 14, 2021 ***
>> > >
>> > > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > > https://kafka.apache.org/KEYS
>> > >
>> > > * Release artifacts to be voted upon (source and binary):
>> > > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/
>> > >
>> > > * Maven artifacts to be voted upon:
>> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> > >
>> > > * Javadoc:
>> > > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/javadoc/
>> > >
>> > > * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
>> > > https://github.com/apache/kafka/releases/tag/3.0.0-rc2
>> > >
>> > > * Documentation:
>> > > https://kafka.apache.org/30/documentation.html
>> > >
>> > > * Protocol:
>> > > https://kafka.apache.org/30/protocol.html
>> > >
>> > > * Successful Jenkins builds for the 3.0 branch:
>> > > Unit/integration tests:
>> > >
>> > >
>> > https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/129/
>> > > (1 flaky test failure)
>> > > System tests:
>> > > https://jenkins.confluent.io/job/system-test-kafka

[jira] [Created] (KAFKA-13295) Long restoration times for new tasks can lead to transaction timeouts

2021-09-13 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-13295:
--

 Summary: Long restoration times for new tasks can lead to 
transaction timeouts
 Key: KAFKA-13295
 URL: https://issues.apache.org/jira/browse/KAFKA-13295
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: A. Sophie Blee-Goldman
 Fix For: 3.1.0


In some EOS applications with relatively long restoration times we've noticed a 
series of ProducerFencedExceptions occurring during/immediately after 
restoration. The broker logs were able to confirm these were due to 
transactions timing out.

In Streams, it turns out we automatically begin a new txn when calling {{send}} 
(if there isn’t already one in flight). A {{send}} often occurs outside a 
commit during active processing (e.g. writing to the changelog), leaving the txn 
open until the next commit. And if a StreamThread has been actively processing 
when a rebalance results in a new stateful task without revoking any existing 
tasks, the thread won’t actually commit this open txn before it goes back into 
the restoration phase while it builds up state for the new task. So the 
in-flight transaction is left open during restoration, during which the 
StreamThread only consumes from the changelog without committing, leaving it 
vulnerable to timing out when restoration times exceed the configured 
transaction.timeout.ms for the producer client.
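The exposure described above comes down to one producer config. As a hedged illustration: the values below are illustrative assumptions rather than recommendations, and the `producer.` prefix follows Kafka Streams' convention for passing configs through to its embedded producer clients (the broker still caps the effective value at its own `transaction.max.timeout.ms`).

```java
import java.util.Properties;

public class TxnTimeoutSketch {
    // Sketch of Streams configs relevant to KAFKA-13295: under EOS, an
    // in-flight transaction left open while a thread restores a new task is
    // bounded by the producer's transaction.timeout.ms. Widening that window
    // reduces the chance of fencing when restoration runs long.
    static Properties streamsProducerOverrides(long txnTimeoutMs) {
        Properties props = new Properties();
        props.setProperty("processing.guarantee", "exactly_once_v2");
        // "producer." prefix forwards this to the internal producer clients.
        props.setProperty("producer.transaction.timeout.ms", Long.toString(txnTimeoutMs));
        return props;
    }

    public static void main(String[] args) {
        // Illustrative: tolerate up to 10 minutes of restoration while a
        // transaction begun before the rebalance remains open.
        Properties props = streamsProducerOverrides(600_000L);
        System.out.println(props.getProperty("producer.transaction.timeout.ms"));
    }
}
```

This is mitigation via configuration only; the ticket itself argues Streams should instead commit (or abort) the open transaction before entering the restoration phase.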



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #469

2021-09-13 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 490821 lines...]
Progress (1): 7.4 kB

Downloaded from central: 
https://repo.maven.apache.org/maven2/org/rocksdb/rocksdbjni/6.22.1.1/rocksdbjni-6.22.1.1.pom
 (7.4 kB at 15 kB/s)
[2021-09-14T01:29:53.163Z] Downloading from central: 
https://repo.maven.apache.org/maven2/org/rocksdb/rocksdbjni/6.22.1.1/rocksdbjni-6.22.1.1.jar
[2021-09-14T01:29:54.104Z] Progress (1): 0.2/37 MB

[jira] [Reopened] (KAFKA-13292) InvalidPidMappingException: The producer attempted to use a producer id which is not currently assigned to its transactional id

2021-09-13 Thread NEERAJ VAIDYA (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NEERAJ VAIDYA reopened KAFKA-13292:
---

As indicated in my previous comments.

> InvalidPidMappingException: The producer attempted to use a producer id which 
> is not currently assigned to its transactional id
> ---
>
> Key: KAFKA-13292
> URL: https://issues.apache.org/jira/browse/KAFKA-13292
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: NEERAJ VAIDYA
>Priority: Major
>
> I have a KafkaStreams application which consumes from a topic which has 12 
> partitions. The incoming message rate into this topic is very low, perhaps 
> 3-4 per minute. Also, some partitions will not receive messages for more than 
> 7 days.
>  
> Exactly after 7 days of starting this application, I seem to be getting the 
> following exception and the application shuts down, without processing 
> anymore messages :
>  
> {code:java}
> 2021-09-10T12:21:59.636 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> INFO  o.a.k.c.p.i.TransactionManager - MSG=[Producer 
> clientId=mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer,
>  transactionalId=mtx-caf-0_2] Transiting to abortable error state due to 
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> 2021-09-10T12:21:59.642 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> ERROR o.a.k.s.p.i.RecordCollectorImpl - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] task [0_2] 
> Error encountered sending record to topic 
> mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] ERROR 
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] Encountered the 
> following exception during processing and the thread is going to shut down:
> org.apache.kafka.streams.errors.StreamsException: Error encountered sending 
> record to topic mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:214)
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:186)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>         at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortUndrainedBatches(RecordAccumulator.java:781)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: org.apache.kafka.common.errors.InvalidPidMappingException: The 
> producer attempted to use a producer id which is not currently assigned to 
> its transactional id.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] INFO  
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] State 
> transition from RUNNING to PENDING_SHUTDOWN
> {code}
>  
> After this, I can see that all 12 tasks (because there are 12 partitions for 
> all topics) get shutdown and this brings down the whole application.
>  
> I understand that the transactional.id.expiration.ms = 7 days (default) will 
> likely cause the idle producer's transactional id to expire, but why does this 
> spec
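A commonly discussed mitigation for this class of failure is to raise the broker-side transactional id expiry above the longest expected idle period. This is a sketch, not a recommendation from this thread: the property name and its 7-day default come from the Kafka broker configuration reference, but the chosen value is illustrative.

```properties
# Broker-side setting (server.properties). The default of 604800000 ms (7 days)
# matches the failure window described above; raising it to e.g. 14 days keeps
# the producer-id -> transactional-id mapping alive across longer idle periods.
transactional.id.expiration.ms=1209600000
```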

[jira] [Created] (KAFKA-13296) Verify old assignment within StreamsPartitionAssignor

2021-09-13 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-13296:
---

 Summary: Verify old assignment within StreamsPartitionAssignor
 Key: KAFKA-13296
 URL: https://issues.apache.org/jira/browse/KAFKA-13296
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


`StreamsPartitionAssignor` is responsible for assigning partitions and tasks to all 
StreamThreads within an application.

While it ensures that a single partition/task is not assigned to two threads, there is 
limited verification of this. In particular, we had one incident in which a 
zombie thread/consumer did not clean up its own internal state correctly due to 
KAFKA-12983. This unclean zombie state implied that the _old assignment_ 
reported to `StreamsPartitionAssignor` contained a single partition for two 
consumers. As a result, both threads/consumers later revoked the same partition 
and the zombie thread could commit its unclean work (even though it should have 
been fenced), leading to duplicate output under EOS_v2.

We should consider adding a check to `StreamsPartitionAssignor` that the _old 
assignment_ is valid, i.e., no partition should be missing and no partition 
should be assigned to two consumers. For this case, we should log the invalid 
_old assignment_ and send an error code back to all consumers indicating 
that they should shut down "unclean" (i.e., without flushing and without 
committing any offsets or transactions).
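The duplicate-partition part of the proposed check could look roughly like the following stdlib-only sketch. The class and method names are hypothetical; a real implementation inside `StreamsPartitionAssignor` would operate on `TopicPartition` objects from each member's subscription, not plain strings.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class OldAssignmentCheck {

    // Returns every partition that appears in more than one consumer's
    // reported old assignment; an empty set means the assignment is valid.
    static Set<String> duplicated(Map<String, Set<String>> oldAssignment) {
        Set<String> seen = new HashSet<>();
        Set<String> dups = new TreeSet<>();
        for (Set<String> partitions : oldAssignment.values()) {
            for (String p : partitions) {
                if (!seen.add(p)) {
                    dups.add(p);
                }
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> oldAssignment = new HashMap<>();
        oldAssignment.put("consumer-1", Set.of("topic-0", "topic-1"));
        // Zombie consumer still claims topic-1, which consumer-1 also reports.
        oldAssignment.put("consumer-2", Set.of("topic-1", "topic-2"));
        System.out.println(duplicated(oldAssignment)); // prints [topic-1]
    }
}
```

On a non-empty result, the assignor would log the offending partitions and signal the "unclean shutdown" error code to all members rather than proceed with the rebalance.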



--
This message was sent by Atlassian Jira
(v8.3.4#803005)