[GitHub] [kafka] yufeiyan1220 commented on pull request #13125: KAFKA-14626: Kafka Consumer Coordinator does not cleanup all metrics after shutdown

2023-01-21 Thread via GitHub


yufeiyan1220 commented on PR #13125:
URL: https://github.com/apache/kafka/pull/13125#issuecomment-1399206848

   > Seems like we aren't particularly consistent at removing these metrics and 
sensors, fetcher would be another example. Mind making the clean up more 
comprehensive?
   
   I have made the change to ensure that all consumer metrics are removed 
after shutdown. As for the producer, I found that all producer metrics are 
already closed, so no additional changes are needed there.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-13663) IllegalMonitorStateException in ProducerMetadata.awaitUpdate

2023-01-21 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-13663.
-
Resolution: Duplicate

Marking as duplicate of KAFKA-10902 since that has a reference to the JDK bug.

> IllegalMonitorStateException in ProducerMetadata.awaitUpdate
> 
>
> Key: KAFKA-13663
> URL: https://issues.apache.org/jira/browse/KAFKA-13663
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.0
>Reporter: RivenSun
>Assignee: Guozhang Wang
>Priority: Major
>
> Since our service upgraded kafka-clients to {*}2.8.1{*}, the producer 
> sometimes throws an exception when sending a message; once this exception 
> occurs, it persists and does not recover on its own.
> {code:java}
> java.lang.IllegalMonitorStateException: current thread is not owner
>         at java.base/java.lang.Object.wait(Native Method)
>         at 
> org.apache.kafka.common.utils.SystemTime.waitObject(SystemTime.java:55)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerMetadata.awaitUpdate(ProducerMetadata.java:119)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:1047)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:906)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
>  {code}
>  
>  
> The version of kafka-clients used before was {*}2.2.2{*}, and this exception 
> never occurred there.
> Comparing the kafka-clients source code across versions, I found a change 
> that very likely makes this a *regression bug*.
> In version 2.2.2, `KafkaProducer#waitOnMetadata` calls `Metadata#awaitUpdate`,
> but since version {*}2.3.0{*} it calls `ProducerMetadata#awaitUpdate`.
> The crucial difference concerns the `Object#wait(long timeoutMillis)` method:
> in `Metadata#awaitUpdate`, thread safety is ensured by the `synchronized` 
> keyword on the method itself; in `ProducerMetadata#awaitUpdate`, in addition 
> to the `synchronized` on the method, *there is a second `synchronized` block 
> in `SystemTime#waitObject`*.
> Although `synchronized` is reentrant, it is unclear whether the behavior of 
> this nested locking differs across JDK versions, resulting in this 
> exception; what is certain is that this exception never occurred with 
> kafka-clients 2.2.2.
>  
> h2. Suggestion:
> We can keep only one `synchronized` and remove the `synchronized` on the 
> `ProducerMetadata#awaitUpdate` method
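The monitor rule behind the reported exception can be demonstrated with a minimal, standalone Java sketch (illustrative only, not Kafka code): `Object.wait(long)` throws `IllegalMonitorStateException` unless the calling thread owns the object's monitor, while the same call inside a `synchronized` block is legal.

```java
// Minimal sketch (not Kafka code) of the monitor rule behind this exception:
// Object.wait(long) requires the calling thread to own the object's monitor.
public class WaitDemo {

    // Calling wait() without holding the monitor throws IllegalMonitorStateException.
    static boolean waitWithoutMonitorThrows() {
        Object lock = new Object();
        try {
            lock.wait(50); // current thread is not the owner of lock's monitor
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;   // message: "current thread is not owner"
        } catch (InterruptedException e) {
            return false;
        }
    }

    // The same call inside synchronized(lock) is legal: wait() releases the
    // monitor, times out after 50 ms, and re-acquires it before returning.
    static boolean waitWithMonitorSucceeds() throws InterruptedException {
        Object lock = new Object();
        synchronized (lock) {
            lock.wait(50);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitWithoutMonitorThrows());  // true
        System.out.println(waitWithMonitorSucceeds());   // true
    }
}
```

Note that reentrant acquisition of the same monitor is legal, so nesting alone should not trigger this exception; the sketch only illustrates the rule the JDK enforces.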



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10637) KafkaProducer: IllegalMonitorStateException

2023-01-21 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-10637.
-
Resolution: Duplicate

Marking as duplicate of KAFKA-10902 since that has a reference to the JDK bug.

> KafkaProducer: IllegalMonitorStateException 
> 
>
> Key: KAFKA-10637
> URL: https://issues.apache.org/jira/browse/KAFKA-10637
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.5.1
>Reporter: Lefteris Katiforis
>Priority: Major
>
> Kafka producer throws the following exception:
> {code:java}
> java.lang.IllegalMonitorStateException: current thread is not owner
>         at java.base/java.lang.Object.wait(Native Method)
>         at org.apache.kafka.common.utils.SystemTime.waitObject(SystemTime.java:55)
>         at org.apache.kafka.clients.producer.internals.ProducerMetadata.awaitUpdate(ProducerMetadata.java:119)
>         at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:1029)
>         at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:883)
>         at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
>         at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:781)
>         at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:562)
>         at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:369)
> {code}





[jira] [Assigned] (KAFKA-14463) ConnectorClientConfigOverridePolicy is not closed at worker shutdown

2023-01-21 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov reassigned KAFKA-14463:
---

Assignee: Nikolay Izhikov

> ConnectorClientConfigOverridePolicy is not closed at worker shutdown
> 
>
> Key: KAFKA-14463
> URL: https://issues.apache.org/jira/browse/KAFKA-14463
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Greg Harris
>Assignee: Nikolay Izhikov
>Priority: Trivial
>
> The ConnectorClientConfigOverridePolicy is marked AutoCloseable, but is never 
> closed by the worker on shutdown.
> This is currently not a critical issue, as all known implementations of the 
> policy have a no-op close. But a possible implementation which does 
> instantiate background resources that must be closed in close() would leak 
> those resources in a test environment.
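The fix can be sketched in a few lines (class names here are hypothetical stand-ins, not Kafka Connect's actual worker or policy classes): treat the policy as an owned resource and call `close()` during shutdown.

```java
// Illustrative sketch (class names are hypothetical, not Kafka Connect's):
// a worker that owns an AutoCloseable policy should close it on shutdown.
import java.util.concurrent.atomic.AtomicBoolean;

public class WorkerShutdownDemo {

    // Stand-in for a policy implementation that holds background resources.
    static class TrackingPolicy implements AutoCloseable {
        final AtomicBoolean closed = new AtomicBoolean(false);

        @Override
        public void close() {
            closed.set(true); // release threads, connections, etc. here
        }
    }

    static class Worker {
        final TrackingPolicy policy = new TrackingPolicy();

        void stop() {
            // The fix: invoke close() instead of just dropping the reference.
            policy.close();
        }
    }

    public static void main(String[] args) {
        Worker worker = new Worker();
        worker.stop();
        System.out.println(worker.policy.closed.get()); // true
    }
}
```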





[GitHub] [kafka] nizhikov opened a new pull request, #13144: KAFKA-14463 Invoke of ConnectorClientConfigOverridePolicy#close

2023-01-21 Thread via GitHub


nizhikov opened a new pull request, #13144:
URL: https://github.com/apache/kafka/pull/13144

   `ConnectorClientConfigOverridePolicy` implements `AutoCloseable`, but its 
close method is never called.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Commented] (KAFKA-14463) ConnectorClientConfigOverridePolicy is not closed at worker shutdown

2023-01-21 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679436#comment-17679436
 ] 

Nikolay Izhikov commented on KAFKA-14463:
-

Hello [~gharris1727], [~ChrisEgerton]. Can you please take a look at my 
changes?

 

https://github.com/apache/kafka/pull/13144

> ConnectorClientConfigOverridePolicy is not closed at worker shutdown
> 
>
> Key: KAFKA-14463
> URL: https://issues.apache.org/jira/browse/KAFKA-14463
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Greg Harris
>Assignee: Nikolay Izhikov
>Priority: Trivial
>
> The ConnectorClientConfigOverridePolicy is marked AutoCloseable, but is never 
> closed by the worker on shutdown.
> This is currently not a critical issue, as all known implementations of the 
> policy have a no-op close. But a possible implementation which does 
> instantiate background resources that must be closed in close() would leak 
> those resources in a test environment.





[jira] [Assigned] (KAFKA-14599) MirrorMaker pluggable interfaces missing from public API

2023-01-21 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov reassigned KAFKA-14599:
---

Assignee: Nikolay Izhikov

> MirrorMaker pluggable interfaces missing from public API
> 
>
> Key: KAFKA-14599
> URL: https://issues.apache.org/jira/browse/KAFKA-14599
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
>
> MirrorMaker exposes a few pluggable APIs, including:
> ConfigPropertyFilter
> GroupFilter
> TopicFilter
> ForwardingAdmin
> These are currently missing from our javadoc.





[GitHub] [kafka] vamossagar12 commented on a diff in pull request #13131: KAFKA-14628: Move CommandLineUtils and CommandDefaultOptions shared classes

2023-01-21 Thread via GitHub


vamossagar12 commented on code in PR #13131:
URL: https://github.com/apache/kafka/pull/13131#discussion_r1083271427


##
core/src/main/scala/kafka/admin/ConfigCommand.scala:
##
@@ -771,8 +772,8 @@ object ConfigCommand extends Logging {
   .withRequiredArg
   .describedAs("command config property file")
   .ofType(classOf[String])
-val alterOpt = parser.accepts("alter", "Alter the configuration for the 
entity.")
-val describeOpt = parser.accepts("describe", "List configs for the given 
entity.")
+val alterOpt: OptionSpec[String] = parser.accepts("alter", "Alter the 
configuration for the entity.").withOptionalArg()
+val describeOpt: OptionSpec[String] = parser.accepts("describe", "List 
configs for the given entity.").withOptionalArg()

Review Comment:
   The addition of `OptionSpec` and `withOptionalArg` is not strictly 
necessary, right?



##
server-common/src/main/java/org/apache/kafka/server/util/CommandLineUtils.java:
##
@@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.util;
+
+import joptsimple.OptionParser;
+import joptsimple.OptionSet;
+import joptsimple.OptionSpec;
+import org.apache.kafka.common.utils.AppInfoParser;
+import org.apache.kafka.common.utils.Exit;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * Helper functions for dealing with command line utilities.
+ */
+public class CommandLineUtils {
+/**
+ * Check if there are no options or `--help` option from command line.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @return true on matching the help check condition
+ */
+public static boolean isPrintHelpNeeded(CommandDefaultOptions commandOpts) 
{
+return commandOpts.args.length == 0 || 
commandOpts.options.has(commandOpts.helpOpt);
+}
+
+/**
+ * Check if there is `--version` option from command line.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @return true on matching the help check condition
+ */
+public static boolean isPrintVersionNeeded(CommandDefaultOptions 
commandOpts) {
+return commandOpts.options.has(commandOpts.versionOpt);
+}
+
+/**
+ * Check and print help message if there is no options or `--help` option
+ * from command line, if `--version` is specified on the command line
+ * print version information and exit.
+ * NOTE: The function name is not strictly speaking correct anymore
+ * as it also checks whether the version needs to be printed, but
+ * refactoring this would have meant changing all command line tools
+ * and unnecessarily increased the blast radius of this change.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @param message Message to display on successful check
+ */
+public static void printHelpAndExitIfNeeded(CommandDefaultOptions 
commandOpts, String message) {
+if (isPrintHelpNeeded(commandOpts)) {
+printUsageAndDie(commandOpts.parser, message);
+}
+if (isPrintVersionNeeded(commandOpts)) {
+printVersionAndDie();
+}
+}
+
+/**
+ * Check that all the listed options are present.
+ */
+public static void checkRequiredArgs(OptionParser parser, OptionSet 
options, OptionSpec... requiredList) {
+for (OptionSpec arg : requiredList) {
+if (!options.has(arg)) {
+printUsageAndDie(parser, String.format("Missing required 
argument \"%s\"", arg));
+}
+}
+}
+
+/**
+ * Check that none of the listed options are present.
+ */
+public static void checkInvalidArgs(OptionParser parser,
+OptionSet options,
+OptionSpec usedOption,
+OptionSpec... invalidOptions) {
+if (options.has(usedOption)) {
+for (OptionSpec arg : invalidOptions) {
+if (options.has(arg)) {
+printUsageAndDie(parser, 

[jira] [Assigned] (KAFKA-14153) UnknownTopicOrPartitionException should include the topic/partition in the returned exception message

2023-01-21 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao reassigned KAFKA-14153:
-

Assignee: (was: Sagar Rao)

> UnknownTopicOrPartitionException should include the topic/partition in the 
> returned exception message
> -
>
> Key: KAFKA-14153
> URL: https://issues.apache.org/jira/browse/KAFKA-14153
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alyssa Huang
>Priority: Minor
>
> Exception would be more useful if it included the topic or partition that was 
> not found. Message right now is just 
> `This server does not host this topic-partition.`
> Background: [https://github.com/apache/kafka/pull/12479#discussion_r938988993]





[GitHub] [kafka] vamossagar12 commented on pull request #11592: KAFKA-13501: Avoid state restore via rebalance if standbys are enabled

2023-01-21 Thread via GitHub


vamossagar12 commented on PR #11592:
URL: https://github.com/apache/kafka/pull/11592#issuecomment-1399237152

   @mjsax, this is a very old PR of mine that never got merged. On the ticket 
I see the `new-streams-runtime-should-fix` label added. Is this fix still 
needed, or should I close it?





[jira] [Resolved] (KAFKA-13296) Verify old assignment within StreamsPartitionAssignor

2023-01-21 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao resolved KAFKA-13296.
---
Resolution: Fixed

> Verify old assignment within StreamsPartitionAssignor
> -
>
> Key: KAFKA-13296
> URL: https://issues.apache.org/jira/browse/KAFKA-13296
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sagar Rao
>Priority: Major
>
> `StreamsPartitionAssignor` is responsible to assign partitions and tasks to 
> all StreamsThreads within an application.
> While it ensures that a single partition/task is not assigned to two 
> threads, there is limited verification of this. In particular, we had one 
> incident in which a zombie thread/consumer did not clean up its own internal 
> state correctly due to KAFKA-12983. This unclean zombie state meant that the 
> _old assignment_ reported to `StreamsPartitionAssignor` contained a single 
> partition for two consumers. As a result, both threads/consumers later 
> revoked the same partition and the zombie thread could commit its unclean 
> work (even though it should have been fenced), leading to duplicate output 
> under EOS_v2.
> We should consider adding a check to `StreamsPartitionAssignor` that the 
> _old assignment_ is valid, i.e., no partition is missing and no partition is 
> assigned to two consumers. In that case, we should log the invalid _old 
> assignment_ and send an error code back to all consumers indicating that 
> they should shut down "unclean" (i.e., without flushing or committing any 
> offsets or transactions).
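The proposed validation can be sketched as follows (names and the string-based types are illustrative simplifications, not the assignor's real data model): collect each partition's claimed owners from the reported old assignments and flag any partition with more than one owner.

```java
// Illustrative sketch of the proposed validation (types simplified to Strings):
// detect partitions that two consumers both report in their old assignment.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OldAssignmentCheck {

    // Returns partition -> owners for every partition claimed by >1 consumer.
    static Map<String, List<String>> duplicateOwners(Map<String, List<String>> oldAssignment) {
        Map<String, List<String>> owners = new HashMap<>();
        for (Map.Entry<String, List<String>> entry : oldAssignment.entrySet()) {
            for (String partition : entry.getValue()) {
                owners.computeIfAbsent(partition, p -> new ArrayList<>()).add(entry.getKey());
            }
        }
        owners.values().removeIf(list -> list.size() < 2); // keep only conflicts
        return owners;
    }

    public static void main(String[] args) {
        Map<String, List<String>> oldAssignment = new HashMap<>();
        oldAssignment.put("threadA", List.of("t-0", "t-1"));
        oldAssignment.put("threadB", List.of("t-1")); // zombie also claims t-1
        System.out.println(duplicateOwners(oldAssignment).keySet()); // [t-1]
    }
}
```

An empty result would mean the old assignment is consistent; a non-empty one is the case where the ticket suggests logging the assignment and sending the "unclean shutdown" error code.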





[GitHub] [kafka] vamossagar12 closed pull request #9756: KAFKA-10652: Adding size based linger semnatics to Raft metadata

2023-01-21 Thread via GitHub


vamossagar12 closed pull request #9756: KAFKA-10652: Adding size based linger 
semnatics to Raft metadata
URL: https://github.com/apache/kafka/pull/9756





[jira] [Resolved] (KAFKA-10652) Raft leader should flush accumulated writes after a min size is reached

2023-01-21 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao resolved KAFKA-10652.
---
Resolution: Won't Fix

Not sure this is needed anymore.

> Raft leader should flush accumulated writes after a min size is reached
> ---
>
> Key: KAFKA-10652
> URL: https://issues.apache.org/jira/browse/KAFKA-10652
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Sagar Rao
>Priority: Major
>
> In KAFKA-10601, we implemented linger semantics similar to the producer to 
> let the leader accumulate a batch of writes before fsyncing them to disk. 
> Currently the fsync is only based on the linger time, but it would be helpful 
> to make it size-based as well. In other words, if we accumulate a 
> configurable N bytes, then we should not wait for linger expiration and 
> should just fsync immediately.
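The size-plus-time trigger described above can be sketched as a standalone accumulator (an illustration of the idea, not Kafka's Raft batch accumulator): flush when either the linger deadline passes or the accumulated bytes reach the threshold, whichever comes first.

```java
// Illustrative accumulator (not Kafka's implementation): a flush is due when
// either lingerMs has elapsed since the first pending write or the pending
// bytes reach maxBytes, whichever comes first.
import java.util.ArrayList;
import java.util.List;

public class SizeOrTimeLinger {
    private final long lingerMs;
    private final int maxBytes;
    private final List<byte[]> pending = new ArrayList<>();
    private int pendingBytes = 0;
    private long firstAppendMs = -1;

    public SizeOrTimeLinger(long lingerMs, int maxBytes) {
        this.lingerMs = lingerMs;
        this.maxBytes = maxBytes;
    }

    // Appends a record; returns true when the caller should fsync now.
    public boolean append(byte[] record, long nowMs) {
        if (pending.isEmpty()) {
            firstAppendMs = nowMs;
        }
        pending.add(record);
        pendingBytes += record.length;
        return flushDue(nowMs);
    }

    public boolean flushDue(long nowMs) {
        return !pending.isEmpty()
            && (pendingBytes >= maxBytes || nowMs - firstAppendMs >= lingerMs);
    }

    // Hands back the batch to fsync and resets the accumulator.
    public List<byte[]> drain() {
        List<byte[]> batch = new ArrayList<>(pending);
        pending.clear();
        pendingBytes = 0;
        return batch;
    }

    public static void main(String[] args) {
        SizeOrTimeLinger acc = new SizeOrTimeLinger(25, 16);
        System.out.println(acc.append(new byte[8], 0)); // false: under both limits
        System.out.println(acc.append(new byte[8], 1)); // true: 16 bytes reached
        System.out.println(acc.drain().size());         // 2
    }
}
```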





[GitHub] [kafka] ijuma commented on pull request #13121: MINOR: Remove some connect tests from Java 17 block list

2023-01-21 Thread via GitHub


ijuma commented on PR #13121:
URL: https://github.com/apache/kafka/pull/13121#issuecomment-1399275209

   The JDK 17 test failures are not among the tests I unblocked in this PR:
   
   >  Build / JDK 17 and Scala 2.13 / org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest.[1] true (10 min)
   >  Build / JDK 17 and Scala 2.13 / org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest.[2] false (1 min 11 sec)
   >  Build / JDK 17 and Scala 2.13 / org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest.[2] false





[GitHub] [kafka] ijuma merged pull request #13121: MINOR: Remove some connect tests from Java 17 block list

2023-01-21 Thread via GitHub


ijuma merged PR #13121:
URL: https://github.com/apache/kafka/pull/13121





[GitHub] [kafka] fvaleri commented on a diff in pull request #13131: KAFKA-14628: Move CommandLineUtils and CommandDefaultOptions shared classes

2023-01-21 Thread via GitHub


fvaleri commented on code in PR #13131:
URL: https://github.com/apache/kafka/pull/13131#discussion_r1083318714


##
core/src/main/scala/kafka/admin/ConfigCommand.scala:
##
@@ -771,8 +772,8 @@ object ConfigCommand extends Logging {
   .withRequiredArg
   .describedAs("command config property file")
   .ofType(classOf[String])
-val alterOpt = parser.accepts("alter", "Alter the configuration for the 
entity.")
-val describeOpt = parser.accepts("describe", "List configs for the given 
entity.")
+val alterOpt: OptionSpec[String] = parser.accepts("alter", "Alter the 
configuration for the entity.").withOptionalArg()
+val describeOpt: OptionSpec[String] = parser.accepts("describe", "List 
configs for the given entity.").withOptionalArg()

Review Comment:
   Correct, it is a leftover from a previous attempt. Reverting.






[GitHub] [kafka] fvaleri commented on pull request #13131: KAFKA-14628: Move CommandLineUtils and CommandDefaultOptions shared classes

2023-01-21 Thread via GitHub


fvaleri commented on PR #13131:
URL: https://github.com/apache/kafka/pull/13131#issuecomment-1399304541

   @clolov @vamossagar12 I took your suggestions. Thanks.





[GitHub] [kafka] fvaleri commented on a diff in pull request #13131: KAFKA-14628: Move CommandLineUtils and CommandDefaultOptions shared classes

2023-01-21 Thread via GitHub


fvaleri commented on code in PR #13131:
URL: https://github.com/apache/kafka/pull/13131#discussion_r1083319516


##
server-common/src/main/java/org/apache/kafka/server/util/CommandLineUtils.java:
##
@@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.util;
+
+import joptsimple.OptionParser;
+import joptsimple.OptionSet;
+import joptsimple.OptionSpec;
+import org.apache.kafka.common.utils.AppInfoParser;
+import org.apache.kafka.common.utils.Exit;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * Helper functions for dealing with command line utilities.
+ */
+public class CommandLineUtils {
+/**
+ * Check if there are no options or `--help` option from command line.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @return true on matching the help check condition
+ */
+public static boolean isPrintHelpNeeded(CommandDefaultOptions commandOpts) 
{
+return commandOpts.args.length == 0 || 
commandOpts.options.has(commandOpts.helpOpt);
+}
+
+/**
+ * Check if there is `--version` option from command line.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @return true on matching the help check condition
+ */
+public static boolean isPrintVersionNeeded(CommandDefaultOptions 
commandOpts) {
+return commandOpts.options.has(commandOpts.versionOpt);
+}
+
+/**
+ * Check and print help message if there is no options or `--help` option
+ * from command line, if `--version` is specified on the command line
+ * print version information and exit.
+ * NOTE: The function name is not strictly speaking correct anymore
+ * as it also checks whether the version needs to be printed, but
+ * refactoring this would have meant changing all command line tools
+ * and unnecessarily increased the blast radius of this change.
+ *
+ * @param commandOpts Acceptable options for a command
+ * @param message Message to display on successful check
+ */
+public static void printHelpAndExitIfNeeded(CommandDefaultOptions 
commandOpts, String message) {
+if (isPrintHelpNeeded(commandOpts)) {
+printUsageAndDie(commandOpts.parser, message);
+}
+if (isPrintVersionNeeded(commandOpts)) {
+printVersionAndDie();
+}
+}
+
+/**
+ * Check that all the listed options are present.
+ */
+public static void checkRequiredArgs(OptionParser parser, OptionSet 
options, OptionSpec... requiredList) {
+for (OptionSpec arg : requiredList) {
+if (!options.has(arg)) {
+printUsageAndDie(parser, String.format("Missing required 
argument \"%s\"", arg));
+}
+}
+}
+
+/**
+ * Check that none of the listed options are present.
+ */
+public static void checkInvalidArgs(OptionParser parser,
+OptionSet options,
+OptionSpec usedOption,
+OptionSpec... invalidOptions) {
+if (options.has(usedOption)) {
+for (OptionSpec arg : invalidOptions) {
+if (options.has(arg)) {
+printUsageAndDie(parser, String.format("Option \"%s\" 
can't be used with option \"%s\"", usedOption, arg));
+}
+}
+}
+}
+
+/**
+ * Check that none of the listed options are present.
+ */
+public static void checkInvalidArgs(OptionParser parser,
+OptionSet options,
+OptionSpec usedOption,
+Set<OptionSpec<?>> invalidOptions) {
+OptionSpec[] array = new OptionSpec[invalidOptions.size()];
+invalidOptions.toArray(array);
+checkInvalidArgs(parser, options, usedOption, array);
+}
+
+/**
+ * Check that none of the listed options are present with the combination 
of used options.
+ */
+public static void