gharris1727 commented on code in PR #13120:
URL: https://github.com/apache/kafka/pull/13120#discussion_r1084469223


##########
connect/api/src/main/java/org/apache/kafka/connect/connector/policy/ConnectorClientConfigOverridePolicy.java:
##########
@@ -23,25 +23,25 @@
 import java.util.List;
 
 /**
- * <p>An interface for enforcing a policy on overriding of client configs via the connector configs.
- *
- * <p>Common use cases are ability to provide principal per connector, <code>sasl.jaas.config</code>
+ * An interface for enforcing a policy on overriding of Kafka client configs via the connector configs.
+ * <p>
+ * Common use cases are ability to provide principal per connector, <code>sasl.jaas.config</code>
  * and/or enforcing that the producer/consumer configurations for optimizations are within acceptable ranges.
  */
 public interface ConnectorClientConfigOverridePolicy extends Configurable, AutoCloseable {
 
 
     /**
-     * Worker will invoke this while constructing the producer for the SourceConnectors,  DLQ for SinkConnectors and the consumer for the
-     * SinkConnectors to validate if all of the overridden client configurations are allowed per the
-     * policy implementation. This would also be invoked during the validation of connector configs via the Rest API.
-     *
+     * Workers will invoke this while constructing producer for SourceConnectors, DLQs for SinkConnectors and
+     * consumers for SinkConnectors to validate if all of the overridden client configurations are allowed per the
Review Comment:
   We probably don't need to enumerate all of the use cases for Kafka clients here, and can keep that scoped to the ConnectorClientConfigRequest javadoc.
   ```suggestion
        * Workers will invoke this before configuring per-connector Kafka admin, producer, and consumer
        * client instances to validate if all of the overridden client configurations are allowed per the
   ```
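   For readers less familiar with this API, the policy's job amounts to an allow/deny decision over each overridden client property. A minimal, self-contained sketch of that logic in plain Java — `validateOverrides` is a hypothetical helper standing in for `ConnectorClientConfigOverridePolicy#validate`, which really receives a `ConnectorClientConfigRequest` and returns `List<ConfigValue>`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class OverridePolicySketch {
    // Hypothetical stand-in for ConnectorClientConfigOverridePolicy#validate:
    // returns one error message per overridden client property that the
    // policy does not permit (the real API reports errors via ConfigValue).
    public static List<String> validateOverrides(Map<String, ?> clientProps,
                                                 Set<String> allowedKeys) {
        List<String> errors = new ArrayList<>();
        for (String key : clientProps.keySet()) {
            if (!allowedKeys.contains(key)) {
                errors.add("Override of client config '" + key + "' is not permitted by this policy");
            }
        }
        return errors;
    }
}
```

   A "Principal"-style policy, for example, would allow only security-related keys such as `sasl.jaas.config`, while a "None"-style policy would use an empty allow-list.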



##########
connect/api/src/main/java/org/apache/kafka/connect/connector/Connector.java:
##########
@@ -131,8 +131,8 @@ public void reconfigure(Map<String, String> props) {
     /**
      * Validate the connector configuration values against configuration definitions.
      * @param connectorConfigs the provided configuration values
-     * @return List of Config, each Config contains the updated configuration information given
-     * the current configuration values.
+     * @return {@link Config}, essentially a list of {@link ConfigValue}s containing the updated configuration
+     * information given the current configuration values.

Review Comment:
   ```suggestion
        * @return a parsed and validated {@link Config} containing any relevant validation errors with the raw
        * {@code connectorConfigs} which should prevent this configuration from being used.
   ```
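   The suggested wording frames the return value as a container of per-property validation errors that gates whether a configuration may be used. A self-contained sketch of that shape — `validateRequired` and the `Map<String, List<String>>` result are hypothetical stand-ins for Connect's `Config` wrapping a `List<ConfigValue>` with `errorMessages()`:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ConnectorValidateSketch {
    // Hypothetical stand-in for Connector#validate: each config key maps to
    // the list of error messages recorded against it; an empty list means
    // that property validated cleanly.
    public static Map<String, List<String>> validateRequired(Map<String, String> connectorConfigs,
                                                             Set<String> requiredKeys) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (String key : requiredKeys) {
            List<String> errors = new ArrayList<>();
            String value = connectorConfigs.get(key);
            if (value == null || value.isEmpty()) {
                errors.add("Missing required configuration '" + key + "'");
            }
            result.put(key, errors);
        }
        return result;
    }

    // The configuration "should prevent ... being used" when any property
    // carries at least one error.
    public static boolean hasErrors(Map<String, List<String>> result) {
        return result.values().stream().anyMatch(errs -> !errs.isEmpty());
    }
}
```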



##########
connect/api/src/main/java/org/apache/kafka/connect/sink/SinkTask.java:
##########
@@ -88,15 +88,15 @@ public void initialize(SinkTaskContext context) {
     public abstract void start(Map<String, String> props);
 
     /**
-     * Put the records in the sink. Usually this should send the records to the sink asynchronously
-     * and immediately return.
-     *
+     * Put the records in the sink. This should either write them to the downstream system or batch them for later writing

Review Comment:
   I agree that it's not necessary for this javadoc to prescribe asynchronous behavior, but it should certainly point out the pitfall: implementations that write asynchronously need to take special care.
   ```suggestion
        * Put the records in the sink. If this method returns before the records are durably written,
        * the task must implement {@link #flush(Map)} or {@link #preCommit(Map)} to ensure that
        * only durably written record offsets are committed, and that no records are dropped during failures.
   ```
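   To illustrate the pitfall the suggestion describes: a task that writes asynchronously must never let the framework commit offsets beyond what the downstream system has durably acknowledged. A toy, self-contained sketch — class and method names are hypothetical stand-ins for `SinkTask#put` and `SinkTask#preCommit`, with plain `String` partitions instead of `TopicPartition`:

```java
import java.util.HashMap;
import java.util.Map;

public class AsyncSinkSketch {
    // Highest offset per partition handed to the asynchronous writer.
    private final Map<String, Long> buffered = new HashMap<>();
    // Highest offset per partition confirmed durably written downstream.
    private final Map<String, Long> durable = new HashMap<>();

    // Stand-in for SinkTask#put: accept the record, but do not assume
    // it is durable yet.
    public void put(String partition, long offset) {
        buffered.merge(partition, offset, Math::max);
    }

    // Invoked when the async writer acknowledges everything up to 'offset'.
    public void ack(String partition, long offset) {
        durable.merge(partition, offset, Math::max);
    }

    // Stand-in for SinkTask#preCommit: expose only durably written offsets,
    // so a failure after put() but before ack() cannot drop records.
    public Map<String, Long> preCommit() {
        return new HashMap<>(durable);
    }
}
```

   A purely synchronous task would not need this bookkeeping, since everything put is already durable by the time offsets are committed.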



##########
connect/api/src/main/java/org/apache/kafka/connect/connector/policy/ConnectorClientConfigRequest.java:
##########
@@ -44,25 +44,25 @@ public ConnectorClientConfigRequest(
     }
 
     /**
-     * Provides Config with prefix {@code producer.override.} for {@link ConnectorType#SOURCE}.
-     * Provides Config with prefix {@code consumer.override.} for {@link ConnectorType#SINK}.
-     * Provides Config with prefix {@code producer.override.} for {@link ConnectorType#SINK} for DLQ.
-     * Provides Config with prefix {@code admin.override.} for {@link ConnectorType#SINK} for DLQ.
+     * <p>Provides Config with prefix "{@code producer.override.}" for {@link ConnectorType#SOURCE}.
+     * <p>Provides Config with prefix "{@code consumer.override.}" for {@link ConnectorType#SINK}.
+     * <p>Provides Config with prefix "{@code producer.override.}" for {@link ConnectorType#SINK} for DLQ.
+     * <p>Provides Config with prefix "{@code admin.override.}" for {@link ConnectorType#SINK} for DLQ.

Review Comment:
   We should also mention some more recent usages of this that aren't yet reflected in this documentation:
   * Admin for fencing zombies in EOS source
   * Admin for creating connector-specific source offset topics
   * Consumer for connector-specific source offset topics
   * Producer for connector-specific source offset topics
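   As background for the prefixes listed above: the worker selects connector-config keys by prefix, strips the prefix, and treats what remains as client configs subject to this policy. A small self-contained sketch of that convention (the `clientOverrides` helper name is hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OverridePrefixSketch {
    // Sketch of how client overrides are derived from a connector config:
    // keys carrying the given prefix (e.g. "producer.override.") are stripped
    // of it and collected as plain client configs.
    public static Map<String, String> clientOverrides(Map<String, String> connectorConfig,
                                                      String prefix) {
        Map<String, String> overrides = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : connectorConfig.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                overrides.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return overrides;
    }
}
```

   For example, `producer.override.linger.ms=100` in a connector config would surface to the policy as the client property `linger.ms`.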



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
