C0urante commented on code in PR #6934:
URL: https://github.com/apache/kafka/pull/6934#discussion_r1592860284


##########
connect/runtime/src/test/java/org/apache/kafka/connect/integration/ConnectWorkerIntegrationTest.java:
##########
@@ -773,6 +773,41 @@ public void testDeleteConnectorCreatedWithPausedOrStoppedInitialState() throws E
         connect.assertions().assertConnectorDoesNotExist(CONNECTOR_NAME, "Connector wasn't deleted in time");
     }
 
+    @Test
+    public void testPatchConnectorConfig() throws Exception {
+        connect = connectBuilder.build();
+        // start the clusters
+        connect.start();
+
+        connect.assertions().assertAtLeastNumWorkersAreUp(NUM_WORKERS,
+                "Initial group of workers did not start in time.");
+
+        connect.kafka().createTopic(TOPIC_NAME);
+
+        Map<String, String> props = defaultSinkConnectorProps(TOPIC_NAME);
+        props.put("unaffected-key", "unaffected-value");
+        props.put("to-be-deleted-key", "value");
+        props.put(TASKS_MAX_CONFIG, "1");
+
+        Map<String, String> patch = new HashMap<>();
+        patch.put(TASKS_MAX_CONFIG, "2");  // value to be changed by the patch
+        patch.put("to-be-added-key", "value");
+        patch.put("to-be-deleted-key", null);
+
+        connect.configureConnector(CONNECTOR_NAME, props);
+        connect.patchConnectorConfig(CONNECTOR_NAME, patch);
+
+        connect.assertions().assertConnectorAndExactlyNumTasksAreRunning(CONNECTOR_NAME, 2,
+                "connector and tasks did not start in time");
+
+        Map<String, String> expectedConfig = new HashMap<>(props);
+        expectedConfig.put("name", CONNECTOR_NAME);
+        expectedConfig.put("to-be-added-key", "value");
+        expectedConfig.put(TASKS_MAX_CONFIG, "2");
+        expectedConfig.remove("to-be-deleted-key");
+        assertEquals(expectedConfig, connect.connectorInfo(CONNECTOR_NAME).config());

Review Comment:
   I think it's possible for poor timing (which Jenkins is notorious for...) to 
create flakiness here. The connector and both of its tasks may be started, but 
it's possible that the worker we hit with this request won't have read the 
patched connector config from the config topic yet if it's not the leader of 
the cluster.
   
   As a quick hack, we could tweak the order of operations and rely on existing 
retry logic in `ConnectAssertions::assertConnectorAndExactlyNumTasksAreRunning` 
to prevent this:
   
   1. Configure connector with `tasks.max = 2`
   2. Ensure connector is started and 2 tasks are running
   3. Patch connector, and change `tasks.max` to `3`
   4. Ensure connector is started and 3 tasks are running
   5. Perform the assertion on this line (i.e., that the connector config as reported by an arbitrary worker in the cluster matches the expected patched config)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
