vvcephei commented on a change in pull request #8541:
URL: https://github.com/apache/kafka/pull/8541#discussion_r414899203



##########
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/LagFetchIntegrationTest.java
##########
@@ -147,6 +149,9 @@ private void shouldFetchLagsDuringRebalancing(final String optimization) throws
         // create stream threads
         for (int i = 0; i < 2; i++) {
             final Properties props = (Properties) streamsConfiguration.clone();
+            // this test relies on the second instance getting the standby, so we specify
+            // an assignor with this contract.
+            props.put(StreamsConfig.InternalConfig.INTERNAL_TASK_ASSIGNOR_CLASS, PriorTaskAssignor.class.getName());

Review comment:
       This is not a TODO; I'm planning to leave the test like this. (Just opening the floor for objections.)

##########
File path: tests/kafkatest/services/streams.py
##########
@@ -562,6 +568,8 @@ def prop_file(self):
                       consumer_property.SESSION_TIMEOUT_MS: 60000}
 
         properties['input.topic'] = self.INPUT_TOPIC
+        # TODO KIP-441: consider rewriting the test for HighAvailabilityTaskAssignor
+        properties['internal.task.assignor.class'] = "org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor"

Review comment:
       These will become follow-on tasks to fix each test. Thankfully, there 
aren't many.

##########
File path: tests/kafkatest/tests/streams/streams_broker_bounce_test.py
##########
@@ -164,7 +164,7 @@ def setup_system(self, start_processor=True, num_threads=3):
 
         # Start test harness
         self.driver = StreamsSmokeTestDriverService(self.test_context, self.kafka)
-        self.processor1 = StreamsSmokeTestJobRunnerService(self.test_context, self.kafka, num_threads)
+        self.processor1 = StreamsSmokeTestJobRunnerService(self.test_context, self.kafka, "at_least_once", num_threads)

Review comment:
       This accounted for most of the test failures, and it's already fixed on 
trunk.

##########
File path: tests/kafkatest/services/streams.py
##########
@@ -477,6 +477,10 @@ def __init__(self, test_context, kafka):
                                                                  "")
         self.UPGRADE_FROM = None
         self.UPGRADE_TO = None
+        self.extra_properties = {}
+
+    def set_config(self, key, value):
+        self.extra_properties[key] = value

Review comment:
       I've added this as a general mechanism in a couple of places to pass specific configs into Streams, so we don't have to add a new constructor for every parameterization.

##########
File path: tests/kafkatest/tests/streams/streams_broker_down_resilience_test.py
##########
@@ -144,7 +144,11 @@ def test_streams_runs_with_broker_down_initially(self):
     def test_streams_should_scale_in_while_brokers_down(self):
         self.kafka.start()
 
-        configs = self.get_configs(extra_configs=",application.id=shutdown_with_broker_down")
+        # TODO KIP-441: consider rewriting the test for HighAvailabilityTaskAssignor
+        configs = self.get_configs(
+            extra_configs=",application.id=shutdown_with_broker_down" +
+                          ",internal.task.assignor.class=org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor"
+        )

Review comment:
       This one already had a different mechanism to add more configs, so I 
just left it alone.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

