mjsax commented on code in PR #19507:
URL: https://github.com/apache/kafka/pull/19507#discussion_r2051287955


##########
streams/src/main/java/org/apache/kafka/streams/processor/internals/TaskManager.java:
##########
@@ -553,7 +553,7 @@ private void handleTasksWithStateUpdater(final Map<TaskId, Set<TopicPartition>>
     private void handleTasksPendingInitialization() {
         // All tasks pending initialization are not part of the usual bookkeeping
         for (final Task task : tasks.drainPendingTasksToInit()) {
-            closeTaskClean(task, Collections.emptySet(), Collections.emptyMap());
+            closeTaskClean(task, new HashSet<>(), new HashMap<>());

Review Comment:
   I was looking more into the logs we collected, plus the code, and it seems possible that we get an error on close with EOSv2.
   
   When we close the uninitialized tasks, we close the `RecordCollector` and check for pending errors: https://github.com/apache/kafka/blob/4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L568
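
   For context, that check follows a shared-error pattern roughly like the minimal sketch below (simplified, not the actual `RecordCollectorImpl` source; the class and method names here are illustrative): producer send callbacks record the first failure into one shared slot, and a clean close rethrows it.
   
   ```java
   import org.apache.kafka.common.KafkaException;
   
   import java.util.concurrent.atomic.AtomicReference;
   
   // Minimal sketch of the shared-error pattern, not the actual RecordCollectorImpl code.
   class RecordCollectorSketch {
   
       // Filled asynchronously by producer send callbacks, for records of *any*
       // task that sends through the shared EOSv2 producer.
       private final AtomicReference<KafkaException> sendException = new AtomicReference<>();
   
       void recordSendError(final KafkaException error) {
           sendException.compareAndSet(null, error);
       }
   
       // Called on a clean close: rethrows whatever error was recorded, even if it
       // originated from a different task than the one currently being closed.
       void checkForException() {
           final KafkaException error = sendException.get();
           if (error != null) {
               throw error;
           }
       }
   }
   ```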
   
   Because we share a single producer across multiple tasks, the uninitialized task could observe an error from some other task. -- So it seems to actually be safe to swallow this exception when closing a pending task. It's just not totally clear to me how/where to implement the swallowing correctly.
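
   One possible shape for the swallowing (just a hypothetical sketch at the call site from the diff above, not a claim that this is the right place; `log` and `task.id()` are assumed from the surrounding class) would be to catch and log the error only for tasks that never got initialized:
   
   ```java
   private void handleTasksPendingInitialization() {
       // All tasks pending initialization are not part of the usual bookkeeping
       for (final Task task : tasks.drainPendingTasksToInit()) {
           try {
               closeTaskClean(task, new HashSet<>(), new HashMap<>());
           } catch (final RuntimeException swallowed) {
               // An uninitialized task never produced anything itself, so a pending
               // producer error observed here must come from some other task that
               // shares the EOSv2 producer; log it and move on.
               log.warn("Swallowed an error while closing task {} pending initialization.",
                   task.id(), swallowed);
           }
       }
   }
   ```
   
   That would keep `closeTaskClean()` strict for regular tasks and only relax the error handling for tasks that were never initialized.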



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
