vrozov commented on code in PR #50594: URL: https://github.com/apache/spark/pull/50594#discussion_r2056390062
##########
core/src/main/scala/org/apache/spark/util/UninterruptibleThread.scala:
##########
@@ -92,11 +110,17 @@ private[spark] class UninterruptibleThread(
    * interrupted until it enters into the interruptible status.
    */
   override def interrupt(): Unit = {
-    uninterruptibleLock.synchronized {
-      if (uninterruptible) {
-        shouldInterruptThread = true
-      } else {
+    if (uninterruptibleLock.synchronized {
+      shouldInterruptThread = uninterruptible
+      awaitInterruptThread = !shouldInterruptThread
+      awaitInterruptThread

Review Comment:
   I don't see how `CountDownLatch` can be used instead of the boolean flag. It is one-time use only, while the same instance of `UninterruptibleThread` may be used multiple times to `run` or `runUninterruptibly`. It should be possible to use a `Condition` instead of boolean flags, but that would require more changes. The fix follows the same approach as the one taken by the `UninterruptibleThread` author.

   > The key is that once we are in the process of interrupting, `def runUninterruptibly` must wait for interrupting to finish before moving on to clear the interrupted status.

   Correct. In the `run` case we can call `super.interrupt()` without blocking or waiting, while for `runUninterruptibly` it is necessary to ensure that either `super.interrupt()` will not be called (`shouldInterruptThread`), or to wait for `super.interrupt()` to be called (`awaitInterruptThread`) and then clear the interrupt.
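   To make the handshake concrete, here is a minimal, self-contained sketch of the coordination described above. This is not the PR's actual code: the member names (`uninterruptibleLock`, `uninterruptible`, `shouldInterruptThread`, `awaitInterruptThread`) follow the diff, but the class name, the `wait`/`notifyAll` loop, and the `runUninterruptibly` body are illustrative assumptions.

   ```scala
   // Illustrative sketch only -- NOT the code from PR #50594. It shows how the two
   // boolean flags discussed above can coordinate interrupt() with runUninterruptibly().
   class UninterruptibleThreadSketch(name: String) extends Thread(name) {

     private val uninterruptibleLock = new Object

     // All three flags are guarded by uninterruptibleLock.
     private var uninterruptible = false        // currently inside runUninterruptibly
     private var shouldInterruptThread = false  // interrupt arrived while uninterruptible; deliver later
     private var awaitInterruptThread = false   // a super.interrupt() call is in flight

     override def interrupt(): Unit = {
       val interruptNow = uninterruptibleLock.synchronized {
         // Uninterruptible: defer the interrupt. Interruptible: mark that
         // super.interrupt() is about to be delivered so runUninterruptibly can wait for it.
         shouldInterruptThread = uninterruptible
         awaitInterruptThread = !shouldInterruptThread
         awaitInterruptThread
       }
       if (interruptNow) {
         try super.interrupt()
         finally uninterruptibleLock.synchronized {
           awaitInterruptThread = false
           uninterruptibleLock.notifyAll()
         }
       }
     }

     /** Runs `f` with interrupts deferred until `f` completes. Re-entrancy is ignored here. */
     def runUninterruptibly[T](f: => T): T = {
       uninterruptibleLock.synchronized {
         // Wait for an in-flight super.interrupt() so it cannot land after we clear
         // the interrupted status below; this is the race the extra flag closes.
         while (awaitInterruptThread) {
           try uninterruptibleLock.wait()
           catch { case _: InterruptedException => shouldInterruptThread = true }
         }
         // Convert an already-delivered interrupt into a deferred one.
         shouldInterruptThread = Thread.interrupted() || shouldInterruptThread
         uninterruptible = true
       }
       try f
       finally uninterruptibleLock.synchronized {
         uninterruptible = false
         if (shouldInterruptThread) {
           super.interrupt()            // recover the deferred interrupt on exit
           shouldInterruptThread = false
         }
       }
     }
   }
   ```

   In this sketch the `wait`/`notifyAll` pair plays the role a `Condition` could play; and because the flags are reset on every pass, the same thread instance can enter and leave the uninterruptible state repeatedly, which is why a one-shot `CountDownLatch` does not fit.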