On Tue, 11 Jul 2023 17:03:26 GMT, Alan Bateman <al...@openjdk.org> wrote:

> StructuredTaskScope.shutdown can sometimes fail to interrupt the thread for a 
> newly forked subtask, causing join to block until the subtask completes. The 
> "hang" can be reproduced with a stress test that shuts down the scope while a 
> long-running subtask is being forked. The bug is in the underlying thread 
> flock code, which filters the threads to just the live threads and so 
> filters out new/unstarted threads. That filtering was left over from some 
> refactoring in the loom repo a long time ago and should have been removed.
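The effect of that filtering can be illustrated without the `ThreadFlock` internals. This is only a sketch, not the actual JDK code: a thread that has been constructed but not yet started reports `isAlive() == false`, so a shutdown path that interrupts only live threads would never reach it.

```java
import java.util.List;
import java.util.stream.Stream;

public class LiveFilterSketch {
    public static void main(String[] args) throws Exception {
        // A thread that exists but has not been started yet, as could be
        // the case while a fork is mid-flight when shutdown runs.
        Thread unstarted = new Thread(() -> { });

        // A thread that is already running.
        Thread running = new Thread(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException e) { }
        });
        running.start();

        // Filtering to live threads, as the old code did, silently drops
        // the unstarted thread from the set of threads to interrupt.
        List<Thread> toInterrupt = Stream.of(unstarted, running)
                .filter(Thread::isAlive)
                .toList();

        System.out.println(toInterrupt.contains(unstarted)); // false: never interrupted
        System.out.println(toInterrupt.contains(running));   // true

        running.interrupt();
    }
}
```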

test/jdk/java/util/concurrent/StructuredTaskScope/StressShutdown.java line 83:

> 81:             // fork subtask to shutdown
> 82:             scope.fork(() -> {
> 83:                 scope.shutdown();

Hello Alan, the proposed source change in `ThreadFlock` and the reason for that 
change look good to me.
However, as far as I understand, this test may not reproduce the issue where 
shutdown is called while there are threads that are about to start. From the 
API and implementation of `scope.fork()`, when `fork()` returns, a `Thread` 
for the subtask is guaranteed to have been started (setting aside the 
failed-to-start cases). So in this test, when we reach this point and attempt 
a `shutdown()`, all 15 "beforeShutdown" subtasks would already have threads 
that are started and alive; i.e. there won't be any "about to be started" 
thread. As for the 15 "afterShutdown" `fork()` calls that follow, they would 
notice that the scope is shut down and won't start new threads.
Did I misunderstand this test?
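To make the concern concrete, here is a minimal sketch of that reading of the fork semantics. The `fork` helper below is hypothetical, not the real `StructuredTaskScope.fork`; it just mirrors the claim that the subtask's thread is started before the call returns, so by the time shutdown can run the thread is already live and interruptible.

```java
public class ForkSemanticsSketch {
    // Hypothetical mini-fork: the thread is started before this
    // method returns, mirroring the reading of scope.fork() above.
    static Thread fork(Runnable task) {
        Thread t = new Thread(task);
        t.start();
        return t;
    }

    public static void main(String[] args) {
        Thread t = fork(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException e) { }
        });

        // By the time fork() has returned, the thread is live, so a
        // shutdown that interrupts live threads would already see it;
        // there is no window with a constructed-but-unstarted thread.
        System.out.println(t.isAlive()); // true

        t.interrupt();
    }
}
```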

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/14833#discussion_r1261060214
