On Wed, 19 Jun 2024 16:50:22 GMT, Daniel Jeliński <djelin...@openjdk.org> wrote:

>> We use 2 ParkEvent instances per thread. The ParkEvent objects are never 
>> freed, but they are recycled when a thread dies, so the number of live 
>> ParkEvent instances is proportional to the maximum number of threads that 
>> were live at any time.
>> 
>> On Windows, the ParkEvent object wraps a kernel Event object. Kernel objects 
>> are a limited and costly resource. In this PR, I replace the use of kernel 
>> events with user-space synchronization.
>> 
>> The new implementation uses the WaitOnAddress and WakeByAddressSingle 
>> functions to implement synchronization. These functions have been available 
>> since Windows 8, and we only support Windows 10 and newer, so OS support 
>> should not be a problem.
>> 
>> WaitOnAddress was observed to return spuriously, so I added the necessary 
>> code to recalculate the timeout and continue waiting.
>> 
>> Tier1-5 tests passed. Performance tests were... inconclusive. For example, 
>> `ThreadOnSpinWaitProducerConsumer` reported 30% better results, while 
>> `LockUnlock.testContendedLock` results were 50% worse. 
>> 
>> Thoughts?
>
> Daniel Jeliński has updated the pull request incrementally with one 
> additional commit since the last revision:
> 
>   Update comment

I don't fully understand it either, but:
- the table is in user space, not in the kernel
- entries are only inserted while a wait is in progress, and are removed once 
the wait completes
- when there is a race in waking up the thread, the kernel remembers that the 
thread needs to be woken immediately the next time it waits; I suppose that's 
a flag in the thread structure, so no extra memory is required, but I couldn't 
find a definitive answer in the docs. (A rough sketch of the user-space wait 
loop itself is below.)
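
For anyone following along, here is a minimal, self-contained sketch of the 
pattern described above, not the actual os_windows.cpp change: a park/unpark 
pair built on WaitOnAddress / WakeByAddressSingle, where the remaining timeout 
is recomputed whenever WaitOnAddress returns spuriously. The names 
(ParkEventSketch, park_millis, unpark) are made up for illustration, and the 
real ParkEvent carries more state than a single flag.

```c
#include <windows.h>

// Link with Synchronization.lib for WaitOnAddress / WakeByAddressSingle.
#pragma comment(lib, "Synchronization.lib")

typedef struct {
    volatile LONG signaled;   // 0 = not signaled, 1 = signaled
} ParkEventSketch;

// Block until 'signaled' becomes 1 or the (finite) timeout expires.
static void park_millis(ParkEventSketch* ev, DWORD timeout_ms) {
    ULONGLONG deadline = GetTickCount64() + timeout_ms;
    LONG undesired = 0;  // WaitOnAddress blocks while *address == undesired

    // Try to consume the signal (1 -> 0); loop while it is not set.
    while (InterlockedCompareExchange(&ev->signaled, 0, 1) == 0) {
        ULONGLONG now = GetTickCount64();
        if (now >= deadline) {
            return;  // timed out
        }
        // WaitOnAddress may return because the value changed, because of a
        // wake, or spuriously; re-check the flag and pass the recomputed
        // remaining timeout on every iteration.
        WaitOnAddress(&ev->signaled, &undesired, sizeof(LONG),
                      (DWORD)(deadline - now));
    }
    // The CAS consumed the signal, so we return immediately.
}

static void unpark(ParkEventSketch* ev) {
    InterlockedExchange(&ev->signaled, 1);
    // Wake at most one thread blocked in WaitOnAddress on this address; if
    // the waiter is not in the table yet, the kernel remembers the pending
    // wake as described above.
    WakeByAddressSingle((PVOID)&ev->signaled);
}
```

This is only meant to show the shape of the wait loop and the timeout 
recalculation; it ignores INFINITE timeouts and the rest of the ParkEvent 
protocol.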

Sources:
- Windows internals book 
https://www.oreilly.com/library/view/windows-internals-part/9780135462348/ch08.xhtml
- Raymond Chen's blog: 
https://devblogs.microsoft.com/oldnewthing/20160826-00/?p=94185

-------------

PR Comment: https://git.openjdk.org/jdk/pull/19778#issuecomment-2182234703
