On Fri, 1 Nov 2024 15:21:50 GMT, Axel Boldt-Christmas <abold...@openjdk.org> 
wrote:

>> Patricio Chilano Mateo has updated the pull request incrementally with two 
>> additional commits since the last revision:
>> 
>>  - add comment to ThreadService::find_deadlocks_at_safepoint
>>  - Remove assignments in preempt_kind enum
>
> src/hotspot/share/oops/stackChunkOop.cpp line 445:
> 
>> 443: 
>> 444: void stackChunkOopDesc::transfer_lockstack(oop* dst) {
>> 445:   const bool requires_gc_barriers = is_gc_mode() || requires_barriers();
> 
> Given how careful we are in `Thaw` not to call `requires_barriers()` twice 
> and to use `_barriers` instead, it would probably be nicer to pass in 
> `_barriers` as a bool. 
> 
> There is only one other place where we do the extra call, and that is in 
> `fix_thawed_frame`, but that only happens after we are committed to the slow 
> path, so fixing it might be nice for completeness but should be negligible 
> for performance. Here, however, we might still be in our new "medium" path 
> where we could still do a fast thaw.

Good, passed as an argument now.
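
For reference, a minimal sketch of the shape of that change (parameter name and
body are illustrative, not necessarily the exact code in the PR):

```c++
// Sketch only: the caller (thaw) already caches requires_barriers() in
// _barriers, so the flag is passed in instead of being recomputed here.
void stackChunkOopDesc::transfer_lockstack(oop* dst, bool requires_barriers) {
  const bool requires_gc_barriers = is_gc_mode() || requires_barriers;
  // ... copy the lockstack oops into dst, using requires_gc_barriers to
  //     choose between HeapAccess and raw loads ...
}
```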

> src/hotspot/share/oops/stackChunkOop.cpp line 460:
> 
>> 458:     } else {
>> 459:       oop value = *reinterpret_cast<oop*>(at);
>> 460:       HeapAccess<>::oop_store(reinterpret_cast<oop*>(at), nullptr);
> 
> Using HeapAccess when `!requires_gc_barriers` is wrong. This would crash with 
> ZGC when/if we fix the flags race and change `relativize_chunk_concurrently` 
> to only be conditioned on `requires_barriers()` / `_barriers` (and allow the 
> retry_fast_path "medium" path). 
> So either use `*reinterpret_cast<oop*>(at) = nullptr;` or do what my initial 
> suggestion with `clear_lockstack` did and just omit the clearing. Before 
> `requires_barriers()` is true, we are allowed to reuse the stackChunks, so 
> trying to clean them up seems fruitless.

Ok, I just omitted clearing the oop.
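
The per-slot copy now looks roughly like this (a sketch; details such as how
`dst` is advanced may differ in the actual patch):

```c++
// Sketch only: with barriers, load and clear through HeapAccess; without
// barriers, do a raw load and leave the slot as is, since pre-barrier chunks
// may be reused anyway and clearing buys nothing.
oop value;
if (requires_gc_barriers) {
  value = HeapAccess<>::oop_load(reinterpret_cast<oop*>(at));
  HeapAccess<>::oop_store(reinterpret_cast<oop*>(at), nullptr);
} else {
  value = *reinterpret_cast<oop*>(at);  // no clearing of the slot
}
*dst = value;
```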

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/21565#discussion_r1826149674
PR Review Comment: https://git.openjdk.org/jdk/pull/21565#discussion_r1826148888
