On Sun, 28 Apr 2024 07:02:40 GMT, Dean Long <dl...@openjdk.org> wrote:
>> Move immutable nmethod's data from CodeCache to C heap. It includes
>> `dependencies, nul_chk_table, handler_table, scopes_pcs, scopes_data,
>> speculations`. It accounts for about 30% (optimized VM) of space in CodeCache.
>>
>> Use HotSpot's `os::malloc()` to allocate memory in C heap for immutable
>> nmethod's data. Call `vm_exit_out_of_memory()` if allocation failed.
>>
>> Shuffle field order and change some field sizes from 4 to 2 bytes to avoid
>> increasing nmethod's header size.
>>
>> Tested tier1-5, stress, xcomp.
>>
>> Our performance testing does not show any difference.
>>
>> Example of updated `-XX:+PrintNMethodStatistics` output is in the JBS comment.
>
> It only makes sense if the immutable data heap is not also used for other
> critical resources. If malloc or metaspace were used as the immutable data
> heap, failures in those heaps are normally fatal, because other critical
> resources (monitors, classes, etc.) are allocated from there, so any failure
> means the JVM is about to die. There's no reason to find a fall-back method
> to allocate a new nmethod in that case.

> Just to be clear @dean-long, you're saying failure to allocate immutable
> data in the C heap should result in a fatal error? Makes sense to me, as the
> VM must indeed be very close to crashing anyway in that case. It also
> obviates the need for propagating `out_of_memory_error` to JVMCI code.

I hadn't thought it through that far, actually. I was only pointing out that the proposed fall-back:

> increase nmethod size and put immutable data there (as before).

isn't worth the trouble. But making the C heap failure fatal immediately is reasonable, especially if it simplifies JVMCI error reporting.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/18984#issuecomment-2082083104
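
For readers following the allocation-failure discussion above, here is a minimal sketch (not the code from this PR) of the pattern being debated: allocating the nmethod's immutable data with HotSpot's `os::malloc()` and treating failure as fatal via `vm_exit_out_of_memory()` rather than falling back to a larger CodeCache allocation. The helper name `allocate_immutable_data` and the use of the `mtCode` tag are illustrative assumptions.

```c++
// Minimal sketch only -- not the actual patch. It relies on HotSpot's
// existing os::malloc() and vm_exit_out_of_memory() APIs; the helper
// name and the mtCode memory tag are illustrative assumptions.
#include "runtime/os.hpp"
#include "utilities/debug.hpp"

static address allocate_immutable_data(int immutable_data_size) {
  // Allocate the immutable data (dependencies, handler/nul_chk tables,
  // scopes, speculations) in the C heap instead of the CodeCache.
  address data = (address)os::malloc(immutable_data_size, mtCode);
  if (data == nullptr) {
    // No CodeCache fallback: if a C-heap allocation of this size fails,
    // the VM is effectively out of native memory already, so exit.
    vm_exit_out_of_memory(immutable_data_size, OOM_MALLOC_ERROR,
                          "nmethod: no space for immutable data");
  }
  return data;
}
```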