Hi Jan, thank you so much for this report!
java.lang.invoke is part of the core libraries, so I replied to the core-libs 
list instead.
I see the bottom-up view of MethodHandleNatives.resolve, but the top-down view 
is missing, and I need one to better understand what's happening. Is 
InvokerBytecodeGenerator.lookupPregenerated part of this flame graph?

FYI, there is a preexisting problem with the lookup of pre-generated 
MethodHandle infrastructure: we use that resolve call, but internally the JVM 
creates Error instances and throws them (as part of regular method resolution), 
which triggers this stack trace filling. The stacks will look odd; in case you 
are interested, here is where they are generated and discarded: 
https://github.com/openjdk/jdk/blob/3cf3f300de1e9d2c8767877ed3a26679e34b7d22/src/hotspot/share/prims/methodHandles.cpp#L792
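The JVM-internal errors above are not directly reachable from Java code, but a 
user-level analogue shows the same cost: a failed lookup materializes an 
exception, stack trace filling included, before the caller can react. A minimal 
sketch (the class and method names here are made up for illustration):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class FailedLookup {
    public static void main(String[] args) {
        try {
            // This method does not exist on String, so resolution fails and a
            // NoSuchMethodException is constructed, stack trace and all.
            MethodHandles.lookup().findVirtual(
                    String.class, "noSuchMethod", MethodType.methodType(void.class));
        } catch (NoSuchMethodException | IllegalAccessException e) {
            // The throwable already carries a filled-in stack trace; that
            // filling is the dominant cost when this happens in a hot loop.
            System.out.println(e.getStackTrace().length > 0);
        }
    }
}
```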

If this is the problem you are encountering, we are currently thinking about 
generating a list of present methods so we don't go through the VM's 
resolution. A temporary workaround is to generate a CDS (Class Data Sharing 
(oracle.com)<https://docs.oracle.com/en/java/javase/22/vm/class-data-sharing.html>)
 archive from a boot run so that these methods are present; it will also 
slightly improve your startup speed. Alternatively, you can generate a jlink 
image (which is more involved). Neither solution works for libraries, though. 
(You can try to grasp how these two mechanisms work at 
https://github.com/openjdk/jdk/pull/19164)

Maybe another way to verify the problem is to run with the system property 
-Djava.lang.invoke.MethodHandle.TRACE_RESOLVE=true: java.lang.invoke will then 
print whether it can resolve methods. Each time resolution fails, it 
unfortunately goes through this error creation and stack trace filling. If you 
have a CDS archive, these resolutions may print success, provided the 
infrastructure was generated in the run that produced the archive.

Finally, before we take action, I still need you to verify that the issue I 
described is the root cause; you can confirm it either by looking at the 
callers of MHN.resolve, or by running with 
-Djava.lang.invoke.MethodHandle.TRACE_RESOLVE=true and checking its output. I 
would prefer that you test on the latest versions, such as 22 or the test 
builds at https://jdk.java.net/23, in addition to your baseline supported 
version 17. I am more than glad to help if you encounter further issues, or if 
this turns out not to be the cause. And again, thank you for taking the time to 
diagnose and report this!

Best regards,
Chen Liang
________________________________
From: discuss <discuss-r...@openjdk.org> on behalf of Jan Bernitt 
<jaanbern...@gmail.com>
Sent: Friday, August 9, 2024 6:15 AM
To: disc...@openjdk.org <disc...@openjdk.org>
Subject: Costs of caught exceptions during method handle resolve

Hi everyone!

I recently found a performance issue in my library that goes back to large 
numbers of exceptions being thrown and caught, where the costs are dominated by 
filling in the stack trace and resolving method handles.

So I investigated ways to avoid causing the exceptions. One source of the 
exceptions was somewhere inside the internal method handle resolve. I ended up 
caching the method handles to avoid this happening over and over again, as my 
library uses method handles all the time :D
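The caching workaround described above can be sketched roughly as follows 
(class and key names are hypothetical, not taken from the actual library): each 
distinct handle is resolved once and reused, so the internal resolution, and 
any exceptions it throws and catches, happens a single time instead of on 
every call.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HandleCache {
    // Cache keyed by a descriptive name; computeIfAbsent resolves each
    // handle at most once, even under concurrent access.
    private static final Map<String, MethodHandle> CACHE = new ConcurrentHashMap<>();

    static MethodHandle lengthHandle() {
        return CACHE.computeIfAbsent("String.length", key -> {
            try {
                return MethodHandles.lookup().findVirtual(
                        String.class, "length", MethodType.methodType(int.class));
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    public static void main(String[] args) throws Throwable {
        // Repeated invocations hit the cache; only the first call resolves.
        int len = (int) lengthHandle().invokeExact("hello");
        System.out.println(len);
    }
}
```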

As far as I can tell, this is something the JDK could solve differently 
internally to avoid the cost of throwing and catching the exception, which 
should improve the performance of method handles significantly if you use them 
(including the lookup) in a hot loop. You can find more details here: 
https://github.com/dhis2/json-tree/pull/66

If this is of interest to anyone, feel free to ask for more details.
Also, if anyone sees that I am just using method handles wrong and thereby 
causing the internal exception, please let me know.

Best
Jan
