On Wed, 25 Oct 2023 09:41:33 GMT, Jorn Vernee <jver...@openjdk.org> wrote:

> But it sounds like you're saying that plain user code should never result in 
> a VM error (if we can help it).

Yes, exactly. Granted, there are resource exhaustion situations where overall 
progress can get sluggish (e.g. when we are close to a Java heap OOME), but we 
do not usually escalate that to shutting down the whole JVM, unless the user 
explicitly requests it, e.g. with `-XX:+ExitOnOutOfMemoryError`.
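
For reference, a minimal illustrative launch that opts into that global 
shutdown (the heap size and class name are made up for the example):

```
java -XX:+ExitOnOutOfMemoryError -Xmx64m MyApp
```

With the flag set, the VM exits on the first `OutOfMemoryError` instead of 
leaving recovery to the application.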

I think the situation for `RuntimeStub`-s is a bit different, as we expect to 
have a more or less constant number of them, mostly allocated upfront. Failing 
to allocate a `RuntimeStub` then looks like a configuration issue. But for 
`UpcallStub`-s -- correct me if I am wrong here -- we can have an unbounded 
number of them, right? That exposes us to a globally visible VM failure if 
there is misbehaving code.

This is not a problem with this concrete PR, which I think is fine, but it 
exposes a larger, more important architectural question.

>> src/hotspot/share/code/codeBlob.cpp line 761:
>> 
>>> 759:     MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
>>> 760:     blob = new (size) UpcallStub(name, cb, size, receiver, frame_data_offset);
>>> 761:     if (blob == nullptr) {
>> 
>> I think it would be safer to call into `fatal` without having 
>> `CodeCache_lock` held. Move it out of `MutexLocker` scope?
>
> This pattern follows what is done in `RuntimeStub::new_runtime_stub`, which 
> also calls `fatal` under the lock.
> 
> I agree it's probably better to call it outside of the lock (and that is 
> something I noticed in the original change for RuntimeStub as well). I'm 
> happy to fix it for both.

It is okay to handle `RuntimeStub` in a separate (and more cleanly 
backportable) PR. Let's make the new code do the right thing from the start.
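
For illustration, here is a minimal sketch of what "calling `fatal` outside of 
the lock" could look like, assuming the allocation site resembles the quoted 
snippet (the surrounding code and the error message are approximations, not 
the actual patch):

```c++
UpcallStub* blob = nullptr;
unsigned int size = CodeBlob::allocation_size(cb, sizeof(UpcallStub));
{
  // Only the CodeCache allocation itself needs the lock.
  MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
  blob = new (size) UpcallStub(name, cb, size, receiver, frame_data_offset);
}
// CodeCache_lock has been released here, so error reporting never runs while
// holding it.
if (blob == nullptr) {
  fatal("Failed to allocate UpcallStub");  // message is illustrative
}
return blob;
```

This keeps the allocation itself under `CodeCache_lock`, while the VM error 
path runs lock-free.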

-------------

PR Comment: https://git.openjdk.org/jdk/pull/16311#issuecomment-1778900128
PR Review Comment: https://git.openjdk.org/jdk/pull/16311#discussion_r1371465261
