On Tue, Oct 21, 2025, at 15:33, Edmond Dantes wrote:
> > When I await(), do I need to do it in a loop, or just once?
> 
> It depends on what is being awaited.
> On one hand, it would probably be convenient to have many different
> operations for different cases, but then we make the language
> semantics more complex.

> > Coroutines become Future that only await once, while Signal is something 
> > that can be awaited many times.
> At the moment, only such objects exist. It’s hard to say whether there
> will be others.
> Although one can imagine an Interval object, there are some doubts
> about whether such an object should be used in a while await loop,
> because from a performance standpoint, it’s not very efficient.

It might be good to clarify this when we talk about the Awaitable Interface 
then? Maybe something like:

"In PHP 8.6 the only awaitables are single-completion. Future versions may add 
multi-event awaitables." just to clear it up for early adopters?

> > But the example setChildScopeExceptionHandler does exactly this!
> The Scope-level handler does not interfere with coroutine completion.
> And it is not called because the cancellation exception is "absorbed"
> by the coroutine.

:thumbsup:

> > Further, much framework/app code uses the $previous to wrap exceptions as 
> > they bubble up,
> 
> If a programmer wants to wrap an exception in their own, let them.
> No one forbids catching exceptions; they just shouldn’t be suppressed.

> 
> > Async\isCancellation(Throwable): bool
> Why make a separate function if you can just walk through the chain?

If everyone writes their own isCancellation(), we risk divergent 
implementations (not to mention that the engine's version will be faster in C, 
and it basically needs to be checked in every catch that might await). Having 
one blessed function guarantees consistent detection and opens the door to 
static-analysis support.

> 
> > Minor nit: in the Async\protect section, it would be nice to say that 
> > cancellations being AFTER the protect() are guaranteed, and also specify 
> > reentry/nesting of protect(). Like what happens here:
> 
> That’s a good case! Re-entering protect should be forbidden; it must
> not be allowed.

<3 that's good to know! It definitely needs to be in the RFC. If you don't mind 
me asking: why is this the case?
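For the RFC text, spelling out the forbidden shape would help. Something like 
this (illustrative only; Async\protect is the proposed API, and the exact 
exception thrown is my guess):

```php
Async\protect(function () {
    // ... critical section, cancellation deferred ...
    Async\protect(function () {
        // re-entry: per the answer above, this must throw
        // (some Async exception?) rather than silently nest
    });
});
```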

> 
> > Also, if I'm reading this correctly, a coroutine can mark itself as 
> > canceled, yet run to completion; however anyone await()'ing it, will get a 
> > CancellationException instead of the completed value?
> If a coroutine is canceled, its return value will be ignored.
> However, of course, it can still call return, and that will work
> without any issues.
> I considered issuing a warning for such behavior but later removed it,
> since I don’t see it as particularly dangerous.
> This point requires attention, because there’s a certain “flexibility”
> here that can be confusing. However, the risk in this case is low.

I would find that surprising behaviour -- if you cancel a context in Go, the 
operation may or may not complete, but you get back both the completion value 
(if it completed) and/or the error. In C#, cancellation throws an exception and 
the operation never completes. Languages have different ways to do it, but the 
RFC should document what the behaviour is and how to handle this case. 
Ergonomics matter as much as the feature existing.
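To make the surprise concrete, here is the case as I understand it (sketch 
only; the spawn/cancel/await names approximate the RFC's API):

```php
$coroutine = Async\spawn(function () {
    // ... work that has already passed its last suspension point ...
    return 'result';            // still executes after cancellation
});

$coroutine->cancel();           // mark as canceled mid-flight

$value = $coroutine->await();   // throws a cancellation exception;
                                // the returned 'result' is silently discarded
```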

As far as observability goes, it might be a good idea to issue a notice instead 
of a warning. Notices are often suppressed and rarely cause any issues, but in 
development, seeing one would at least let me know that something was going on 
that I should investigate.

> 
> > Allowing destructors to spawn feels extremely dangerous to me (but 
> > powerful). These typically -- but not always -- run between the return 
> statement and the next line (typically best to visualize that as the "}" 
> > since it runs in the original scope IIRC). That could make it 'feel like' 
> > methods/functions are hanging or never returning if a library abuses this 
> > by suspending or awaiting something.
> 
> Launching coroutines in destructors is indeed a relatively dangerous
> operation, but for different reasons mainly related to who owns such
> coroutines. However, I didn’t quite understand what danger you were
> referring to?
> 
> Asynchronous operations, as well as coroutine launching, are indeed
> used in practice. The code executes properly, so I don’t quite see
> what risks there could be, apart from potential resource leaks caused
> by faulty coroutines.

I think we missed each other here. Consider the following code:

function test() {
    $r = new AsyncResource();
    return 42; // destructor suspends here
}

Would this delay the caller's return until the destructor's coroutine finished, 
or is it detached? If detached, can it interleave safely with subsequent code? 
This should be documented in the RFC so people can plan for it and use it 
appropriately (such as managing transactions or locks inside destructors).

> > async.zombie_coroutine_timeout says 2 seconds in the text, but 5 seconds in 
> > the php.ini section.
> Thanks.
> 
> > What is defined as "application considered finished?" FrankenPHP workers, 
> > for instance, don’t "finish" — is there a way to reap zombies manually?
> 
> The Scheduler keeps track of the number of coroutines being executed.
> When the number of active coroutines reaches zero, the Scheduler stops
> execution. Zombie coroutines are not counted among those that keep the
> execution running. If PHP is running in worker mode, then the worker
> code must correctly keep the execution active. But even workers
> sometimes need to shut down.

I have some workers that haven't restarted since April. :) So having a way to 
manually reap zombies (much like we do with OS-level code when running as PID 
1), and to track them, would be nice to have, at least as part of the scheduler 
API.

> 
> >  it would be good to specify ordering and finally/onFinally execution here
> Doesn’t the RFC define the order of onFinally handler execution?
> onFinally handlers are executed after the coroutine or the Scope has 
> completed.
> onFinally is not directly related to dispose() in any way.
> 
> When dispose() is called, coroutine cancellation begins. This process
> may take some time. Only after the last coroutine has stopped will
> onFinally be invoked. In other words, you should not attempt to link
> the calls of these methods in any way.

This should be documented in the RFC; it still doesn't explain what the order 
of operations is, though. This matters because if you are doing cleanup during 
disposal, you need to know what will still be around. (For reference, the order 
of operations for GC is well documented and defined at 
https://www.php.net/manual/en/features.gc.collecting-cycles.php, which is the 
level of detail I'm expecting to see here.)

> 
> > Maybe something like Async\isEnabled() to know whether I should use fibers 
> > or not.
> Good idea,
> considering that such a function actually exists at the C code level.
> 
> > Is this "same exception" mean this literally, or is it a clone? If it is 
> > the same, what prevents another code path from mutating the original 
> > exception before it gets to me?
> It’s the exact same object; that is, a reference to the same instance.
> So if someone modifies it, those changes will, of course, take effect.

This should probably be documented in the RFC: "Exceptions and returned objects 
are shared objects; mutating them is undefined behavior if there are multiple 
awaiters."
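That would also match how $previous already behaves in plain PHP today, which 
is easy to demonstrate:

```php
<?php
$inner   = new RuntimeException('original');
$wrapped = new LogicException('wrapper', 0, $inner);

// getPrevious() hands back the very same instance, not a clone, so any
// mutation through one reference is visible through the other.
var_dump($wrapped->getPrevious() === $inner); // bool(true)
```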

> 
> > Is there also a timeout on Phase 1 shutdown? Otherwise, if it is only an 
> > exception, then this could hang forever.
> 
> That’s true! A hang is indeed possible. I’m still not sure whether
> it’s worth adding an auxiliary mechanism to handle such cases, because
> that would effectively make PHP “smarter” than the programmer. I
> believe that a language should not try to be smarter than the
> programmer; if the application runs in a certain way, then it’s
> probably meant to be that way.

I think of it more as observability than smarts (especially if the timeout is 
configurable)... otherwise, how would you even know whether it is hanging on 
shutdown vs. doesn't even know it is supposed to be shutting down? I'm reminded 
of certain CLI tools that require me to issue a SIGTSTP (ctrl-z) so I can send 
a SIGTERM or SIGKILL, because SIGINT (ctrl-c) doesn't appear to work. If it is 
my program, is there a bug in my SIGINT handlers, or is it hanging during 
shutdown? Having a timeout there would at least protect my customers/users from 
getting stuck due to a bug, and I could always set the timeout to something 
infinite-ish (0? -1?) if that is the behaviour I want.

— Rob
