On 7/9/24 6:12 AM, Zachary Santer wrote:
> On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey <chet.ra...@case.edu> wrote:
>> On 6/29/24 10:51 PM, Zachary Santer wrote:
>>
>> so you were then able to wait for each process substitution
>> individually, as long as you saved $! after they were created. `wait'
>> without arguments would still wait for all process substitutions
>> (procsub_waitall()), but the man page continued to guarantee only
>> waiting for the last one. This was unchanged in bash-5.2. I changed
>> the code to match what the man page specified in 10/2022, after
>> https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html
>
> Is what's being reported there undesirable behavior?
Yes, of course. It shouldn't hang, even if there is a way to work around it. The process substitution and the subshell where `wait' is running don't necessarily have a strict parent-child relationship, even if bash optimizes away another fork for the subshell.
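A minimal sketch of the shape of the problem (illustrative only, assuming a bash version whose `wait' without arguments waits for process substitutions):

    # The procsub is a child of the parent shell, but `wait' runs in
    # the pipeline's subshell, which is not the procsub's parent, so a
    # `wait' that tries to wait for all procsubs can block on a
    # process it will never reap.
    cat <( sleep 5 ) | { wait; }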
> On the other hand, allowing 'wait' without arguments to wait on all
> process substitutions would allow my original example to work, in the
> case that there aren't other child processes expected to outlive this
> pipeline.
So you're asking for a new feature, probably controlled by a new shell option.
>> We've discussed this before. `wait -n' waits for the next process to
>> terminate; it doesn't look back at processes that have already
>> terminated and been added to the list of saved exit statuses. There
>> is code tagged for bash-5.4 that allows `wait -n' to look at these
>> exited processes as long as it's given an explicit set of pid
>> arguments.
>
> I read through some of that conversation at the time. Seemed like an
> obvious goof. Kind of surprised the fix isn't coming to bash 5.3,
> honestly.
Not really, since the original intent was to wait for the *next* process to terminate. That didn't change when the ability to wait for explicit pids was added.
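A short sketch of the distinction (version-dependent; this is the pre-bash-5.4 behavior described above):

    sleep 1 & pid=$!
    sleep 2           # by now $pid has terminated and been reaped
    wait -n "$pid"    # fails with "no such job" instead of returning
                      # the saved exit status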
> And why "no such job" instead of "not a child of this shell"?
Because wait -n takes pid arguments that are part of jobs.
>> They're similar, but they're not jobs. They run in the background,
>> but you can't use the same set of job control primitives to
>> manipulate them. Their scope is expected to be the lifetime of the
>> command they're a part of, not run in the background until they're
>> wanted.
>
> Would there be a downside to making procsubs jobs?
If you want to treat them like jobs, you can do that. It just means doing more work using mkfifo and giving up on using /dev/fd. I don't see it as being worth the work to do it internally.
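A rough sketch of doing that by hand, with `producer' and `consumer' standing in for real commands and an arbitrary fifo path:

    fifo=$(mktemp -u) && mkfifo "$fifo"   # named pipe instead of /dev/fd
    consumer < "$fifo" &                  # a real job: visible in `jobs',
    pid=$!                                # usable with %%, kill, wait
    producer > "$fifo"
    wait "$pid"
    rm -f "$fifo"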
>>> Consider my original example:
>>>
>>> command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )
>>>
>>> Any nontrivial command is going to take more time to run than it
>>> took to be fed its input.
>>
>> In some cases, yes.
>>
>>> The idea that no process in a process substitution will outlive its
>>> input stream precludes a reading process substitution from being
>>> useful.
>>
>> It depends on whether or not it can cope with its input (in this
>> case) file descriptor being invalidated. In some cases, yes, in some
>> cases, no.
>
> When you say "invalidated," are you referring to something beyond the
> process in a reading process substitution simply receiving EOF?
> Everything should be able to handle that much.
They're pipes, so there are more semantics beyond receiving EOF on reading. Writing on a pipe where the reader has gone away, for example, like below.
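For instance (a sketch; the outer subshell is there because it, not the procsub, is the writer that receives the SIGPIPE):

    ( exec {fd}> >( head -n 1 )   # reader exits after one line
      echo one >&"$fd"            # consumed by head
      sleep 1                     # give the reader time to go away
      echo two >&"$fd"            # write on a readerless pipe: SIGPIPE
    )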
>>> And nevermind
>>>
>>> exec {fd}< <( command )
>>>
>>> I shouldn't do this?
>>
>> Sure, of course you can. You've committed to managing the file
>> descriptor yourself at this point, like any other file descriptor you
>> open with exec.
>
> But then, if I 'exec {fd}<&-' before consuming all of command's
> output, I would expect it to receive SIGPIPE and die, if it hasn't
> already completed. And I might want to ensure that this child process
> has terminated before the calling script exits.
Then save $! and wait for it. The only change we're talking about here is to accommodate your request to be able to wait for multiple process substitutions created before you have a chance to save all of the pids.
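Something like the following, with `some-command' as a placeholder:

    exec {fd}< <( some-command )   # reading procsub
    pid=$!                         # saved immediately, before anything
                                   # else runs asynchronously
    read -r line <&"$fd"           # consume as much output as needed
    exec {fd}<&-                   # close our end: the writer gets
                                   # SIGPIPE on its next write
    wait "$pid"                    # make sure it's gone before exiting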
>>> Why should these be different in practice?
>>>
>>> (1)
>>> mkfifo named-pipe
>>> child process command < named-pipe &
>>> {
>>>     foreground shell commands
>>> } > named-pipe
>>>
>>> (2)
>>> {
>>>     foreground shell commands
>>> } > >( child process command )
>>
>> Because you create a job with one and not the other, explicitly
>> allowing you to manipulate `child' directly?
>
> Right, but does it have to be that way? What if the asynchronous
> processes in process substitutions were jobs?
If you want them to work that way, take a shot at it. I don't personally think it's worth the effort.
>> If you need to capture all the PIDs of all your background processes,
>> you'll have to launch them one at a time. This may mean using FIFOs
>> (named pipes) instead of anonymous process substitutions, in some
>> cases.
>
> Bash is already tracking the pids for all child processes not waited
> on, internally. So I imagine it wouldn't be too much work to make that
> information available to the script it's running.
So an additional feature request.
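In the meantime, launching them one at a time for the tee example might look like this (a sketch; it assumes tee can open /dev/fd/N paths on your system):

    pids=()
    exec {fd2}> >( command-2 ); pids+=( "$!" )
    exec {fd3}> >( command-3 ); pids+=( "$!" )
    exec {fd4}> >( command-4 ); pids+=( "$!" )
    command-1 | tee "/dev/fd/$fd2" "/dev/fd/$fd3" >&"$fd4"
    exec {fd2}>&- {fd3}>&- {fd4}>&-   # send EOF to all three
    wait "${pids[@]}"                 # wait for each saved pid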
> Maybe a single middle-ground array variable, listing the pids of all
> child processes forked (and not waited on) since the last time the
> array variable was referenced, would be more easily implemented.
Maybe in some future version.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates

Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/