On Tue, Aug 6, 2024 at 10:21 AM Oğuz <oguzismailuy...@gmail.com> wrote:
>
> On Tuesday, August 6, 2024, Zachary Santer <zsan...@gmail.com> wrote:
>>
>> How bash is actually used should guide its development.
>
> Correct. No one waits for procsubs in their scripts or on the command line.
On Wed, Jul 3, 2024 at 8:40 PM Zachary Santer <zsan...@gmail.com> wrote:
>
> In my actual use cases, I have:
>
> (1)
> A couple different scripts that alternate reading from multiple
> different processes, not entirely unlike
> sort -- <( command-1 ) <( command-2 ) <( command-3 )
> except it's using exec and automatic fds.
>
> (2)
> shopt -s lastpipe
> exec {fd}> >( command-2 )
> command-1 |
>   while [...]; do
>     [...]
>     if [[ ${something} == 'true' ]]; then
>       printf '%s\x00' "${var}" >&"${fd}"
>     fi
>   done
> #
> exec {fd}>&-
>
> This whole arrangement is necessary because I need what's going on in
> the while loop to be in the parent shell if I'm going to use coproc
> fds directly. What's going on in the process substitution will more or
> less only begin when the fd it's reading from is closed, because it
> involves at least one call to xargs.

"No one."

It is necessary in both of the above use cases to wait for the
procsub(s) to terminate. In (1), the script can close the reading end
of the pipe from one of the procsubs before all the output from the
command within has been read. It would be best to ensure that all
procsub child processes have terminated before the script exits. In
(2), the procsub child process really only does its work once the
writing end of the pipe has been closed. Again, it is necessary to
wait for that child process to terminate before the script exits.

Because the versions of these scripts that actually get used run in
bash-4.2, which was incapable of waiting for procsubs, other solutions
had to be found: FIFOs in the case of (1), and making what would've
been in a procsub in (2) the final element of a pipeline instead.

Managing the FIFOs in (1) adds roughly 29 lines, collectively, to my
scripts that work this way, compared to the bash-4.4 versions that use
procsubs. No clue how much creating a temporary directory and the
pipes to put in it adds to the run time.

Without procsubs or a FIFO, there's no way to get the middle element
of the pipeline in (2) into the parent shell environment. It's just a
matter of luck that it didn't need to be there.

This, I guess, makes a further point about the need to anticipate what
shell programmers might want or need to be able to do. Which is what
I'm trying to help with here.

>> In a script, a child process being a job or not makes no difference,
>> from the shell programmer's perspective, unless you've got job control
>> on for some reason.
>
> That's not true. `jobs' works even if job control is disabled. `kill'
> accepts jobspecs and bash expands the `\j' escape sequence in prompt
> strings. So it does make a difference.

Things I have no experience with. Fine.

>> Only as much noise as how many procsubs you expand on the command line.
>
> And that's too many. Much more than async jobs.

Async jobs like graphical applications you've launched from the
shell, shell-based text editors, 'man', 'info', etc., maybe a build
you're running in the background. You use procsubs in the interactive
shell more than this type of stuff?
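
To put some code behind the "waiting" claim: here's roughly what (2)
looks like on bash-4.4 and later, where bash sets $! when the procsub
expands and that pid can be handed to wait. command-1 and command-2
are placeholders for the real commands, and the loop body is
simplified:

    shopt -s lastpipe
    exec {fd}> >( command-2 )   # e.g. something ending in xargs
    procsub_pid=${!}            # $! is set to the procsub's pid
    command-1 |
      while IFS= read -r var; do
        if [[ ${var} == *keep* ]]; then
          printf '%s\x00' "${var}" >&"${fd}"
        fi
      done
    exec {fd}>&-                # now command-2 sees EOF and does its work
    wait -- "${procsub_pid}"    # don't exit before command-2 finishes

Drop the last line and the script can exit while command-2 is still
running, which is the whole problem.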
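
And a rough sketch of the FIFO dance that stands in for the procsubs
in (1) under bash-4.2, which is where the extra lines and the
temporary directory come from. Again, the command names are
placeholders:

    tmpdir=$(mktemp -d) || exit
    trap 'rm -rf -- "${tmpdir}"' EXIT
    mkfifo -- "${tmpdir}/pipe-1" "${tmpdir}/pipe-2"

    command-1 > "${tmpdir}/pipe-1" & pid_1=${!}
    command-2 > "${tmpdir}/pipe-2" & pid_2=${!}

    exec {fd_1}< "${tmpdir}/pipe-1" {fd_2}< "${tmpdir}/pipe-2"

    # alternate reads from ${fd_1} and ${fd_2} here

    exec {fd_1}<&- {fd_2}<&-
    wait -- "${pid_1}" "${pid_2}"   # ordinary async jobs, so this
                                    # works even in bash-4.2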