While trying to modify some code I found in an earlier post for running N jobs in parallel, I came across the interesting behavior illustrated below. It appears that the wait builtin returns before all of my SIGUSR2 signals have been processed. Is this a bug or just a fact of life? I understand that it isn't possible to know whether a process will receive a signal in the future, but I am surprised that the signals aren't received and processed in time in this case.
On a related note, I think it would be very nice if there were a way to wait for ANY background job to finish. Currently it seems that one can only wait either for ALL jobs or for a single job with a given PID. Would it be possible to have something like 'wait -' that would block until any one of the current background jobs completes? This would make writing simple parallel loops much easier; the busy-wait/SIGUSR2 approach below is kind of a hack, and for such a simple problem I would prefer not to depend on GNU parallel. (A rough sketch of the loop I have in mind is in the postscript at the end of this message.)

#!/bin/bash

nrunning=0
nmax=3

function job_wrap
{
    echo "sleeping: $2 nrunning: $nrunning"
    eval "$@"
    kill -s USR2 $$
}

trap ': $(( --nrunning ))' USR2

for x in {1..20}
do
    while [[ nrunning -ge nmax ]]
    do
        : # busy wait
    done
    : $(( ++nrunning ))
    job_wrap sleep $(( RANDOM % 3 )) &
done

echo 'start wait'
wait
trap - USR2
echo 'end wait'

$ ./par_sigusr
sleeping: 0 nrunning: 1
sleeping: 2 nrunning: 2
sleeping: 0 nrunning: 3
sleeping: 1 nrunning: 3
sleeping: 0 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 0 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 0 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 0 nrunning: 3
sleeping: 1 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 2 nrunning: 3
sleeping: 1 nrunning: 3
sleeping: 2 nrunning: 3
start wait
sleeping: 2 nrunning: 3
end wait
$ ./par_sigusr: line 10: kill: (16287) - No such process
./par_sigusr: line 10: kill: (16287) - No such process

Thanks!

-------
Elliott Forney
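P.S. To make the request concrete, here is a rough sketch of the kind of loop such a builtin would allow. The 'wait -' syntax is of course made up (it is exactly the feature being requested, so it does not work in bash today), and the jobs -pr | wc -l counting is only there to show the bookkeeping that the hypothetical builtin would let me keep trivial:

    nmax=3

    for x in {1..20}
    do
        # if we are already at the limit, block until any one job exits
        # ('wait -' is the proposed, not-yet-existing builtin)
        if (( $(jobs -pr | wc -l) >= nmax ))
        then
            wait -
        fi
        sleep $(( RANDOM % 3 )) &
    done

    # wait for the stragglers
    wait

No traps, no signals, no counter shared with subshells, just start a job whenever a slot is free.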