Chet Ramey wrote in
 <7402031f-424c-4766-ba70-71771c9dc...@case.edu>:
 |On 11/8/23 8:12 PM, Steffen Nurpmeso wrote:
 |> The "problem" with the current way bash is doing it is that bash's
 |> job handling does not recognize jobs die under the hood:
 |>
 |> $ jobs
 |> [1]-  Stopped                 LESS= less -RIFe README
 |> [2]+  Stopped                 LESS= less -RIFe TODO
 |> $ kill $(jobs -p)
 |> $
 |>
 |> ^ nothing
 |>
 |> $ jobs
 |> [1]-  Stopped                 LESS= less -RIFe README
 |> [2]+  Stopped                 LESS= less -RIFe TODO
 |
 |Yes, the jobs are still stopped, and will remain stopped until they get
 |a SIGCONT. Do you think that kill, when given a pid argument, should look
 |up any job associated with that pid and send it a SIGCONT? Or should it
 |send a SIGCONT to the pid unconditionally? If so, what about other
 |processes in that job?
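To make that concrete at the prompt, a minimal sketch against the same two
stopped less jobs from the transcript above (exactly what bash reports for
the jobs afterwards will vary):

  $ kill $(jobs -p)         # TERM is delivered but stays pending:
                            # a stopped process cannot act on it
  $ kill -CONT $(jobs -p)   # CONT resumes the jobs; the pending TERM
                            # is then handled and the pagers exit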
Hm, coming at it from that side is also interesting, and something I had not
thought about. I mean: if the lookup is fast, kill(1) could of course check
all of its arguments against the list of tracked processes, not only %job
specifications.

(Having said that, I am personally still hoping for a shell extension that
closes the PID-reuse race against kill(1): if I "kill -TERM ID" where ID is
known to be a monitored process that has already terminated, the shell would
refuse to perform the kill. Also the "kill -0 && kill -X" pattern is a race
in itself. But that is of course a different topic.)

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
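The "kill -0 && kill -X" gap mentioned above, spelled out as a minimal sketch
(plain POSIX shell; $pid is a hypothetical variable holding a PID obtained
earlier, e.g. from jobs -p):

  # Check-then-signal is a time-of-check/time-of-use race: between the
  # two kill calls the process can exit and the kernel can hand the same
  # PID to an unrelated new process, which then receives the signal.
  if kill -0 "$pid" 2>/dev/null; then   # "does this PID currently exist?"
      kill -TERM "$pid"                 # may already target a reused PID
  fi

Only the parent that reaps the child can close this window, since a PID is
not reused before the child has been waited for; that is the property a
shell-side check of the kind wished for above could exploit.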