On Sat, May 1, 2021, 10:58 Robert Elz wrote:
> | You also would almost always want to enable the
> | subshell to avoid the parent from getting its parameters altered.
>
> Many cases, yes, the point is that not forking works in the simple
> cases where it is most desired to not fork for speed $(echo ...) or
> more probably $(printf ... ) but in more than just those cases,
> extracting info from many shell built-in commands ( nfiles=$(ulimit -Sn) )
> can be handled without forking.
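
To make the cases concrete, here is a rough sketch (next_id and
id_counter are made-up names, purely for illustration):

    # Builtin-only and free of side effects: the shell could plausibly
    # evaluate these in the current process instead of forking.
    nfiles=$(ulimit -Sn)
    banner=$(printf '== %s ==' "$title")

    # Harder case: the substituted command assigns to a variable.  Today
    # the subshell throws that change away; a no-fork optimization has to
    # detect this (or save and restore state), or the caller's variables
    # would be silently altered.
    next_id() {
        id_counter=$(( id_counter + 1 ))
        echo "$id_counter"
    }
    id=$(next_id)

The first two are the cases worth optimizing for speed; the last one is
where skipping the subshell starts to need real safety checks.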
The problem is that most people wouldn't want this optimization for
simple commands so much as for function calls, where alteration of
parameters is unpredictable.

> Don't misunderstand though, getting this right is not trivial, detecting
> when it is safe requires a bunch of code, and handling issues like very
> large output streams (which would normally simply fill the pipe and hang
> a forked process until read) take care.
>
> It is however possible, and when implemented, simply works in the cases
> where it is possible, with all scripts, new and old.

It's very fragile to implement, and how reliable it would turn out to be
is uncertain. Having a set of new commands that allow it to be done
explicitly is better and avoids intrusive modifications.

> The problem with new invented features is that they tend to only work in
> one shell (at least initially) which means people prefer not to use them,
> in order to make their scripts more portable, which means other
> implementors are under no pressure to copy the feature... Implemented
> optimisations for the standard shell syntax simply work, and improve
> performance, while still allowing the script to work anywhere.

I'm not really concerned about portability for this. Bash has always made
its own fruitful implementations.

-- 
konsolebox