OK, I see what is going on now. The difference for you is your initial ulimit -n value. Not that it is big, but that when the test reduces it, making it smaller and smaller until it is < 16, the first such value it lands on happens to be < 10. Starting from the default max fd value doesn't do that; it reaches 15 or something and stops there. sh does not work well with fewer than 10 available fds.
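For concreteness, something like the following ought to show it (9 and 12 are just arbitrary values either side of 10, not what the test actually uses, and the exact messages will vary):

    $ sh -c 'ulimit -n 9; echo hello >&2'     # builtin redirection with fewer than 10 fds: expect the weird failure
    $ sh -c 'ulimit -n 12; echo hello >&2'    # same thing with 10 or more fds available: should just print hello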
Two things I need to do ... second, make the test more rational in its ulimit choices, not going under 10 where things get weird. But first, make sh give a better indication of what the problem is when this kind of thing does happen.

The error message produced is one that should normally never be seen. While it is related to the >&18 it was doing, that operation is not where the error happened (if that had failed you'd get a much more rational error msg). Rather, the problem is that echo is a builtin (change it to be /bin/echo and, apart from being slower, it will probably work). When there are redirections on builtins, the existing fd (if any) must be moved elsewhere so it can be restored after the builtin exits. sh always moves it to a fd > 10 for this use. When the limit is < 10 that fails, and that's the cause of the problem.

I will fix this, sometime soon - sh first, so you can confirm a more rational error msg, and then the test, to avoid generating this situation.

kre

ps: attempting to follow fd usages inside sh is not something for the faint of heart.
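pps: to make the builtin vs external distinction concrete, something along these lines should show it (assuming a /bin/echo exists, and the fd saving behaviour described above):

    $ sh -c 'ulimit -n 9; echo hi > /dev/null'       # builtin: fd 1 must first be moved to a high fd, which fails here
    $ sh -c 'ulimit -n 9; /bin/echo hi > /dev/null'  # external: redirections are done in the forked child, nothing to save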