Bruno Haible wrote:
> On HP-UX 11.31 with cc:
> FAIL: rm/deep-2
>
> FAIL: rm/deep-2 (exit: 1)
> =========================
> + : perl
> + perl -e 'my $d = "x" x 200; foreach my $i (1..52)' -e ' { mkdir ($d,
> 0700) && chdir $d or die "$!" }'
> + cd ..
> + echo n
> + rm ---presume-input-tty -r x
> rm: cannot remove x/xxxxxxxxx... ...': File name too long
> + fail=1
That name is 1207 bytes long, which must be larger than HP-UX 11.31's PATH_MAX. remove.c's write_protected_non_symlink must be calling euidaccess_stat with the long "full_name". Obviously, that would fail with "File name too long". This is a problem on HP-UX because it lacks *at-function support.

One way to work around it would be to change this:

- if (!openat_needs_fchdir ())
+ if (1)

But that would make rm use the fully emulated faccessat, which may actually call fchdir, and which fails in the unusual event that save_cwd fails. This is all in a very deep, dark corner, so my first reaction was reluctance to compromise the implementation just to accommodate systems that lack openat and/or /proc/self/fd support.

However, once my brain engaged, I realized that using "imperfect" at-function emulation here would have no impact. What happens when we determine this file is removable? We unlink it via unlinkat. That unlinkat function uses the very same underlying emulation code that faccessat does, so there is no reason to limit faccessat use to the case where we have adequate openat/proc support. Just use it all of the time and remove the ugly hacks.

Patch coming up...
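For anyone following along without the sources at hand, here is a minimal, stand-alone sketch of the general idea (it is not the actual remove.c code; the write_protected helper below is purely illustrative). It shows why an access check that goes through a directory file descriptor and a short relative name is immune to PATH_MAX, while euidaccess on a 1207-byte full_name is not:

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Return true if NAME, an entry in the directory open on DIR_FD,
   appears write-protected.  AT_EACCESS makes the check use the
   effective IDs, as euidaccess does.  Because NAME is interpreted
   relative to DIR_FD, no string longer than PATH_MAX is ever handed
   to the kernel, no matter how deep the hierarchy is.  */
static bool
write_protected (int dir_fd, char const *name)
{
  return faccessat (dir_fd, name, W_OK, AT_EACCESS) != 0;
}

int
main (int argc, char **argv)
{
  if (argc != 3)
    {
      fprintf (stderr, "usage: %s DIR ENTRY\n", argv[0]);
      return 2;
    }

  int dir_fd = open (argv[1], O_RDONLY | O_DIRECTORY);
  if (dir_fd < 0)
    {
      perror (argv[1]);
      return 2;
    }

  printf ("%s/%s is %swrite-protected\n", argv[1], argv[2],
          write_protected (dir_fd, argv[2]) ? "" : "not ");
  close (dir_fd);
  return 0;
}

On a system with no native faccessat, the replacement does the equivalent work by temporarily changing directory (the fchdir-based emulation mentioned above); the point is that unlinkat goes through the same machinery anyway, so rm gains nothing by restricting faccessat to systems with real openat or /proc/self/fd support.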