On 08/23/2011 08:41 AM, Sam Steingold wrote:
> * Eric Blake <roy...@erqung.pbz> [2011-08-23 08:03:02 -0600]:
>> On 08/23/2011 07:56 AM, Sam Steingold wrote:
>>> Let me reiterate that the size of canonicalize is plain absurd:
>>> <https://lists.gnu.org/archive/html/bug-gnulib/2011-05/msg00143.html>.
>>> 150+ files to implement a single function which takes ~160 lines of C code.
>>> These files include things like hash.c and fchownat.c (why?!)
>> hash.c in order to properly detect ELOOP, which must be done as part of
>> an unlimited-depth link following algorithm. (If we didn't have the GNU
>> mantra of no arbitrary limits, then we could declare ELOOP at
>> SYMLOOP_MAX instead.)
> ELOOP at SYMLOOP_MAX sounds good to me.
Does Hurd have SYMLOOP_MAX? If so, then yes, that would be a reasonable
change. If not, then how do you propose implementing canonicalize on
Hurd, without imposing a limit not already present on the system?
Most other systems have SYMLOOP_MAX, at which point canonicalize
succeeding where the native system would fail due to ELOOP does indeed
sound fishy (what good is it to know what a symlink ultimately resolved
to if the system can't do the same resolution?).
But someone has to write the patches, and it's not my highest priority
at the moment.
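
For concreteness, here is a rough sketch of what the ELOOP-at-SYMLOOP_MAX
approach could look like; this is not gnulib's canonicalize.c, and
follow_links() plus the fallback limit are made up for illustration.
The hash-table alternative that canonicalize actually depends on is
noted in the comments.

#include <errno.h>
#include <limits.h>     /* SYMLOOP_MAX, if the system defines it */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>     /* readlink, sysconf */

/* Illustrative only: pick a symlink-chasing limit.  The fallback of 20
   is exactly the kind of arbitrary limit the GNU mantra forbids; the
   real code avoids it by remembering every link seen in a hash table
   (hence the hash.c dependency).  */
static long
symloop_limit (void)
{
#ifdef SYMLOOP_MAX
  return SYMLOOP_MAX;
#else
  long n = sysconf (_SC_SYMLOOP_MAX);   /* may be -1, e.g. on Hurd */
  return n > 0 ? n : 20;
#endif
}

/* Follow NAME through a chain of symlinks, returning a malloc'd copy
   of the final non-link name, or NULL with errno set (ELOOP once the
   limit is exceeded).  Simplified: relative link targets are resolved
   against the current directory, and the name is not split into
   components the way the real canonicalize must do.  */
char *
follow_links (const char *name)
{
  long left = symloop_limit ();
  char *current = strdup (name);

  while (current)
    {
      char buf[4096];
      ssize_t len = readlink (current, buf, sizeof buf - 1);
      if (len < 0)
        {
          if (errno == EINVAL)          /* not a symlink: done */
            return current;
          break;                        /* ENOENT, EACCES, ... */
        }
      if (left-- == 0)
        {
          errno = ELOOP;                /* limit exceeded */
          break;
        }
      buf[len] = '\0';
      free (current);
      current = strdup (buf);           /* chase the next link */
    }
  free (current);
  return NULL;
}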
> Please note that gnulib's mandate (as far as _I_ understand it) is
> to turn a random system into a POSIX system, not a GNU system.
> Please provide a separate module when you want to follow the GNU mantra,
> like you do with fnmatch.
> Thanks!
>> fchownat.c because the openat module has too many functions.
> I would say that all the "f*" (i.e., using FILE*) files are overkill.
fchownat.c does NOT use FILE*. It operates on fd, the same as openat.
That is, fchownat() is roughly a superset of both fchown() (fds) and
chown() (names).
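
To make that superset concrete (the file names and the demo() wrapper
below are placeholders, and AT_EMPTY_PATH is a Linux extension rather
than POSIX), one call covers chown, lchown, fchown, and the
relative-to-a-directory-fd case:

#define _GNU_SOURCE      /* for AT_EMPTY_PATH on glibc */
#include <fcntl.h>       /* AT_FDCWD, AT_SYMLINK_NOFOLLOW, AT_EMPTY_PATH */
#include <sys/types.h>
#include <unistd.h>

/* Placeholder wrapper just to show the calls side by side.  */
int
demo (int dirfd, int fd, uid_t uid, gid_t gid)
{
  /* Same effect as chown ("some/file", uid, gid).  */
  if (fchownat (AT_FDCWD, "some/file", uid, gid, 0) < 0)
    return -1;

  /* Same effect as lchown (): do not follow a trailing symlink.  */
  if (fchownat (AT_FDCWD, "some/link", uid, gid, AT_SYMLINK_NOFOLLOW) < 0)
    return -1;

  /* The *at part: resolve "file" relative to dirfd, as openat would.  */
  if (fchownat (dirfd, "file", uid, gid, 0) < 0)
    return -1;

  /* Roughly fchown (fd, uid, gid); needs the Linux-only AT_EMPTY_PATH.  */
  if (fchownat (fd, "", uid, gid, AT_EMPTY_PATH) < 0)
    return -1;

  return 0;
}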
--
Eric Blake ebl...@redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org