On Fri, 10 Feb 2012, Martin Cracauer wrote:

Log:
 Fix bin/164947: tee loses data when writing to non-blocking file descriptors

 tee was not handling EAGAIN

 patch submitted by Diomidis Spinellis <d...@aueb.gr>. Thanks so much

 reproduced and re-tested locally

This seems to give a buffer overrun (in addition to reducing to a write()
spinloop) when the number of output files is enormous and non-blocking
mode is broken.

Modified: head/usr.bin/tee/tee.c
==============================================================================
--- head/usr.bin/tee/tee.c      Fri Feb 10 22:14:34 2012        (r231448)
+++ head/usr.bin/tee/tee.c      Fri Feb 10 22:16:17 2012        (r231449)
...
@@ -106,9 +109,14 @@ main(int argc, char *argv[])
                        bp = buf;
                        do {
                                if ((wval = write(p->fd, bp, n)) == -1) {
-                                       warn("%s", p->name);
-                                       exitval = 1;
-                                       break;
+                                       if (errno == EAGAIN) {
+                                               waitfor(p->fd);

p->fd is limited only by the number of args (which is limited by
{ARG_MAX}, which defaults to 256K and is hard to change) and by {OPEN_MAX}
(which is about 12000 on all machines I can test quickly, including
2 very different ones; it is easy to reduce but not so easy to increase).
However, this code should be unreachable except for fd == 1 (stdout),
since we opened all the other fd's ourselves and did not ask for
O_NONBLOCK on them.  A second caveat: I think some device drivers still
have broken non-blocking mode, where the non-blocking flag is kept in
the device state instead of in the open file state and is thus too
global (see the sketch after this hunk).

+                                               wval = 0;
+                                       } else {
+                                               warn("%s", p->name);
+                                               exitval = 1;
+                                               break;
+                                       }
                                }
                                bp += wval;
                        } while (n -= wval);
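
In the normal, per-open-file case the flag is easy to inspect from
userland with fcntl(2); a driver that wrongly keeps the flag in device
state would of course not show it this way.  A minimal standalone
sketch (report_nonblock is my name, nothing in tee):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Hypothetical helper, not part of tee: report whether an fd's
 * open-file flags include O_NONBLOCK.  fds that tee opens itself
 * should never have it; only the inherited stdout (fd 1) might.
 */
static void
report_nonblock(int fd, const char *name)
{
        int flags;

        if ((flags = fcntl(fd, F_GETFL)) == -1) {
                perror(name);
                return;
        }
        printf("%s: O_NONBLOCK is %s\n", name,
            (flags & O_NONBLOCK) ? "set" : "clear");
}

int
main(void)
{
        report_nonblock(STDOUT_FILENO, "stdout");
        return (0);
}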
@@ -137,3 +145,15 @@ add(int fd, const char *name)
        p->next = head;
        head = p;
}
+
+/* Wait for the specified fd to be ready for writing */
+static void
+waitfor(int fd)
+{
+       fd_set writefds;

The number of bits in this set is limited by FD_SETSIZE, which defaults
to 1024.  This is much smaller than the other limits, so fd can easily
exceed FD_SETSIZE.  FD_SETSIZE is easy to change (define it before
including <sys/select.h>), but this program does not change it.

+
+       FD_ZERO(&writefds);
+       FD_SET(fd, &writefds);

This gives a buffer overrun when fd >= FD_SETSIZE.  FD_SET() does
no internal range checking, so callers must do it (a standalone
demonstration follows this hunk).

+       if (select(fd + 1, NULL, &writefds, NULL, NULL) == -1)
+               err(1, "select");

This might even appear to work if the buffer overrun doesn't trash
too much, since (fd + 1) itself is not too large for the kernel.

+}
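
To make the overrun concrete: fd_set is a fixed-size bit array, and
FD_SET() on typical implementations is an unchecked store into it.
A standalone illustration (the expansion in the comment is approximate,
not any particular header's exact definition):

#include <sys/select.h>
#include <stdio.h>

int
main(void)
{
        fd_set set;

        /* fd_set holds exactly FD_SETSIZE bits; nothing more. */
        printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
            (int)FD_SETSIZE, sizeof(set));

        /*
         * FD_SET(n, &set) typically expands to something like
         *      set.fds_bits[n / NFDBITS] |= 1 << (n % NFDBITS);
         * with no bounds check, so n >= FD_SETSIZE stores past
         * the end of `set'.
         */
        return (0);
}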


The easiest fix is to return immediately if fd >= FD_SETSIZE, or even
if fd > 1.  This intentionally reduces to the write() spinloop if
someone makes fd >= FD_SETSIZE, or if someone uses tee on a buggy
device.
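
A sketch of that fix (my rendering, not the committed code):

#include <sys/select.h>
#include <err.h>

/*
 * Sketch of the easiest fix: bail out before FD_SET() can write past
 * the end of the fd_set.  The caller's do-while then retries write(),
 * i.e., we intentionally degrade to a spinloop for out-of-range fds.
 */
static void
waitfor(int fd)
{
        fd_set writefds;

        if (fd >= FD_SETSIZE)           /* or even: if (fd > 1) */
                return;
        FD_ZERO(&writefds);
        FD_SET(fd, &writefds);
        if (select(fd + 1, NULL, &writefds, NULL, NULL) == -1)
                err(1, "select");
}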

An enormous number of output files also enlarges the chances of other
interesting bugs.  It only takes one file in the middle hitting an i/o
error (EIO after a hangup is the most likely) to break the i/o for all
of them.  Blocking on one descriptor while the others could be
displaying output is not the best behavior either.
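An aside not raised above: poll(2) has no FD_SETSIZE limit and no
fixed-size bitmap to overrun, so a poll()-based waitfor() would need
no guard at all, and could in principle be extended to watch several
pending descriptors at once.  A sketch:

#include <err.h>
#include <poll.h>

/* poll()-based replacement for waitfor(): works for any fd number. */
static void
waitfor(int fd)
{
        struct pollfd pfd;

        pfd.fd = fd;
        pfd.events = POLLOUT;
        if (poll(&pfd, 1, -1) == -1)
                err(1, "poll");
}
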

Bruce