On 24/01/12 11:27, Damjan Jovanovic wrote:


On Mon, Jan 23, 2012 at 7:09 PM, Michael McMahon <michael.x.mcma...@oracle.com> wrote:

    Can I get the following change reviewed please?

    http://cr.openjdk.java.net/~michaelm/7131399/webrev.1/

    The problem is that poll(2) doesn't seem to work in a specific
    edge case tested by the JCK, namely when a zero-length UDP
    message is sent on a DatagramSocket. The problem only shows up
    on timed reads, i.e. normal blocking reads work fine.

    The fix is to make the NET_Timeout() function use select() instead
    of poll().

    Thanks,
    Michael.
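
[For context: NET_Timeout() is the native helper the JDK uses to wait for a socket to become readable within a timeout. Below is a minimal sketch of a select()-based version of such a wait; it is an illustration only, not the code from the webrev, and it assumes the caller handles EINTR restarts, which the real JDK function does itself with the remaining time.]

/*
 * Simplified sketch of a select()-based timeout wait, loosely in the
 * spirit of NET_Timeout(). Illustration only -- not the webrev code;
 * the real function also restarts the wait after EINTR.
 */
#include <sys/select.h>
#include <sys/time.h>

static int timeout_select(int fd, long millis)
{
    fd_set rfds;
    struct timeval tv;

    tv.tv_sec  = millis / 1000;
    tv.tv_usec = (millis % 1000) * 1000;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);      /* undefined if fd >= FD_SETSIZE */

    /* 0 on timeout, >0 when fd is readable, -1 on error */
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}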

Hi

I don't work at Oracle or anything, but IMHO this is a bad idea.

select() uses a fixed-size bitset (fd_set), so there is a hard limit on the largest file descriptor number it can represent. With FD_SETSIZE at 1024 bits (a common value), a process only has to open 1021 file descriptors (on top of stdin/stdout/stderr) before it gets a descriptor numbered >= 1024, for which FD_SET() is undefined behavior. That is true even if this descriptor is the only one you are trying to add to the set.
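
[To make that failure mode concrete, here is a small illustration, not taken from the thread, of a process opening descriptors until it gets one numbered >= FD_SETSIZE, at which point that descriptor cannot safely go into an fd_set at all. It assumes FD_SETSIZE is 1024 and an open-files rlimit higher than that.]

/*
 * Illustration of the FD_SETSIZE hazard: assumes FD_SETSIZE == 1024
 * and an open-files rlimit higher than that. Not from the patch.
 */
#include <stdio.h>
#include <fcntl.h>
#include <sys/select.h>

int main(void)
{
    int fd;

    /* Open /dev/null repeatedly (descriptors deliberately leaked)
       until we receive one numbered >= FD_SETSIZE.
       stdin/stdout/stderr already occupy 0-2. */
    do {
        fd = open("/dev/null", O_RDONLY);
        if (fd < 0) {
            perror("open");         /* hit the rlimit first */
            return 1;
        }
    } while (fd < FD_SETSIZE);

    printf("got fd %d with FD_SETSIZE %d\n", fd, FD_SETSIZE);

    /* Even if this fd is the only one we want to watch, FD_SET()
       on it writes past the end of the fd_set: undefined behavior. */
    fd_set rfds;
    FD_ZERO(&rfds);
    if (fd >= FD_SETSIZE) {
        fprintf(stderr, "fd %d cannot be used with select()\n", fd);
        return 1;
    }
    FD_SET(fd, &rfds);
    return 0;
}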

Please reconsider.

Regards
Damjan Jovanovic


Damjan,

This file already only deals with a finite number of file descriptors, although the actual limit can be raised as high as required through setrlimit(). getFdEntry() checks that the fd number is within that limit, and all I/O operations return EBADF if it falls outside.
This was the case even when poll() was used.
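
[A minimal sketch of the kind of bounds check being described, assuming a fixed-size fd table; the table size, names, and shape here are illustrative guesses, not the actual getFdEntry() source.]

/*
 * Sketch of an fd-table bounds check in the spirit of getFdEntry():
 * descriptors outside the table make I/O fail with EBADF. The table
 * size and structure here are assumptions, not the JDK source.
 */
#include <errno.h>
#include <stdio.h>

#define FD_TABLE_SIZE 4096              /* illustrative limit */

typedef struct { int in_use; } fdEntry_t;
static fdEntry_t fdTable[FD_TABLE_SIZE];

static fdEntry_t *get_fd_entry(int fd)
{
    if (fd < 0 || fd >= FD_TABLE_SIZE)
        return NULL;                    /* out of range */
    return &fdTable[fd];
}

static int checked_io(int fd)
{
    if (get_fd_entry(fd) == NULL) {
        errno = EBADF;                  /* caller sees EBADF */
        return -1;
    }
    /* ... the actual read/recv would happen here ... */
    return 0;
}

int main(void)
{
    if (checked_io(FD_TABLE_SIZE + 5) < 0 && errno == EBADF)
        printf("out-of-range fd rejected with EBADF\n");
    return 0;
}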

- Michael
