Hello list

While working on the test for JDK-8228580 on core-libs-dev[1] we came
across inconsistent behaviour of Socket::setSoTimeout across
platforms:
When there is no data to read, on Windows a client
socket.getInputStream().read(...) can return *earlier* than the
specified timeout, while on unix it always returns *later*. (Let's
ignore interrupts.)

Consider this code example:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public static void main(String[] args) throws IOException {
    var serverSocket = new ServerSocket();
    serverSocket.bind(new InetSocketAddress(0));
    var socket = new Socket((String) null, serverSocket.getLocalPort());
    socket.setSoTimeout(1000);
    var is = socket.getInputStream();
    long t1 = System.nanoTime();
    try {
        is.read();
    } catch (SocketTimeoutException e) {
        // expected: no data is ever sent
    }
    System.out.println(System.nanoTime() - t1);
}

On Windows, it may (and often does) print a number less than
1_000_000_000. On Linux it is always larger.

I looked into it and found this:
The difference is in how Java_sun_nio_ch_Net_poll is implemented. On
unix it uses poll(2); on Windows it uses select(). Regarding timeouts,
poll() has "wait at least"[2] semantics and overruns by design, while
select() on Windows has "wait at most" semantics, or as they put it:
the timeout "specifies the maximum time that select should wait before
returning."[3] It returns early by design! The old, "plain" socket
implementation is not much different.
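For what it's worth, a caller who needs "wait at least" semantics on every platform could compensate in user code by re-arming the socket timeout with the remaining time and retrying. This is just a sketch of that workaround (my own helper, readAtLeast, is hypothetical and not anything in the JDK):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadWithDeadline {
    // Reads one byte, retrying on early timeouts until at least
    // timeoutMillis has elapsed. Throws SocketTimeoutException only
    // once the full deadline has passed with no data.
    static int readAtLeast(Socket socket, int timeoutMillis) throws IOException {
        InputStream in = socket.getInputStream();
        long deadline = System.nanoTime() + timeoutMillis * 1_000_000L;
        while (true) {
            long remainingMillis = (deadline - System.nanoTime()) / 1_000_000L;
            if (remainingMillis <= 0)
                throw new SocketTimeoutException("Read timed out");
            // Re-arm the timeout with whatever time is left.
            socket.setSoTimeout((int) remainingMillis);
            try {
                return in.read();
            } catch (SocketTimeoutException e) {
                // select() may have returned early; loop and retry
                // with the remaining time.
            }
        }
    }
}
```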

Is this a known thing? Is it an issue at all?
Java's soTimeout docs do not specify exact semantics: "only this
amount of time"[4] is vague to me.


[1] http://mail.openjdk.java.net/pipermail/core-libs-dev/2019-September/062557.html
[2] http://man7.org/linux/man-pages/man2/poll.2.html
[3] https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-select
[4] https://docs.oracle.com/javase/9/docs/api/java/net/Socket.html#setSoTimeout-int-



-- 
Milan Mimica
