On Wed, Dec 20, 2017 at 02:48:43PM +0000, Jorgen S. Hansen wrote:
> 
> > On Dec 13, 2017, at 3:49 PM, Stefan Hajnoczi <stefa...@redhat.com> wrote:
> > 
> > The vsock_diag.ko module already has a test suite but the core AF_VSOCK
> > functionality has no tests.  This patch series adds several test cases that
> > exercise AF_VSOCK SOCK_STREAM socket semantics (send/recv, connect/accept,
> > half-closed connections, simultaneous connections).
> > 
> > The test suite is modest but I hope to cover additional cases in the future.
> > My goal is to have a shared test suite so VMCI, Hyper-V, and KVM can ensure
> > that our transports behave the same.
> > 
> > I have tested virtio-vsock.
> > 
> > Jorgen: Please test the VMCI transport and let me know if anything needs
> > to be adjusted.  See tools/testing/vsock/README for information on how to
> > run the test suite.
> > 
> 
> I tried running vsock_test on VMCI, and all the tests failed in one way or
> another:

Great, thank you for testing and looking into the failures!

> 1) connection reset test: when the guest tries to connect to the host, we
>   get EINVAL as the error instead of ECONNRESET. I’ll fix that.

Yay, the tests found a bug!
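
For readers following along, the test's expectation is roughly the
sketch below; the port number and error handling are simplified for
illustration, not lifted from vsock_test.c:

/* Simplified sketch: connect() to a vsock port with no listener should
 * fail with ECONNRESET, not EINVAL.  Port 1234 is an arbitrary example,
 * not the port the real test uses.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

static void check_connection_reset(unsigned int peer_cid)
{
    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid = peer_cid,
        .svm_port = 1234, /* assume no listener bound here */
    };
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        exit(EXIT_FAILURE);
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        fprintf(stderr, "connect unexpectedly succeeded\n");
        exit(EXIT_FAILURE);
    }

    if (errno != ECONNRESET) {
        perror("expected ECONNRESET");
        exit(EXIT_FAILURE);
    }

    close(fd);
}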

> 2) client close and server close tests: On the host side, VMCI doesn’t
>   support reading data from a socket that has been closed by the
>   guest. When the guest closes a connection, all data is gone, and
>   we return EOF on the host side. So the tests that try to read data
>   after close should not attempt that on the VMCI host side. I got the
>   tests to pass by adding a getsockname call to determine whether
>   the local CID was the host CID, and skipping the read attempt
>   in that case. We could add a VMCI flag that would enable
>   this behavior.

Interesting behavior.  Is there a reason for disallowing half-closed
sockets on the host side?
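
For the record, a sketch of the getsockname() check you describe might
look like this (VMADDR_CID_HOST comes from <linux/vm_sockets.h>):

/* Sketch of the workaround described above: check whether the local
 * CID is the host CID and, if so, skip the read-after-close.
 */
#include <stdbool.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

static bool local_cid_is_host(int fd)
{
    struct sockaddr_vm addr;
    socklen_t len = sizeof(addr);

    if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0)
        return false; /* sketch only: treat errors as "not the host" */

    return addr.svm_cid == VMADDR_CID_HOST;
}

The server-close test would then skip the read attempt when this
returns true.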

> 3) send_byte(fd, -EPIPE): for the VMCI transport, the close
>   isn’t necessarily visible immediately on the peer, so in most
>   cases these send operations complete successfully. I was
>   running these tests under nested virtualization, which probably
>   makes the problem more likely to occur, but I still had to
>   add a sleep to reliably get the EPIPE error.

Good point, you've discovered a race condition that affects all
transports.  The test may receive the "CLOSED" message over the TCP
control channel before the vsock close state transition has actually
occurred.

test_stream_client_close_server() needs to wait for the socket status to
change before attempting send_byte(fd, -EPIPE).  I guess I'll have to
use vsock_diag or another kernel interface to check the socket's state.
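
One option that avoids a vsock_diag dependency: block in poll(2) until
the peer's close becomes visible locally.  A sketch (wait_remote_close
is a made-up helper name, and POLLRDHUP requires _GNU_SOURCE):

/* Sketch of an alternative to vsock_diag: wait until the peer's close
 * is visible locally before attempting the send that should fail with
 * EPIPE.
 */
#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>

static void wait_remote_close(int fd)
{
    struct pollfd pfd = {
        .fd = fd,
        .events = POLLRDHUP | POLLHUP,
    };
    int ret = poll(&pfd, 1, 10000 /* ms timeout, arbitrary */);

    if (ret < 0) {
        perror("poll");
        exit(EXIT_FAILURE);
    }
    if (ret == 0) {
        fprintf(stderr, "timed out waiting for remote close\n");
        exit(EXIT_FAILURE);
    }
}

Of course, whether each transport raises POLLRDHUP at the right moment
is itself behavior this suite ought to pin down.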

> 5) multiple connections tests: with the standard socket sizes,
>   VMCI is only able to support about 100 concurrent stream
>   connections, so this test passes with MULTICONN_NFDS
>   set to 100.

The magic number 1000 was chosen because many distros set the default
file descriptor ulimit to 1024.  But it's arbitrary and we could lower
it to 100.
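
If we keep a constant, the test could also clamp it against the
process's actual fd limit at runtime; a sketch (the helper name and
the 16-fd headroom are made up for illustration):

/* Sketch: clamp the connection count to RLIMIT_NOFILE instead of
 * hard-coding the ulimit assumption, keeping ~16 fds of headroom for
 * stdio, the control socket, etc.  "wanted" would be the transport's
 * ceiling (e.g. 100 for VMCI).
 */
#include <sys/resource.h>

static unsigned int multiconn_nfds(unsigned int wanted)
{
    struct rlimit rlim;

    if (getrlimit(RLIMIT_NOFILE, &rlim) != 0)
        return wanted;

    if (rlim.rlim_cur != RLIM_INFINITY && rlim.rlim_cur < wanted + 16)
        return rlim.rlim_cur > 16 ? (unsigned int)rlim.rlim_cur - 16 : 1;

    return wanted;
}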

Is this VMCI concurrent stream limit a denial of service vector?  Can an
unprivileged guest userspace process open many sockets to prevent
legitimate connections from other users within the same guest?

Stefan
