I am using s_time and s_server to benchmark two computers running OpenWrt
Linux and openssl-0.9.8p. Each host has a Gigabit Ethernet adapter and
they are connected by a crossover cable. I am testing how long it takes
to transfer a 620MB file.
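
For reference, the invocations look like this (port, file name, and
cert file are placeholders; s_server's -WWW mode serves files out of
its current directory):

    # server side
    openssl s_server -accept 4433 -cert server.pem -WWW

    # client side, one run without and one with session reuse
    openssl s_time -connect host:4433 -www /file.bin -new   -time 30
    openssl s_time -connect host:4433 -www /file.bin -reuse -time 30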

The strange result is a discrepancy between using new sessions (-new)
and reused sessions (-reuse). Given the size of the file, I would
expect the transfer time to dwarf any speed-up from reusing sessions.
However, this is not the case.
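
(Back-of-envelope, with assumed numbers: even if a full RSA handshake
costs ~10 ms of CPU on these boxes, saving one handshake over a
transfer that runs for ~40 seconds is well under 0.1% of the total.)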

From a 30-second test, I get:

        650,117,164   bytes read in 44 seconds *without* session reuse (1 conn)

        1,300,234,328 bytes read in 37 seconds *with*    session reuse (2 conns)

Does anyone know what is going on? Another thing I have observed is
that CPU usage drops drastically during the reuse run, right after
s_time prints "starting". I have looked at the s_time source and see
that it does a "dummy" download before it starts the clock, to ensure
all subsequent connections reuse a session. But I don't know why CPU
usage would drop for the rest of the test (after all, it should still
be crypto-bound).
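
The reuse path in apps/s_time.c looks roughly like this (my paraphrase
from reading the 0.9.8 source, simplified and not verbatim;
doConnection() is s_time's own connect helper):

    /* "Dummy" fetch: one untimed connection, done only to obtain a
     * session that every later connection can resume. */
    scon = doConnection(NULL);
    BIO_snprintf(buf, sizeof buf, "GET %s HTTP/1.0\r\n\r\n", s_www_path);
    SSL_write(scon, buf, strlen(buf));
    while (SSL_read(scon, buf, sizeof(buf)) > 0)
        ;
    SSL_shutdown(scon);

    nConn = 0;
    finishtime = (long)time(NULL) + maxTime;
    printf("starting\n");

    /* Timed loop: reconnect with the cached session, fetch again. */
    while ((long)time(NULL) < finishtime) {
        scon = doConnection(scon);   /* resumes scon's session */
        /* ...same GET/read/SSL_shutdown as above... */
        nConn++;
    }

Note that the clock is only checked between connections, which I assume
is how a 30-second test can report 44 seconds: a single 620 MB fetch
simply overshoots the deadline.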

-- 
Mike
