Hi all,

I'm trying to debug a weird performance artifact: for large payloads,
session reuse appears to slow things down significantly relative to
creating new sessions.

I'm using an Apache (2.2.14) + OpenSSL server and openssl s_time to
benchmark, e.g.,

openssl s_time -connect <server> -www /10000.bytes -cipher HIGH -time 5
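
The reuse vs. new numbers below are from s_time's two passes; the
explicit variants would be roughly:

openssl s_time -connect <server> -www /10000.bytes -cipher HIGH -time 5 -new
openssl s_time -connect <server> -www /10000.bytes -cipher HIGH -time 5 -reuse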

With small payloads (10,000 random bytes), session reuse dramatically
outperforms new sessions (3999 completed connections vs. 247 completed
connections). This is also true at 100,000 bytes.

But if I try a larger file (1,000,000 random bytes), session reuse is
much worse: 89 completed connections with reuse vs. 149 completed
connections without.

I've seen this between separate server and benchmark machines in the
same rack, but it's easy to replicate even on a single system: run
httpd -X (single-threaded mode) and then use s_time as described above.
(I generated the random payload files with dd if=/dev/urandom; see below.)
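
For the record, the payload files were created along these lines (exact
dd invocations from memory; the files sit in the Apache DocumentRoot):

dd if=/dev/urandom of=10000.bytes bs=10000 count=1
dd if=/dev/urandom of=100000.bytes bs=100000 count=1
dd if=/dev/urandom of=1000000.bytes bs=1000000 count=1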

In an interesting twist, if I change the protocol to -ssl2, -reuse
outperforms -new as expected (81 completed vs. 77 completed). For
such large transfer volumes I wouldn't expect a large penalty from
session negotiation in the first place, but reuse certainly shouldn't
be worse.
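
That comparison used the same 1,000,000-byte file, with something like
(cipher selection aside):

openssl s_time -connect <server> -www /1000000.bytes -ssl2 -time 5 -new
openssl s_time -connect <server> -www /1000000.bytes -ssl2 -time 5 -reuse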

This looks like essentially the same problem described in the post
linked below, with the possible difference that it only manifests for
me at large file sizes. But I didn't find a resolution in the archives:
http://www.mail-archive.com/openssl-users@openssl.org/msg33604.html
I've seen the problem on x86-Linux and OS X 10.6.3.

Any tips would be appreciated, thanks!

-M