>       From: owner-openssl-us...@openssl.org On Behalf Of Stéphane Charette
>       Sent: Sunday, 15 April, 2012 20:31

>       I'm using Openssl to talk to a server that expects to re-use ssl 
> sessions when a client needs to open many SSL connections.  I have 
> the same code working on Linux and Windows.

Are you using classic resumption (sessionid) or an RFC 4507 ticket? 

>       But when I try to run on the Mac, the new SSL connection that 
> attempts to reuse the SSL session just sits there and hangs.  After 
> several minutes the server times out the connection, thinking no 
> requests are being made, and then the client unblocks once the tcp 
> connection is closed.  Looking at packet traces and trying to inspect 
> the SSL object in a debugger, I would guess the client is waiting for 
> the server to do the full SSL handshake, even though I'm trying to 
> reuse an existing session.

I don't see how. Normally, if the client requests resumption, the 
ServerHello shows whether the server agrees (and an 
abbreviated handshake is used) or not (a full handshake is used).
Even for a ticket without a sessionid, the server must send something 
the client should recognize, and the client would give an error if not.
Your posted code below doesn't check for an error from SSL_connect; 
if you do check, what do you see?

What does the packet trace show? Does the ClientHello contain a 
valid sessionid, or none, or a valid ticket? Does the ServerHello 
contain the same sessionid, or a different one, or none? If using TLS, 
are there any other extensions, and which? (I don't recall others 
that should interfere with resumption, but I might have missed one.)
What message(s), if any, occur next?

Can you recreate the problem with the command-line s_client, using 
-sess_out on the first connection and -sess_in on the second, with or 
without -no_ticket? If so, -debug and -state will probably be helpful.
        
>       I desperately need to know:  am I doing it wrong?  Or is there a 
> serious problem on the Mac that prevents SSL sessions from being re-used?

I don't use Mac myself, but I don't recall hearing such a problem.

>       Here are the relevant openssl calls I'm making:
        
>       1) In the single context I'm using, I am making this call prior to 
> establishing any SSL connections:
>           SSL_CTX_set_session_cache_mode( ctx, SSL_SESS_CACHE_BOTH );

Specifically, prior to any SSL_new(ctx) calls, I assume.
And I assume you aren't changing settings like cipherlist and 
compression between connections. Sharing the session *should* 
override these, but something might slip through a crack.
Even if so, I don't see any reason it would differ on Mac.

>       2) When it is time to start the 2nd SSL session, here is how 
> I get the session from the initial working connection:

(Nit: second connection using same session. Often people don't 
distinguish these carefully, and usually it doesn't matter, but 
here it's exactly the area of your apparent problem.)

>           SSL_SESSION *savedSession = SSL_get1_session( ctrlSSL );
>           SSL_set_session( dataSSL, savedSession );
>           SSL_connect( dataSSL );

As general practice you should probably check the return value 
from SSL_set_session for 0, although I doubt it happens.
You definitely should check SSL_connect for <=0; even though 
one connection has succeeded and not (visibly) failed, that 
doesn't always guarantee another connection will succeed.

Both get1_session and set_session increment the refcount, so 
I believe your session object(s?) will not get cleaned up even 
if all connections using them go away and the cache times out.
But in the usage you describe this is probably just a small 
memory leak and doesn't matter.


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
