Hi Henrik,

On August 4, 2003 11:16 am, Henrik Nordstrom wrote:
> I am looking into how to best add client session reuse to Squid when
> acting as a SSL client. (yes, Squid does SSL these days)

As an avid user of Squid, I'd certainly be chuffed if I can help.

> I think I have got the SSL_get1_session and SSL_set_session picture, and
> have some kind of idea of how to use this using your own cache
> structure. Still needs to look into how to correctly manage
> time-to-live etc.
>
> But what confuses me is the fact that there is a SSL_SESS_CACHE_CLIENT
> session cache mode (SSL_CTX_set_session_cache_mode). Can this cache mode
> be used to make life easier somehow? I understand it will cache client
> used to make life easier somehow? I understand it will cache client
> sessions, but how to access the cached sessions? And how to find the
> correct set of cached sessions among all different sessions used in
> this SSL_CTX if the same SSL_CTX is used for connecting to different
> SSL servers?

If I were you, I would disable SSL_CTX-internal session caching 
completely. Your later suggestion (providing cache callbacks) is a 
better/more-reliable way to go, and I'm thinking that with Squid you want 
to manage your own storage very carefully (you are pretty leak-intolerant 
after all :-).
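
To make that concrete, here's a rough sketch of the kind of SSL_CTX setup 
I have in mind. squid_cache_store_session() is just a placeholder for 
whatever storage you end up writing, and SSL_SESS_CACHE_NO_INTERNAL 
assumes your OpenSSL is recent enough to have it:

#include <openssl/ssl.h>

/* Placeholder for squid's own storage - not a real function.  Should
 * return 1 if it keeps the session reference it is handed, 0 if not. */
extern int squid_cache_store_session(SSL *ssl, SSL_SESSION *sess);

/* Called by OpenSSL whenever a (client) session is established on any
 * SSL handle created from this SSL_CTX. */
static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
{
    return squid_cache_store_session(ssl, sess);
}

SSL_CTX *make_client_ctx(void)
{
    /* assumes SSL_library_init() etc. has already been done elsewhere */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());

    if (!ctx)
        return NULL;
    /* Collect client sessions via the callback only - keep them out of
     * the SSL_CTX's internal cache altogether. */
    SSL_CTX_set_session_cache_mode(ctx,
            SSL_SESS_CACHE_CLIENT | SSL_SESS_CACHE_NO_INTERNAL);
    SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
    return ctx;
}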

>   * Set up application data fields to identify which server connection
> the session belongs to (the keys needed to later look up the session
> etc, i.e. ip:port).
>
>   * Register a SSL_CTX_sess_set_new_cb to index the cached sessions
> using the data set in 1.

Yes. There's normally no reason to cache more than one (client) session 
for any given server, and usually the best strategy is to cache the most 
recent one (or more correctly, the one that expires last).

Question: how are you handling client authentication if it arises? There 
is a can of worms waiting for you here, though perhaps you're already 
aware of it. If squid is proxying on behalf of multiple clients, then you 
risk introducing some weird security issues. For better or worse, HTTPS is 
often used in a rather layer-violating way. E.g. if web-server logic 
attaches/indexes state based on SSL/TLS session details, then if you (as 
squid) reuse that session for a different client, you risk having that 
client be interpreted by the server as the client that was being proxied 
when the session was originally negotiated. So my offhand comments about 
how to cache the sessions should be taken with a rock of salt - how you 
match squid-accepts up to squid-connects greatly influences how you should 
handle this.

Also, https-https proxying is a different kettle of fish to http-https 
proxying. How are you doing this anyway? Do you specify a new CA cert to 
the clients that, once accepted, allows you to do dynamic MITM SSL/TLS 
proxying by faking server certificates according to the CONNECT string? Or 
something else?
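
Just to put some flesh on the app-data idea from your list, the sort of 
thing I'd picture is below. The peer_key_t structure and 
squid_session_cache_put() are made-up names, and if the client-identity 
issues above bite, you'd want to fold the client into the key as well:

#include <openssl/ssl.h>

/* Hypothetical key: the server this squid-connect is going to (and, if
 * the client-auth issues above apply, which client it is acting for). */
typedef struct {
    char server_ip[64];
    unsigned short server_port;
} peer_key_t;

/* Placeholder for squid's own store; takes over the session reference. */
extern void squid_session_cache_put(const peer_key_t *key, SSL_SESSION *sess);

/* Tag the SSL handle before SSL_connect() so the callback knows which
 * server the resulting session belongs to.  Note that only the pointer
 * is stored, so the key must outlive the connection. */
void tag_connection(SSL *ssl, peer_key_t *key)
{
    SSL_set_app_data(ssl, key);
}

/* A fuller version of the new-session callback registered earlier. */
static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
{
    peer_key_t *key = SSL_get_app_data(ssl);

    if (!key)
        return 0;   /* untagged connection - don't cache the session */
    /* One session per key; storing the newest one displaces the old.
     * Returning 1 tells OpenSSL we keep the reference passed in. */
    squid_session_cache_put(key, sess);
    return 1;
}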

>    * Should the SSL session be reused for multiple concurrent
> connections to the same server where possible, or only one connection
> at a time?

There's no harm in reusing it for multiple concurrent connections - this 
is the most common application (a browser negotiating a session for an 
HTML page will then typically fire off various concurrent session resumes 
to go back and pick up all the image files too). As I say, the question is 
more how you identify/index SSL sessions in a satisfactory way (and with 
suitable granularity) so that you get the maximum performance pay-off from 
resumes, but without creating mistaken identities at any server that 
matches browser-clients up to corresponding SSL/TLS state.
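
On the connect side, reuse is then just a matter of proposing the cached 
session before the handshake. In sketch form again - 
squid_session_cache_get() is a made-up lookup into whatever store you use, 
and the expiry check is the sort of time-to-live handling you mentioned:

#include <time.h>
#include <openssl/ssl.h>

/* Placeholder lookup into squid's store; returns its own reference (or
 * NULL), which the caller must SSL_SESSION_free(). */
extern SSL_SESSION *squid_session_cache_get(const char *ip,
                                            unsigned short port);

int connect_with_resume(SSL *ssl, const char *ip, unsigned short port)
{
    SSL_SESSION *sess = squid_session_cache_get(ip, port);

    if (sess) {
        /* Crude time-to-live check: only propose sessions that are
         * still inside their advertised timeout. */
        if (SSL_SESSION_get_time(sess) + SSL_SESSION_get_timeout(sess)
                > (long)time(NULL))
            SSL_set_session(ssl, sess);  /* takes its own reference */
        SSL_SESSION_free(sess);          /* drop ours either way */
    }

    if (SSL_connect(ssl) != 1)
        return -1;

    /* SSL_session_reused() says whether the server accepted the resume;
     * if it didn't, a full handshake took place and the new-session
     * callback has already cached the freshly negotiated session. */
    return SSL_session_reused(ssl) ? 1 : 0;
}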

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]
