On Jan 16, 2004, at 8:26 PM, David Schwartz wrote:


>>> The AUTO_RETRY flag disables a case where the SSL/TLS code would
>>> signal a retry even though the underlying transport did not during
>>> a session renegotiation. This is there to support some applications
>>> which brokenly use select() and blocking I/O.

>> Now you have me curious: What would be a broken use of select and
>> blocking I/O? I use select before a call to SSL_read in order to
>> facilitate a timeout. Is this wrong (or broken)?

> Yes, it's wrong/broken.

>> (If I receive one of the "WANT" errors, I just restart the I/O,
>> however.) My program makes the assumption that if it hears nothing
>> on the read side of the socket during a period of time, then
>> something is wrong.

> But what if SSL_read didn't get enough data to decode anything? Then
> it will wind up blocking on the socket, which is exactly what you
> didn't want to happen.

>> Currently, I don't like the way my I/O loop is working, so I'm
>> probably going to switch to non-blocking anyway.

> If you never, ever want to block, just set the socket non-blocking.
> Otherwise, there can always be corner cases where you can block
> indefinitely.



Now that I think it through, I can imagine a situation where this would be true. Select would only indicate that there was "something" on the read fd. That data might be protocol-related (a renegotiation, or only part of a record), and there might be NO application-level data at all. My program would then call SSL_read() and block forever since no application data has arrived, just as you described.
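For anyone else who hits this, the pattern I had was essentially the following (a minimal sketch; read_with_timeout is just an illustrative name, and the 30-second timeout is an example):

#include <sys/select.h>
#include <openssl/ssl.h>

/* A sketch of the broken select()-plus-blocking-SSL_read() pattern
 * discussed above; fd and ssl are assumed to be an already connected
 * socket and its SSL object. */
int read_with_timeout(int fd, SSL *ssl, char *buf, int len)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    tv.tv_sec = 30;             /* example timeout */
    tv.tv_usec = 0;

    if (select(fd + 1, &readfds, NULL, NULL, &tv) <= 0)
        return 0;               /* timed out (or select error) */

    /* select() only says the *socket* is readable. If what arrived is
     * handshake traffic or a partial record, this blocking SSL_read()
     * can still hang, defeating the timeout entirely. */
    return SSL_read(ssl, buf, len);
}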

I think the thing that is most lacking in OpenSSL is the use of library-level threads apart from the application's main threads. I understand the need to be cross-platform, but if the library created a couple threads for handling I/O even when the application wasn't, I think it would go a long way to making the application programmer's life easier.

Perhaps this could be done similarly to the way mutexes are set up, by asking the application programmer to register a function that creates new threads. Obviously, those threads would need to be detached by default to avoid memory leaks.
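For comparison, here is roughly what the existing mutex hookup looks like with pthreads; a thread-creation hook could presumably be registered the same way. The commented-out registration at the end is purely hypothetical, no such function exists in OpenSSL:

#include <pthread.h>
#include <openssl/crypto.h>

/* The existing pattern: the application registers callbacks, and the
 * library calls them whenever it needs locking or a thread ID. */
static pthread_mutex_t *locks;

static void locking_cb(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&locks[n]);
    else
        pthread_mutex_unlock(&locks[n]);
}

static unsigned long id_cb(void)
{
    return (unsigned long)pthread_self();
}

static void setup_callbacks(void)
{
    int i, n = CRYPTO_num_locks();

    locks = OPENSSL_malloc(n * sizeof(pthread_mutex_t));
    for (i = 0; i < n; i++)
        pthread_mutex_init(&locks[i], NULL);
    CRYPTO_set_locking_callback(locking_cb);
    CRYPTO_set_id_callback(id_cb);

    /* HYPOTHETICAL: a thread-creation hook registered the same way.
     * This call does not exist; it is the proposal above. */
    /* CRYPTO_set_thread_create_callback(create_detached_thread); */
}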

Or maybe, there could be a "heartbeat" function supplied by OpenSSL that an application could call periodically to simulate threads. Basically, the application would call this "heartbeat" function repeatedly in order to give the library CPU time to perform its functions. An application programmer could just wrap this in a platform-specific threaded function. This would be similar to the way a Unix process gives up CPU time by making system calls. Any time the heartbeat was called, the library could move data between its various I/O objects and its buffers. The downside of this would be that the application could be burning a lot of CPU if nothing needs to be done.
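What I'm imagining is something like this (purely hypothetical: ssl_library_heartbeat and heartbeat_thread are invented names, and nothing like them exists in OpenSSL):

#include <unistd.h>

void ssl_library_heartbeat(void);   /* HYPOTHETICAL, does not exist */

/* The application wraps the heartbeat in a platform-specific thread,
 * donating CPU time for the library to shuttle pending data between
 * its I/O objects and internal buffers. */
static void *heartbeat_thread(void *arg)
{
    (void)arg;
    for (;;) {
        ssl_library_heartbeat();    /* pump all pending I/O */
        usleep(10000);              /* back off so an idle app
                                       doesn't burn CPU for nothing */
    }
    return NULL;
}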

I vote to move SSL into the kernel! :)

Sigh... I guess the only real way to let OpenSSL do its thing most effectively is to use non-blocking I/O. Which means I'll need to get unlazy and actually design a decent I/O loop.
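For completeness, a non-blocking version of the earlier sketch might look like this, letting SSL_get_error() drive what to select() on (again a sketch: fd and ssl are assumed already set up, and error paths are abbreviated):

#include <fcntl.h>
#include <sys/select.h>
#include <openssl/ssl.h>

/* Make the socket non-blocking once, then let SSL_read() tell us via
 * WANT_READ/WANT_WRITE which direction to wait on before retrying.
 * Because of renegotiation, a "read" can want the socket writable. */
int ssl_read_nonblocking(int fd, SSL *ssl, char *buf, int len)
{
    fd_set rfds, wfds;
    struct timeval tv;
    int n;

    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    for (;;) {
        n = SSL_read(ssl, buf, len);
        if (n > 0)
            return n;                   /* got application data */

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:
            FD_SET(fd, &rfds);
            break;
        case SSL_ERROR_WANT_WRITE:      /* e.g. mid-renegotiation */
            FD_SET(fd, &wfds);
            break;
        default:
            return -1;                  /* real error or clean shutdown */
        }

        tv.tv_sec = 30;                 /* example timeout */
        tv.tv_usec = 0;
        if (select(fd + 1, &rfds, &wfds, NULL, &tv) <= 0)
            return 0;                   /* timed out, honestly this time */
    }
}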

