I've read that when SSL_read / SSL_write returns SSL_ERROR_WANT_READ /
SSL_ERROR_WANT_WRITE, then once the required readable / writable condition
has been met, the call to SSL_read / SSL_write must be repeated with
EXACTLY the same parameters as the previous call that returned the error.
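
For concreteness, this is the retry pattern I mean -- a minimal sketch of
my understanding, with a helper name and error handling that are mine,
not from the man pages:

#include <openssl/ssl.h>

/* Returns bytes written, 0 if the caller must wait for the socket and
 * then call again with the SAME buf / len, or -1 on a fatal error. */
static int try_ssl_write(SSL *ssl, const void *buf, int len)
{
    int n = SSL_write(ssl, buf, len);
    if (n > 0)
        return n;

    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_READ:   /* wait until the socket is readable  */
    case SSL_ERROR_WANT_WRITE:  /* wait until the socket is writable  */
        return 0;               /* then retry with the same arguments */
    default:
        return -1;              /* treat everything else as fatal     */
    }
}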

Can anyone explain why this requirement exists?  

I can understand how partially read / written cipher blocks could be
annoying, or how an HMAC might cover lots of data, so that access to the
previously read / written user data could be helpful to the library.  In
both cases, though, it seems like the library must keep whatever internal
state is necessary to do the proper computations anyway, considering that
a user could call SSL_write a single byte at a time if they so chose and
the library would still have to work.
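
The byte-at-a-time case I have in mind is just something like this
hypothetical loop (assuming a blocking socket for simplicity):

static int write_one_byte_at_a_time(SSL *ssl, const unsigned char *buf,
                                    int len)
{
    for (int i = 0; i < len; i++) {
        /* Each call covers a different one-byte window of the buffer,
         * so any record / HMAC state has to live inside the library
         * regardless of what the caller passes. */
        if (SSL_write(ssl, &buf[i], 1) != 1)
            return -1;   /* error handling omitted for brevity */
    }
    return len;
}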

In my existing TCP code (which I'm now porting to use TLS), whenever a
higher-level write would block, I would make a copy of the remaining bytes
to be written and throw it on a low-level queue to be sent later.  That
way the caller doesn't have to coordinate memory management of their write
buffers with my write function -- a kind of send-it-and-forget-it
functionality (until the low-level queue fills up).  This approach doesn't
work with OpenSSL: when a write blocks and I later call SSL_write again,
it apparently requires the EXACT same parameters -- meaning even the same
pointers -- even if the buffers being pointed at contain the same
unwritten data as the previous call.
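
Roughly, the pattern looks like this -- a simplified sketch with
illustrative names, not my actual code:

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct pending {
    struct pending *next;
    size_t          len;
    size_t          off;
    unsigned char   data[];   /* copy of the caller's unwritten bytes */
};

/* Called when the low-level write would block: copy the leftover bytes
 * so the caller can reuse or free its own buffer immediately. */
static struct pending *queue_leftover(const void *buf, size_t len)
{
    struct pending *p = malloc(sizeof(*p) + len);
    if (p != NULL) {
        p->next = NULL;
        p->len  = len;
        p->off  = 0;
        memcpy(p->data, buf, len);
    }
    return p;
}

/* Later, when the socket is writable again, flush from the copy.  With
 * plain write(2) this is fine; with SSL_write the retry would be coming
 * from a different pointer than the call that returned
 * SSL_ERROR_WANT_WRITE. */
static int flush_pending(int fd, struct pending *p)
{
    while (p->off < p->len) {
        ssize_t n = write(fd, p->data + p->off, p->len - p->off);
        if (n < 0)
            return (errno == EAGAIN || errno == EWOULDBLOCK) ? 0 : -1;
        p->off += (size_t)n;
    }
    return 1;   /* fully flushed */
}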

So, what am I missing?  Why does this requirement exist?

Cheers!

-----
John Lane Schultz
Spread Concepts LLC
Cell: 443 838 2200
