On 6/26/06, Darryl Miles <[EMAIL PROTECTED]> wrote:
> Bodo Moeller wrote:
>> On Thu, Jun 22, 2006 at 10:41:14PM +0100, Darryl Miles wrote:
>>
>>> SSL_CTX_set_mode(3)
>>>
>>> SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
>>>     Make it possible to retry SSL_write() with changed buffer
>>>     location (the buffer contents must stay the same). This is not
>>>     the default to avoid the misconception that non-blocking
>>>     SSL_write() behaves like non-blocking write().
>>>
>>> What is that all about? My application makes no guarantee what the
>>> exact address given to SSL_write() is; it only guarantees that the
>>> first so many bytes are my valid data. Why do I need to give it
>>> such guarantees?
> Thanks, this is the clearest explanation so far.
>> When using SSL_write() over a non-blocking transport channel, you may
>> have to call SSL_write() multiple times until all your data has been
>> transferred. In this case, the data buffer needs to stay constant
>> between calls until SSL_write() finally returns a positive number,
>> since (unless you are using SSL_MODE_ENABLE_PARTIAL_WRITE) some of the
>> calls to SSL_write() may read some of your data, and if the buffer
>> changes, you might end up inadvertently transferring incoherent data.
>> To help detect such potential application bugs, OpenSSL includes a
>> simple sanity check -- if SSL_write() is called again but the data
>> buffer *location* has changed, OpenSSL suspects that this is a mistake
>> and returns an error.
"Some of the calls to SSL_write() may read some of your data", I am
still not such how the reading of data impacts the write operation. Are
you saying that when WANT_READ is returned from SSL_write() the OpenSSL
library has already committed some number of bytes from the buffer given
but because its returning -1 WANT_READ it is failing to report that
situation back to the application during the first SSL_write() call ?
An under reporting of committed bytes if you want to call it that. This
would also imply you can't reduce the amount of data to SSL_write()
since a subsequent call that failed. Or implies that OpenSSL may access
bytes outside of the range given by the currently executing SSL_write(),
in that its somehow still using the buffer address given during a
previous SSL_write() call.
The protocol has some quirks, not the least of which are the need to
handle alerts (many of which are fatal) and the need to handle a
maximum segment size. If there's data in the queue to be read, it
needs to be read before any data can be sent out (as several alerts
require the entire connection to be severed).
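
To make that concrete, here is a rough sketch (mine, not from the
documentation) of the shape a non-blocking SSL_write() caller takes;
ssl, fd, buf and len are assumed from context:

    int n = SSL_write(ssl, buf, len);
    if (n <= 0) {
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:
            /* A handshake or alert record must be read first: wait
             * until fd is readable, then retry SSL_write() with the
             * same buf and len. */
            break;
        case SSL_ERROR_WANT_WRITE:
            /* The transport buffer is full: wait until fd is
             * writable, then retry with the same buf and len. */
            break;
        default:
            /* Fatal error or alert: tear the connection down. */
            break;
        }
    }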
> I still have not gotten to the bottom of the entire scope of
> situation(s) that can cause an SSL_write() to return -1 WANT_READ. If
> it's only renegotiation that can, then this is always instigated
> either by an SSL_renegotiate() (from my side) or by an SSL_read() that
> causes a renegotiation request (from the remote side) to be processed.
For maximum security, you need to read any pending data before you
write. This is because there's another type of data that the protocol
uses: alerts. A good number of these are fatal and require the entire
connection to be destroyed and recreated from scratch.
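
For what it's worth, a fatal alert surfaces to the application through
SSL_get_error(); a minimal sketch (the error-string plumbing via
<openssl/err.h> is illustrative only):

    int n = SSL_read(ssl, buf, sizeof(buf));
    if (n <= 0) {
        int err = SSL_get_error(ssl, n);
        if (err == SSL_ERROR_ZERO_RETURN) {
            /* Clean shutdown: the peer sent a close_notify alert. */
        } else if (err == SSL_ERROR_SSL) {
            /* Fatal alert or protocol error; the session is dead.
             * ERR_get_error() holds the details. */
            fprintf(stderr, "%s\n",
                    ERR_error_string(ERR_get_error(), NULL));
        }
    }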
> Back to your clarification on the modes.
>
> It is still unclear how this would work; here is the strictest pseudo
> code case I can think up. This is where:
>
> * the exact address of the 4096th byte to send is always the same
> for every repeated SSL_write() call, and
> * I don't change or reduce the amount of data to be written during
> subsequent SSL_write() calls, until all 4096 bytes of the first
> SSL_write() have been committed into OpenSSL.
> char pinned_buffer[4096];
> int want_write_len = 4096;
> int offset = 0;
> int left = want_write_len;
>
> do {
>     int n = SSL_write(ssl, &pinned_buffer[offset], left);
>     if (n < 0) {
>         sleep_as_necessary();
>     } else if (n > 0) {
>         offset += n;
>         left -= n;
>     }
> } while (left > 0);
> In practice many applications may copy their data to a local stack
> buffer and give that stack buffer to SSL_write(). This means the data
> shuffles up and the next 4096-byte window is used for SSL_write().
>
> So what I am asking now is: what is the _LEAST_ strict case that is
> also allowed, if the one above is what I see as the most strict usage?
> The need for this dumbfounds me. If SSL_write() is returning (<= 0)
> then it should not have taken any data from my buffer, nor be
> retaining my buffer address (or accessing data outside the scope of
> the function call).
I understand that you have prioritized traffic, but you've already
stated that /you have committed that data to the connection/. OpenSSL
has every right to take some of the data (for example, if you pass a
buffer with a length of 4096 and the underlying interface handles only
1440-byte writes at a time) and run its internal operations -- such as
updating its packet count, calculating the HMAC, and running the part
of the buffer it can process next through the block or stream cipher --
before it determines if it needs to read. (This is because
cryptographic operations can be time-consuming, and more data can come
in while all those operations are being done.)
Basically, it's akin to a database that has autocommit set to 1. It's
already lost its previous state before the data was pulled out of the
buffer in the first place.
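
If the all-or-nothing reporting is the sticking point, as I understand
it SSL_MODE_ENABLE_PARTIAL_WRITE changes that: SSL_write() then returns
as soon as a complete record has gone out, so progress is reported per
record instead of per call. A sketch:

    SSL_CTX_set_mode(ctx, SSL_MODE_ENABLE_PARTIAL_WRITE);
    /* Now SSL_write(ssl, buf, 4096) may return, say, 1440 once a
     * record is written; advance your offset by the return value,
     * much as you would for write(2). */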
> It is also valid for me to "change my mind" about exactly what
> application data I want to write at the next SSL_write() call. This
> may be a change of application data contents or a change of the
> amount of data to write (length).
It's valid for you to change your mind at write(). SSL_write() does
not have precisely the same semantics... because SSL_write has already
changed the state of the SSL object. (This is a case where multiple
return values would be useful, but we don't have them, so SSL_write
returns -1 to indicate that none of the application data has yet gone
out on the interface.) It can't "roll back" to the state it was in
before it returned WANT_READ.
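
In code terms, my reading of the contract after a -1 is:

    n = SSL_write(ssl, buf, len);   /* -1, SSL_ERROR_WANT_READ */
    /* ... service the read side ... */
    n = SSL_write(ssl, buf, len);   /* retry: same address, same
                                     * contents, same length */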
> In fact I have an application that does exactly this; it implements a
> priority queue of packetized data, and the decision about what to send
> next is made right at the moment it knows it can call write().
So it depends on a write() semantic that isn't matched by SSL_write().
The best thing to do in that case is to loop it through the reading
process until it says WANT_WRITE.
> When you say "change the buffer location", do you mean the exact
> offset given to SSL_write() in the 2nd argument? Or do you mean that
> for repeated calls to SSL_write() the last byte's address (the 4096th
> byte from the example) remains constant until OpenSSL gives an
> indication that the last byte has been committed?
The address of the buffer that you send. The contents and length of
the buffer need to stay the same (at least until the TLS 1.2 maximum
segment length extension is put in, at which point you can know what
amount of data to send to SSL_write and know what it hasn't sent yet).
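
So if a buffering scheme can't pin the address but can keep the
contents and length stable, the documented escape hatch is the mode
from the man page quoted at the top:

    /* Relaxes only the address check; the contents and length must
     * still be identical on the retry. */
    SSL_CTX_set_mode(ctx, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);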
> Here I am asking "which buffer when?" and "what location?" in relation
> to the previous failed SSL_write().
>
> In the case of my example usage when copying to the stack, I have a
> much larger buffering system in place and a small temporary window on
> the stack is used to prepare data for SSL_write(). This is because
> SSL_write() doesn't support IOV/writev() scatter-gather buffers, which
> I am using with unencrypted sockets.
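
One way to keep a gather-style writer within the rules (a sketch of
mine, assuming you can afford one staging copy) is to coalesce the
iovecs into a buffer that persists across SSL_write() retries, rather
than a stack window that moves:

    /* Hypothetical staging area kept in the connection object so its
     * address, contents, and length survive across retries. */
    struct conn {
        SSL *ssl;
        char stage[4096];   /* persists between SSL_write() calls */
        int  stage_len;     /* 0 when nothing is pending */
    };

    /* Fill stage from the iovec queue only when stage_len == 0; once
     * filled, retry SSL_write(c->ssl, c->stage, c->stage_len)
     * unchanged until it succeeds, then reset stage_len to 0. */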
> It's still unclear what guarantees my application needs to make to
> OpenSSL; the sanity check looks at first glance to be there for its
> own sake, and redundant. Is there a direct relationship between
> SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER and SSL_MODE_ENABLE_PARTIAL_WRITE?
I do not know the answer to this question. (I'm sorry. I will look
at the code and see what goes on in its state machine.)
> Sorry for more questions. Seeing an example of what is right and what
> is wrong in annotated code form would be ideal.
>
> Darryl
The documentation needs to be much more clear, I agree.
-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                                [EMAIL PROTECTED]