We currently have an SSL client/server setup that uses a basic "send request, receive response" architecture. In one scenario, we did something similar to the following:
-----------------------------
Client:
  1. Send request
  2. Delete connection

Server:
  1. Wait for connection
  2. Process request
  3. Send response
-----------------------------

The issue here was that the client never tried to receive a response (since it was unnecessary) and simply deleted the connection. However, the server still tried to send a response even though the client had already closed the connection. We expected SSL_write() to handle this scenario by returning an error code or something similar; instead, it simply crashed. The last line of code that executes is the following:

    int ret = SSL_write( ssl, &buffer[ bytesWritten ], length - bytesWritten );

We know that bytesWritten is within the bounds of the 'buffer' array and that 'length - bytesWritten' is always greater than 0, so we believe the issue is with SSL_write() itself. We are using OpenSSL 0.9.8.

Has anyone run into something like this, or have any ideas on what might be happening? Any suggestions would be appreciated.

Thanks,
Dusty
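
P.S. For reference, here is a simplified sketch of the server-side send loop in question. The variable names (ssl, buffer, bytesWritten, length) match our code, but the exact loop and error handling shown here are an approximation; the SSL_get_error() branch is the behaviour we expected, not what we actually observe.

    #include <stdio.h>
    #include <openssl/ssl.h>

    /* Send 'length' bytes of 'buffer' over an established SSL connection.
     * Returns 0 on success, -1 on failure. */
    static int send_response( SSL *ssl, const char *buffer, int length )
    {
        int bytesWritten = 0;

        while ( bytesWritten < length )
        {
            int ret = SSL_write( ssl, &buffer[ bytesWritten ], length - bytesWritten );

            if ( ret <= 0 )
            {
                /* This is where we expected to end up when the client has
                 * already closed the connection (e.g. SSL_ERROR_SYSCALL or
                 * SSL_ERROR_ZERO_RETURN), but we never get here; the
                 * process dies inside SSL_write() instead. */
                int err = SSL_get_error( ssl, ret );
                fprintf( stderr, "SSL_write failed, SSL_get_error = %d\n", err );
                return -1;
            }

            bytesWritten += ret;
        }

        return 0;
    }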