Well, it sounds like someone needs to modify the client, then, if you want to use SSL/TLS.
"should return a FIN before RST" is an oversimplification, and possibly incorrect, depending on what "should" means in this context. There is no simple explanation of this.

When a process closes a TCP connection normally, the stack will send a packet with FIN set to the peer, if the conversation is still open (i.e., if the stack hasn't received notification that the conversation is aborted). That packet might be the last outbound data packet, or it might not contain any application data. A normal close is the only case where a FIN should be generated. (Note that the application doesn't necessarily have to do the close explicitly; many OSes will perform a normal close if the application exits without explicitly closing the conversation and there's no unreceived inbound data.)

An RST can be generated for several reasons:

- An application can cause an RST to be generated by closing the conversation without having read all the inbound data that the stack has received; the RST tells the peer that not all the data it sent was processed by the application.

- An application can also cause an RST by disabling normal close handshaking (e.g. by setting the SO_LINGER option, with the sockets API) and closing the conversation.

- An RST may be generated by the stack if it receives data it can't deliver. For example, the application might receive all available data and close the conversation normally (causing a FIN to be sent), but then the peer might try to send more data. That generates an RST - the application on the other end of the conversation can't receive that data.

Note that a FIN doesn't in itself mean the sending side can't receive any more data. TCP uses a half-close mechanism, and FIN just means "I'm not going to send any more data". An application can close its outbound side of the conversation and still be able to receive; with the sockets API, that's done using the shutdown system call.
But most applications just close their end of the conversation entirely, so the stack marks the conversation as no longer able to receive, and inbound data will generate an RST to the sender.

The normal TCP conversation close looks like this:

- One side sends a FIN, possibly also ACKing received data, to say "hey, I'm done sending".

- When the other side is done sending too, it sends a FIN of its own. This FIN will ACK the first FIN, if the sending side hasn't already ACKed that FIN.

- The side that sent the first FIN sends back a bare ACK of the second FIN, and the conversation is done.

This is typically described as "FIN, FIN-ACK, ACK", but in fact the first FIN can be ACKed as part of a regular send from the other side, *if* the first side only half-closed (and so can still receive data). In practice, most applications end up closing the conversation when it's idle, or when there's just a little bit of data to be ACKed by the side that does the passive close, and so what you see in the trace is "FIN-ACK, FIN-ACK, ACK".

The traces don't contain enough information to say for sure what the applications are doing. (This is true in general of what appears on the wire, with TCP.) But one plausible scenario for the first trace is this:

1. Server sends close_notify.
2. Client ACKs server's packet containing the close_notify. (This packet also contains data.)
3. Server calls close(), terminating its side of the conversation for both sending and receiving. This causes a FIN to flow to the client. (This packet also ACKs the data from #2.)
4. Client receives server's FIN; it knows the server won't send any more data, but not whether the server is willing to receive more data.
5. Client sends close_notify.
6. Server is no longer willing to receive, so generates an RST.

Note that while the trace shows steps 4 and 5 in that order, they could be reversed; this is just an accident of timing.
In fact, the server side could send its FIN at any point from 1 to 5, and the client could notice it at any point after it's sent.

Most importantly, for our purposes, there's no reason to believe the client "waits to receive the 'Ack Fin' before sending the close_notify", as you say below. It's almost certainly just a timing artifact. The client application sees the close_notify and says, "OK, I'll send a close_notify back". It very likely hasn't done a second receive operation, which it would need to do to know that a FIN has been received. And in fact the FIN may *not* have been received when the client application calls the API to send that close_notify. Just because that's how things show up in the trace doesn't mean that's the order in which the two applications issued their API calls. (I also don't know where this trace came from - if it's coming from the stack, or a wire trace, or what. I don't recognize the format.) And in any case, waiting for a FIN wouldn't help with the failing case. The FIN just means the other side is saying it won't send any more data.

In the second trace, on the other hand, the server *might* have explicitly aborted the conversation, which would explain why we see an RST and no FIN. That would be poor behavior, but certainly not unheard of; many people who can't be bothered to learn how to use TCP write programs with it anyway. Or it might have done something strange, like half-close the receiving side of the conversation but not the sending side. Or it might have done a normal close, but the FIN doesn't show up in the trace for timing reasons.

But I suspect the RST was generated because the server closed the conversation normally without reading all the data the stack had already received from the client. That will cause an RST immediately, to let the client side know that not all the data it sent was processed by the application.
The data in question might have been from the packet immediately preceding the close_notify from the server, or from the one after it, which could have arrived between the time the server sent its close_notify and when it called the API to close its end of the conversation. (The Winsock documentation actually recommends using a half-close, followed by receiving and discarding data until the peer's half-close is received, to prevent this very condition. But again, few people bother to learn TCP or the sockets API before trying to use them.)

And *this* is why Rescorla's book says "sometimes you'll get an RST, so deal with it". There are a lot of conditions where an RST can occur. TLS adds an application-level conversation protocol (the close_notify alert) on top of TCP's conversation protocol so that you'll know whether the peer was done sending data. What you won't know is whether it processed all of your data; if you need that guarantee, you have to add another application-level protocol on top of TLS. (In this case, FTP will supply that, in the form of its response messages.)

So I don't see a simple solution to your problem. I'd be tempted to wrap the FTP client in another program and filter out the failing return code if I've received the server's response message to my last command. I don't remember off the top of my head whether there's a straightforward FTP API on zOS.

--
Michael Wojcik
Technology Specialist, Micro Focus

> -----Original Message-----
> From: owner-openssl-us...@openssl.org [mailto:owner-openssl-
> us...@openssl.org] On Behalf Of Donald J.
> Sent: Friday, 08 August, 2014 22:11
> To: openssl-users@openssl.org
> Subject: Re: RST after close_notify
>
> The FTP client is a batch mainframe process which
> must get return code zero, or someone gets called
> in the middle of the night. I have been working
> with IBM support which claims that the server should
> return a Fin before Rst.
> So I will probably turn this
> problem over to our PC server group.
>
> I don't really understand why in the successful sequence,
> the client sends "Ack Psh" and waits to receive the
> "Ack Fin" before sending the close_notify. But in the
> failing sequence the client sends "Ack Psh", then
> immediately sends close_notify without any waiting.
>
> If the server is closing its connection after sending the
> close_notify, it probably wouldn't send the "Ack Fin" in
> the successful sequence?
>
> I guess IBM is saying the server should send "Ack Fin",
> wait for Ack from client, and server then would send
> the "Ack Rst"?
>
> --
> Donald J.
> dona...@4email.net
>
> On Fri, Aug 8, 2014, at 02:03 PM, Michael Wojcik wrote:
> > > -----Original Message-----
> > > From: owner-openssl-us...@openssl.org [mailto:owner-openssl-
> > > us...@openssl.org] On Behalf Of Donald J.
> > > Sent: Friday, 08 August, 2014 15:34
> > > To: openssl-users@openssl.org
> > > Subject: RST after close_notify
> > >
> > > I have an issue with an FTP client issuing a DIR command to a Windows
> > > FTP server.
> > > A normal packet trace is shown in sequence 1 below. An "Ack Fin" is
> > > received from the Windows FTP server and the DIR command completes
> > > successfully.
> >
> > Both of your traces below show an RST in the final packet, not a FIN.
> >
> > > In the 2nd sequence, each side exchanges close_notify, but no "Fin"
> > > flags are set.
> > > Windows FTP server ends with an "Ack Rst". After receiving the Reset
> > > packet, the FTP client issues a "connection reset" message and sets
> > > an error code.
> > > Is that the correct thing to do?
> >
> > Are you questioning the server's behavior, or the client's?
> >
> > Probably what happened is the server sent its close_notify and then
> > closed its end of the connection without waiting for the client's
> > close_notify response. See Eric Rescorla's /SSL and TLS/ book, 8.10, for
> > further discussion.
> > This is unfriendly behavior by the server, in my
> > opinion, but common enough for Rescorla to discuss it.
> >
> > It's also possible the server did an abortive close, which would be the
> > Wrong Thing to do, but the former case is more likely. And in any event,
> > your client couldn't distinguish between the two. (And what would you do
> > about it anyway? If someone else's server behaves badly, you have to deal
> > with it in some fashion.)
> >
> > How the client handles receiving an RST (generally manifests as a return
> > code of -1 from send or recv, with errno set to ECONNRESET [1]; with
> > OpenSSL you should get SSL_ERROR_SYSCALL and check errno) is a matter of
> > taste. Often you do want to report that the connection was reset. In this
> > case, though, since a reset is not unexpected AND you know you've
> > received all the data from the server - you got its close_notify - it's
> > better to silently ignore it.
> >
> > In short, the logic should be something like this:
> >
> >     if RST-received
> >         if we were trying to send data
> >             check for a close_notify from the peer
> >         end-if
> >         if close_notify not already received from peer
> >             treat as failure
> >         end-if
> >         close socket and clean up
> >     end-if
> >
> > [1] This assumes the application, if it's running in a POSIX environment,
> > has set the disposition of the SIGPIPE signal to "ignore". SIGPIPE is a
> > kluge for applications that don't check the result of the write/send
> > family of system calls. Any well-written application should ignore it.
> >
> > --
> > Michael Wojcik
> > Technology Specialist, Micro Focus
> >
> > ______________________________________________________________________
> > OpenSSL Project                             http://www.openssl.org
> > User Support Mailing List              openssl-users@openssl.org
> > Automated List Manager                    majord...@openssl.org
>
> --
> http://www.fastmail.fm - Same, same, but different...