Raimo Niskanen wrote:
Interesting for me too, and most probably for others. It became an
interesting discussion of my CLOSE_WAIT "problem" after all...
To summarize (as I see it):
* pf "synproxy state" does not affect these CLOSE_WAIT sockets since
the SYN proxy is only active during connection establishment.
But it is good to use anyway since it protects httpd from spoofed
connection attempts.
Why not? Just test it out. What happens if you get a DDoS on your httpd,
for example, or someone just tries to connect to it? A SYN is sent to
httpd, the stack creates a socket for the connection request, sends a
SYN-ACK back to the (possibly spoofed) source IP and then waits for the
final ACK that will never come. So, what does this do to your httpd?
How many sockets will you have pending responses here? You use one
socket per user connection to your httpd. Say you have 25 real users
accessing your httpd and 1,000 fake users, without pf in the path.
I will ask you this simple question then: how many sockets will your
httpd use, and how many will end up waiting on a reply and then waiting
to close?
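For reference, a synproxy rule in pf.conf looks roughly like this; the
interface and port here are only placeholders for the sketch:

    # pf completes the TCP handshake itself and only passes the
    # connection on to httpd once the client's final ACK has arrived
    pass in on em0 proto tcp from any to any port 80 \
        flags S/SA synproxy state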
* Reducing httpd.conf:KeepAliveTimeout decreases the number of
sockets in CLOSE_WAIT. I had it at 150 seconds (my mistake,
probably the origin of the problem). The default is 15 seconds.
My setting is now 10 seconds, problem probably solved (see the
httpd.conf sketch below).
Thanks to all who contributed to the solution!
Glad it showed you where to look.
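For reference, the directives discussed above would look something like
this in httpd.conf (values taken from the summary, shown only as a
sketch):

    # keep persistent connections, but reap idle ones quickly
    KeepAlive On
    KeepAliveTimeout 10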
* An httpd server socket enters CLOSE_WAIT when the client
closes (or half-closes) its end and sends FIN to the
server TCP stack, which replies with ACK; the server socket
then enters CLOSE_WAIT. The socket leaves CLOSE_WAIT when
httpd calls close() on it.
The close process has stages as well. The client asks to close (sends
FIN), the server replies (ACK), and when the server later closes its
side the client has to acknowledge that too. Did you verify that the
client sent the last required ACK to the server's own close?
There is also a keepalive in the TCP stack itself, and if I remember
correctly the default interval set by the RFC is not a small amount of
time (on the order of hours). So, if that were the case for each
connection, don't you think you would run out of sockets just a few
minutes after starting httpd?
Something can be done to help the lost ones, but leaving them alone is
no problem once you have fixed what was your original problem above.
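To make the distinction clear: the keepalive in the TCP stack is a
per-socket option the server turns on, separate from httpd's KeepAlive
directive, and its probe timing comes from the OS. A minimal sketch of
enabling it (illustrative only, not httpd's actual code):

    /* Enable TCP-level keepalive on an accepted socket.  The probe
     * interval is controlled by the kernel, not by this call. */
    #include <sys/socket.h>

    static int
    enable_tcp_keepalive(int sock)
    {
            int on = 1;

            return setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
                &on, sizeof(on));
    }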
So, the remaining question is why httpd does not close the socket.
Even though KeepAlive is in effect, since the client has closed its
end no more requests can come in on it, and the server
should be able to notice that the client has closed its
socket end, either by recv() returning 0 or from a poll()
return value. The server should also be able to know whether
it has more data to send to complete the reply.
I see no reason to hold the socket in CLOSE_WAIT for the whole
KeepAliveTimeout, and I am interested to learn why.
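For illustration, the detection described above would look roughly like
this on the server side (a sketch of the mechanism only, not httpd's
real code):

    /* After the client sends FIN, poll() reports the socket readable
     * and recv() returns 0.  Calling close() is what takes the socket
     * out of CLOSE_WAIT (into LAST_ACK). */
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void
    check_client_close(int sock)
    {
            struct pollfd pfd = { .fd = sock, .events = POLLIN };
            char buf[1];

            if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN)) {
                    /* MSG_PEEK leaves any pending request data in place */
                    if (recv(sock, buf, sizeof(buf), MSG_PEEK) == 0)
                            close(sock);
            }
    }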
Again, are you sure the whole RFC close sequence was completed? Who is
waiting on whom here? Also, I think you may be confusing a few things
here. Complaining that httpd does not close a socket while "KeepAlive
is in effect" is contradictory. That is the point of KeepAlive in
httpd: to keep the socket open for the next possible request from that
same user. KeepAlive is not what makes the socket close. httpd will
keep it open precisely to avoid spending the resources to fork another
httpd process if one is needed, which is far more costly for the OS
than just keeping the one already open ready for that user.
If you are so stuck on this, then disable httpd KeepAlive altogether.
I sure wouldn't recommend doing that, but if that's what makes your
life better, then please go ahead and do it.
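That would just be, in httpd.conf (again, not recommended):

    # every request then gets its own connection, closed right after
    KeepAlive Off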
Then there are some possible adjustments in the sysctl settings for the
TCP stack in the OS, but I will not go back there again if just httpd
KeepAlive already gives you a conceptual problem. Doing so, I would
only provide you with lots of rope to hang your httpd with, or maybe
yourself. (;>