knitti wrote:
On 12/12/07, Daniel Ouellet <[EMAIL PROTECTED]> wrote:
I am only
saying that using PF in front of httpd will reduce the possible number
of httpd CLOSE_WAIT states you might see. By default httpd can only support
up to 256 connections, unless you increase that limit and recompile it.
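
(For reference, the 256 figure is the compiled-in default in the Apache 1.3
httpd shipped in base; the snippet below is roughly what its httpd.h
contains, quoted from memory, so treat it as a paraphrase. MaxClients in
httpd.conf cannot be raised above this value without rebuilding.)

    /* Approximately the relevant bit of Apache 1.3's include/httpd.h;
     * the MaxClients directive is capped by this value, so going past
     * 256 concurrent connections means editing the define and
     * recompiling httpd. */
    #ifndef HARD_SERVER_LIMIT
    #define HARD_SERVER_LIMIT 256
    #endif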

I don't understand why pf would reduce this. Every single CLOSE_WAIT
stems from a formerly established connection, and pf can do nothing
to convince httpd to close its socket. No rogue clients are involved here.
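
(A small illustration of that point, sketched in plain C with a made-up
port and no error handling: once the client goes away, read() returns 0,
and the socket stays in CLOSE_WAIT for as long as the application fails
to close() it.)

    /* Minimal sketch: a server that accepts one connection and then
     * deliberately never close()s it.  After the client disconnects,
     * read() returns 0 (the peer's FIN has arrived) and the socket sits
     * in CLOSE_WAIT until close() is called or the process exits, which
     * is exactly the situation httpd would be in. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int s, fd;
        char buf[512];
        struct sockaddr_in sin;

        s = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);              /* made-up test port */
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&sin, sizeof(sin));
        listen(s, 5);

        fd = accept(s, NULL, NULL);  /* only established connections */
        while (read(fd, buf, sizeof(buf)) > 0)
            ;                        /* 0 means the peer sent its FIN */
        pause();   /* no close(fd): netstat -an now shows CLOSE_WAIT */
        return 0;
    }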

lead you down that path, then I am sorry. What will affect your CLOSE_WAIT
time (when you reach that point) are the TCP stack values, which I am
reluctant to suggest adjusting, as they can certainly do far more harm
than good.

I don't think there is a sysctl for that. TCP connections don't expire by
default unless you make them, and the same should go for a half-closed
one. There are perfectly legitimate reasons for long-open half-closed
TCP connections.
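
(One such legitimate case: a client that is done sending but still wants
the rest of the reply can half-close its side with shutdown(); the
server's end then sits in CLOSE_WAIT, quite correctly, until the server
finishes writing and closes. A rough sketch:)

    #include <sys/socket.h>
    #include <unistd.h>

    /* Half-close a connected socket: send our FIN but keep reading the
     * peer's reply.  From the peer's point of view its socket is now in
     * CLOSE_WAIT, and it legitimately stays there until the peer is
     * done writing and calls close() itself. */
    void drain_after_half_close(int fd) {
        char buf[512];

        shutdown(fd, SHUT_WR);            /* we are done writing */
        while (read(fd, buf, sizeof(buf)) > 0)
            ;                             /* still reading the reply */
        close(fd);
    }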

TCP does its handshake and takes actions based on a number of timeout values, some of them fixed in the RFCs, so that behavior can be affected in good or bad ways. A proper combination of them, with proper values, can achieve the requested effect. Some examples off the top of my head are below. I am not saying it's wise to go and change them for no good reason, and they will affect the efficiency of your sockets in various ways that I wouldn't claim to be able to explain fully without mistakes. I only point out that there are ways to achieve what you are looking for, maybe indirectly, but I think there are.


net.inet.tcp.keepidle           # Time a connection must be idle before
                                # a keepalive is sent.

net.inet.tcp.keepinittime       # Used by the syncache to time out SYN
                                # requests.

net.inet.tcp.keepintvl          # Interval between keepalives sent to
                                # remote machines.

net.inet.tcp.rstppslimit        # Maximum number of outgoing TCP RST
                                # packets per second.  RST packets
                                # exceeding this value are rate limited
                                # and will not go out from the node.
                                # A negative value disables rate
                                # limitation.

net.inet.tcp.synbucketlimit     # Maximum number of entries allowed per
                                # hash bucket in the TCP SYN cache.

net.inet.tcp.syncachelimit      # Maximum number of entries allowed in
                                # the TCP SYN cache.
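
(Worth noting: the keepidle/keepintvl timers only kick in for sockets that
have asked for keepalives in the first place; the application has to opt
in per socket, roughly as below, and the sysctls then control the timing
system-wide. This is only a sketch, not a suggestion to change anything:)

    #include <sys/socket.h>

    /* Enable TCP keepalives on one socket.  The kernel will then probe
     * the connection when it has been idle, using the system-wide
     * net.inet.tcp.keepidle and net.inet.tcp.keepintvl values, so dead
     * peers eventually get torn down instead of lingering forever. */
    int enable_keepalive(int fd) {
        int on = 1;

        return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    }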


My point with PF here was that it would reduce the possible number of
CLOSE_WAIT states you could see in the first place, which is one
of the original goals of the question.

Why?

OK, I could be wrong, and I am sure someone with a huge stick will hit me with it if I say something stupid. And/or there might be something I am overlooking or not understanding fully, which is certainly possible as well. (;>

But if httpd receives a fake connection that does not complete the full handshake, isn't there a socket opened and/or used by httpd for that fake connection anyway? Meaning it tries to communicate with that fake source, can't, eventually closes, and (that's where maybe I am failing here) ends up in CLOSE_WAIT, maybe?

Or are you saying that the ONLY way a socket can end up in the CLOSE_WAIT state is if it was fully and properly opened in the first place? If so, then I stand corrected and I was/am wrong about that part of my suggestions. So, is that the case then?
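
(To frame the question, my rough mental model of the server side is the
sketch below: embryonic connections live in the kernel's SYN cache, and
accept() only ever hands httpd a descriptor for a fully established
connection. The part I am unsure about is whether anything short of a
completed handshake can reach httpd at all.)

    #include <sys/socket.h>

    /* Sketch of the server-side accept path.  Half-open entries (SYN
     * seen, handshake not finished) stay in the kernel's SYN cache;
     * accept() returns a descriptor only once a connection is fully
     * ESTABLISHED.  If that is the whole story, a spoofed SYN that
     * never completes can never become a socket httpd owns, and so can
     * never reach CLOSE_WAIT. */
    int get_client(int listen_fd) {
        return accept(listen_fd, NULL, NULL);
    }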

Best,

Daniel
