knitti wrote:
The problem would be "forgetting" to call ap_bclose() after ending a
connection, either because all data has been sent or because the
connection has been aborted. What I can read with some confidence is
that keeping a socket open beyond sending any data is not intentional,
and there is nothing (to me) which suggests that it would happen at all.

Logically, if that were the case, wouldn't you expect to run out of sockets within a few minutes of starting httpd? I am not saying there aren't any bugs in httpd, or that there are. It's fair to assume there are some, but to that extent, I couldn't imagine it. Just think about it for a second: what would the effect be if that were the case?
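
To put a rough number on it: a leaked descriptor per connection would hit the per-process open-file limit within seconds under any real traffic, not minutes or days. A throwaway sketch (illustration only, not httpd code) of what such a leak runs into:

    /* Illustration only: allocate sockets and never close them. */
    #include <sys/socket.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int n = 0;

        while (socket(AF_INET, SOCK_STREAM, 0) != -1)
            n++;

        /* Typically dies with EMFILE at the open-file limit,
         * often after just a few hundred descriptors. */
        printf("gave up after %d sockets: %s\n", n, strerror(errno));
        return 1;
    }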

Noob questions/statements ahead:

The code whose implications (aside from the clearly visible intention of
what the code *should* do) are least clear to me is lingering_close()
and lingerout() (is the latter a signal handler for SIGALRM?).
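
(For reference, the pattern those two names suggest is roughly the following -- a simplified sketch from memory, not the actual Apache source, so take the details with a grain of salt: half-close the socket, drain whatever the client still sends, and only then really close it.)

    /* Sketch of the lingering-close idea; hypothetical, not the
     * real lingering_close()/lingerout() from http_main.c. */
    #include <sys/socket.h>
    #include <signal.h>
    #include <unistd.h>

    static void lingerout(int sig)
    {
        /* SIGALRM handler; its only job is to interrupt read() */
        (void)sig;
    }

    static void lingering_close(int fd)
    {
        char junk[512];
        struct sigaction sa = { .sa_handler = lingerout };

        sigaction(SIGALRM, &sa, NULL); /* no SA_RESTART: read() gets EINTR */
        alarm(30);                     /* hard cap on the drain loop */
        shutdown(fd, SHUT_WR);         /* send our FIN, keep reading */
        while (read(fd, junk, sizeof junk) > 0)
            ;                          /* drain until EOF (client FIN) or EINTR */
        alarm(0);
        close(fd);                     /* only now is the descriptor released */
    }

In this sketch close() is always reached, either at EOF or at the alarm timeout, so the drain loop itself shouldn't leave sockets behind.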

I would suspect some kind of (signal?) race (not necessarily present),
in which ap_bclose() gets called on a different socket than intended
(thus shutting down another connection as a side effect). BUT since the
whole code doesn't run threaded, I can't come up with anything which
would actually suggest that.

I would appreciate it if someone told me whether my interpretation is
rather wrong or rather right ;)

I can't say either way with any certainty here. After cleaning up a lot of code in httpd in 2004 & 2005, I got really sick of looking at it. Maybe one day I will go back and do more.

But here are a few things to think about that I think will point you to where it is and how you might be able to affect how it reacts. I am not passing judgment on that, however, as I think in most cases more harm can be done than good, and it is way too dependent on each one's setup. But nevertheless, just think about this:

- An application needs sockets: it sends requests to create and destroy them, and keeps using them after they are created. Who does that, the kernel or the application?

- Who receives the socket creation and destruction requests, carries them out, and passes the handle back to the application when ready? The kernel, or the application?

- Who handles the signaling, meaning the handshake, opening, CLOSE_WAIT, retransmissions, etc.? The application or the kernel?

- So, in the end, if a socket is in CLOSE_WAIT, is it in the application's hands or the kernel's at that point? Meaning, was it already asked to be closed and it is now a signaling issue, or is it an application that hasn't asked to close the socket yet? (See the sketch after this list.) (;>

- If it is jammed in CLOSE_WAIT, is it because it hasn't sent the ACK for the client's request to close the socket?

- Or did it send the ACK to the client and is now waiting on the final ACK from that client?

- Or did it reach that point because the connection's three-way handshake was never fully established to start with, maybe?

- Or is it because the client just opened a socket, got what it needed, and didn't bother to close the socket properly, as it should?

- Now, where is the application, in this case httpd, involved here?

- Where can keep-alive in httpd help, or not?

- Where does proxying in pf help, or not?

- Where does keep-alive in the TCP stack (sysctl) help, or not?
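
To make the CLOSE_WAIT question concrete, here is a minimal sketch (illustration only, not httpd code) of how a server-side socket ends up parked there. The kernel ACKs the peer's FIN and moves the socket to CLOSE_WAIT entirely on its own; from then on it waits for the application:

    /* Illustration only: how a server-side socket gets parked in
     * CLOSE_WAIT -- the peer closed, we never called close(). */
    #include <unistd.h>

    void serve(int fd)
    {
        char buf[512];

        while (read(fd, buf, sizeof buf) > 0)
            ;               /* consume request data */

        /* read() returned 0: the client's FIN arrived, the kernel
         * already ACKed it and moved the socket to CLOSE_WAIT.
         * Only the application can finish the job: */
        /* close(fd); */    /* leave this out and netstat shows
                             * CLOSE_WAIT for as long as we live */
        pause();            /* keep the process (and the leak) alive */
    }

If that reading of the state machine is right, the ACK questions above answer themselves: the ACK of the client's FIN is the kernel's job and goes out right away, so a pile-up in CLOSE_WAIT usually means the application is sitting on descriptors it hasn't closed yet.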

So, I think I am done with this one. Knowing where you are in the exchange process, together with the answers to the above, will tell you where you can have the impact you are looking for.

I think I have tried to help as much as I should here and have pointed out where to look for each part of the issue, which is not in a single place and is not affected by a single aspect of httpd usage.

That's why there isn't a single answer to the questions here; it will always depend on your specific setup, traffic patterns, load, etc.

Hope it helps you some nevertheless and gives you something to think about.

I also have in the archive many tests on httpd already done, and some changes and their effects for various sysctl values. Some good, some bad, but it's there if you want to know more. However, I can't really recommend a specific solution, as it is way too dependent on your situation.

For example, you could reduce the keep-alive values in sysctl a lot if you want to help with CLOSE_WAIT, but at the same time this will increase all the exchange messages between valid connections as well. So, on one hand you will shorten the delay before your sockets close, but at the same time you will increase the load on other, already active connections. Which one is right, and what is the best setup for you? I can't say, nor can anyone else, really. The defaults are pretty sane; you can change some of them, yes, but then it's always a trade-off between two or more things.
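
As a purely hypothetical illustration (knob names and units from memory; verify against your own system's sysctl output before touching anything), these are the sort of values I mean:

    # Hypothetical illustration; names/units from memory -- verify
    # with sysctl(8) on your own system first.
    sysctl net.inet.tcp.keepidle    # idle time before the first probe
    sysctl net.inet.tcp.keepintvl   # interval between further probes
    # Lowering these reaps dead peers sooner, but every healthy idle
    # connection also gets probed more often -- the trade-off above.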

In all honesty, I can't tell you which one is best for you. I did many tests for myself, and what works for me may by no means work for you. But it's in the archive if you care to look. There was some very valid feedback on it as well, pointing out the pros and cons of it.

That's why I need to let it go now. It's not that it isn't interesting, but it's way too dependent on each one's setup to go deeper here. I think the original issue was fixed and addressed; as for what's left, unless it actually gives you a problem, other than a feeling of wanting it to look different, you should put it to rest, I think.

But there is a lot more that can be done to improve httpd under load for sure; it's just too user- and setup-specific. Unless you have more specific issues, with more data to show them, it's better left alone.

Best,

Daniel
