Did some tests locally. Got 22-24% failed with keep-alive and 10-11% failed without keep-alive.

First observation: 64 connections is not enough for serving 500 clients. Added '#define FD_SETSIZE 2048' to the top of daemon.c. Got 0.01-0.02% failed (keep-alive) and 11-12% failed (close).
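The define only takes effect if it comes before the winsock headers are pulled in, roughly like this (a sketch; which winsock header daemon.c actually includes first may differ):

  /* Must be defined before any winsock header, otherwise the
     default limit of 64 sockets per fd_set applies. */
  #define FD_SETSIZE 2048

  #include <winsock2.h>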
Second observation: a 1 second timeout is unrealistically short. Changed the timeout to 240 seconds. Got 0.00% failed (keep-alive) and 10-11% failed (close).

Reverted FD_SETSIZE and used only the 240 second timeout. Got 22-24% failed (keep-alive) and 10-11% failed (close), the same as in the first test.

Brought back FD_SETSIZE 2048 and fully removed the timeout from the MHD start settings. Got 0.00% failed (keep-alive) and 11-12% failed (close).
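To be explicit, the timeout I am changing is the per-connection timeout passed in the MHD start settings, along these lines (a sketch, not the actual test code; the flags, port and 'handler' callback are placeholders):

  /* Sketch: 240 second per-connection timeout in the start settings.
     'handler' stands for the usual MHD_AccessHandlerCallback of the
     test; flags and port are placeholders too.  Dropping the
     MHD_OPTION_CONNECTION_TIMEOUT pair restores the default of 0,
     i.e. no timeout at all. */
  struct MHD_Daemon *d;
  d = MHD_start_daemon (MHD_USE_SELECT_INTERNALLY, 8080,
                        NULL, NULL, &handler, NULL,
                        MHD_OPTION_CONNECTION_TIMEOUT, (unsigned int) 240,
                        MHD_OPTION_END);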
Will fix the FD_SETSIZE redefinition on W32. It is strange that we have so many failed connections without keep-alive. Christian, should we somehow change the logic to check that a connection has some pending action before calculating the timeout?

--
Best Wishes,
Evgeny Grin

07.12.2015, 11:22, "Christian Grothoff" <[email protected]>:
> Evgeny: I'm not sure I see how this would relate to enabling/disabling
> keep-alive. What seemed more likely was that somehow keep-alive doesn't
> work for the specific example silvioprog is testing, so the 2nd request
> on the same TCP connection always fails (hence the ~50% failure rate).
>
> At least that is my guess.
>
> On 12/07/2015 09:07 AM, Evgeny Grin wrote:
>> By default, winsock supports only 64 connections in a single select().
>> You can override this by defining FD_SETSIZE to some higher value
>> before including the winsock headers. MHD detects the redefined value
>> and uses it when needed.
>> Just add -DFD_SETSIZE=16000 to the command line when building MHD.
>> Note that MHD automatically limits the number of connections to
>> FD_SETSIZE.
>>
>> One problem was discovered: MHD tried to redefine FD_SETSIZE to 1024
>> on W32, but this code was non-functional.
>>
>> --
>> Best Wishes,
>> Evgeny Grin
>>
>> 07.12.2015, 01:47, "silvioprog" <[email protected]>:
>>> Hello bro,
>>>
>>> Yes, it killed the problem definitively! 3:)
>>>
>>> I did many tests using different machines and now it works like a
>>> charm, thanks a lot for this fix.
>>>
>>> However ( ^^' ), I noticed a small problem, but it is related only to
>>> the keep-alive feature, even using previous versions like 0.9.46: when
>>> I use keep-alive, I get errors on many requests. When I disable
>>> keep-alive, I get only a few errors. It is very easy to reproduce, but
>>> you need a tool like JMeter:
>>>
>>> . get JMeter here [1] and execute its jar file (on Windows, I just
>>> double-click the `ApacheJMeter.jar` file);
>>> . get this jmx file [2] and open it in JMeter (if you prefer, I can
>>> explain what I used in this test);
>>> . expand the "Thread Group" tree, select the "Aggregate Report"
>>> options, and click the `Start` button after compiling and running
>>> this [3] example.
>>>
>>> On my machine, when I kept line [4] commented out (i.e., using
>>> keep-alive), the result was:
>>>
>>> Samples: 15000
>>> 95% line: 75
>>> *Error: 51.52%*
>>>
>>> But when I uncommented this line (i.e., using connection close instead
>>> of keep-alive) and retested, the result was:
>>>
>>> Samples: 15000
>>> 95% line: 572 // yes, it's OK for `connection: close`
>>> *Error: 0.56%*
>>>
>>> In short: with keep-alive you get ~51.52% errors, and with connection
>>> close only 0.56%. It is a little strange, because I did some tests
>>> with other servers (NodeJS, Jetty and Nginx) and they work fine, 0.0%
>>> errors. :-/
>>>
>>> [1] http://mirror.nbtelecom.com.br/apache//jmeter/binaries/apache-jmeter-2.13.zip
>>> [2] https://www.dropbox.com/s/wiu5gtsflj8omz0/HTTP%20Request.jmx?dl=1
>>> [3] http://pastebin.com/3wC5035F
>>> [4] MHD_add_response_header(response, MHD_HTTP_HEADER_CONNECTION, "close")
>>>
>>> On Sat, Dec 5, 2015 at 1:33 PM, Christian Grothoff
>>> <[email protected] <mailto:[email protected]>> wrote:
>>>
>>> Hi!
>>>
>>> Reading the code I noticed an #ifdef WINDOWS'ed call to shutdown()
>>> that would only be executed (in your particular setting) whenever yet
>>> another connection was accepted, possibly delaying the TCP connection
>>> teardown. I've tried to move the respective logic to happen earlier
>>> in SVN 36731. Please try this version, and let me know if this fixes
>>> your problem. (Again, the problem doesn't really hit me on GNU/Linux,
>>> so this may or may not be related.)
>>>
>>> Happy hacking!
>>>
>>> Christian
>>>
>>> --
>>> Silvio Clécio
