Hi,

it seems this problem is related to Apache rather than to Tomcat. There
might be a race condition with HTTP keep-alive.

I will ask my questions on the Apache mailing list.
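
In case anybody else hits the same 503s before I get an answer there: one
mitigation I am considering (untested here, and assuming the keep-alive
reuse race really is the cause) is to tell mod_proxy not to reuse backend
connections at all, e.g. with the disablereuse parameter:

  <Proxy balancer://tomcatcluster>
    # disablereuse=On keeps Apache from writing a request onto a pooled
    # connection that Tomcat may have just closed (the suspected race).
    # Trade-off: one fresh TCP connection per proxied request.
    BalancerMember http://tc1:80 max=100 route=tc1 disablereuse=On
    BalancerMember http://tc2:80 max=100 route=tc2 disablereuse=On
    BalancerMember http://tc3:80 max=100 route=tc3 disablereuse=On
    BalancerMember http://tc4:80 max=100 route=tc4 disablereuse=On
    Order Deny,Allow
    Allow from all
  </Proxy>

Setting "SetEnv proxy-nokeepalive 1" for the proxied requests should have
a similar effect.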

kind regards
Janning

On Wednesday 25 March 2009 10:51:55 Janning Vygen wrote:
> Hi,
>
> we encounter some strange errors with our Apache (v2.2) fronting 4 Tomcat
> (v5.5) servers.
>
> We are load balancing with sticky sessions to our Tomcats. Everything runs
> fine apart from the fact that 0.1% of all requests get a 503 HTTP status
> code. The rate rises under load, but it is at least 0.1% (way too much!)
> and occurs even in the early morning hours. The failing requests are not
> logged on the Tomcat side (no exceptions, nothing in access.log), only in
> the Apache access.log with status 503 and in the error log as shown below.
>
> 1. This is from our Apache error log:
>
> [Wed Mar 25 07:08:44 2009] [error] [client 74.6.22.94] (70014)End of file
> found: proxy: error reading status line from remote server tc1
> [Wed Mar 25 07:08:44 2009] [error] [client 74.6.22.94] proxy: Error reading
> from remote server returned by /site.html
>
> 2. This is from our Apache config (prefork MPM):
>
> KeepAlive Off
>
> ServerLimit           650
> ListenBackLog          50
> StartServers           50
> MinSpareServers       100
> MaxSpareServers       200
> MaxClients            300
> MaxRequestsPerChild 10000
>
> ProxyRequests Off
>
> <Proxy balancer://tomcatcluster>
>   BalancerMember http://tc1:80 max=100 route=tc1
>   BalancerMember http://tc2:80 max=100 route=tc2
>   BalancerMember http://tc3:80 max=100 route=tc3
>   BalancerMember http://tc4:80 max=100 route=tc4
>   Order Deny,Allow
>   Allow from all
> </Proxy>
>
> ProxyPass / balancer://tomcatcluster/ stickysession=JSESSIONID|jsessionid
> ProxyPassReverse / http://tc1/
> ProxyPassReverse / http://tc2/
> ProxyPassReverse / http://tc3/
> ProxyPassReverse / http://tc4/
> ProxyPreserveHost On
>
> 3. This is the connector from our Tomcat configuration:
> <Connector
>    port="80" URIEncoding="UTF-8" maxHttpHeaderSize="8192"
>    maxThreads="200" minSpareThreads="100" maxSpareThreads="200"
>    enableLookups="false" acceptCount="25"
>    connectionTimeout="10000" disableUploadTimeout="true"
>    maxKeepAliveRequests="100"
>    compression="on" compressionMinSize="2048"
>    noCompressionUserAgents="gozilla, traviata"
>    compressableMimeType="text/html,text/xml,text/css"
> />
>
> I tried to investigate this problem but was not even able to reproduce it
> on my test machine. Googling it, I found some other people having this
> problem, but no solution.
>
> As far as I understand from my investigation, "BalancerMember
> http://tc1:80 max=100" does not make sense with the prefork MPM; max
> should be 1 with the prefork model. But I do not think this is the reason.
>
> If max=100 is not effective, there "could" be situations where all 300
> MaxClients go to one Tomcat because of session affinity. But this is
> rather theoretical.
>
> Maybe I cannot use keep-alive on the Tomcat side, but it seems to work.
> Honestly: I am clueless.
>
> Do you have any hints as to why I encounter these connection errors?
>
> Or does anybody have a running configuration with Apache 2.2,
> mod_proxy_balancer and Tomcat 5.5 where these problems do not occur?
>
> kind regards
> Janning
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org


