I have altered the code to write into the HTML which webapp generated
the page, so I can see where the partial responses are coming from.

What I see is....

I stop the webapp on box 2. I make a request and see part of the
page. The HTML shows that the partial page was generated by the
running webapp on box 1.

On a slightly different note, I have the mod_jk logging set to debug.
When it logs the packets it has received from the webapps, it doesn't
show the whole HTML page. For instance, I see two full packets traced
out, then a half-full one, and that's it, yet the page renders fine in
the browser. I have an example log if anyone would like to see it.


On 7/31/07, ben short <[EMAIL PROTECTED]> wrote:
> Rainer,
>
> Thanks for that. Yes, we are going for a mix of both really, but I'll
> run some benchmarks against both sticky and non-sticky sessions to see
> how it gets on.
>
> Yes, in production if we want to stop/undeploy/deploy a webapp we will
> set the worker status to stopped. This issue came up more as a what-if
> test.
>
> Regards
>
> Ben
>
> On 7/30/07, Rainer Jung <[EMAIL PROTECTED]> wrote:
> > Using sticky sessions will allow only requests without sessions to be
> > balanced freely. If you've either got many sessions, or your sessions
> > are relatively short, then load balancing will statistically still be
> > good. Only in the case of a few long-lasting sessions could you
> > experience the problem that some heavy-use sessions end up on the same
> > node.
> >
> > If you've got only two nodes and you are building an HA
> > infrastructure, the optimality of the load balancing is not that
> > important, because one node needs to be able to carry the full load
> > anyhow.
> >
> > Throughput-oriented webapps balance best with method "Request".
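> >
> > In workers.properties that is simply (Request is the default anyway,
> > so this only makes it explicit):
> >
> > worker.preslb.method=Request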
> >
> > Most installations I know observe good load balancing even though they
> > use stickiness. I would rate a deviation of +/- 15% load difference
> > relative to the arithmetic mean over a 10-minute interval as "good".
> >
> > Periods of low load don't count at all.
> >
> > Regards,
> >
> > Rainer
> >
> > ben short wrote:
> > > So how do setting sticky sessions to true and the default value of
> > > the load balancer directive 'method' (which defaults to Request)
> > > interact, then?
> > >
> > >
> > > On 7/30/07, Rainer Jung <[EMAIL PROTECTED]> wrote:
> > >> Apart from all the other things I wrote: don't turn off session
> > >> stickiness, even if you use replication. Turn it off only if you've
> > >> got a really good reason. The fact that switching the backend between
> > >> requests is possible with replication should not lead to the
> > >> assumption that it is a good idea to do this continuously.
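> > >>
> > >> For your workers.properties that would mean setting
> > >> worker.preslb.sticky_session=1 (or simply dropping the
> > >> sticky_session=0 line, since sticky sessions are the default).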
> > >>
> > >> ben short wrote:
> > >>> Hi Rainer,
> > >>>
> > >>> By "shutdown" I mean I have clicked the 'stop' link on the Tomcat
> > >>> manager page.
> > >>>
> > >>> I'm also using session replication between the two Tomcats.
> > >>>
> > >>> I have just tried turning off Firefox's cache and I see the same
> > >>> result.
> > >>>
> > >>> On 7/30/07, Rainer Jung <[EMAIL PROTECTED]> wrote:
> > >>>> Hi Ben,
> > >>>>
> > >>>> I don't know what exactly you mean by "shutdown", but mod_jk has no
> > >>>> memory/cache/buffer for parts or all of an earlier response. It does
> > >>>> buffer parts of a request for reuse during failover, but not
> > >>>> responses, and not between different requests.
> > >>>>
> > >>>> If the webapp is not available on the target system, there is no way
> > >>>> mod_jk could return 50 lines of correct response. Those 50 lines must
> > >>>> either come from your backend (whatever "shutdown" means there), or
> > >>>> from some other cache (browser, between browser and Apache,
> > >>>> mod_cache_* inside Apache, between Apache and Tomcat).
> > >>>>
> > >>>> Nevertheless, for production I would always use a cleaner way of
> > >>>> disabling a context: before undeploying, first set the activation of
> > >>>> the worker to stopped, which means it will no longer forward any
> > >>>> requests and the load balancer will transparently choose another
> > >>>> worker. No recovery attempts and no errors.
> > >>>>
> > >>>> If you use sessions without replication, you could also set a worker
> > >>>> to disabled before going into stopped. With disabled, requests for
> > >>>> existing sessions will still be forwarded, but no requests without
> > >>>> sessions. Depending on your session timing, the target might thus
> > >>>> slowly drop out of use.
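> > >>>>
> > >>>> In your workers.properties that could look roughly like this (a
> > >>>> sketch using the activation attribute of the balanced workers; the
> > >>>> activation can also be changed at runtime via the jkstatus page):
> > >>>>
> > >>>> # drain the node first: existing sessions only
> > >>>> worker.jcpres2.activation=D
> > >>>> # later, once the sessions are gone, stop it completely
> > >>>> worker.jcpres2.activation=S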
> > >>>>
> > >>>> Also add timeouts to your config. We have a new docs page for 1.2.24
> > >>>> (which will go live tomorrow). You can have a look at it under
> > >>>>
> > >>>> http://tomcat.apache.org/dev/dist/tomcat-connectors/jk/docs/jk-1.2.24/generic_howto/timeouts.html
> > >>>>
> > >>>> and consider using the option recovery_options.
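> > >>>>
> > >>>> As a rough starting point (the values here are only examples, see
> > >>>> the docs page above for what fits your application):
> > >>>>
> > >>>> worker.jcpres1.connect_timeout=10000
> > >>>> worker.jcpres1.prepost_timeout=10000
> > >>>> worker.jcpres1.reply_timeout=60000
> > >>>> worker.jcpres1.recovery_options=3
> > >>>> # and the same settings for jcpres2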
> > >>>>
> > >>>> Regards,
> > >>>>
> > >>>> Rainer
> > >>>>
> > >>>>
> > >>>> ben short wrote:
> > >>>>> Hi,
> > >>>>>
> > >>>>> I have an odd issue occurring with my Tomcat cluster: it serves ~50
> > >>>>> lines of the page from a stopped webapp.
> > >>>>>
> > >>>>> My setup is as follows...
> > >>>>>
> > >>>>> Box 1
> > >>>>>
> > >>>>> Apache running a mod_jk load balancer. It load balances between the
> > >>>>> Tomcat instance on this box and the one on box 2.
> > >>>>>
> > >>>>> Box 2
> > >>>>>
> > >>>>> Apache running a mod_jk load balancer. It load balances between the
> > >>>>> Tomcat instance on this box and the one on box 1.
> > >>>>>
> > >>>>> Software...
> > >>>>>
> > >>>>> OS RH 4
> > >>>>> Tomcat 6.0.13
> > >>>>> Java 1.6.0_01
> > >>>>> Apache 2.2.4
> > >>>>> Mod_jk 1.2.23
> > >>>>>
> > >>>>> workers.properties (same on both boxes)
> > >>>>>
> > >>>>> # JK Status worker config
> > >>>>>
> > >>>>> worker.list=jkstatus
> > >>>>> worker.jkstatus.type=status
> > >>>>>
> > >>>>> # Presentation Load Balancer Config
> > >>>>>
> > >>>>> worker.list=preslb
> > >>>>>
> > >>>>> worker.preslb.type=lb
> > >>>>> worker.preslb.balance_workers=jcpres1,jcpres2
> > >>>>> worker.preslb.sticky_session=0
> > >>>>>
> > >>>>> worker.jcpres1.port=8009
> > >>>>> worker.jcpres1.host=192.168.6.171
> > >>>>> worker.jcpres1.type=ajp13
> > >>>>> worker.jcpres1.lbfactor=1
> > >>>>> worker.jcpres1.fail_on_status=503,400,500,909
> > >>>>>
> > >>>>> worker.jcpres2.port=8009
> > >>>>> worker.jcpres2.host=192.168.6.174
> > >>>>> worker.jcpres2.type=ajp13
> > >>>>> worker.jcpres2.lbfactor=1
> > >>>>> worker.jcpres2.fail_on_status=503,400,500,909
> > >>>>>
> > >>>>>
> > >>>>> My problem...
> > >>>>>
> > >>>>> If I stop the webapp on box 2, wait for a while, and make a request,
> > >>>>> I get about 50 lines of the expected page in my browser (assuming
> > >>>>> the request went to the stopped webapp). On checking the jkstatus
> > >>>>> page I then see that the lb has set that worker to ERR. On
> > >>>>> refreshing the browser, the lb routes me to the running webapp and I
> > >>>>> get the expected page.
> > >>>>> After a while the jk lb will set the stopped worker into the REC
> > >>>>> state. If I then make another request I see the same thing: about 50
> > >>>>> lines of a page, and then the lb kicks that member out of the lb
> > >>>>> pool.
> >
> >
> >
>

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
