Shiby Maria John wrote:
I was getting confused about setting the load balancer to be
sticky_session and setting lbfactor at the same time.
By session, I meant new sessions being created on the server.
Are they mutually exclusive?

sticky_session: if a request carries a session id, either via a JSESSIONID cookie or a ;jsessionid=... URL path parameter, mod_jk checks whether the session id contains a route, i.e. a suffix separated by a dot '.'.

If so, it checks whether the load balancer has a member whose route attribute equals the session route, or whose worker name equals the session route. If there is such a worker, and it is in OK state and not stopped, the request is sent there. If there is no such worker, or the worker is not usable, the request is handled as if the session id carried no route, or as if there were no session at all.

The route is put into the session id automatically by Tomcat if you set jvmRoute in server.xml. The route appended to the session id is equal to the value of jvmRoute.
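As a minimal sketch of the route lookup described above (mod_jk itself is written in C; the function name here is purely illustrative):

```python
def extract_route(session_id):
    """Return the route suffix of a session id, or None if there is none.

    Tomcat appends the jvmRoute after a dot, e.g. "0123456789ABCDEF.node1".
    """
    if session_id and "." in session_id:
        return session_id.rsplit(".", 1)[1]
    return None

# A session created on a Tomcat with jvmRoute="node1" routes back to node1:
print(extract_route("0123456789ABCDEF.node1"))  # node1
# A session id without a route falls through to normal load balancing:
print(extract_route("0123456789ABCDEF"))  # None
```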

No lbfactor is used for this decision.

lbfactor influences the decision to which member of a load balancer a request gets forwarded only if the request isn't already handled stickily, i.e. it doesn't carry a session id, or the session id does not contain a route.

In this case, mod_jk chooses the member with the least load. How load is counted depends on the method attribute of the load balancer. When looking for the least load, the load of each member is divided by its lbfactor, so that the load of a member with lbfactor 10 only counts as 1/10 of the load of a member with lbfactor 1. That way the member with lbfactor 10 will get approximately 10 times as many requests as the one with lbfactor 1.
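The selection rule above can be sketched like this (hypothetical names, and floating-point division for clarity; as noted below, mod_jk itself avoids the division with an integer scheme):

```python
# Two members: worker2 has accumulated more raw load, but normalized by
# its lbfactor of 10 it still looks less loaded than worker1.
members = [
    {"name": "worker1", "lbfactor": 1,  "load": 7},
    {"name": "worker2", "lbfactor": 10, "load": 30},
]

def next_member(members):
    """Pick the member whose load, divided by its lbfactor, is smallest."""
    return min(members, key=lambda m: m["load"] / m["lbfactor"])

# 30/10 = 3.0 beats 7/1 = 7.0, so worker2 gets the next request:
print(next_member(members)["name"])  # worker2
```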

Technically we don't really divide; we do something similar that avoids floating point but leads to the same result (see below).

Can you please explain the effect of setting both of those values along
with method=R?
Please clarify.

Method "R": each request forwarded to a member changes that member's load value. The load value is used as described above when deciding which member should get the next request that isn't already handled by stickiness.

R: if a request goes to a member, increase its load value by 1.
T: if a request goes to a member, increase the load value by the number of bytes read and written for this request.
B: if a request goes to a member, increase the load value by 1, and directly after the end of the request, decrease it by 1 (so the load value should equal the number of requests currently being processed in parallel by this member).
S: like R, but only count the request if it is not handled by stickiness.
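The four accounting rules can be summarized in one sketch (an illustration of the semantics described above, not mod_jk internals):

```python
def update_load(load, method, bytes_transferred=0, sticky=False, finished=False):
    """Return a member's new load value after one request event."""
    if method == "R":                      # count every request
        return load + 1
    if method == "T":                      # count traffic in bytes
        return load + bytes_transferred
    if method == "B":                      # busyness: in-flight request count
        return load - 1 if finished else load + 1
    if method == "S":                      # count only non-sticky requests
        return load if sticky else load + 1
    raise ValueError("unknown method: %s" % method)

print(update_load(0, "R"))                          # 1
print(update_load(0, "T", bytes_transferred=512))   # 512
print(update_load(3, "B", finished=True))           # 2
print(update_load(5, "S", sticky=True))             # 5
```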

More precisely, to avoid the division by lbfactor, we don't increase by one or by the number of bytes; instead we increase the load value by a multiple of 1 or of the number of bytes. The factor is an integer, and the factors for the members are proportional to 1/lbfactor.
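One way to get integer factors proportional to 1/lbfactor is to scale by a common multiple of all the lbfactors (a sketch of the idea; mod_jk's actual normalization may differ in detail):

```python
from math import lcm  # Python 3.9+

def multiplicity_factors(lbfactors):
    """Integer weights proportional to 1/lbfactor for each member."""
    common = lcm(*lbfactors)
    return [common // f for f in lbfactors]

# For the lbfactors 1, 10 and 50 used in the configuration later in
# this thread: the lbfactor-50 worker accumulates load 50 times more
# slowly per request, so it receives 50 times as many requests.
print(multiplicity_factors([1, 10, 50]))  # [50, 5, 1]
```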

The factors can be seen in the status worker output (I think the column is named "M" for multiplicity), and I think the resulting load values are in column "V". See the status worker and its legend.

Furthermore, to keep the influence of load local in time, the load values of all workers are divided by 2 approximately once a minute. This is true for all methods except "B", where the load value does not accumulate, so there's no need for decay.
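The once-a-minute decay is just a halving of every accumulated value (sketch with integer halving, since the load values are integers):

```python
def decay(loads):
    """Halve all accumulated load values, as happens roughly once a minute."""
    return [v // 2 for v in loads]

# Old load fades away geometrically, so recent traffic dominates:
print(decay([100, 40, 7]))  # [50, 20, 3]
```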


Regards,

Rainer

Rainer Jung <[EMAIL PROTECTED] ppdata.de>
01/14/2008 04:58 PM
To: Tomcat Users List <users@tomcat.apache.org>
Subject: Re: Doubt in how lbfactor works with load balancing of Tomcat cluster with Apache
Please respond to "Tomcat Users List" <[EMAIL PROTECTED] pache.org>



Hi Shiby,

Shiby Maria John wrote:
Hi,

This is my worker.properties for the Apache server, for clustering 3
instances of Tomcat on my machine.

# The advanced router LB worker
worker.list=router

# Define a worker using ajp13
worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13
worker.worker1.lbfactor=1

# Define another worker using ajp13
worker.worker2.port=9009
worker.worker2.host=localhost
worker.worker2.type=ajp13
worker.worker2.lbfactor=10

# Define the LB worker
worker.router.type=lb
worker.router.balance_workers=worker2,worker1,worker3
worker.router.method=B

# Define another worker using ajp13
worker.worker3.port=8029
worker.worker3.host=localhost
worker.worker3.type=ajp13
worker.worker3.lbfactor=50

I expected more sessions to be hitting worker3 since it has the max
lbfactor. But sessions are created equally on all servers.
Can someone please explain this?

What happens if you use the default "method", which is "R" = by
requests?

Is your app a normal webapp (throughput focused, many relatively
short-running requests)? Then "R" should be best.

Is there a reason you are talking about "sessions"? What is the
resource you need to balance: is it CPU (the traditional notion of
load) or rather memory (because your sessions are very big)? In the
latter case, you could also use "S". Although many people use "B", I
very rarely find a use case where "B" is a nice fit.

A nice way of following what's going on is to use a status worker:

worker.list=jkstatus
worker.jkstatus.type=status

JkMount /jkstatus jkstatus

and then point your browser to the URL /jkstatus

See: http://tomcat.apache.org/connectors-doc/reference/status.html


If "R" (or maybe "S") doesn't help you, let us know.

Regards,

Rainer

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
