On 10/23/2012 6:10 AM, Henri Beauchamp wrote:

> And in fact, llappcorehttp.cpp only touches CP_CONNECTION_LIMIT, so
> CP_PER_HOST_CONNECTION_LIMIT is kept at its default (8) whatever the
> TextureFetchConcurrency debug setting value, meaning the viewer never
> opens more than 8 simultaneous connections per HTTP server.
>
> I therefore think that, as it is used right now, TextureFetchConcurrency
> is not really useful (there's already a hard-coded limit of 40 in
> lltexturefetch.cpp for the max number of simultaneously queued texture
> fetching requests: perhaps this number should be affected by
> TextureFetchConcurrency instead?), and in fact, the CP_CONNECTION_LIMIT
> will need to be much greater than 8 or 12, once the new HTTP core is used
> for connecting to other servers than just texture servers (mesh,
> capabilities, etc).
> On the other hand, I agree that CP_PER_HOST_CONNECTION_LIMIT should be
> kept below a reasonable maximum value (8 sounds good for pipelining
> requests, but non-pipelining ones could probably allow up to 32 which
> is the default for per-host connections in most HTTP servers).

Actually, GP_CONNECTION_LIMIT (global) and CP_PER_HOST_CONNECTION_LIMIT
(per-class, per-host) aren't implemented yet, so only CP_CONNECTION_LIMIT
(and therefore TextureFetchConcurrency) has any effect.  _httpinternal.h
has the general to-do list for the next phases.  This is one area that
should get some attention, but a single control is all that's necessary
for this release.
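
For anyone following along, the current arrangement amounts to something
like the sketch below.  This is a hedged illustration only: the
setPolicyOption() helper and the clamping bounds are made-up stand-ins,
not the actual llcorehttp/llappcorehttp.cpp API.

// Hypothetical sketch only: the option names mirror the discussion above,
// but the setter and the clamping bounds are illustrative stand-ins.
#include <algorithm>
#include <cstdio>

enum EClassPolicy
{
    CP_CONNECTION_LIMIT,            // implemented: per-class connection cap
    CP_PER_HOST_CONNECTION_LIMIT    // not implemented yet: default (8) applies
};

// Stand-in for whatever setter the policy layer eventually exposes.
static void setPolicyOption(int policy_class, EClassPolicy option, long value)
{
    std::printf("class %d: option %d -> %ld\n", policy_class, option, value);
}

static void configureTexturePolicy(int texture_class, long texture_fetch_concurrency)
{
    // Clamp the TextureFetchConcurrency debug setting into a sane range
    // before handing it to the transport layer.
    const long limit = std::max(1L, std::min(texture_fetch_concurrency, 12L));

    // Only the per-class limit has any effect in this release;
    // CP_PER_HOST_CONNECTION_LIMIT is left at its built-in default.
    setPolicyOption(texture_class, CP_CONNECTION_LIMIT, limit);
}

int main()
{
    configureTexturePolicy(/* texture_class */ 1, /* TextureFetchConcurrency */ 8);
    return 0;
}

The point is simply that whatever TextureFetchConcurrency clamps to feeds
CP_CONNECTION_LIMIT and nothing else for now.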

As for the 40-request high-water mark: that's part of a trade-off
among several competing factors (a rough sketch follows the list):
1.  Deep pipelining to keep work available to low-level code (favors
large numbers).
2.  Responsiveness to changes in prioritization without having to
serialize and pass new priority values down to the lowest layers
(favors small numbers).
3.  Eventual balancing with other users of the same class to guarantee
fairness and liveness.  For textures, this will almost certainly include
meshes and possibly other caps-based requests that don't use SSL.
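
Here's a rough illustration of why that mark matters (again, purely a
sketch, not lltexturefetch.cpp code): requests still sitting in the
viewer-side priority queue can be re-ordered cheaply, while anything
already handed to the transport layer is committed, so a deeper mark
buys pipelining depth at the cost of responsiveness to priority changes
and headroom for other users of the class.

// Illustrative sketch of a high-water mark between a re-prioritizable
// queue and a committed transport layer (not actual viewer code).
#include <cstddef>
#include <cstdio>
#include <queue>
#include <vector>

struct Request
{
    int id;
    int priority;   // larger value = more urgent
};

struct ByPriority
{
    bool operator()(const Request& a, const Request& b) const
    {
        return a.priority < b.priority;   // max-heap on priority
    }
};

class FetchScheduler
{
public:
    explicit FetchScheduler(std::size_t high_water) : mHighWater(high_water) {}

    // Requests waiting here can still be re-prioritized or dropped cheaply.
    void enqueue(const Request& req) { mPending.push(req); }

    // Release work to the transport layer up to the high-water mark.
    // Anything released is committed: its priority can no longer be changed
    // without serializing new values down to the lowest layers.
    void issue()
    {
        while (mInFlight.size() < mHighWater && !mPending.empty())
        {
            mInFlight.push_back(mPending.top());
            mPending.pop();
        }
    }

    std::size_t inFlight() const { return mInFlight.size(); }

private:
    std::size_t mHighWater;
    std::priority_queue<Request, std::vector<Request>, ByPriority> mPending;
    std::vector<Request> mInFlight;
};

int main()
{
    FetchScheduler scheduler(40);   // the 40-request high-water mark
    for (int i = 0; i < 100; ++i)
    {
        scheduler.enqueue(Request{i, i % 7});
    }
    scheduler.issue();
    std::printf("committed to transport: %zu of 100\n", scheduler.inFlight());
    return 0;
}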

> Unless the router is buggy, it shouldn't be impacted by the number of
> open sockets (at least not under 60K sockets)... Some protocols, such
> as torrent can use hundreds or even thousands of sockets at once.

As Oz points out, routers are all affected by this and other factors.
And I'd go so far as to say that any router that implements NAT is
guaranteed to be broken by design.  But they're all broken in unique
and interesting ways.  Some are sensitive to connection concurrency.
Others to connections created over a time interval.  Or to counts of
dst_addr:src_addr pairs, to counts of (dst_addr:dst_port:src_addr:
src_port:ident) tuples, or to DNS activity interspersed with handshakes.
And then the failure modes are many.  I once tried to build a simple
control system to respond to failures, but one family of routers
takes five minutes to respond to a change in environment.  You can't
build a universally valid feedback system for this purpose with that
kind of delay in the response.

To quote Roy Batty:  I've seen things you people wouldn't believe.

Here's a chart I keep forwarding:
http://www.smallnetbuilder.com/lanwan/router-charts/bar/77-max-simul-conn
It's not officially endorsed by Linden, etc., but it's a useful measure
of one metric that is likely to predict problems.  At the bottom of
that chart you'll find members of router families that are both
very common and very often a source of problems in SL.

> The true limit is server side.

No, not really.  It is *a* limit, and one I'm deeply involved with.
But there are people who are having difficulty getting to the
servers at all and haven't even been able to enjoy their limits.

