On Thu, 2 Aug 2012 02:01:45 -0700
Dahlia Trimble <dahliatrim...@gmail.com> wrote:

> I can't help but think something is wrong here.  A single TCP/IP link
> is more than capable of saturating available network bandwidth with
> efficient transfers of large volumes of data provided the end-points
> can produce and consume quickly enough.
> 
> It seems part of the problem may lie in the request/response nature
> of HTTP. The viewer must request each asset individually as it
> discovers it needs it, and the provider endpoint then has to do
> whatever it does to make the asset available before it can begin
> sending it back to the client. This may happen almost instantly for
> assets already in a server memory cache, or take much longer
> depending on where the asset has to be pulled from or how it has to
> be prepared. Assuming this is the case, having
> multiple overlapping requests can improve the overall download rate
> of multiple assets by allowing some downloads to occur while others
> are prepared, albeit at the expense of additional connections. Having
> a persistent connection reduces some of the delays introduced by
> re-establishing a connection for each asset, but it does nothing to
> reduce the time that the server endpoint needs to acquire and prepare
> the asset to send.
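
To make the timing argument concrete, here is a rough Python sketch of
the overlapping-requests effect. The cap URL, query parameter and UUIDs
are placeholders for illustration, not the actual protocol:

    # Rough sketch (not actual viewer code): overlapping asset requests so
    # that the server's fetch/prepare time for one asset overlaps with the
    # transfer of another. Endpoint URL and UUIDs here are made up.
    import concurrent.futures
    import urllib.request

    ASSET_CAP = "http://sim.example.com/cap/GetTexture"   # hypothetical cap URL
    ASSET_IDS = [
        "11111111-1111-1111-1111-111111111111",
        "22222222-2222-2222-2222-222222222222",
        "33333333-3333-3333-3333-333333333333",
    ]

    def fetch(asset_id):
        # One request/response cycle: the server may spend noticeable time
        # locating and preparing the asset before the first byte comes back.
        url = "%s?texture_id=%s" % (ASSET_CAP, asset_id)
        with urllib.request.urlopen(url) as resp:
            return asset_id, resp.read()

    # Serial fetching costs roughly sum(prepare + transfer) per asset;
    # overlapping lets the preparation of one asset hide behind the transfer
    # of another, at the cost of extra in-flight requests/connections.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for asset_id, data in pool.map(fetch, ASSET_IDS):
            print(asset_id, len(data), "bytes")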
> 
> Now (assuming this isn't the case already) if the producer endpoint
> could be made aware of future requests, it could fetch and prepare
> the asset for transfer prior to the actual request being received,
> thereby reducing or eliminating the time delays inherent in the
> request-response paradigm. This *may* be as simple as adding
> additional optional UUIDs and parameters to the asset request for
> assets that the viewer would likely be requesting next. If this were
> the case, a single connection could have a higher effective
> throughput by ensuring minimal delays between request and response,
> and reduce the need for more simultaneous connections.
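
As an illustration only, the "hint the server about likely next assets"
idea might look roughly like this on the wire. The prefetch parameter is
hypothetical and does not exist in the current protocol, and the cap URL
is again a placeholder:

    # Illustrative only: piggybacking "likely next" asset IDs on a request so
    # the server could start fetching/preparing them before they are asked
    # for. The "prefetch" parameter is hypothetical, as is the cap URL.
    import urllib.parse
    import urllib.request

    ASSET_CAP = "http://sim.example.com/cap/GetTexture"   # hypothetical cap URL

    def fetch_with_hints(asset_id, likely_next):
        query = urllib.parse.urlencode({
            "texture_id": asset_id,             # the asset wanted right now
            "prefetch": ",".join(likely_next),  # hint: warm these up early
        })
        with urllib.request.urlopen("%s?%s" % (ASSET_CAP, query)) as resp:
            return resp.read()

    data = fetch_with_hints(
        "11111111-1111-1111-1111-111111111111",
        ["22222222-2222-2222-2222-222222222222",
         "33333333-3333-3333-3333-333333333333"])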
> 
> Such a solution may or may not be practical or easily implemented in
> existing infrastructure, or may not be as efficient as other designs.
> My point is more or less meant to bring more perspectives into the
> discussion by considering other bottlenecks that may exist, which if
> mitigated, could reduce the need for excessive connections.
> 
> Thoughts?
> -dahlia

Dahlia, an overwhelming amount of past experience has taught us that
Linden Lab is not interested in having you "think along" with them about
the server or the viewer-server protocol, whether with suggestions,
designs or anything else. This list is set up to get feedback about
viewer bugs, and to announce things they came up with (in most cases
things someone else came up with) that we are then allowed to talk
about. So it's a complete waste of your time to do anything other than
sit back and read what Linden Lab is going to push through, no matter
what arguments you make or what flaws you point out.

[ That being said, they came up with "pipelining" as the solution for
the problem you mention. That allows the viewer to send new requests
over the same connection without having to wait for replies to earlier
requests. It isn't the most efficient solution, but it is a lot better
than how things work now (still depending on the closed-source server
implementation, though). ]
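
For what it's worth, the mechanism looks roughly like this at the socket
level (placeholder host and asset IDs; plain HTTP/1.1 pipelining is also
poorly supported by many servers and proxies, which is part of why it
isn't the most efficient solution):

    # Minimal sketch of HTTP/1.1 pipelining: all requests are written on one
    # connection before any response is read; the server answers in order.
    # Host and asset IDs are placeholders.
    import socket

    HOST = "sim.example.com"    # hypothetical asset host
    PATHS = [
        "/cap/GetTexture?texture_id=11111111-1111-1111-1111-111111111111",
        "/cap/GetTexture?texture_id=22222222-2222-2222-2222-222222222222",
    ]

    with socket.create_connection((HOST, 80)) as sock:
        # Queue every request immediately; no waiting for earlier replies.
        for i, path in enumerate(PATHS):
            last = (i == len(PATHS) - 1)
            request = (
                "GET %s HTTP/1.1\r\n"
                "Host: %s\r\n"
                "Connection: %s\r\n"
                "\r\n" % (path, HOST, "close" if last else "keep-alive")
            )
            sock.sendall(request.encode("ascii"))

        # Responses come back in request order on the same connection; a real
        # client would parse Content-Length / chunked framing to split them.
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk

    print("received %d bytes covering %d pipelined responses"
          % (len(reply), len(PATHS)))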

-- 
Carlo Wood <ca...@alinoe.com>
