On 29/09/2009 09:15, André Warnier wrote:
aaime74 wrote:
...
Hi.
Kind of restarting from the beginning, I think that the first question
to ask is whether whatever method which actually does the rendering of
the maps, and which is "heavy" in terms of resources, is capable of
being interrupted cleanly in the middle. Is it capable itself of
checking regularly if it should continue doing the work ? Or else, if
you "shoot it down" does it mop up after itself, or does it leave stuff
to clean up all over the place ?
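A minimal sketch of that "check regularly whether to continue" idea, in Java (the class and method names here are hypothetical, not from any poster's code): the renderer polls a flag between units of work, so a cancel request stops it at a clean boundary rather than mid-write.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a renderer that checks a cancellation flag
// between rows, so it can stop cleanly partway through a render.
class TileRenderer {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    // Called from another thread (e.g. when the client disconnects).
    void cancel() { cancelled.set(true); }

    // Renders up to 'rows' rows; returns how many were actually done.
    int render(int rows) {
        int done = 0;
        for (int row = 0; row < rows; row++) {
            if (cancelled.get()) {
                break;  // stop at a row boundary; nothing half-written
            }
            // ... render one row of the map here ...
            done++;
        }
        return done;
    }
}
```

The key point is that cancellation is cooperative: the heavy loop volunteers to stop, instead of being killed with cleanup left all over the place.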
From an overall design point of view, it seems to me that you have two
very different types of processes going on: the first type is the
management of the HTTP requests, connections, protocol, etc., which is
something that should be efficient, light-weight and quick, and should
detect (whenever possible) that the client has broken the connection, and
so on. That part should also serve the response to the client, e.g. when
the full response is ready as a static object on disk.
Andre's approach is correct IMHO, it's a caching problem: decouple the
generation bit from the delivery bit and ensure that the data you need
to serve is always available at short notice, so the consequences of an
interruption aren't so big.
The second part is the generation of that content, which by its nature
is slow and heavy, but has a very simple interface ("create this
content"; "stop right now"; ..).
Consider pre-fetching the next possible grid squares into the cache -
you may also consider tuning your caching algorithm to determine the
rate & direction of travel (of a given client) across the map, so you
can avoid pre-fetching unnecessary units.
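A rough sketch of that direction-of-travel heuristic, assuming tiles are addressed by integer grid coordinates (the class name and representation are my own illustration):

```java
// Hypothetical sketch: given a client's previous and current tile
// coordinates, guess the next tile along the same direction of
// travel, as a candidate for pre-fetching into the cache.
class Prefetcher {
    // Tiles are {x, y} grid coordinates.
    static int[] nextTile(int[] prev, int[] curr) {
        int dx = curr[0] - prev[0];  // rate & direction of travel in x
        int dy = curr[1] - prev[1];  // ... and in y
        return new int[] { curr[0] + dx, curr[1] + dy };
    }
}
```

A client panning east from (2,3) to (3,3) would have (4,3) pre-rendered; clients that jump around unpredictably produce no useful delta, so you avoid pre-fetching unnecessary units for them.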
I'd be thinking of a queue of requests and a multi-threaded
(multi-server?) pool to service it.
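In Java that queue-plus-pool shape maps naturally onto an ExecutorService; a minimal sketch (class names and capacities are illustrative, not prescriptive):

```java
import java.util.concurrent.*;

// Hypothetical sketch: a bounded queue of render jobs serviced by a
// small worker pool, so heavy generation is decoupled from the
// request-handling threads.
class RenderQueue {
    private final ExecutorService pool;

    RenderQueue(int workers, int queueCapacity) {
        pool = new ThreadPoolExecutor(
                workers, workers,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity));
    }

    // The returned Future lets the HTTP side cancel a queued job
    // if the client goes away before it ever runs.
    Future<String> submit(Callable<String> job) {
        return pool.submit(job);
    }

    void shutdown() { pool.shutdown(); }
}
```

The bounded queue also gives you back-pressure for free: when demand outruns the pool, submissions are rejected instead of the server drowning.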
Slightly more leftfield: if you're effectively serving static images off
a disk, you could consider directing (via a forward) requests straight
to a public image directory and use APR/sendfile to serve them
statically - which would be very fast, delivery-wise, and would
eliminate the need for you to handle interruptions in your own code. (I
think.)
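One way to set that up: keep the generated images under a predictable public path and compute the forward target from the tile coordinates, leaving delivery entirely to the container. The path scheme and class below are illustrative assumptions, not a fixed convention.

```java
// Hypothetical sketch: map a tile request to the path of a
// pre-rendered image under a public directory, so the container
// (with APR/sendfile) serves it as a plain static file.
class TilePath {
    static String staticPath(int zoom, int x, int y) {
        return "/tiles/" + zoom + "/" + x + "/" + y + ".png";
    }
}

// In a servlet you might then forward to it:
//   request.getRequestDispatcher(TilePath.staticPath(z, x, y))
//          .forward(request, response);
```

Since your own code never streams the bytes, a client disconnect during delivery is the container's problem, not yours.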
Lastly, if it's a heavy-weight process then at some point, when demand
rises, you'll need a server (or servers) with lots of power and probably
a big chunk of RAM. Don't try to design around this.
p
Personally, I would tend to try to separate the two parts, and create a
separate process to handle the content generation, a bit like a database
back-end. It seems to me that it would then be easier to "wrap" this
process in a simple management wrapper which can interrupt the content
generation when receiving some signal from the first part, and clean up
properly, without tying up resources useful to the HTTP part in the
meantime.
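A bare-bones sketch of such a wrapper, assuming the generator is launched as a separate OS process (the "render-map" command and class name are placeholders of mine):

```java
// Hypothetical sketch: wrap the content generator in a separate OS
// process, so the HTTP side can interrupt it without tying up its
// own threads or memory.
class GeneratorProcess {
    private final Process process;

    GeneratorProcess(String... command) throws java.io.IOException {
        process = new ProcessBuilder(command).start();
    }

    // Signal from the HTTP part: stop generating and clean up.
    void stop() throws InterruptedException {
        process.destroy();              // polite (SIGTERM): let it mop up
        if (!process.waitFor(5, java.util.concurrent.TimeUnit.SECONDS)) {
            process.destroyForcibly();  // escalate only if it hangs
        }
    }

    boolean running() { return process.isAlive(); }
}
```

The graceful-then-forceful shutdown matters here: the generator gets a chance to delete partial output before it dies, which is exactly the "does it mop up after itself" question from the start of the thread.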
Such a separation may also simplify aspects such as caching of
previously generated content, or load-balancing several content generators.
Maybe you should have a look at Apache MINA for the content-generation
side ? (http://mina.apache.org/)
The "(whenever possible)" above refers to the fact that a number of
things outside of your control can come in the way of such detection :
proxies, firewalls and the like. If the ultimate client breaks the
connection, it is not guaranteed that Tomcat itself would notice this
right away.
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
---------------------------------------------------------------------