Peter Kennard wrote:
At 23:07 3/4/2007, you wrote:
But since you can't send the response without finishing the reading of the input stream - the entire question doesn't seem to make sense.

If the input pipe is slow (e.g. a cellphone on a slow link) and you are sending a transaction whose first part initiates a big operation (like a database lookup), the lookup can be happening while the rest of the data is still being read in. That is, it is useful to be able to read input in small chunks as it comes in, and the client can be tuned to chunk appropriately for the transaction(s).

It's not really useful for Tomcat though, given that the server is designed to be a Servlet Container rather than a multipurpose network application.

Tomcat mainly handles two cases: 1) read the headers, then send a response (e.g. GET); 2) read the headers and process the body, then send a response (e.g. POST).

available() may work for this, depending on the buffering scheme of Tomcat's protocol handler.

On writing the reply, if you call flushBuffer() it will dispatch whatever is in the buffer (as HTTP chunks in IP packets) to the client even if input reading is incomplete. Doing so when you can will reduce round-trip latency and the time your socket is tied up. A gross example would be a transaction that processes a large file and returns it to the client: if the processing is done serially, the client could be receiving the result file even before it had finished sending the source file.
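The read-a-chunk / write-a-chunk pattern described above can be sketched in plain Java streams. This is a hypothetical illustration, not Tomcat code: in a servlet, `in` would be `request.getInputStream()`, `out` would be `response.getOutputStream()`, and the `flush()` call plays the role of `response.flushBuffer()`.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamEcho {
    // Process the body in small chunks and emit output as each chunk is
    // handled, instead of buffering the whole body before replying.
    static void pump(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        // read() blocks only until *some* bytes arrive, not the whole body
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);  // start sending the reply before input is done
            out.flush();           // analogous to response.flushBuffer()
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        pump(new ByteArrayInputStream("hello".getBytes()), sink);
        System.out.println(sink.toString());
    }
}
```

With a slow client and serial processing, each flush can overlap with the client still uploading the remainder of the request.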

It seems the servlet API was not upgraded to handle incremental chunks in a flexible, general manner when chunked transfer was added in HTTP/1.1. This is irrespective of how chunks may be juggled by any proxy or other front end; I am simply dealing with how you *can* handle them on the receiving end.

Why would the servlet API need to do that, when chunking is something that happens during the response rather than the request?

Your analysis is from the point of view of someone who's (if you'll forgive the analogy) trying to force a car to work like a canoe.


Given that, I'd suggest that if your app client is sending a large amount of data that can be processed in discrete chunks, you might as well split the request into a series of separate smaller requests.

If you've got control of the client you could set an X-header that indicates its position in the series, and another that groups them.

At least then you gain some robustness and your server can indicate to the client if it's missing one of the series.
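The split-into-a-series idea above could be planned out on the client like this. The header names `X-Upload-Group` and `X-Upload-Part` are made up for illustration; any names the client and server agree on would do.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChunkedUpload {
    // Split one large payload into a numbered series of smaller requests.
    // Returns the headers each request would carry; the actual POSTs are
    // left out of this sketch.
    static List<Map<String, String>> plan(byte[] data, int partSize, String groupId) {
        List<Map<String, String>> parts = new ArrayList<>();
        int total = (data.length + partSize - 1) / partSize;  // ceiling division
        for (int i = 0; i < total; i++) {
            Map<String, String> headers = new LinkedHashMap<>();
            headers.put("X-Upload-Group", groupId);                   // groups the series
            headers.put("X-Upload-Part", (i + 1) + "/" + total);      // position in series
            parts.add(headers);
            // here the client would POST data[i*partSize .. min((i+1)*partSize, data.length))
            // with these headers attached
        }
        return parts;
    }

    public static void main(String[] args) {
        List<Map<String, String>> parts = plan(new byte[10000], 4096, "job-42");
        System.out.println(parts.size());                       // 3
        System.out.println(parts.get(2).get("X-Upload-Part"));  // 3/3
    }
}
```

Because each part carries its group and its position out of the total, the server can detect and report a missing part, which is the robustness gain mentioned above.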


Having said all that, though, I'd have started from scratch or built a web service, as I'm not sure what I'd really be gaining by using Tomcat.


p

PK

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]