Chunked encoding with the Coyote HTTP/1.1 connector seems to be very
inefficient.  I have three major comments about it, which I will first
summarize and then explain in more detail for those interested.



1) There are some performance problems with the current implementation of
chunked encoding.



2) I would like to be able to turn off chunked encoding completely, but this
does not appear to be an option.



3) Chunked encoding should probably only be used when returning content of
indeterminate length.  Static files served up by the default servlet should
perhaps not be chunked.



------------------------------

1 - Performance problems

I did some informal testing over a fast LAN.  My server is configured with
compression enabled, so Tomcat returns responses with Transfer-Encoding:
chunked and Content-Encoding: gzip.  I was using IE 6 as the client.



A request to retrieve a static HTML page from Tomcat with the 4.1.27 Coyote
connector results in the following:



<- (From Server to Client)

<- HTTP/1.1 200 OK (No Data)

<- HTTP Continuation (3 Bytes of data - chunk header size=0xa)

-> TCP Acknowledgement

<- HTTP Continuation (10 bytes of data - GZIP Header)

<- Empty packet with no HTTP header or data (just \r\n)

-> TCP Acknowledgement



Repeat the following 7 times (TCP ack every two packets)

<- HTTP Continuation (5 Bytes of data - chunk header size=0x200)

<- HTTP Continuation (512 Bytes of data *see below)

-> TCP Acknowledgement

<- Empty packet with no HTTP header or data (just \r\n)



...



<- HTTP Continuation (5 Bytes of data - chunk header size=0x18d)

-> TCP Acknowledgement

<- HTTP Continuation (397 Bytes of data)

<- Empty packet with no HTTP header or data (just \r\n)

-> TCP Acknowledgement

<- HTTP Continuation (5 Bytes of data - chunk header size=0)



This took a total of 43 packets and 9.2 ms to complete.
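The framing in the trace above is easy to reproduce on paper.  Here is a
minimal stand-alone sketch of HTTP/1.1 chunked framing (each chunk is a hex
size line, the data, and a trailing CRLF, ended by a zero-size chunk); the
3981-byte body matches the 7 x 512 + 397 bytes in the trace:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Illustration of HTTP/1.1 chunked framing: each chunk is
// "<hex size>\r\n<data>\r\n", and the stream ends with a zero-size chunk.
public class ChunkFraming {

    // Frame a payload into chunks of at most chunkSize bytes.
    static byte[] frame(byte[] payload, int chunkSize) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int off = 0;
        while (off < payload.length) {
            int len = Math.min(chunkSize, payload.length - off);
            // Chunk header: size in hex, then CRLF.
            out.write((Integer.toHexString(len) + "\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            out.write(payload, off, len);
            // Each chunk's data is followed by its own CRLF.
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            off += len;
        }
        // Terminating zero-size chunk.
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = new byte[3981];      // 7 * 512 + 397, as in the trace
        byte[] framed = frame(body, 512);
        System.out.println("overhead = " + (framed.length - body.length)
                + " bytes");               // prints: overhead = 61 bytes
    }
}
```

Sixty-one bytes of framing overhead is trivial in itself; the cost in the
trace comes from each chunk header and each trailing CRLF going out in its
own small packet.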



* The 512-byte limit on the largest packets seems to be because the gzip
compression filter uses the default constructor for GZIPOutputStream, which
has a default output buffer size of 512 bytes.
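If that is the cause, the two-argument GZIPOutputStream constructor would
let the filter hand the connector larger writes.  A minimal stand-alone
sketch (the 8 KB buffer size is my illustrative choice, not anything Tomcat
actually uses):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// new GZIPOutputStream(out) uses a 512-byte internal deflate buffer, so
// compressed output is emitted in writes of at most 512 bytes.  The
// two-argument constructor lets the caller pick a larger buffer.
public class GzipBuffer {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Default would be: new GZIPOutputStream(sink)  -> 512-byte buffer.
        // With an 8 KB buffer, each flush produces much larger writes:
        try (GZIPOutputStream gz = new GZIPOutputStream(sink, 8192)) {
            gz.write(new byte[16384]);   // compress 16 KB of zeros
        }
        System.out.println("compressed size = " + sink.size());
    }
}
```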





I tried the same test with Apache 2.0.44 using mod_gzip.  Apache does not
chunk static HTML files.  I got:

<- HTTP/1.1 200 OK (972 bytes of data)

<- HTTP Continuation (1331 Bytes of data)

-> TCP Acknowledgement

<- HTTP Continuation (1332 Bytes of data)

<- HTTP Continuation (666 Bytes of data)



This took a total of 5 packets and 7 ms to complete.



These informal tests were conducted over a fast LAN connection.  I'm
particularly concerned about the total number of packets used.  Over a
slower connection, all of those extra small packets and acknowledgements
would hurt even more.





------------------------------

2 - Turning off Chunking

The older HTTP/1.1 connector had an allowChunking attribute that let you
turn chunking off completely.  I can't find a way to do this with the
Coyote connector.  My investigation into all of this started because I am
looking into an IE bug that causes it to open many connections, without
closing them, when it receives gzipped chunked responses.  I think I need
to be able to disable chunking entirely, and I feel this should be a
feature of the Coyote connector.
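For reference, this is roughly what the old connector's setting looked like
in server.xml (the className below is the Tomcat 4.x non-Coyote HTTP/1.1
connector; treat the exact class name and port as illustrative):

```xml
<!-- Legacy (non-Coyote) HTTP/1.1 connector with chunking disabled -->
<Connector className="org.apache.catalina.connector.http.HttpConnector"
           port="8080" allowChunking="false" />
```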



-------------------------------

3 - When to chunk

I thought that chunking was invented to handle serving dynamically
generated content whose size is not known in advance.  I believe that
neither IIS nor Apache chunks static content.  Is there any way for Tomcat
to behave similarly - could the default servlet do something to prevent the
connector from chunking the data it serves up?
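To illustrate the distinction, here is a plain-Java sketch (no servlet API;
the names are mine, not Tomcat's) of the decision an HTTP/1.1 server can
make: when the byte count is known up front, the response carries
Content-Length and needs no chunked framing at all.

```java
// Sketch: an HTTP/1.1 response only needs Transfer-Encoding: chunked when
// the body length is unknown; a static file's length is known in advance.
public class ResponseFraming {
    static String headers(Integer contentLength) {
        StringBuilder sb = new StringBuilder("HTTP/1.1 200 OK\r\n");
        if (contentLength != null) {
            // Length known (e.g. a static file): send it and skip chunking.
            sb.append("Content-Length: ").append(contentLength).append("\r\n");
        } else {
            // Length unknown (e.g. dynamic output): fall back to chunking.
            sb.append("Transfer-Encoding: chunked\r\n");
        }
        return sb.append("\r\n").toString();
    }

    public static void main(String[] args) {
        System.out.print(headers(3981));   // static file: length known
        System.out.print(headers(null));   // dynamic page: must chunk
    }
}
```

Whether the default servlet setting the content length is enough to stop
the Coyote connector (and its compression filter) from chunking is exactly
what I'm asking.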





If you made it this far, thanks for taking the time to read this and
consider my questions.











