On Sep 21, 2008, at 18:53, Maurizio Lotauro wrote:
> HTTP is stateless, but this has nothing to do with keeping the
> connection open.
> I don't think that a browser reopens the connection for every part
> that composes a web page.
> And I don't think that you close and reopen the connection for every
> file you want to upload or download to/from an ftp server :-)

I'm sorry, but you are confused.  HTTP is not like FTP.  Persistent 
connections were defined in HTTP 1.1, but as a matter of convenience; 
they are not required (RFC 2616, Section 8 uses the term "SHOULD" 
instead of "MUST" when discussing the negotiation of persistent 
connections).

As of HTTP 1.1, persistent connections are the default.  But still, 
browsers do open multiple connections, otherwise all retrievals would 
be serialized, and they are not.  They may not create a new connection 
for each resource, but their reuse of connections is pretty much 
arbitrary.  For the record, prior to 1.1, most servers did not support 
persistent connections, and browsers did in fact have to close and 
reopen the connection for every file they wanted to upload or download. 
  Some servers supported the Keep-Alive mechanism, but--again--it was 
more of a convenience, and neither side could ever count on the other 
supporting it.
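
Just to make the negotiation concrete, here is a rough sketch in 
Python (not ICS code; example.com stands in for any HTTP 1.1 server).  
It issues two requests over the same connection object; the second one 
reuses the socket only if the server didn't answer the first with 
"Connection: close":

    import http.client

    # Two GETs over one HTTP 1.1 connection, which is persistent by
    # default.  The server may still opt out by answering
    # "Connection: close", in which case the second request needs a
    # fresh connection.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                          # drain the body before reusing the socket
    print(resp.getheader("Connection"))  # "keep-alive", "close", or absent
    conn.request("GET", "/favicon.ico")  # reuses the socket if it is still open
    print(conn.getresponse().status)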

This is why, in my opinion, coercing HTTP into being the de facto 
standard protocol for all communication on the Internet (as a lot of 
people are trying to do) is stupid.  There are better transfer 
protocols out there.  But that is a rant for another day.

>> From what I understand now about NTLM (still need
>> to learn about it!), it requires the cycle to happen within the same
>> session, which counters the RFC, and thus is an exceptional case.
>
> The Basic is the only one that is handled in one step. All the others
> need a negotiation. IIRC the NTLM anomaly is that it is the server
> that starts it with the first 401 answer.

Again, wrong.  All HTTP authentication uses the same negotiation 
mechanism (even since the days of HTTP 1.0), because it authenticates 
the *request* (a rough Python sketch follows the list):

1. The client requests a resource from the server for the very first 
time, not knowing that the resource requires authentication.

2. The server responds with a 401 status code and a WWW-Authenticate 
header listing the authentication scheme(s) it supports (such as Basic 
and Digest).

(At this point, servers or clients using older versions of the HTTP 
protocol would have closed the connection.)

3. The client re-sends the request with the appropriate credentials and 
the server performs the authorization.

4. On every subsequent request, the client includes the same 
credentials and the server re-authenticates the request.  (Digest has 
some differences, but generally fits this scheme).
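
In code, the first three steps look roughly like this (a sketch using 
Python's http.client rather than THttpCli; host, path and credentials 
are made up):

    import base64
    import http.client

    conn = http.client.HTTPConnection("intranet.example.com")

    # Step 1: request the resource without credentials.
    conn.request("GET", "/private/")
    resp = conn.getresponse()
    resp.read()

    # Step 2: the server answers 401 and names the scheme(s) it accepts.
    print(resp.status, resp.getheader("WWW-Authenticate"))

    # Step 3: re-send the same request with Basic credentials attached.
    token = base64.b64encode(b"user:secret").decode("ascii")
    conn.request("GET", "/private/",
                 headers={"Authorization": "Basic " + token})
    print(conn.getresponse().status)     # 200 if the server authorizes it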

Notice that since the protocol is completely stateless, the server has 
no idea at any point how many requests have been sent, and it doesn't 
care.  So the client can remember that particular server's requirements 
and send the credentials preemptively on subsequent visits.  And in 
fact, this is what browsers do:  from then on, authentication happens 
in a single request, without the "error" step.
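
That preemptive send boils down to something like this (again with a 
made-up host and credentials):

    import base64
    import http.client

    # The client already knows this realm wants Basic, so it attaches
    # the credentials to the very first request and skips the 401
    # round trip entirely.
    token = base64.b64encode(b"user:secret").decode("ascii")
    conn = http.client.HTTPConnection("intranet.example.com")
    conn.request("GET", "/private/",
                 headers={"Authorization": "Basic " + token})
    print(conn.getresponse().status)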

Now, NTLM (yes, I finally read up on it) authenticates the 
*connection*, not the request as the HTTP authentication mechanisms 
do.  This requires that the entire challenge-response cycle be 
performed within a single persistent connection.  As noted in that link 
you sent me, it's actually a bastardization of the protocol:  it's not 
really part of the HTTP 1.1 protocol, but it uses the standard 
mechanisms in a weird way.

Another way that NTLM breaks away from the protocol is that, once the 
connection is authenticated, subsequent requests within the same 
connection need not perform the challenge-response.
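
From what I've read, the handshake on the wire looks roughly like the 
sketch below.  The crucial part is that all three legs ride the same 
connection object; the tokens are opaque blobs that need a real NTLM 
implementation to produce, so I'm just using placeholders here:

    import http.client

    # Outline of the NTLM handshake over ONE persistent connection.
    # The <type1>/<type3> values are placeholders; generating real ones
    # requires an NTLM library, which is beside the point here.
    conn = http.client.HTTPConnection("intranet.example.com")

    # Leg 1: the server answers 401 with "WWW-Authenticate: NTLM".
    conn.request("GET", "/")
    conn.getresponse().read()

    # Leg 2: the client sends the Type 1 (negotiate) message; the
    # server answers 401 again with a Type 2 challenge.
    conn.request("GET", "/",
                 headers={"Authorization": "NTLM <type1-token>"})
    conn.getresponse().read()

    # Leg 3: the client answers the challenge with a Type 3 message.
    conn.request("GET", "/",
                 headers={"Authorization": "NTLM <type3-token>"})
    print(conn.getresponse().status)

    # From here on, this *connection* is authenticated; further
    # requests on it skip the handshake.  Drop the connection anywhere
    # in the middle and the whole dance starts over.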

Because of its unique nature (damn you, Microsoft!), I'm sure browsers 
treat it as a special case. (I seem to remember having problems with 
Firefox 1.x connecting to my office's intranet site, while IE worked 
fine.  I always knew it had to be some MS extension to the protocol but 
didn't know what.  Now I do.)  Of course, IE will work fine with IIS.  
However, it'll be more interesting to find out how IE copes when 
dealing with Tomcat, which seems to return an error response right 
after the headers.

> In my case the server answers not after the headers but right after
> the client has sent the first 8193 bytes of the SendStream (that is
> the size of the THttpCli send buffer).

My guess is that, technically, it's after the headers that the server 
reacts, but the first stream buffer (8193 bytes) is already on its way, 
since the client didn't stop to check, so they cross in transit.  It 
could also be that the HttpCli component does not acknowledge the 
response until after it finishes sending the buffer (which makes sense, 
since it's a single-threaded operation), and this causes the delay in 
the receipt of the response.
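
The raw-socket sketch below shows why they cross: the client has 
already pushed the headers and the first chunk of the body before it 
ever looks at the socket for a reply, so an early error response just 
sits in the receive buffer (host, path and sizes are invented; 8193 
mirrors the THttpCli send-buffer size you mentioned):

    import socket

    body = b"x" * 100000                 # pretend upload
    s = socket.create_connection(("server.example.com", 80))
    s.sendall(b"POST /upload HTTP/1.1\r\n"
              b"Host: server.example.com\r\n"
              b"Content-Length: 100000\r\n"
              b"\r\n")
    s.sendall(body[:8193])               # first send-buffer worth of data

    # Only now does a single-threaded client look at the socket; an
    # early "HTTP/1.1 401 ..." sent while we were still writing has
    # been sitting here all along.
    s.settimeout(2)
    print(s.recv(4096))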

In any case, I'll see if I can do some experiments with IE and Tomcat 
and let you know what I find.

        dZ.
-- 
        DZ-Jay [TeamICS]
        http://www.overbyte.be/eng/overbyte/teamics.html
