Philipp Kern <tr...@philkern.de> writes:

> On 2010-05-19, Goswin von Brederlow <goswin-...@web.de> wrote:
>> Reading that I don't think that is really a pipelining issue. You do not
>> need pipelining for it to work. The real problem is keep-alive. The
>> connection isn't destroyed after each request, so you can put multiple
>> requests into the stream and exploit different brokenness in different
>> parsers along the way.
>
> Those are bugs in the servers that allow that output, though.
>
>> I think you have failed to show that pipelining is broken. What seems
>> "broken" is Keep-Alive. Do you suggest we stop using Keep-Alive to
>> prevent broken parsers from being exploited? Make a full 3-way handshake
>> for every request?
>
> I think we would want keep-alive with a pipeline depth of 1 (i.e. send the
> new request after the old one was processed). I'd rather think that
> TCP slow start is a problem if you avoid keep-alive than the full 3-way
> handshake (which is annoying too). Concurrent requests put an unreasonable
> load onto the mirrors, so we should avoid that.
>
> Kind regards,
> Philipp Kern

Obviously I did not mean to disable Keep-Alive. :)
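
Just to be sure we mean the same thing by those terms: keep-alive only means
the one TCP connection is reused, pipelining means the next request is written
before the previous response has been read. Roughly, as an untested toy sketch
(the mirror name and paths are made up, chunked encoding and real error
handling are left out):

    import socket

    HOST = "ftp.example.org"                      # made-up mirror
    PATHS = ["/debian/dists/sid/Release",
             "/debian/dists/sid/main/binary-amd64/Packages.gz"]

    def request(path):
        return ("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (path, HOST)).encode()

    def read_one_response(sock, buf=b""):
        # Read one response: headers up to the blank line, then Content-Length
        # bytes of body.  Returns the body plus whatever bytes already belong
        # to the next response (only relevant when pipelining).
        while b"\r\n\r\n" not in buf:
            chunk = sock.recv(4096)
            if not chunk:
                raise IOError("connection closed mid-response")
            buf += chunk
        head, buf = buf.split(b"\r\n\r\n", 1)
        length = 0
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1])
        while len(buf) < length:
            chunk = sock.recv(4096)
            if not chunk:
                raise IOError("connection closed mid-response")
            buf += chunk
        return buf[:length], buf[length:]

    def fetch_depth_1():
        # keep-alive with pipeline depth 1: one connection, but the next
        # request only goes out after the previous response was read completely
        s = socket.create_connection((HOST, 80))
        rest = b""
        for p in PATHS:
            s.sendall(request(p))
            body, rest = read_one_response(s, rest)
        s.close()

    def fetch_pipelined():
        # pipelining: all requests are written up front, the responses are
        # then read back in order from the same connection
        s = socket.create_connection((HOST, 80))
        for p in PATHS:
            s.sendall(request(p))
        rest = b""
        for p in PATHS:
            body, rest = read_one_response(s, rest)
        s.close()

The first variant is what Philipp describes above; the second one is what
trips up some proxies, because requests and responses have to stay correctly
paired across the whole queue.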
How about using pipelining and, if it breaks, retrying without pipelining and
informing the user about it? If that happens frequently, the user will
eventually notice the message and configure a depth of 1.

So, two changes: 1) cope with the error gracefully and 2) explain what is
probably happening. That way most people would get the speed benefit, and
installations would still not break when broken software is encountered.

MfG
        Goswin

PS: That doesn't mean squid shouldn't be fixed too.
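
PPS: Roughly the kind of fallback I have in mind, as an untested sketch.
fetch() stands in for the real http method and ResponseMismatch for whatever
error actually signals the breakage; both names (and the default depth of 10)
are made up here:

    import sys

    class ResponseMismatch(Exception):
        # Made-up error type: stands for "the response does not fit the
        # request it is supposed to answer" (wrong size, wrong hash, ...).
        pass

    def fetch(urls, depth):
        # Placeholder for the real fetcher; `depth` is the maximum number of
        # requests allowed to be outstanding on the connection at once.
        raise NotImplementedError

    def fetch_with_fallback(urls, depth=10):   # 10 = stand-in for the default
        try:
            # fast path: pipelined requests over one keep-alive connection
            return fetch(urls, depth=depth)
        except ResponseMismatch as err:
            # 1) cope with the error gracefully: redo the fetch one request
            #    at a time instead of failing the installation
            # 2) explain what is probably happening, so that users who see
            #    this often know to configure a pipeline depth of 1
            sys.stderr.write(
                "W: %s\n"
                "W: The server (or a proxy in between) seems to get confused\n"
                "W: by pipelined requests. Retrying without pipelining; if you\n"
                "W: see this often, configure a pipeline depth of 1.\n" % err)
            return fetch(urls, depth=1)

Whether the warning goes to stderr or somewhere apt-specific doesn't matter
much; the point is that the retry happens automatically and the explanation
stays visible to the user.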