    One setting is provided to control the pipeline depth in cases
    where the remote server is not RFC conforming or buggy (such as
    Squid 2.0.2).  Acquire::http::Pipeline-Depth can be a value from 0
    to 5 indicating how many outstanding requests APT should send. A
    value of zero MUST be specified if the remote host does not
    properly linger on TCP connections - otherwise data corruption will
    occur. Hosts which require this are in violation of RFC 2068.
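For anyone affected, the documented workaround is to set the pipeline depth to zero in the apt configuration. A minimal sketch (the file name under /etc/apt/apt.conf.d/ is an arbitrary example; any fragment there is read):

```
// /etc/apt/apt.conf.d/99-no-pipelining (example file name)
// Disable HTTP pipelining so non-conforming mirrors (e.g. S3-backed
// ones) cannot return responses that get matched to the wrong request.
Acquire::http::Pipeline-Depth "0";
```

The same option can be passed for a one-off run with `apt-get -o Acquire::http::Pipeline-Depth=0 update`.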

I guess an alternative to trying to SRU this everywhere it affects
images would be to ask Amazon to make their servers conform to RFC 2068?

** Summary changed:

- apt-get hashsum/size mismatch due caused by swapped local file names
+ apt-get hashsum/size mismatch because s3 mirrors don't support http pipelining correctly

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/948461

Title:
  apt-get hashsum/size mismatch because s3 mirrors don't support http
  pipelining correctly

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apt/+bug/948461/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
