On Sun, 2002-11-10 at 06:48, N. Thomas wrote:
> I'm creating a large 20Mb file with this command:
>
>     dd if=/dev/zero of=foo bs=1m count=20
>
> and then scp'ing it between the machines.
Use hdparm to switch DMA on on the HDD on the Debian box. Because you're reading from disk as you transfer across the network, the disk may be the bottleneck.

It could also be compression: one machine's scp might compress the stream while the other's does not (lots of zeros compress very nicely :)

I have tested speeds on Gigabit ethernet before, and you can't rely on the disk subsystem to deliver enough data fast enough over very fast networks. It's a big factor with Gigabit, and could possibly be a factor here too.

The best way to test is to write a CGI script that runs under Apache, prints the Content-Type and Content-Length headers, returns that many zeros, and then closes the connection. That way a process is generating the data rather than a file being read off disk, and you know the process will be able to saturate the pipe faster than the network can handle. Any web browser can then be used to test the speed (this removes the encryption bottleneck as well).

Kind Regards

Crispin Wellington
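As a rough sketch, such a CGI script could look like the following (the byte count and dd block size are just example values, not anything from the original test):

```shell
#!/bin/sh
# Hypothetical CGI sketch: stream zeros from a process instead of a file,
# so the disk subsystem is taken out of the measurement entirely.
BYTES=20971520   # 20 MB, chosen to match the dd test above

printf 'Content-Type: application/octet-stream\r\n'
printf 'Content-Length: %s\r\n\r\n' "$BYTES"

# Generate the payload on the fly; /dev/zero is far faster than any disk.
dd if=/dev/zero bs=1048576 count=20 2>/dev/null
```

Dropped into Apache's cgi-bin and made executable, it can then be fetched and timed from the other machine with any browser or a command-line client, which measures the raw network path without disk or ssh encryption in the way.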