> > Oh, come on. Get real. The world TCP speed record is 10GE right now,
> > it'll go higher as soon as there are higher interface speeds to be had.
You can buy 100G right now. I also believe there are some 40G available,
too. Also, check this:

http://media.caltech.edu/press_releases/13216

That was in 2008.

> I can easily get 100 megabit/s long-distance between two linux boxes
> without tweaking the settings much.

Until you drop a packet. I can get 100 Mbit/s with UDP without tweaking
it at all. Getting 100 Mbit/s San Francisco to London is a challenge over
a typical Internet path (i.e. not a dedicated leased path).

> Or they might tweak some other TCP settings and get 30 meg/s with
> existing 1500 MTU. It's WAY easier to tweak existing TCP than trying to
> get the whole network to go to a higher MTU. We do 4470 internally and
> on peering links where the other end agrees, but getting it to work all
> the way to the end customer isn't really easy.

I guess you didn't read the links earlier. It has nothing to do with
stack tweaks. The moment you lose a single packet, you are toast. And
there is a limit to how much you can buffer, because at some point it
becomes difficult to locate a packet to resend.

*If* you have a perfect path, sure, but that is generally not available,
particularly to APAC.

> But in a transition some end systems will have 9000 MTU and some parts
> of the network will have smaller, so then you get problems.

Which is no different from end systems that have 9000 today. A lot of
networks run jumbo frames internally now, maybe a lot more than you
realize. When you are using NFS and iSCSI and other things like database
queries that return large output, large MTUs save you a lot of packets.
NFS reads in 8K chunks, which can easily fit in a single 9000-byte
packet. It is more common in enterprise and academic networks than you
might be aware.
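The "lose a single packet and you are toast" point can be made concrete with the well-known Mathis et al. throughput bound for loss-based TCP congestion control: rate <= (MSS/RTT) * (C/sqrt(p)). A rough sketch, with assumed (not measured) numbers for an SF-to-London path:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput ceiling (Mathis et al.):
    rate <= (MSS / RTT) * (C / sqrt(p)), C = sqrt(3/2) ~= 1.22 for
    Reno-style congestion avoidance."""
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# Assumed illustrative path: ~140 ms RTT, one loss per 10,000 packets.
rtt = 0.140
loss = 1e-4

for mss in (1460, 8960):  # MSS for 1500-byte vs 9000-byte MTU (40B headers)
    mbps = mathis_throughput_bps(mss, rtt, loss) / 1e6
    print(f"MSS {mss}: ceiling ~{mbps:.0f} Mbit/s")
```

Even at that modest loss rate, a standard 1460-byte MSS caps out around 10 Mbit/s on this path; the larger MSS raises the ceiling roughly in proportion, which is exactly why MTU (and loss) matters more than stack tweaks here.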
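The packet-saving claim for jumbo frames is easy to check. A minimal sketch (the 40-byte IP+TCP header overhead and the helper name are my own assumptions for illustration):

```python
import math

def packets_per_transfer(payload_bytes, mtu, ip_tcp_overhead=40):
    """Segments needed to carry `payload_bytes`, assuming each packet
    carries MTU minus 40 bytes of IP+TCP headers (no options)."""
    mss = mtu - ip_tcp_overhead
    return math.ceil(payload_bytes / mss)

# An 8 KiB NFS read:
print(packets_per_transfer(8192, 1500))  # 6 packets at standard MTU
print(packets_per_transfer(8192, 9000))  # 1 packet with jumbo frames
```

Six packets down to one per read is where the interrupt and header savings come from on NFS- and iSCSI-heavy networks.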