On 6/22/2012 5:45 AM, Muhammad Yousuf Khan wrote:
> [ ID] Interval       Transfer     Bandwidth
> [ 5]  0.0-10.0 sec   744 MBytes   624 Mbits/sec
> [ 4]  0.0-10.0 sec   876 MBytes   734 Mbits/sec
> [ ID] Interval       Transfer     Bandwidth
> [ 4]  0.0-10.0 sec   744 MBytes   623 Mbits/sec
> [ 4]  0.0-10.0 sec   876 MBytes   735 Mbits/sec

This shows sustained short duration transfer rates of 78MB/s and
91MB/s.  That's not bad, but it can be higher.  With good NICs, proper
TCP tuning, and jumbo frames (sketched below), you should be able to
hit a theoretical peak of around 117MB/s, or 936Mb/s.  That's about
the limit after all the protocol overhead.  And this assumes your PCIe
bus, mobo chipset, and host CPU are up to the task.

These test numbers are a bit meaningless for real world use, however,
as most of your iSCSI/CIFS/etc traffic will comprise concurrent small
IOs, transactional in nature, as is the case with the vast majority of
server workloads.  So instead of concentrating on your raw
point-to-point GbE bandwidth, you need to concentrate on the IO
latency of your iSCSI and virtualization servers (see the fio example
below).  Maximizing the random IO performance of these systems will do
far more for overall network performance than spending countless hours
trying to maximize point-to-point GbE throughput.

One of the few applications requiring long duration throughput is
network based backup.  And even in this case you're not streaming
large files, but typically many small files.  So again, system latency
is a bigger factor than throughput.

And in the event you do find yourself transferring very large files on
a regular basis, and need max throughput, it's most often much easier
to attain that throughput using LACP with two NICs (example config
below) than to spend days/weeks attempting to maximize the performance
of a single NIC.
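
For the TCP tuning and jumbo frames above, a minimal sketch on Linux.
The buffer sizes are example values, not gospel, eth0 is an assumed
interface name, and jumbo frames only help if every NIC and switch
port in the path carries MTU 9000:

  # Larger socket buffer ceilings for high-bandwidth TCP (example values).
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  # min/default/max autotuning bounds for TCP receive/send buffers.
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
  # Jumbo frames on the storage-facing interface.
  ip link set dev eth0 mtu 9000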
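
To see where that latency actually stands, fio is one way to measure
it.  A sketch; /srv/test/fio.dat is a made-up path on the storage you
care about, and the job parameters are just a starting point:

  # 60s of 4k random read/write; watch the "clat" latency figures,
  # not just the MB/s.
  fio --name=randlat --filename=/srv/test/fio.dat --size=1G \
      --rw=randrw --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=32 --runtime=60 --time_based --group_reporting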
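
And for the LACP route, a sketch of what the bonding setup might look
like on Debian.  It assumes the ifenslave package, slave NICs named
eth0/eth1, a switch with 802.3ad configured on those two ports, and an
example address:

  # /etc/network/interfaces
  auto bond0
  iface bond0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer3+4

Note that a single TCP flow still hashes onto one link; the aggregate
bandwidth shows up with multiple concurrent transfers.

-- 
Stan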