Ben, I am facing a performance issue when we try to bond multiple interfaces into one virtual interface. It could be related to this thread. My questions are:
*) When we use multiple NICs, will the overall system throughput be the sum of the individual links' XX bits/sec?
*) What factors improve performance when we have multiple interfaces? [ i.e. tuning the parameters in /proc ]
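For the second question, here is a rough sketch of my own (not from this thread; it assumes Python on a Linux box with the standard procfs/sysfs layout and a bonding device named "bond0") that just dumps a few of the settings that usually matter for throughput on bonded links:

#!/usr/bin/env python
# Rough sketch: print a few settings that affect throughput on bonded links.
# Assumes the bonding driver is loaded and the bond is named "bond0".

paths = [
    "/proc/net/bonding/bond0",                # bonding mode, slaves, link state
    "/sys/class/net/bond0/mtu",               # >1500 here means jumbo frames
    "/proc/sys/net/core/rmem_max",            # maximum socket receive buffer
    "/proc/sys/net/core/netdev_max_backlog",  # packets queued per CPU when RX outruns processing
]

for path in paths:
    try:
        with open(path) as f:
            print("==== %s ====" % path)
            print(f.read().strip())
    except IOError as err:
        print("%s: %s" % (path, err))

Raising rmem_max and netdev_max_backlog (via sysctl or by writing to those files) is the usual first pass at tuning in /proc, but it only helps if the bottleneck really is in the RX path.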
Breno, I hope this thread will be helpful for the performance issue which I have with the bonding driver.

Jeba

On Thu, 2008-01-10 at 16:36 +0000, Ben Hutchings wrote:
> Breno Leitao wrote:
> > Hello,
> >
> > I've perceived that there is a performance issue when running netperf
> > against 4 e1000 links connected end-to-end to another machine with 4
> > e1000 interfaces.
> >
> > I have 2 4-port interfaces on my machine, but the test is just
> > considering 2 ports for each interface card.
> >
> > When I run netperf on just one interface, I get 940.95 * 10^6 bits/sec
> > of transfer rate. If I run 4 netperf against 4 different interfaces, I
> > get around 720 * 10^6 bits/sec.
> <snip>
>
> I take it that's the average for individual interfaces, not the
> aggregate? RX processing for multi-gigabits per second can be quite
> expensive. This can be mitigated by interrupt moderation and NAPI
> polling, jumbo frames (MTU >1500) and/or Large Receive Offload (LRO).
> I don't think e1000 hardware does LRO, but the driver could presumably
> be changed to use Linux's software LRO.
>
> Even with these optimisations, if all RX processing is done on a
> single CPU this can become a bottleneck. Does the test system have
> multiple CPUs? Are IRQs for the multiple NICs balanced across
> multiple CPUs?
>
> Ben.
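A quick way to check Ben's question about whether the NIC IRQs are balanced across CPUs is something like the following (a rough sketch of my own, assuming Python and that the interfaces show up as "eth*" in the standard /proc/interrupts format):

#!/usr/bin/env python
# Sketch: show how interrupts for the ethN devices are spread over the CPUs.
# Assumes the NIC lines in /proc/interrupts contain "eth" (names may differ).

with open("/proc/interrupts") as f:
    lines = f.read().splitlines()

cpus = lines[0].split()                    # header row: CPU0 CPU1 ...
for line in lines[1:]:
    if "eth" not in line:
        continue
    fields = line.split()
    irq = fields[0].rstrip(":")
    counts = fields[1:1 + len(cpus)]       # one interrupt counter per CPU
    per_cpu = ", ".join("%s=%s" % pair for pair in zip(cpus, counts))
    print("IRQ %-4s %-12s %s" % (irq, fields[-1], per_cpu))

If the counts show everything landing on one CPU, pinning each NIC's IRQ through /proc/irq/<N>/smp_affinity (or running irqbalance) is the usual next step.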