Also, crank up your MTU and set the IB adapters to use connected mode. With
those changes, I've gotten just over 3 GB/s with a similar setup.
 -nld
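
For reference, a minimal sketch of those two changes on Linux, assuming the
IPoIB interface is named ib0 (adjust to taste); connected mode is what lifts
the IPoIB MTU cap from 2044 up to 65520 bytes:

```shell
# Switch the IPoIB interface from datagram (the default) to connected mode.
echo connected > /sys/class/net/ib0/mode

# Raise the MTU; 65520 is the IPoIB maximum in connected mode.
ip link set ib0 mtu 65520

# Verify the new settings took effect.
cat /sys/class/net/ib0/mode
ip link show ib0
```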

On Feb 4, 2011, at 6:46 AM, Skylar Thompson wrote:

> On 02/04/11 04:24, Conrad Wood wrote:
>> Hello,
>> 
>> I have some trouble with InfiniBand - and admittedly I am an InfiniBand
>> novice. I get 7 Gbit/s using TCP (iperf/wget and others), which seems
>> odd given it's a QDR (40 Gbit/s) switch.
>> Same results with 2.6.36, 2.6.38-rc2 kernel.
>> QLogic PCI-e cards and QLogic switches. 
>> 
>> I changed kernels/bios/tcp settings and what have you, but no luck. I
>> notice the cpu can be quite busy during these tests.
>> 
>> Anyone seen something like this before, or know more about InfiniBand
>> than me? ;))
>> 
>> 
> 
> Here's a few things that could be limiting you:
> 
> 1. Local interconnect: If your HCA isn't in at least a 64-bit/100MHz
> PCI-X slot, you won't even get above 7Gbps.
> 2. CPU: If you're not doing TCP offload you could very well be maxing
> out your CPU. This would be easy to verify with vmstat; if you're
> spending a lot of time in the "sy" column you're CPU bound. If you have
> multiple CPUs you could try multiple iperf streams.
> 3. Network protocol: TCP isn't really designed for single
> ultra-high-bandwidth ultra-low-latency streams that IB can provide. IB
> is more intended for large shared-memory applications that use RDMA to
> communicate.
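> 
> A quick sketch of checking point 2, assuming iperf is installed on both
> ends and a server is already listening (replace <server> with your
> server's hostname):
> 
> ```shell
> # Sample CPU stats once per second, five times, during a transfer.
> # A high "sy" (system) column with little "id" (idle) means CPU bound.
> vmstat 1 5
> 
> # Drive several parallel TCP streams (-P) to spread load across cores.
> iperf -c <server> -P 4 -t 30
> ```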
> 
> If it helps, I did an iperf test between two hosts with these spec's:
> 
> Iperf Client:
> 1x dual core Intel E5503 (2.00GHz)
> Mellanox MHQH29B-XTR QDR HCA (PCIe G2 16x slot - 8GBps peak bandwidth)
> 
> Iperf Server:
> 2x quad core Intel E5640 (2.67 GHz)
> Mellanox MHQH29B-XTR QDR HCA (PCIe G2 16x slot - 8GBps peak bandwidth)
> 
> Switch:
> Mellanox 36-port M3601Q QDR switch
> 
> On a single stream from client to server, I got 11Gbps. Using two
> streams, I managed to max out the CPU on the server and got an aggregate
> throughput of 18Gbps. Running four streams dropped the aggregate down to
> 15Gbps.
> 
> In fact, it appeared that either the HCA driver or the TCP stack is
> single-threaded, as top reported that ksoftirqd was pegging one CPU. I
> suspect either interrupts from the driver or TCP packet processing was
> responsible for that kernel thread, although someone more familiar with
> the kernel could correct me.
> 
> -- 
> -- Skylar Thompson ([email protected])
> -- http://www.cs.earlham.edu/~skylar/
> 
> 
> _______________________________________________
> Tech mailing list
> [email protected]
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
> This list provided by the League of Professional System Administrators
> http://lopsa.org/
