I just managed to get a 2-port Mellanox 10GbE PCIe NIC working with 2.6.23-rc6 + my hacks. There are some errors about scheduling while atomic and such in the management path (i.e., querying stats, etc.), but the data path looks pretty good.
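For anyone wanting to reproduce the kind of load numbers below: the throughput figures were generated with pktgen (note Ben says "my pktgen", which may be a modified version). A minimal sketch using the stock in-kernel pktgen /proc interface, with device name, destination IP, and MAC as placeholder assumptions:

```shell
#!/bin/sh
# Sketch only: assumes the pktgen module, root privileges, and that
# eth2 / 192.168.1.2 / the dst MAC are replaced with real values.
modprobe pktgen

# Helper: write a pktgen command to a /proc control file
pgset() { echo "$1" > "$2"; }

# Bind the test device to the first pktgen kernel thread
pgset "add_device eth2" /proc/net/pktgen/kpktgend_0

# Packet size, count (0 = run until stopped), and destination
pgset "pkt_size 4000"              /proc/net/pktgen/eth2
pgset "count 0"                    /proc/net/pktgen/eth2
pgset "dst 192.168.1.2"            /proc/net/pktgen/eth2
pgset "dst_mac 00:11:22:33:44:55"  /proc/net/pktgen/eth2

# Start transmitting on all configured threads
pgset "start" /proc/net/pktgen/pgctrl
```

Counters for the run show up in /proc/net/pktgen/eth2 afterwards.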
At 1500 MTU I was able to send + rx 2.5Gbps on both ports using my pktgen. TCP maxed out at about 1.4Gbps send + rx at MTU 1500, generated with my proprietary user-space tool; with MTU 8000 it can send + rx about 1.8Gbps. When I change MTU to 8000 on the NICs, pktgen can send + rx about 4.5Gbps at 4000-byte pkt sizes.

When sending on one port and receiving on the other, I can send 9+Gbps of traffic, using an MTU of 8000 and a pktgen pkt size of 4000. Using larger pktgen pkt sizes slows traffic down to around 7Gbps, probably due to extra page allocations.

So, there are some warts to be worked out in the driver, but the raw performance looks pretty promising!

Take it easy,
Ben

--
Ben Greear <[EMAIL PROTECTED]>
Candela Technologies Inc  http://www.candelatech.com
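A quick back-of-envelope on the numbers above suggests the jumbo-frame gain tracks per-packet CPU cost rather than wire efficiency: the 4000-byte runs actually push fewer packets per second than the 1500-byte runs while moving nearly twice the data. A small sketch of that arithmetic (rates taken from the figures in this post):

```python
def pps(gbps, pkt_bytes):
    """Packets per second needed to sustain a given bit rate
    at a given packet size (ignores framing overhead)."""
    return gbps * 1e9 / 8 / pkt_bytes

# Observed pktgen rates from the tests above, per direction:
print(round(pps(2.5, 1500)))   # MTU 1500, 1500-byte pkts -> ~208k pps
print(round(pps(4.5, 4000)))   # MTU 8000, 4000-byte pkts -> ~141k pps
```

So the 4.5Gbps jumbo-frame run is handling roughly a third fewer packets per second than the 2.5Gbps run, consistent with per-packet costs (interrupts, descriptor handling, allocations) being the bottleneck.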