Hi,


On 05/31/2010 12:45 PM, Michael Iverson wrote:
Igor,

I'm basically doing the same thing, only with MHEA28-XTC cards. I wouldn't think you'll have any problems creating a similar setup with the MHES cards.

I've not attempted to use infiniband sdr, just ipoib. I am running opensm on one of the nodes. I'm getting throughput numbers like this:


It would be very nice if you could test your setup with the drbd infiniband sdr support; you probably will not need to re-sync anything.

cirrus:~$ netperf -H stratus-ib
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7861.61

Nice.
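Since my plan is exactly this back-to-back case, here is roughly the bring-up I have in mind. The ib0 interface name and the 172.16.24.x addresses are just assumptions to match your output, and I'm assuming the OFED stack and opensm are already installed:

# on both nodes: load the IPoIB driver
modprobe ib_ipoib

# on one node only: start the subnet manager so the link trains up
opensm -B

# assign one address per node, then bring the interface up
ip addr add 172.16.24.1/24 dev ib0    # node A
ip addr add 172.16.24.2/24 dev ib0    # node B
ip link set ib0 up

# check that the port reached the ACTIVE state
ibstat | grep -i state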


A couple of things to watch out for:

1. Upgrade the firmware on the cards to the latest and greatest version. I saw about a 25% increase in throughput as a result. The firmware updater was a pain to compile, but that was mostly due to Ubuntu's fairly rigid default compiler flags.

Will watch that!
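For reference, this is the kind of mstflint invocation I expect to use for the update; the PCI address and the firmware file name below are placeholders, not something from this thread:

# find the HCA's PCI address
lspci | grep -i mellanox

# query the firmware currently on the card (02:00.0 is a placeholder)
mstflint -d 02:00.0 query

# burn the image downloaded from Mellanox (file name is a placeholder)
mstflint -d 02:00.0 -i fw-25204-rel.bin burn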

2. Run the cards in connected mode, rather than datagram mode, and put the MTU at the max value of 65520. My performance benchmarks of drbd show that this is the best setup.

If I use the infiniband sdr support from drbd, do I still need to care about the MTU?
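Either way, for the IPoIB path this is how I understand the connected-mode setup would look, again assuming the interface is ib0:

# switch the IPoIB interface from datagram to connected mode
echo connected > /sys/class/net/ib0/mode

# raise the MTU to the connected-mode maximum
ip link set ib0 mtu 65520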

The replication rate on my setup is completely limited by the bandwidth of my disk subsystem, which is about 200 MB/s for writes. I can share some performance comparisons between this and bonded gigabit ethernet, if you would like. However, I won't be able to provide it until tomorrow, as it is a holiday in the US today, and I don't have ready access to the data.

We have a couple of setups with I/O performance greater than 500 MB/s, so we really need 10Gbit trunks.

Thanks for the help, but I don't need performance results from Gbit setups; we have a couple, and we know the problems! :) Anyway, if you want to paste them here, I guess no one will complain.
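Just to make the drbd side concrete, this is roughly the resource definition I have in mind. The sdp address family keyword and the device/disk names are assumptions on my part, not something confirmed in this thread:

resource r0 {
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/sdb1;                 # placeholder backing device
    address   sdp 172.16.24.1:7788;      # sdp address family is my assumption
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/sdb1;                 # placeholder backing device
    address   sdp 172.16.24.2:7788;
    meta-disk internal;
  }
}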

On Mon, May 31, 2010 at 6:17 AM, Igor Neves <[email protected]> wrote:

    Hi,

    I'm looking for a 10Gbit backend for drbd storage replication. I'm
    planning to set up an infiniband solution connected back to back,
    meaning both nodes will be connected directly without a switch.

    I wonder: if I buy two of these MHES14-XTC cards and a cable, will
    I be able to build such a setup?

    Link to the cards:
    http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41

    Another question: I intend to use this with the infiniband sdr
    support added to drbd in 8.3.3, and I found this in the card's specs.

    "In addition, the card includes internal Subnet Management Agent
    (SMA) and General Service Agents, eliminating the requirement for
    an external management agent CPU."

    Does this mean I don't need to run openSM on either node? Will I
    just need to get two cards and a cable, connect them, and set up
    IPoIB to start replicating at 10Gbit?

    Thanks very much,


Thanks, once again.

--
Igor Neves<[email protected]>
3GNTW - Tecnologias de Informação, Lda

 SIP: [email protected]
 MSN: [email protected]
 JID: [email protected]
 PSTN: 00351 252377120


_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
