Hi,
From reading the FAQ and this list, it seems OpenMPI can use multiple
InfiniBand rails by round-robining across the ports on each node (as
long as they're configured to be on separate subnets, I think).
Can OpenMPI also deal with one of the subnets failing?
i.e. will OpenMPI automatically fail over to the remaining rail?
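(For reference, the sort of run I mean just selects the openib BTL
explicitly, roughly

  mpirun -np 2 --mca btl openib,sm,self ./a.out

with ./a.out as a placeholder for the benchmark. "ompi_info --param btl
openib" lists the openib BTL's tunables; whether any of them cover a
rail disappearing is essentially what I'm asking.)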
Does OpenMPI support Quadrics elan3/4 interconnects?
I saw a few hits on Google suggesting that support was partial or maybe
planned, but I couldn't find much in the OpenMPI sources to suggest any
support at all.
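(A quick check of a given build is

  ompi_info | grep -i elan

which would show any elan MCA components compiled into that
installation.)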
cheers,
robin
argh. attached.
cheers,
robin
So this isn't really an OpenMPI question (I don't think), but you guys
will have hit the problem if anyone has...
basically I'm seeing wildly different bandwidths over InfiniBand 4x DDR
when I use different kernels.
I'm testing with netpipe-3.6.2's NPmpi, but a home-grown pingpong sees
the same thing.
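(For reference, by "home-grown pingpong" I mean the usual send/recv echo
loop; a rough sketch is below, with illustrative message sizes and
repeat counts rather than the exact code.)

  /*
   * Minimal MPI ping-pong bandwidth sketch (illustrative only).
   * Rank 0 sends a buffer to rank 1, which echoes it back; bandwidth
   * is computed from the timed round trips.
   */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(int argc, char **argv)
  {
      int rank, nprocs, len, i;
      const int reps = 100;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      if (nprocs < 2) {
          if (rank == 0)
              fprintf(stderr, "run with at least 2 ranks\n");
          MPI_Finalize();
          return 1;
      }

      for (len = 1; len <= (1 << 22); len *= 2) {   /* 1 byte .. 4 MiB */
          char *buf = malloc(len);
          double t0, t1;

          memset(buf, 0, len);                      /* touch the pages */
          MPI_Barrier(MPI_COMM_WORLD);
          t0 = MPI_Wtime();
          for (i = 0; i < reps; i++) {
              if (rank == 0) {
                  MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
              } else if (rank == 1) {
                  MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
              }
          }
          t1 = MPI_Wtime();

          if (rank == 0) {
              /* 2*len bytes cross the link per round trip */
              printf("%8d bytes  %10.2f MB/s\n", len,
                     (2.0 * len * reps) / (t1 - t0) / 1.0e6);
          }
          free(buf);
      }

      MPI_Finalize();
      return 0;
  }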