Hi,
We have a computational cluster consisting of 8 HP ProLiant ML370 G5
servers with 32GB of RAM each.
Each node has a Mellanox single-port InfiniBand DDR HCA (20 Gbit/s),
and the nodes are connected to each other through
a Voltaire ISR9024D-M DDR InfiniBand switch.
Now we want to increase the bandwidth to 40 Gbit/s.
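Once the faster hardware is in place, a simple two-rank ping-pong is enough to check whether the extra link speed actually shows up at the MPI level. The sketch below is only illustrative (message size, iteration count, and the name pingpong are arbitrary choices, not anything from the original post):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal two-rank ping-pong bandwidth test (illustrative only). */
int main(int argc, char **argv)
{
    const int msg_bytes = 4 * 1024 * 1024;   /* 4 MB messages */
    const int iters = 100;
    int rank, size, i;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    buf = malloc(msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* two messages of msg_bytes cross the link per iteration */
        double gbits = 2.0 * iters * msg_bytes * 8.0 / (t1 - t0) / 1e9;
        printf("effective bandwidth: %.2f Gbit/s\n", gbits);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Compile it with mpicc and run the two ranks on two different nodes (for example: mpirun -np 2 --bynode ./pingpong) so the traffic actually crosses the switch.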
On Wed, Jul 29, 2009 at 4:57 PM, Ralph Castain wrote:
> Ah, so there is a firewall involved? That is always a problem. I gather
> that node 126 has clear access to all other nodes, but nodes 122, 123, and
> 125 do not all have access to each other?
> See if your admin is willing to open at least ...
On Jul 30, 2009, at 6:36 AM, David Doria wrote:
> On Wed, Jul 29, 2009 at 4:57 PM, Ralph Castain wrote:
>> Ah, so there is a firewall involved? That is always a problem. I
>> gather that node 126 has clear access to all other nodes, but nodes
>> 122, 123, and 125 do not all have access to each other? ...
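If the firewall settings are in doubt, a tiny all-pairs handshake makes it easy to see exactly which node pair is blocked. This is only a sketch, not anything from David's actual code:

#include <mpi.h>
#include <stdio.h>

/* Every rank exchanges one int with every other rank.  If a firewall
 * blocks connections between a pair of nodes, the corresponding
 * MPI_Sendrecv never completes, and the missing output lines show
 * which pair is affected. */
int main(int argc, char **argv)
{
    int rank, size, len, peer, token;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    for (peer = 0; peer < size; peer++) {
        if (peer == rank) continue;
        MPI_Sendrecv(&rank, 1, MPI_INT, peer, 0,
                     &token, 1, MPI_INT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d on %s reached rank %d\n", rank, host, token);
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}

Run it with one rank per node; any pair that hangs is a pair the firewall is still blocking.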
The leave pinned will not help in this context. It can only help for
devices capable of real RMA operations and that require pinned memory,
which unfortunately is not the case for TCP. What is [really] strange
about your results is that you get a 4 times better bandwidth over TCP
than over ...
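To make sure the numbers being compared really come from the transports you think they do, the BTL can be pinned down explicitly on the mpirun command line. Parameter names below are as in the 1.3 series, and ./bw_test stands for whichever benchmark is being run:

  mpirun --mca btl tcp,self -np 2 ./bw_test
  mpirun --mca btl sm,self -np 2 ./bw_test
  mpirun --mca btl openib,self --mca mpi_leave_pinned 1 -np 2 ./bw_test

The first forces TCP only, the second shared memory only (both ranks on one node), and the third an RDMA-capable BTL, which is the only case where leave_pinned can make a difference.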
(I just realized I had the wrong subject line, so here it is again.)
Hi Jeff,
Yes, I am using the right one. I've installed the freshly compiled
Open MPI into /opt/openmpi/1.3.3-g95-32. If I edit the mpif.h file by
hand and put "error!" in the first line I get:
zamblap:sandbox zamb$ edit /opt/ ...
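As an alternative to the "error!" trick, the wrapper compiler can simply be asked which include directories it passes to the underlying compiler. Something along these lines, where the output shown is only what one would expect for that prefix, not actual output:

  zamblap:sandbox zamb$ mpif90 --showme:incdirs
  /opt/openmpi/1.3.3-g95-32/include

If that prints a different prefix, the wrong wrapper (or the wrong mpif.h) is being picked up.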
On Thu, Jul 30, 2009 at 10:08 AM, George Bosilca wrote:
> The leave pinned will not help in this context. It can only help for devices
> capable of real RMA operations and that require pinned memory, which
> unfortunately is not the case for TCP. What is [really] strange about your
> results is that ...
Apologies if I'm being confusing; I'm probably trying to get at atypical use
cases. M and N need not correspond to the number of nodes/ppn nor ppn/nodes
available. By-node vs. by-slot doesn't much matter, as long as in the end I
don't oversubscribe any node. By-slot might be good for efficiency ...
On Jul 30, 2009, at 11:49 AM, Adams, Brian M wrote:
> Apologies if I'm being confusing; I'm probably trying to get at
> atypical use cases. M and N need not correspond to the number of
> nodes/ppn nor ppn/nodes available. By-node vs. by-slot doesn't much
> matter, as long as in the end I don't oversubscribe any node. ...
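For anyone following along, the by-slot/by-node distinction is easy to see with a toy hostfile (hostnames and slot counts below are made up):

  # myhosts
  n0 slots=4
  n1 slots=4

  mpirun -np 4 --byslot --hostfile myhosts ./a.out   # fills n0's four slots first
  mpirun -np 4 --bynode --hostfile myhosts ./a.out   # round-robins: n0, n1, n0, n1

Neither run oversubscribes a node as long as -np stays within the total slot count.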
Thanks Ralph, I wasn't aware of the relative indexing or sequential mapper
capabilities. I will check those out and report back if I still have a feature
request. -- Brian
Let me know how it goes, if you don't mind. It would be nice to know
if we actually met your needs, or if a tweak might help make it easier.
Thanks
Ralph
On Jul 30, 2009, at 1:36 PM, Adams, Brian M wrote:
> Thanks Ralph, I wasn't aware of the relative indexing or sequential
> mapper capabilities ...
I found the manual pages for mpirun and orte_hosts, which have a pretty
thorough description of these features. Let me know if there's anything else I
should check out.
My quick impression is that this will meet at least 90% of user needs out of
the box, as most (all?) users will run with number ...
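For the archives, a rough sketch of the two features being discussed, per the orte_hosts/mpirun man pages for the 1.3 series (host names and file names below are invented):

  # relative indexing: refer to nodes by their position in the
  # allocation instead of by name
  mpirun -np 2 -host +n0,+n1 ./a.out

  # sequential mapper: one rank per line of the hostfile, in the
  # order listed
  cat seqhosts
  nodeA
  nodeB
  nodeA
  mpirun --mca rmaps seq --hostfile seqhosts ./a.out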