[OMPI users] Using dual infiniband HCA cards

2009-07-30 Thread Sefa Arslan
Hi, We have a computational cluster consisting of 8 HP ProLiant ML370 G5 servers with 32 GB RAM. Each node has a Mellanox single-port InfiniBand DDR HCA (20 Gbit/s) and is connected to the others through a Voltaire ISR9024D-M DDR InfiniBand switch. Now we want to increase the bandwidth to 40 Gbit/

Re: [OMPI users] Using dual infiniband HCA cards

2009-07-30 Thread Pavel Shamis (Pasha)
We have a computational cluster consisting of 8 HP ProLiant ML370 G5 servers with 32 GB RAM. Each node has a Mellanox single-port InfiniBand DDR HCA (20 Gbit/s) and is connected to the others through a Voltaire ISR9024D-M DDR InfiniBand switch. Now we want to increase the bandwidth to 40 Gbit/s
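
A sketch of the multi-rail setup being discussed, assuming the two HCAs appear as mthca0 and mthca1 (device names, process count, and the test binary are illustrative; `ompi_info --param btl openib` shows what your build supports). Open MPI's openib BTL stripes large messages across all active ports it is allowed to use:

    # Check that both HCAs report an ACTIVE port (device names illustrative)
    ibstat
    # Allow the openib BTL to use both devices; large messages are then
    # striped across the two DDR rails
    mpirun -np 16 --mca btl openib,sm,self \
           --mca btl_openib_if_include mthca0,mthca1 ./bw_test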

Re: [OMPI users] Test works with 3 computers, but not 4?

2009-07-30 Thread David Doria
On Wed, Jul 29, 2009 at 4:57 PM, Ralph Castain wrote: > Ah, so there is a firewall involved? That is always a problem. I gather that node 126 has clear access to all other nodes, but nodes 122, 123, and 125 do not all have access to each other? See if your admin is willing to open at least

Re: [OMPI users] Test works with 3 computers, but not 4?

2009-07-30 Thread Ralph Castain
On Jul 30, 2009, at 6:36 AM, David Doria wrote: On Wed, Jul 29, 2009 at 4:57 PM, Ralph Castain wrote: Ah, so there is a firewall involved? That is always a problem. I gather that node 126 has clear access to all other nodes, but nodes 122, 123, and 125 do not all have access to each o
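
The usual resolution for this thread, sketched under the assumption that the admin will open only a narrow port range: pin Open MPI's TCP traffic to fixed ports so a single firewall rule covers it (the range 10000-10099, hostfile, and rank count are illustrative). The runtime's out-of-band channel has analogous oob_tcp port parameters whose exact names vary by release, so verify everything with ompi_info:

    # Verify parameter names for your version first:
    #   ompi_info --param btl tcp
    #   ompi_info --param oob tcp
    mpirun -np 4 --hostfile hosts \
           --mca btl_tcp_port_min_v4 10000 --mca btl_tcp_port_range_v4 100 \
           ./mpi_test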

Re: [OMPI users] strange IMB runs

2009-07-30 Thread George Bosilca
The leave_pinned option will not help in this context. It can only help for devices capable of real RMA operations that require pinned memory, which unfortunately is not the case for TCP. What is [really] strange about your results is that you get a 4 times better bandwidth over TCP than over
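
A hedged sketch of how to see George's point with the benchmark from this thread: force each transport in turn, since mpi_leave_pinned only affects RDMA-capable paths such as openib (hostfile and rank count are illustrative):

    # TCP only: leave_pinned has no effect here
    mpirun -np 2 --hostfile hosts --mca btl tcp,sm,self ./IMB-MPI1 PingPong
    # RDMA-capable path with pinned-memory caching enabled
    mpirun -np 2 --hostfile hosts --mca btl openib,sm,self \
           --mca mpi_leave_pinned 1 ./IMB-MPI1 PingPong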

Re: [OMPI users] OMPI users] MPI_IN_PLACE in Fortran withMPI_REDUCE / MPI_ALLREDUCE

2009-07-30 Thread Ricardo Fonseca
(I just realized I had the wrong subject line, so here it goes again.) Hi Jeff, yes, I am using the right one. I've installed the freshly compiled Open MPI into /opt/openmpi/1.3.3-g95-32. If I edit the mpif.h file by hand and put "error!" on the first line, I get: zamblap:sandbox zamb$ edit /opt/
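
For readers skimming the archive, the semantics at issue, as a minimal Fortran sketch (the program and values are illustrative, not from the thread): in MPI_ALLREDUCE every rank passes MPI_IN_PLACE as the send buffer, and the receive buffer serves as both input and output:

    program inplace_sum
      implicit none
      include 'mpif.h'
      integer :: ierr, rank
      integer :: val(4)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      val = rank + 1
      ! Every rank passes MPI_IN_PLACE as the send buffer; the receive
      ! buffer supplies the input and is overwritten with the result.
      call MPI_ALLREDUCE(MPI_IN_PLACE, val, 4, MPI_INTEGER, MPI_SUM, &
                         MPI_COMM_WORLD, ierr)
      print *, 'rank', rank, 'sums:', val
      call MPI_FINALIZE(ierr)
    end program inplace_sum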

Re: [OMPI users] strange IMB runs

2009-07-30 Thread Michael Di Domenico
On Thu, Jul 30, 2009 at 10:08 AM, George Bosilca wrote: > The leave_pinned option will not help in this context. It can only help for devices capable of real RMA operations that require pinned memory, which unfortunately is not the case for TCP. What is [really] strange about your results is tha

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
Apologies if I'm being confusing; I'm probably trying to get at atypical use cases. M and N need not correspond to the number of nodes/ppn nor ppn/nodes available. By-node vs. by-slot doesn't much matter, as long as in the end I don't oversubscribe any node. By-slot might be good for efficiency

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Ralph Castain
On Jul 30, 2009, at 11:49 AM, Adams, Brian M wrote: Apologies if I'm being confusing; I'm probably trying to get at atypical use cases. M and N need not correspond to the number of nodes/ppn nor ppn/nodes available. By-node vs. by-slot doesn't much matter, as long as in the end I don't ove

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
Thanks Ralph, I wasn't aware of the relative indexing or sequential mapper capabilities. I will check those out and report back if I still have a feature request. -- Brian

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Ralph Castain
Let me know how it goes, if you don't mind. It would be nice to know if we actually met your needs, or if a tweak might help make it easier. Thanks Ralph On Jul 30, 2009, at 1:36 PM, Adams, Brian M wrote: Thanks Ralph, I wasn't aware of the relative indexing or sequential mapper capabilitie

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
I found the manual pages for mpirun and orte_hosts, which have a pretty thorough description of these features. Let me know if there's anything else I should check out. My quick impression is that this will meet at least 90% of user needs out of the box as most (all?) users will run with numbe
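
A sketch of the two features named above, adapted from the mpirun and orte_hosts man pages (host counts, application names, and the hostfile name are illustrative):

    # Relative indexing: +nN means the Nth node of the existing allocation,
    # so several mpiruns can share one batch job without overlapping
    mpirun -np 8 -host +n0,+n1 ./app_a &
    mpirun -np 8 -host +n2,+n3 ./app_b &
    wait
    # Sequential mapper: rank i lands on line i of the hostfile
    mpirun --mca rmaps seq --hostfile rankmap ./app_c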