I am running open-mpi 1.1.1-1 compiled from OFED 1.1, which I downloaded
from their website.
I am using SGE installed via OSCAR 5.0, and when running under SGE I
get the "mca_mpool_openib_register: ibv_reg_mr(0x59,528384) failed
with error: Cannot allocate memory" error discussed at length in you
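A common cause of ibv_reg_mr failing with "Cannot allocate memory" under a batch system is a low locked-memory (registered-memory) limit on the compute nodes, since daemons started by the scheduler often do not inherit the interactive ulimit. A minimal check, assuming a bash-like shell on the nodes:

```shell
# Print the max locked-memory limit in effect for this shell.
# ibv_reg_mr needs to pin memory, so a small value here (e.g. the
# common 32 KB default) makes registration fail with ENOMEM.
val=$(ulimit -l)
echo "locked-memory limit: $val"
# "unlimited", or a value large enough for your registered memory,
# is what you want; it is typically raised in
# /etc/security/limits.conf or in the batch daemon's startup scripts.
```

Whether this is the cause here depends on how SGE launches the orted/application processes; checking the limit from inside an SGE job (not just a login shell) is the relevant test.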
If you use one of the latest versions of MX (such as 1.2.0f) there
will be less than 0.15 microseconds of difference between the MTL and
BTL MX. This version of MX allows us to do the matching outside the
NIC, which decreases the overhead for small messages. In terms of
bandwidth the BTL MX is so
Note that since you are setting OMPI_MCA_pml to cm, OMPI_MCA_btl will have no
effect. You may try setting OMPI_MCA_pml=ob1 and running your measurements
again, but we generally get better performance with the cm pml than with the
ob1 pml.
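The comparison described above amounts to two runs of the same benchmark with different environment settings; a minimal sketch (the benchmark name is a placeholder, not from the thread):

```shell
# Run 1: cm pml (uses the MTL; any OMPI_MCA_btl setting is ignored)
export OMPI_MCA_pml=cm
# mpirun -np 2 ./latency_bench      # placeholder benchmark

# Run 2: ob1 pml (uses the BTL framework, so the btl list now applies)
export OMPI_MCA_pml=ob1
export OMPI_MCA_btl=mx,self         # MX BTL plus self for local loopback
# mpirun -np 2 ./latency_bench
```

Comparing the two numbers shows how much of the latency difference comes from the matching path (MTL vs. BTL) rather than from the interconnect itself.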
Tim
On Wednesday 06 June 2007 12:54:26 pm George Bosilca wrote:
Hi Jeff,
Thanks for being willing to put more thought into it. Here is my simplified
story. I have an accelerator physics code, Omega3P, that performs
complex eigenmode analysis. The algorithm for solving the eigensystems
makes use of a 3rd-party sparse direct solver called MUMPS (http://
graal.e
George,
Apologies for not saying what the latency is with Open MPI. I've noted it below.
I don't know why turning off the sm feature would help launching 1ppn. I just
tried turning it off and it didn't make a difference.
-cdm
On Wed, 6 Jun 2007, George Bosilca wrote:
Which one is the latency with Open MPI? Which version of Open MPI?
You might want to use OMPI_MCA_btl=mx,self to see if it makes any
difference.
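The suggested setting can be applied either through the environment or per-run on the mpirun command line; both forms set the same MCA parameter (the application name below is a placeholder):

```shell
# Persistent for the shell session:
export OMPI_MCA_btl=mx,self
# mpirun -np 2 ./latency_test

# Or equivalently, for a single run only:
# mpirun --mca btl mx,self -np 2 ./latency_test
```

Restricting the btl list this way rules out other transports (e.g. sm or tcp) being silently selected, which is useful when isolating where the latency comes from.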
Thanks,
george.
On Jun 6, 2007, at 12:26 PM, Maestas, Christopher Daniel wrote:
With 2 nodes using 1.1.7 with the patch we measured (using mpich-mx
1.2.7..4):
3.07 us
With mx 1.2.1-rc18 we measure:
3.69 us
And with mpich-mx 1.2.7..4 we see:
3.20 us
Our Open MPI settings:
---
# env | grep OMPI
OMPI_MCA_pml=cm
OMPI_MCA_mpi_keep_hostnames=1
OMPI_MCA_oob_tc
So I have been trying to build multiple applications with an ifort+gcc
implementation of Open-MPI. I wanted to build them in debug mode. This is
on a Macbook Pro
System Version: Mac OS X 10.4.9 (8P2137)
Kernel Version: Darwin 8.9.1
gcc: gcc version 4.0.1
ifort: 10.0.16
I have tried bui
On Jun 5, 2007, at 11:17 PM, Lie-Quan Lee wrote:
it is quite a headache to deal with mixed-language
issues on each compiler/platform. I have to compile my application with the
IBM Visual Age compiler,
Pathscale, the Cray X1E compiler,
intel/openmpi, intel/mpich, the PGI compiler ...
And of course, openmp