Re: [OMPI users] Running error after upgrading from OpenMPI 1.2.7 to 1.3.2

2009-12-29 Thread Gus Correa
Hi Galaxia, Try: mpirun -mca btl ^openib -np 2 hello_world This should turn off openib at runtime. A better alternative is to download the OpenMPI tarball and build it from source; it will then build support only for the hardware you have. If you do so, use a non-standard installation directory. This ...
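Gus's runtime workaround can also be made persistent through a per-user MCA parameter file, so every mpirun picks it up. A minimal sketch (the `~/.openmpi/mca-params.conf` location is the standard OpenMPI per-user config path; the program name is just an example):

```shell
# One-off: exclude the openib BTL for a single run (^ means "everything except")
mpirun -mca btl ^openib -np 2 ./hello_world

# Persistent: record the same setting in the per-user MCA parameter file
mkdir -p ~/.openmpi
echo "btl = ^openib" >> ~/.openmpi/mca-params.conf
```

With the parameter file in place, plain `mpirun -np 2 ./hello_world` no longer tries to open the InfiniBand BTL.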

[OMPI users] Running error after upgrading from OpenMPI 1.2.7 to 1.3.2

2009-12-29 Thread Galaxia
I am working on a computer running CentOS 5 with 2 quad-core CPUs (a single machine, not connected to others). Previously the OpenMPI version was 1.2.7 and my programs worked fine. After the automatic upgrade to 1.3.2 (through yum), I can compile programs, but running them shows an error: ...
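When a package upgrade replaces OpenMPI underneath previously built binaries, a common first check is whether the compiler wrapper and the runtime come from the same installation, and whether old binaries need recompiling against the new library. A quick sanity-check sketch (these are standard OpenMPI tools; the exact output lines vary by version):

```shell
# Do mpicc and mpirun resolve to the same installation?
which mpicc mpirun

# Which OpenMPI version is actually installed?
ompi_info | grep "Open MPI:"

# Programs built against 1.2.x generally need a rebuild against 1.3.x:
mpicc -O2 -o hello_world hello_world.c
mpirun -np 2 ./hello_world
```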

Re: [OMPI users] Problem compiling 1.4.0 snap with PGI 10.0-1 and openib flags turned on ...

2009-12-29 Thread Mostyn Lewis
Chance your arm and include a CFLAGS with your configure: CFLAGS=-D__GNUC__. A small test case using just those headers works this way. BTW, PGI 9.0-1 also fails on those headers. DM. On Tue, 29 Dec 2009, Richard Walsh wrote: All, Not overwhelmed with responses here ...
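Mostyn's suggestion as a full configure sketch: defining `__GNUC__` makes the OpenIB headers take their GCC code paths under PGI. The compiler names and install prefix below are assumptions for illustration, and the flag is a workaround rather than a supported configuration:

```shell
# Build OpenMPI with PGI, pretending to be GCC for the OpenIB headers
./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 \
    CFLAGS="-D__GNUC__" \
    --with-openib \
    --prefix=/opt/openmpi/pgi      # hypothetical non-system prefix
make && make install
```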

Re: [OMPI users] Problem compiling 1.4.0 snap with PGI 10.0-1 and openib flags turned on ...

2009-12-29 Thread Joshua Bernstein
Hi Richard, I've built our OpenMPI with PGI 10.0 and included OpenIB support, and I've verified it works. You'll notice we build with both uDAPL and OpenIB, but generally only OpenIB is used. Our complete configure line (OpenMPI is included with Scyld ClusterWare) is shown here: ./conf...

Re: [OMPI users] Problem compiling 1.4.0 snap with PGI 10.0-1 and openib flags turned on ...

2009-12-29 Thread Gus Correa
Hi Richard, I haven't upgraded from PGI 8.0-4, so no comments on PGI 10. As for IB, I configured OpenMPI only with the --with-openib flag, not --enable-openib-ibcm. Have you tried that? Gus Correa - Gustavo Correa Lamont-Doh...
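Gus's minimal IB configuration, sketched out: enable the openib BTL but leave the IB connection manager option off. The prefix and job name are hypothetical:

```shell
# Plain InfiniBand support, without --enable-openib-ibcm
./configure --prefix=/opt/openmpi-1.4.0 --with-openib
make && make install

# Confirm the openib BTL was built
/opt/openmpi-1.4.0/bin/ompi_info | grep openib
```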

Re: [OMPI users] Problem compiling 1.4.0 snap with PGI 10.0-1 and openib flags turned on ...

2009-12-29 Thread Richard Walsh
All, Not overwhelmed with responses here ... ;-) ... No one using PGI 10.0 yet? We need it to make use of the GPU compiler directives they are supporting. Can someone perhaps comment on whether this is the correct way to configure for an IB system? Everything works with Intel and/or if I compile w...

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-29 Thread Gus Correa
Hi Ilya, OK, with 28 nodes and 4 GB/node, you have much more memory than I thought. The maximum N is calculated from the total memory you have (assuming the nodes are homogeneous, i.e. have the same RAM), not from the memory per node. I haven't tried OpenMPI 1.3.3. The last time I ran HPL was with OpenM...
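Gus's sizing rule can be sketched numerically: choose N so the N x N double-precision matrix fills a large fraction (commonly ~80%) of the *total* RAM across all nodes, then round down to a multiple of the HPL block size NB. For Ilya's 28 nodes with 4 GB each (the 80% fraction and NB = 128 are conventional assumptions, not values from the thread):

```shell
awk 'BEGIN {
  nodes = 28; gb_per_node = 4; nb = 128    # NB: HPL block size (assumed)
  ram = nodes * gb_per_node * 1024^3       # total RAM in bytes, all nodes
  n   = sqrt(0.8 * ram / 8)                # 80% of RAM, 8 bytes per double
  print int(n / nb) * nb                   # round down to a multiple of NB
}'
```

This prints 109568, i.e. a problem size around N = 110,000 for this cluster; a much smaller N underuses memory, and a larger one risks swapping.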

Re: [OMPI users] MTT -trivial :All tests are not getting passed

2009-12-29 Thread Ralph Castain
The executables must be available on all nodes; normally this is done by putting them in an NFS-mounted directory. On Dec 29, 2009, at 6:35 AM, vishal shorghar wrote: > Hi All, > > Today I reran the trivial test on two nodes, pointing the --scratch option > to an NFS share that is accessible ...
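Ralph's point in command form: either launch the binary from a path that exists on every node (NFS), or have mpirun stage it to the remote nodes itself. A sketch, with hypothetical paths and hostfile; `--preload-binary` is an OpenMPI mpirun option, but check your version's man page before relying on it:

```shell
# Option 1: put the executable on an NFS mount visible to all nodes
cp ./trivial_test /shared/nfs/trivial_test
mpirun --hostfile hosts -np 8 /shared/nfs/trivial_test

# Option 2: let mpirun copy the local binary out to the remote nodes
mpirun --hostfile hosts -np 8 --preload-binary ./trivial_test
```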

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-29 Thread ilya zelenchuk
Hello, Gus! Sorry for the lack of debug info. I have 28 nodes; each node has two 2.4 GHz Xeon processors and 4 GB RAM. OpenMPI 1.3.3 was compiled as: CC=icc CFLAGS="-O3" CXX=icpc CXXFLAGS="-O3" F77=ifort FFLAGS="-O3" FC=ifort FCFLAGS="-O3" ./configure --prefix=/opt/openmpi/intel/ --enable-debu...

[OMPI users] MTT -trivial :All tests are not getting passed

2009-12-29 Thread vishal shorghar
Hi All, Today I reran the trivial test on two nodes, pointing the --scratch option to an NFS share that is accessible to all nodes in the hostlist (as suggested by Ethan), but still no luck. I have shared "/root/mtt-svn/samples/installs/nRpF/tests/trivial/test_get__trivial" on my head node, which ...