[OMPI users] Question about OpenMPI performance vs. MVAPICH2

2009-09-20 Thread Brian Powell

Greetings,

I recently purchased and set up a new blade cluster using Xeon 5560  
CPUs, Mellanox DDR ConnectX cards, running CentOS 5.2. I use the  
cluster to run a large FORTRAN 90 fluid model. I have been using  
OpenMPI on my other clusters for years, and it is my default MPI  
environment.


I downloaded and installed the latest OpenMPI 1.3.3 release with the  
following:


./configure FC=ifort F77=ifort F90=ifort --prefix=/share/apps/openmpi-1.3.3-intel \
    --with-openib=/opt/ofed --with-openib-libdir=/opt/ofed/lib64 --with-tm=/opt/torque/


To show the configuration, I ran:

(machine:~)% mpicc -v
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux

Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
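
(A side note, since it matters for the comparison below: mpicc -v here essentially reports the back-end GNU C compiler, because the configure line above sets only the Fortran compilers to ifort. To see exactly what an Open MPI wrapper adds, the --showme options print the full underlying command line and flags, e.g.:

(machine:~)% mpicc --showme
(machine:~)% mpif90 --showme:compile
(machine:~)% mpif90 --showme:link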

I then ran a large number of tests using one of my typical model  
domain configurations (which are relatively expensive) to test how  
well the system was performing. I didn't want to use "benchmarking"  
code, but rather the code I actually use the cluster for. Remarkably,  
it was scaling linearly up to about 8 nodes (using 8 cores per node).


Out of curiosity, I decided to see how this compared with MVAPICH2. I downloaded the 1.4rc2 code and compiled it with the following:


./configure FC=ifort F77=ifort F90=ifort --prefix=/share/apps/mvapich2-1.4-intel \
    --enable-f90 --with-ib-libpath=/opt/ofed/lib64 --with-rdma=gen2 --with-ib-include=/opt/ofed/include


This was confirmed with:

(machine:~)% mpicc -v
mpicc for 1.4.0rc2
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux

Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)

I tested the same runs as before, now using MVAPICH2 rather than OpenMPI. To my astonishment, the MVAPICH2 runs were -- on average -- 20% faster in terms of wall-clock time. This was incredibly surprising to me. I tried a number of domain configurations (over 1-16 nodes, with various numbers of processors per node), and the improvement ranged from 7.7 to 35.2 percent depending on the configuration.


Because this was so surprising, I reran a number of my OpenMPI tests, and the results were consistent with the originals. I read through the Open MPI tuning FAQ and tried a number of RDMA-related options (the messages passed in the code I run are -- I believe -- rather small). I was able to improve the OpenMPI results by about 3%, but that is still nowhere near what I was getting with MVAPICH2.
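
For reference, the RDMA-related knobs the FAQ discusses are MCA parameters that can be passed straight to mpirun. A sketch of the kind of invocation involved is below; the executable name is just a placeholder and the parameter values are illustrative, not recommendations:

mpirun --mca btl openib,sm,self \
       --mca btl_openib_use_eager_rdma 1 \
       --mca btl_openib_eager_limit 32768 \
       -np 64 ./ocean_model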


I ran a final test which I find very strange: I ran the same test case  
on 1 cpu. The MVAPICH2 case was 23% faster!?!? This makes little sense  
to me. Both are using ifort as the mpif90 compiler using *identical*  
optimization flags, etc. I don't understand how the results could be  
different.


All of these cases are run with myself as the only user of the cluster, and each test runs alone (without any other interference on the machine). I am running TORQUE, so each job is submitted to the queue, and the queue-reported run time, which is the actual wall-clock time for the job to finish, is used as the measure. Some may discount that time metric; however, it is what I am most concerned with. If I have to wait 2 hours to run a job with OpenMPI but only 1:36 with MVAPICH2, that is a significant advantage to me.


That said, MVAPICH2 has its own problems with hung mpd processes that  
can linger around on the nodes, etc. I prefer to use OpenMPI, so my  
question is:


What does the list suggest I modify in order to improve the OpenMPI  
performance?


I have played with the RDMA parameters to increase their thresholds, but little was gained. I am happy to provide the output of ompi_info if needed, but it is long, so I didn't want to include it in the initial post. I apologize for my naivete about the internals of MPI hardware utilization.
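
Rather than the full ompi_info dump, the subset of parameters in question can be listed by querying just the openib BTL component, for example:

(machine:~)% ompi_info --param btl openib | grep -i rdma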


Thank you in advance.

Cheers,
Brian







[OMPI users] Job fails after hours of running on a specific node

2009-09-20 Thread Sangamesh B
Dear all,

 The CPMD application, compiled with OpenMPI-1.3 (Intel 10.1 compilers) on
CentOS-4.5, fails only when a specific node, node-0-2, is involved. It runs
well on the other nodes.

  Initially, the job failed after 5-10 minutes (on node-0-2 plus some other
nodes). After googling the error, I added the options "-mca
btl_openib_ib_min_rnr_timer 25 -mca btl_openib_ib_timeout 20" to the mpirun
command in the SGE script:

$ cat cpmdrun.sh
#!/bin/bash
#$ -N cpmd-acw
#$ -S /bin/bash
#$ -cwd
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
#$ -pe ib 32
unset SGE_ROOT
PP_LIBRARY=/home/user1/cpmdrun/wac/prod/PP
CPMD=/opt/apps/cpmd/3.11/ompi/SOURCE/cpmd311-ompi-mkl.x
MPIRUN=/opt/mpi/openmpi/1.3/intel/bin/mpirun
$MPIRUN -np $NSLOTS -hostfile $TMPDIR/machines \
    -mca btl_openib_ib_min_rnr_timer 25 -mca btl_openib_ib_timeout 20 \
    $CPMD wac_md26.in $PP_LIBRARY > wac_md26.out
After adding these options, the job executed for 24+ hours and then failed
with the same error as before:

$ cat err.6186.cpmd-acw
--
The OpenFabrics stack has reported a network error event.  Open MPI
will try to continue, but your job may end up failing.
  Local host:        node-0-2.local
  MPI process PID:   11840
  Error number:  10 (IBV_EVENT_PORT_ERR)
This error may indicate connectivity problems within the fabric;
please contact your system administrator.
--
[node-0-2.local:11836] 7 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] Set MCA parameter "orte_base_help_aggregate" to 0 to
see all help / error messages
[node-0-2.local:11836] 1 more process has sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 7 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 1 more process has sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 7 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 1 more process has sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 7 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 1 more process has sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 7 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 1 more process has sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 15 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 16 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[node-0-2.local:11836] 16 more processes have sent help message
help-mpi-btl-openib.txt / of error event
[[718,1],20][btl_openib_component.c:2902:handle_wc] from node-0-22.local to: node-0-2
--
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):
The total number of times that the sender wishes the receiver to
retry timeout, packet sequence, etc. errors before posting a
completion error.
This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.
Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:
* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 10).  The actual timeout value used is calculated as:
 4.096 microseconds * (2^btl_openib_ib_timeout)
  See the InfiniBand spec 1.2 (section 12.7.34) for more details.
Below is some information about the host that raised the error and the
peer to which it was connected:
  Local host:   node-0-22.local
  Local device: mthca0
  Peer host:    node-0-2
You may need to consult with your system administrator to get this
problem fixed.
--
error polling LP CQ with status RETRY EXCEEDED ERROR status number 12 for
wr_id 66384128 opcode 128 qp_idx 3
--
mpirun has exited due to process rank 20 with PID 10425 on
node ibc22 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--
rm: cannot remove `/tmp/6186.1.iblo
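
For scale, the ACK-timeout formula quoted in the error text above works out to roughly:

  4.096 usec * 2^10 = ~4.2 ms   (default btl_openib_ib_timeout = 10)
  4.096 usec * 2^20 = ~4.3 s    (btl_openib_ib_timeout = 20, as set on the mpirun line above)

so the retry window is already very generous; as the help text itself suggests, the repeated IBV_EVENT_PORT_ERR on node-0-2 points to a fabric/port problem on that host rather than to timeout tuning.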

Re: [OMPI users] Question about OpenMPI performance vs. MVAPICH2

2009-09-20 Thread Jed Brown
Brian Powell wrote:
> I ran a final test which I find very strange: I ran the same test case
> on 1 cpu. The MVAPICH2 case was 23% faster!?!? This makes little sense
> to me. Both are using ifort as the mpif90 compiler using *identical*
> optimization flags, etc. I don't understand how the results could be
> different.

Are you saying the output of mpicc/mpif90 -show has the same
optimization flags?  MPICH2 usually puts its own optimization flags
into the wrappers.
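
For what it's worth, a quick way to compare is to print both wrappers' underlying command lines side by side; assuming the install prefixes given earlier in the thread, something like:

/share/apps/mvapich2-1.4-intel/bin/mpif90 -show
/share/apps/openmpi-1.3.3-intel/bin/mpif90 -showme

MVAPICH2's MPICH2-derived wrapper takes -show, while Open MPI's takes -showme; any difference in default optimization flags will be visible in the output.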

Jed





Re: [OMPI users] Question about OpenMPI performance vs. MVAPICH2

2009-09-20 Thread Ralph Castain
Did you set -mca mpi_paffinity_alone 1? This will bind the processes  
to cores and (usually) significantly improve performance.
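
For example (the process count and executable name here are just placeholders):

mpirun --mca mpi_paffinity_alone 1 -np 64 ./ocean_model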


The upcoming 1.3.4 will have additional binding options to help with  
performance issues.



Re: [OMPI users] Question about OpenMPI performance vs. MVAPICH2

2009-09-20 Thread Ralph Castain
Excellent point that is often overlooked. Be sure you compiled with
optimization enabled, i.e., "mpicc -O3" (or whatever -O level you want).


OMPI's wrapper compilers do NOT contain optimization flags.
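
So any optimization has to be supplied explicitly when the application itself is built, e.g. (file names are placeholders):

mpif90 -O3 -c ocean_model.f90
mpif90 -O3 -o ocean_model ocean_model.o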

On Sep 20, 2009, at 9:50 AM, Jed Brown wrote:


> Brian Powell wrote:
>> I ran a final test which I find very strange: I ran the same test case
>> on 1 cpu. The MVAPICH2 case was 23% faster!?!? This makes little sense
>> to me. Both are using ifort as the mpif90 compiler using *identical*
>> optimization flags, etc. I don't understand how the results could be
>> different.
>
> Are you saying the output of mpicc/mpif90 -show has the same
> optimization flags?  MPICH2 usually puts its own optimization flags
> into the wrappers.
>
> Jed
