Hi Jeff,
I installed two slightly different versions of Open MPI: one in
/opt/openmpi (otherwise I would get the gfortran error) and the other in
/home/allan/openmpi.
However, I do not think that is the problem, as the path names are
specified in the .bashrc and .bash_profile files of the /home/allan directory.
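Concretely, both files just put the local install first on the relevant
search paths, something along these lines (the bin and lib subdirectories
are assumed from the install prefix above):

   # add the Open MPI install in my home directory to the search paths
   export PATH=/home/allan/openmpi/bin:$PATH
   export LD_LIBRARY_PATH=/home/allan/openmpi/lib:$LD_LIBRARY_PATH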
I also log in as user allan, who is not a superuser. When running Open
MPI with HPL I use the following command line:
a1> mpirun -mca pls_rsh_orted /home/allan/openmpi/bin/orted -hostfile aa
-np 16 ./xhpl
from the directory where xhpl resides, such as /homer/open/bench. I use
the -mca option pls_rsh_orted because otherwise it comes up with an
error that it cannot find the orted daemon on machines a1, a2, etc.;
that is probably a configuration error. However, the commands above and
the setup described work fine and there are no errors in the HPL.out
file, except that it is slow.
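For reference, the hostfile aa is just a plain list of the node names,
one per line, optionally with slot counts; something like the following
(the slot counts here are only an illustration):

   # Open MPI hostfile: node name plus optional slots=<processes per node>
   a1 slots=2
   a2 slots=2
   # and so on for the remaining nodes, 16 slots in total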
I use an ATLAS BLAS library for creating xhpl from hpl.tar.gz. The
makefile for HPL uses the ATLAS libs and the Open MPI mpicc compiler for
both compilation and linking, and I have zeroed out the MPI macro paths
in Make.open (that's what I renamed the HPL makefile) for make arch=open
in the hpl directory.
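For concreteness, the relevant parts of Make.open look roughly like this
(the ATLAS location is just an example; only the MPI section is actually
zeroed out):

   ARCH    = open
   # MPI paths left empty; mpicc already supplies the MPI headers and libraries
   MPdir   =
   MPinc   =
   MPlib   =
   # ATLAS BLAS (path shown here is illustrative)
   LAdir   = /usr/local/atlas
   LAinc   =
   LAlib   = $(LAdir)/lib/libcblas.a $(LAdir)/lib/libatlas.a
   # compile and link with the Open MPI wrapper compiler
   CC      = mpicc
   LINKER  = mpicc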
Please find attached the ompi_info -all output as requested. Thank you
very much:
Allan

We've done Linpack runs recently with Infiniband, which result in
performance comparable to MVAPICH, but not with the TCP port. Can you
try running with an earlier version, specifying on the command line:
-mca pml teg
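i.e., keeping everything else the same as your current invocation,
something like:

   mpirun -mca pml teg -mca pls_rsh_orted /home/allan/openmpi/bin/orted \
     -hostfile aa -np 16 ./xhpl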
I'm interested in seeing if there is any performance difference.
Thanks,
Tim