Nehemiah Dacres wrote:
also, I'm not sure if I'm reading the results right. According to the last run, did using the Sun compilers (update 1) result in higher performance with sunct?

On Wed, Apr 6, 2011 at 11:38 AM, Nehemiah Dacres <dacre...@slu.edu> wrote:
this first test was run as a base case to see if MPI works; the second run is to see the speed-up that OpenIB provides
[jian@therock ~]$ mpirun -machinefile list /opt/iba/src/mpi_apps/mpi_stress/mpi_stress
[jian@therock ~]$ mpirun -mca orte_base_help_aggregate btl,openib,self, -machinefile list /opt/iba/src/mpi_apps/mpi_stress/mpi_stress
[jian@therock ~]$ mpirun -mca orte_base_help_aggregate btl,openib,self, -machinefile list sunMpiStress
I don't think the command-line syntax for the MCA parameters is quite right.  I suspect it should be

--mca orte_base_help_aggregate 1 --mca btl openib,self
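
For example, combined with your machinefile and the mpi_stress binary from your first run, that would look something like this (just a sketch; adjust the paths for your setup):

mpirun --mca orte_base_help_aggregate 1 --mca btl openib,self -machinefile list /opt/iba/src/mpi_apps/mpi_stress/mpi_stress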

Further, both parameters are unnecessary.  The first is on by default, and the second is redundant since OMPI finds the fastest interconnect automatically (presumably openib,self, with sm if there are on-node processes).  Another way of setting MCA parameters is with environment variables:

setenv OMPI_MCA_orte_base_help_aggregate 1
setenv OMPI_MCA_btl openib,self

since you can then use ompi_info to check your settings.
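
For example, something like the following should show the values that are actually in effect (the exact ompi_info invocation varies a bit between OMPI releases, so treat this as a sketch):

ompi_info --param all all | grep orte_base_help_aggregate
ompi_info --param all all | grep '"btl"'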

Anyhow, it looks like your runs are probably all using openib, and I don't know why the last one is 2x faster.  If you're testing the interconnect, the performance should be limited by the IB (more or less) and not by the compiler.
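
One quick way to confirm that openib is actually being used (just a suggestion, reusing the machinefile and test program from above) is to force a run onto TCP and compare; if the earlier runs really went over IB, the TCP-only run should be noticeably slower:

mpirun --mca btl tcp,self -machinefile list /opt/iba/src/mpi_apps/mpi_stress/mpi_stress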
