Hi,

I noticed that the exact same code takes about 50% more time to run
under Open MPI than under Intel MPI. I use the following commands to
compile and run:
Intel MPI compiler (Red Hat Fedora Core release 3 (Heidelberg), kernel
version: Linux 2.6.9-1.667smp x86_64):

        mpiicpc xxxx.cpp -o <filename> -lmpi

Open MPI 1.4.3 (CentOS 5.5 with Python 2.4.3, kernel version: Linux
2.6.18-194.el5 x86_64):

        mpiCC xxxx.cpp -o <filename>

MPI run command: 

        mpirun -np 4 <filename> 
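
For reference, I believe the compiler wrappers can report what they
actually invoke underneath, and I can post that output if it would
help (these options are what I found in the respective docs, so please
correct me if they are wrong):

        mpiCC --showme        (Open MPI: show the underlying compiler and flags)
        mpiicpc -show         (Intel MPI: show the underlying compile/link command)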


**Other hardware specs**

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 3
    model name      : Intel(R) Xeon(TM) CPU 3.60GHz
    stepping        : 4
    cpu MHz         : 3591.062
    cache size      : 1024 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 5
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl est tm2 cid xtpr
    bogomips        : 7182.12
    clflush size    : 64
    cache_alignment : 128
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

Can the cause of the performance difference be deciphered from the above info?

Do compiler flags have an effect on the efficiency of the simulation?
If so, what flags might be useful to check or include for Open MPI?
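
For example, would simply passing an optimization level through the
wrapper be the right approach? (The -O3 level below is just a guess on
my part.)

        mpiCC -O3 xxxx.cpp -o <filename>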

Would using MPICH2 instead of Open MPI improve the efficiency of these
simulations?
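
To try to separate the MPI library overhead from the compiler's code
generation, I put together a small ping-pong test between two ranks
that I plan to run on both machines. The message size and the
iteration count are arbitrary choices on my part:

    // pingpong.cpp -- measure the average round-trip time of a
    // fixed-size message between rank 0 and rank 1.
    // Build:  mpiCC pingpong.cpp -o pingpong   (mpiicpc on the Intel box)
    // Run:    mpirun -np 2 ./pingpong
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters  = 1000;       // arbitrary repeat count
        const int nbytes = 1 << 20;    // 1 MB message, adjust as needed
        std::vector<char> buf(nbytes, 0);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                // send to rank 1, then wait for the echo
                MPI_Send(&buf[0], nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&buf[0], nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                // echo the message back to rank 0
                MPI_Recv(&buf[0], nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&buf[0], nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0) {
            std::printf("avg round trip for %d bytes: %g s\n",
                        nbytes, (t1 - t0) / iters);
        }

        MPI_Finalize();
        return 0;
    }

If the two libraries give similar numbers here, I suppose the
difference is more likely in the compiler or the application code than
in the MPI layer itself.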

Thanks,
Ashwin.
