There may also be confusion here between OpenMP and Open MPI -- these are two 
very different technologies.

OpenMP -- compiler-assisted multi-threading.  You put pragmas (directives) in 
your code to tell the compiler how to parallelize your application.
Open MPI -- a library for explicit message-passing parallelization.  You put 
calls to MPI API functions in your code.  The two sketches below illustrate 
the difference.
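
To make that concrete, here are two minimal, hypothetical Fortran sketches 
(free-form source; the program names, file names, and array size are invented 
for illustration).  The first relies on an OpenMP directive and the -fopenmp 
flag; the second calls the MPI API directly, is built with an MPI compiler 
wrapper, and is launched with mpirun:

  ! OpenMP sketch: the compiler creates the threads from the directive below.
  ! Build (assumed): gfortran -O3 -fopenmp -o omp_demo.exe omp_demo.f90
  program omp_demo
    use omp_lib
    implicit none
    integer :: i
    real    :: a(1000)
    !$omp parallel do
    do i = 1, 1000
       a(i) = 2.0 * real(i)      ! loop iterations split across threads
    end do
    !$omp end parallel do
    print *, 'max OpenMP threads:', omp_get_max_threads()
  end program omp_demo

  ! Open MPI sketch: parallelism is explicit -- every process that mpirun
  ! launches calls the MPI library itself.
  ! Build (assumed): mpifort -O3 -o mpi_demo.exe mpi_demo.f90
  ! Run   (assumed): mpirun -np 8 ./mpi_demo.exe
  program mpi_demo
    use mpi
    implicit none
    integer :: ierr, rank, nprocs
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    print *, 'hello from rank', rank, 'of', nprocs
    call MPI_Finalize(ierr)
  end program mpi_demo

If abc.f contains no MPI calls like those in the second sketch, mpirun can 
only launch extra copies of the program; it cannot parallelize the work.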

Your gfortran command line suggests you are using OpenMP, not Open MPI.
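
A quick way to confirm (assuming a Linux VM and dynamic linking): "ldd ./abc.exe" 
will only list an MPI library such as libmpi if the executable was actually 
linked against MPI (e.g., built with the mpifort/mpif90 wrapper); a plain 
gfortran build will not show it.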

If you use Open MPI to launch 8 processes, each with multiple threads, you 
could very well be oversubscribing your machine, and then it would definitely 
run slower than 1 process (with multiple threads).
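
For example (assuming the OpenMP runtime defaults to one thread per core, 
which is typical unless OMP_NUM_THREADS says otherwise), "mpirun -np 8 ./abc.exe" 
on an 8-core VM could start 8 processes that each spin up 8 threads: 64 threads 
competing for 8 cores, and that contention can easily make the job slower than 
a single 8-thread run.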



> On Nov 28, 2019, at 12:03 AM, Gilles Gouaillardet via users 
> <users@lists.open-mpi.org> wrote:
> 
> Your gfortran command line strongly suggests your program is serial and does 
> not use MPI at all.
> 
> Consequently, mpirun will simply spawn 8 identical instances of the very same 
> program, and no speedup should be expected
> 
> (but you can expect some slowdown and/or file corruption).
> 
> 
> If you observe similar behaviour with Open MPI and Intel MPI, then this is 
> very unlikely to be an Open MPI issue,
> 
> and this mailing list is not the right place to discuss a general 
> MPI/parallelization performance issue.
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> On 11/28/2019 1:54 PM, Mahesh Shinde via users wrote:
>> Hi,
>> 
>> I am running a physics-based boundary layer model whose parallel code 
>> uses the Open MPI libraries, which I installed. I am running it on a 
>> general-purpose Azure machine with 8 cores and 32 GB RAM. I compiled the 
>> code with "gfortran -O3 -fopenmp -o abc.exe abc.f" and then ran 
>> "mpirun -np 8 ./abc.exe", but I found it slow with both 4 and 8 cores. I 
>> also tried a trial version of the Intel Parallel Studio suite, but saw no 
>> improvement in speed.
>> 
>> Why is this happening? Is the code not properly utilizing MPI? Does it 
>> need an HPC machine on Azure? Should it be compiled with Intel ifort?
>> 
>> Your suggestions/comments are welcome.
>> 
>> Thanks and regards.
>> Mahesh
>> 
>> 


-- 
Jeff Squyres
jsquy...@cisco.com
