On 04/14/2014 03:02 PM, Jeff Squyres (jsquyres) wrote:
If you didn't use Open MPI, then this is the wrong mailing list for you.  :-)

(this is the Open MPI users' support mailing list)


On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djord...@gmail.com> wrote:

I didn't use OpenMPI.


On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
This can also happen when you compile your application with one MPI implementation
(e.g., Open MPI), but then mistakenly use the "mpirun" (or "mpiexec") from a
different MPI implementation (e.g., MPICH).
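
A quick, generic way to check which launcher is actually being picked up (an
illustrative sketch; paths and version numbers will differ on your system):

  which mpirun
  mpirun --version

Open MPI's mpirun identifies itself with a banner like "mpirun (Open MPI) x.y.z";
if that banner does not match the MPI implementation WRF was compiled against,
that is exactly the mismatch described above.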


On Apr 14, 2014, at 2:32 PM, Djordje Romanic <djord...@gmail.com> wrote:

I compiled it with: x86_64 Linux, gfortran compiler with gcc (dmpar). dmpar is the
distributed-memory option.

Attached is the self-generated configuration file.
The architecture specification settings start at line 107.
I didn't use Open MPI (shared memory option).
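
As a side note on checking the attached configure.wrf: in a gfortran/gcc dmpar
build, the distributed-memory compile is driven by MPI wrapper compilers, and the
relevant entries typically look something like this (illustrative values; the
generated file may differ):

  DM_FC  =  mpif90
  DM_CC  =  mpicc

Whichever MPI implementation provides those wrappers is the one whose mpirun must
be used to launch wrf.exe.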

NoNoNoNoNo!

You are confusing yourself (and even Jeff) by mixing up
OpenMPI and OpenMP.
Well, everybody wants to be Open ... :)

Note that OpenMP (no trailing "I") is the WRF shared-memory option,
which is different from OpenMPI (with the trailing "I"), the MPI implementation
supported by this mailing list
(distributed memory, so to speak, although intra-node it uses shared memory).

My guess is that your intent is to compile with MPI, right?
And actually with OpenMPI, i.e., with this implementation of MPI, right?

What is the output of "ldd ./wrf.exe"?
This may show the MPI libraries.
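
For example (an illustrative sketch; library names and paths vary between
installs and versions):

  ldd ./wrf.exe | grep -i mpi

An Open MPI build typically pulls in libraries such as libmpi.so (plus the
Fortran support libraries from the Open MPI install tree), while an MPICH build
typically shows libmpich/libmpifort instead. If no MPI libraries appear at all,
the executable was probably not built with the dmpar (MPI) option.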

I hope this helps,
Gus Correa




On Mon, Apr 14, 2014 at 1:23 PM, Dave Goodell (dgoodell) <dgood...@cisco.com> wrote:
On Apr 14, 2014, at 12:15 PM, Djordje Romanic <djord...@gmail.com> wrote:

When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
-------------------------------------------------
  starting wrf task            0  of            1
  starting wrf task            0  of            1
  starting wrf task            0  of            1
  starting wrf task            0  of            1
-------------------------------------------------
This indicates that it is not running as one 4-task MPI job; each of the 4 processes starts as task 0 of a 1-task run.

Any idea what might be the problem?

It could be that you compiled WRF with a different MPI implementation than you 
are using to run it (e.g., MPICH vs. Open MPI).
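
One hedged way to confirm which MPI the WRF build actually used (assuming the
same wrapper compilers from configure.wrf are still first in your PATH) is to
ask the wrapper what it wraps:

  mpif90 --showme    # Open MPI wrappers
  mpif90 -show       # MPICH wrappers

Then launch wrf.exe with the mpirun/mpiexec shipped by that same implementation,
or rebuild WRF against the MPI you actually intend to run with.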

-Dave

<configure.wrf>


--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
