Hi Djordje
That is great news.
Congrats on making it work!
Just out of curiosity: What did the trick?
Did you install Open MPI from source, or did you sort out
the various MPI flavors which were previously installed on your system?
Now the challenge is to add OpenMP and run WRF
in hybrid mode just for fun! :)
Best,
Gus Correa
***
PS: Parallel computing, MPI, and OpenMP, tutorials at LLNL:
https://computing.llnl.gov/tutorials/parallel_comp/
https://computing.llnl.gov/tutorials/mpi/
https://computing.llnl.gov/tutorials/openMP/
Ch. 5 in the first tutorial gives an outline of the various
parallel programming models, and the basic ideas behind MPI and OpenMP.
**
Wild guesses based on other models (climate, not weather):
Most likely WRF uses the domain decomposition technique to solve
the dynamics' PDEs, exchanging sub-domain boundary data via MPI.
[Besides the dynamics, each process will also
compute thermodynamics, radiation effects, etc,
which may also require data exchange with neighbors.]
Each MPI process takes care of/computes on a subdomain,
and exchanges boundary data with those processes assigned
to neighbor subdomains, with the whole group contributing to
solve the PDEs in the global domain.
[This uses MPI point-to-point functions like MPI_Send/MPI_Recv.]
There may also be some additional global calculations, say,
to ensure conservation of mass, energy, momentum, etc.,
which may involve all MPI processes.
[This may use MPI collective functions like MPI_Reduce.]
http://en.wikipedia.org/wiki/Domain_decomposition_methods
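[Illustration only, not WRF code: a minimal sketch of that pattern for a
1-D decomposition, using MPI_Sendrecv (a deadlock-safe combination of
MPI_Send/MPI_Recv) for the halo exchange and MPI_Allreduce for a global
sum. All names and sizes below are made up for the example.]

/* halo_sketch.c: 1-D halo exchange + global sum (toy example, not WRF) */
#include <mpi.h>
#include <stdio.h>
#define NLOC 100                          /* interior points owned by this rank */

int main(int argc, char **argv)
{
    int rank, nprocs, i, left, right;
    double u[NLOC + 2];                   /* +2 ghost (halo) cells */
    double local_sum = 0.0, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 1; i <= NLOC; i++) u[i] = rank;   /* fill interior */
    u[0] = u[NLOC + 1] = 0.0;

    left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* exchange boundary data with neighbor subdomains (point-to-point) */
    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                 &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                 &u[0],        1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* global calculation (e.g. a conservation check) via a collective */
    for (i = 1; i <= NLOC; i++) local_sum += u[i];
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0) printf("global sum = %g\n", global_sum);
    MPI_Finalize();
    return 0;
}

[Compile with mpicc and run with mpirun -np 4 ./a.out to see the pattern
at toy scale.]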
Besides, WRF can probably split the computation in loops
across different threads via OpenMP.
[Well, there is more to OpenMP than just loop splitting,
but loop splitting is the most common.]
You need to provide physical processors for those threads,
which is typically done by setting the environment variable
OMP_NUM_THREADS (e.g. in bash: 'export OMP_NUM_THREADS=4').
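[Again for illustration only, not WRF code: a toy loop split across
OpenMP threads, with the thread count taken from OMP_NUM_THREADS as above.]

/* omp_sketch.c: toy OpenMP loop splitting (compile with e.g. gcc -fopenmp) */
#include <omp.h>
#include <stdio.h>
#define N 1000000

static double a[N];

int main(void)
{
    int i;
    double sum = 0.0;

    /* iterations are divided among the threads; the reduction clause
       combines the per-thread partial sums at the end of the loop */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("max threads = %d, sum = %g\n", omp_get_max_threads(), sum);
    return 0;
}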
In hybrid (MPI + OpenMP) mode you use both, but must be careful
to provide enough processors for all MPI processes and OpenMP threads.
Say, for 3 MPI processes, each one launching two OpenMP threads,
you could do (if you turned both on when you configured WRF):
export OMP_NUM_THREADS=2
mpirun -np 3 ./wrf.exe
for a total of 6 processors.
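[To see what "hybrid" means in practice, here is a minimal MPI + OpenMP
"hello" sketch, again not WRF code; with the export/mpirun lines above
it should print 3 ranks x 2 threads = 6 lines.]

/* hybrid_sketch.c: hybrid hello (compile with e.g. mpicc -fopenmp) */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nprocs;

    /* request a thread level suitable for OpenMP regions that do not
       themselves call MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, nprocs, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}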
Better not to oversubscribe the processors.
If your computer has 4 cores, use -np 2 instead of 3 in the lines above.
For a small number of processors (and/or a small global domain), you
will probably get better performance if you assign
all processors to MPI processes, and simply do not use OpenMP.
Finally, if you do:
export OMP_NUM_THREADS=1
mpiexec -np 4 ./wrf.exe
WRF will run in MPI mode, even if you configured it as hybrid.
[At least this is what it is supposed to do.]
I hope this helps,
Gus Correa
On 04/15/2014 01:59 PM, Djordje Romanic wrote:
Hi,
It is working now. It shows:
--------------------------------------------
starting wrf task 0 of 4
starting wrf task 1 of 4
starting wrf task 2 of 4
starting wrf task 3 of 4
---------------------------------------------
Thank you so much!!! You helped me a lot! Finally :) And plus I know the
difference between OpenMP and Open MPI (well, to be honest not
completely, but more than I knew before). :D
Thanks,
Djordje
On Tue, Apr 15, 2014 at 11:57 AM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Hi Djordje
"locate mpirun" shows items labled "intel", "mpich", and "openmpi",
maybe more.
Is it Ubuntu or Debian?
Anyway, if you got this mess from somebody else,
instead of sorting it out,
it may save you time and headaches to install Open MPI from source.
Since it is a single machine, there are no worries about
having a homogeneous installation across several computers
(which could be done if needed, though).
0. Make sure you have gcc, g++, and gfortran installed,
including any "devel" packages that may exist.
[apt-get or yum should tell you]
If something is missing, install it.
1. Download the Open MPI (a.k.a. OMPI) tarball to a work directory
of your choice,
say /home/djordje/inst/openmpi/1.8 (create the directory if needed),
and untar the tarball (tar -jxvf ...)
http://www.open-mpi.org/software/ompi/v1.8/
2. Configure it to be installed in yet another directory under
your home, say /home/djordje/sw/openmpi/1.8 (with --prefix).
cd /home/djordje/inst/openmpi/1.8
./configure --prefix=/home/djordje/sw/openmpi/1.8 CC=gcc CXX=g++ FC=gfortran
[Not sure if with 1.8 there is a separate F77 interface; if there is,
add F77=gfortran to the configure command line above.
Also, I am using OMPI 1.6.5,
but my recollection is that Jeff would phase out mpif90 and mpif77 in
favor of a single Fortran wrapper (mpifort or the like). Please check
the OMPI README file.]
Then do
make
make install
3. Setup your environment variables PATH and LD_LIBRARY_PATH
to point to *this* Open MPI installation ahead of anything else.
This is easily done in your .bashrc or .tcshrc/.cshrc file,
depending on which shell you use
.bashrc :
export PATH=/home/djordje/sw/openmpi/1.8/bin:$PATH
export LD_LIBRARY_PATH=/home/djordje/sw/openmpi/1.8/lib:$LD_LIBRARY_PATH
.tcshrc/.cshrc:
setenv PATH /home/djordje/sw/openmpi/1.8/bin:$PATH
setenv LD_LIBRARY_PATH /home/djordje/sw/openmpi/1.8/lib:$LD_LIBRARY_PATH
4. Logout, login again (or open a new terminal), and check if you
get the right mpirun, etc:
which mpicc
which mpif90
which mpirun
They should point to items in /home/djordje/sw/openmpi/1.8/bin
5. Rebuild WRF from scratch.
6. Check if WRF got the libraries right:
ldd wrf.exe
This should show MPI libraries in /home/djordje/sw/openmpi/1.8/lib
7. Run WRF
mpirun -np 4 wrf.exe
I hope this helps,
Gus Correa
On 04/14/2014 08:21 PM, Djordje Romanic wrote:
Hi,
Thanks for this guys. I think I might have two MPI implementations
installed because 'locate mpirun' gives (see bold lines) :
-----------------------------------------
/etc/alternatives/mpirun
/etc/alternatives/mpirun.1.gz
*/home/djordje/Build_WRF/LIBRARIES/mpich/bin/mpirun*
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/bin64/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/intel64/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.4.3/linux-x86_64-2.3.4/gnu4.5/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.4.3/linux-x86_64-2.3.4/gnu4.5/share/man/man1/mpirun.1
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.6.4/linux-x86_64-2.3.4/gnu4.6/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.6.4/linux-x86_64-2.3.4/gnu4.6/share/man/man1/mpirun.1
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/share/man/man1/mpirun.1.gz
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/share/man/man1/mpirun.1.gz
*/usr/bin/mpirun*
/usr/bin/mpirun.openmpi
/usr/lib/openmpi/include/openmpi/ompi/runtime/mpiruntime.h
/usr/share/man/man1/mpirun.1.gz
/usr/share/man/man1/mpirun.openmpi.1.gz
/var/lib/dpkg/alternatives/mpirun
-----------------------------------------
This is a single machine. I actually just got it... another user
used it
for 1-2 years.
Is this a possible cause of the problem?
Regards,
Djordje
On Mon, Apr 14, 2014 at 7:06 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Apologies for stirring up even more confusion by misspelling
"Open MPI" as "OpenMPI".
"OMPI" doesn't help either, because all OpenMP environment
variables and directives start with "OMP".
Maybe associating the names to
"message passing" vs. "threads" would help?
Djordje:
'which mpif90' etc. show everything in /usr/bin.
So, very likely they were installed from packages
(yum, apt-get, rpm, ...), right?
Have you tried something like
"yum list |grep mpi"
to see what you have?
As Dave, Jeff, and Tom said, this may be a mixup of different
MPI implementations at compile time (mpicc, mpif90) and
runtime (mpirun).
That is common; you may have several MPI implementations installed.
Other possibilities that may tell what MPI you have:
mpirun --version
mpif90 --showme   (Open MPI; for MPICH use mpif90 -show)
mpicc --showme    (Open MPI; for MPICH use mpicc -show)
Yet another:
locate mpirun
locate mpif90
locate mpicc
The ldd output didn't show any MPI libraries; maybe they are
static libraries.
An alternative is to install Open MPI from source,
and put it in a non-system directory
(not /usr/bin, not /usr/local/bin, etc).
Is this a single machine or a cluster?
Or perhaps a set of PCs that you have access to?
If it is a cluster, do you have access to a filesystem that is
shared across the cluster?
On clusters typically /home is shared, often via NFS.
Gus Correa
On 04/14/2014 05:15 PM, Jeff Squyres (jsquyres) wrote:
Maybe we should rename OpenMP to be something less
confusing --
perhaps something totally unrelated, perhaps even
non-sensical.
That'll end lots of confusion!
My vote: OpenMP --> SharkBook
It's got a ring to it, doesn't it? And it sounds fearsome!
On Apr 14, 2014, at 5:04 PM, "Elken, Tom" <tom.el...@intel.com> wrote:
That’s OK. Many of us make that mistake, though often as a typo.
One thing that helps is that the correct spelling of Open MPI
has a space in it, but OpenMP does not.
If not aware what OpenMP is, here is a link:
http://openmp.org/wp/
What makes it more confusing is that more and more apps
offer the option of running in a hybrid mode, such as WRF,
with OpenMP threads running over MPI ranks with the same executable.
And sometimes that MPI is Open MPI.
Cheers,
-Tom
From: users [mailto:users-bounces@open-mpi.org] On Behalf Of Djordje Romanic
Sent: Monday, April 14, 2014 1:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] mpirun runs in serial even I set np to several processors
OK guys... Thanks for all this info. Frankly, I didn't know
these differences between OpenMP and OpenMPI. The commands:
which mpirun
which mpif90
which mpicc
give,
/usr/bin/mpirun
/usr/bin/mpif90
/usr/bin/mpicc
respectively.
A tutorial on how to compile WRF
(http://www.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php)
provides a test program to test MPI. I ran the program and
it gave me the output of a successful run, which is:
-----------------------------------------------
C function called by Fortran
Values are xx = 2.00 and ii = 1
status = 2
SUCCESS test 2 fortran + c + netcdf + mpi
-----------------------------------------------
It uses mpif90 and mpicc for compiling. Below is
the output
of 'ldd ./wrf.exe':
linux-vdso.so.1 => (0x00007fff584e7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4d160ab000)
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007f4d15d94000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4d15a97000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4d15881000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4d154c1000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4d162e8000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f4d1528a000)
On Mon, Apr 14, 2014 at 4:09 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Djordje
Your WRF configure file seems to use mpif90 and mpicc
(line 115 & following).
In addition, it also seems to have DISABLED OpenMP (NO TRAILING "I")
(lines 109-111, where the OpenMP stuff is commented out).
So, it looks to me like your intent was to compile with MPI.
Whether it is THIS MPI (OpenMPI) or another MPI (say MPICH, or MVAPICH,
or Intel MPI, or Cray, or ...), only your environment can tell.
What do you get from these commands:
which mpirun
which mpif90
which mpicc
I never built WRF here (but other people here use it).
Which input do you provide to the command that generates the
configure script that you sent before?
Maybe the full command line will shed some light on the problem.
I hope this helps,
Gus Correa
On 04/14/2014 03:11 PM, Djordje Romanic wrote:
to get help :)
On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djord...@gmail.com> wrote:
Yes, but I was hoping to get. :)
On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
If you didn't use Open MPI, then this is the wrong mailing list
for you. :-)
(this is the Open MPI users' support mailing list)
On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djord...@gmail.com> wrote:
> I didn't use OpenMPI.
>
>
> On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
> This can also happen when you compile your application with
> one MPI implementation (e.g., Open MPI), but then mistakenly use
> the "mpirun" (or "mpiexec") from a different MPI implementation
> (e.g., MPICH).
>
>
> On Apr 14, 2014, at 2:32 PM, Djordje Romanic <djord...@gmail.com> wrote:
>
> > I compiled it with: x86_64 Linux, gfortran compiler with
> > gcc (dmpar). dmpar - distributed memory option.
> >
> > Attached is the self-generated configuration file. The
> > architecture specification settings start at line 107. I didn't
> > use Open MPI (shared memory option).
> >
> >
> > On Mon, Apr 14, 2014 at 1:23 PM,
Dave Goodell
(dgoodell)
<dgood...@cisco.com
<mailto:dgood...@cisco.com> <mailto:dgood...@cisco.com
<mailto:dgood...@cisco.com>>
<mailto:dgood...@cisco.com
<mailto:dgood...@cisco.com> <mailto:dgood...@cisco.com
<mailto:dgood...@cisco.com>>>> wrote:
> > On Apr 14, 2014, at 12:15 PM, Djordje Romanic <djord...@gmail.com> wrote:
> >
> > > When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
> > > ---------------------------------------------------
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > ---------------------------------------------------
> > > This indicates that it is not using 4 processors, but 1.
> > >
> > > Any idea what might be the problem?
> >
> > It could be that you compiled WRF with a different MPI
> > implementation than you are using to run it (e.g., MPICH vs.
> > Open MPI).
> >
> > -Dave
> >
> >