Hi,

Thanks for this, guys. I think I might have two MPI implementations
installed, because 'locate mpirun' gives the following (see the bold lines):
-----------------------------------------
/etc/alternatives/mpirun
/etc/alternatives/mpirun.1.gz
*/home/djordje/Build_WRF/LIBRARIES/mpich/bin/mpirun*
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/bin64/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/intel/4.1.1.036/linux-x86_64/intel64/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.4.3/linux-x86_64-2.3.4/gnu4.5/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.4.3/linux-x86_64-2.3.4/gnu4.5/share/man/man1/mpirun.1
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.6.4/linux-x86_64-2.3.4/gnu4.6/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/openmpi/1.6.4/linux-x86_64-2.3.4/gnu4.6/share/man/man1/mpirun.1
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.2.0.0/linux64_2.6-x86-glibc_2.3.4/share/man/man1/mpirun.1.gz
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/bin/mpirun.mpich2
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/ia32/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/lib/linux_amd64/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/lib/linux_ia32/libmpirun.so
/home/djordje/StarCCM/Install/STAR-CCM+8.06.007/mpi/platform/8.3.0.2/linux64_2.6-x86-glibc_2.3.4/share/man/man1/mpirun.1.gz
*/usr/bin/mpirun*
/usr/bin/mpirun.openmpi
/usr/lib/openmpi/include/openmpi/ompi/runtime/mpiruntime.h
/usr/share/man/man1/mpirun.1.gz
/usr/share/man/man1/mpirun.openmpi.1.gz
/var/lib/dpkg/alternatives/mpirun
-----------------------------------------
This is a single machine. I actually just got it... another user used it
for 1-2 years.

Is this a possible cause of the problem?
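
For reference, here are a few commands that should show which mpirun and
mpif90 are actually being picked up (assuming a Debian/Ubuntu-style
alternatives setup, which the /etc/alternatives and
/var/lib/dpkg/alternatives entries above suggest):
-----------------------------------------
readlink -f $(which mpirun)          # real target behind the alternatives link
readlink -f $(which mpif90)          # compiler wrapper actually in use
mpirun --version                     # prints the MPI implementation and version
update-alternatives --display mpirun # which alternatives are registered
dpkg -l | grep -i mpi                # MPI-related packages installed
-----------------------------------------
If mpif90 and mpirun resolve to different MPI installations, that would
match what Dave, Jeff and Gus described.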

Regards,
Djordje


On Mon, Apr 14, 2014 at 7:06 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:

> Apologies for stirring up even more confusion by misspelling
> "Open MPI" as "OpenMPI".
> "OMPI" doesn't help either, because all OpenMP environment
> variables and directives start with "OMP".
> Maybe associating the names to
> "message passing" vs. "threads" would help?
>
> Djordje:
>
> 'which mpif90' etc show everything in /usr/bin.
> So, very likely they were installed from packages
> (yum, apt-get, rpm, ...), right?
> Have you tried something like
> "yum list |grep mpi"
> to see what you have?
>
> As Dave, Jeff and Tom said, this may be a mixup of different
> MPI implementations at compilation (mpicc mpif90) and runtime (mpirun).
> That is common; you may have different MPI implementations installed.
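>
> One quick way to confirm such a mixup (just a minimal sketch, assuming
> your mpicc wrapper works) is a tiny rank-check program, say rankcheck.c:
>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char **argv)
> {
>     int rank, size;
>     MPI_Init(&argc, &argv);                 /* start MPI              */
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank    */
>     MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */
>     printf("rank %d of %d\n", rank, size);
>     MPI_Finalize();
>     return 0;
> }
>
> Compile it with "mpicc rankcheck.c -o rankcheck" and run it with
> "mpirun -np 4 ./rankcheck".
> If it prints "rank 0 of 1" four times (like your WRF output did),
> then that mpirun does not belong to the MPI that mpicc linked against.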
>
> Other possibilities that may tell what MPI you have:
>
> mpirun --version
> mpif90 --show
> mpicc --show
>
> Yet another:
>
> locate mpirun
> locate mpif90
> locate mpicc
>
> The ldd output didn't show any MPI libraries; maybe they are linked statically.
>
> An alternative is to install Open MPI from source,
> and put it in a non-system directory
> (not /usr/bin, not /usr/local/bin, etc).
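>
> The usual recipe would be something along these lines (a sketch only;
> pick whatever Open MPI version and install prefix you prefer):
>
> tar xzf openmpi-<version>.tar.gz
> cd openmpi-<version>
> ./configure --prefix=$HOME/sw/openmpi CC=gcc FC=gfortran
> make all
> make install
> export PATH=$HOME/sw/openmpi/bin:$PATH
> export LD_LIBRARY_PATH=$HOME/sw/openmpi/lib:$LD_LIBRARY_PATH
>
> and then rebuild WRF so that it picks up mpif90/mpicc from that prefix.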
>
> Is this a single machine or a cluster?
> Or perhaps a set of PCs that you have access to?
> If it is a cluster, do you have access to a filesystem that is
> shared across the cluster?
> On clusters typically /home is shared, often via NFS.
>
> Gus Correa
>
>
> On 04/14/2014 05:15 PM, Jeff Squyres (jsquyres) wrote:
>
>> Maybe we should rename OpenMP to be something less confusing --
>> perhaps something totally unrelated, perhaps even nonsensical.
>> That'll end lots of confusion!
>>
>> My vote: OpenMP --> SharkBook
>>
>> It's got a ring to it, doesn't it?  And it sounds fearsome!
>>
>>
>>
>> On Apr 14, 2014, at 5:04 PM, "Elken, Tom" <tom.el...@intel.com> wrote:
>>
>>> That's OK.  Many of us make that mistake, though often as a typo.
>>> One thing that helps is that the correct spelling of Open MPI has a
>>> space in it, but OpenMP does not.
>>> If you are not aware of what OpenMP is, here is a link: http://openmp.org/wp/
>>>
>>> What makes it more confusing is that more and more apps offer the
>>> option of running in a hybrid mode, such as WRF, with OpenMP threads
>>> running over MPI ranks with the same executable.
>>> And sometimes that MPI is Open MPI.
>>>
>>> Cheers,
>>> -Tom
>>>
>>> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Djordje
>>> Romanic
>>> Sent: Monday, April 14, 2014 1:28 PM
>>> To: Open MPI Users
>>> Subject: Re: [OMPI users] mpirun runs in serial even I set np to several
>>> processors
>>>
>>> OK guys... Thanks for all this info. Frankly, I didn't know about these
>>> differences between OpenMP and OpenMPI. The commands:
>>> which mpirun
>>> which mpif90
>>> which mpicc
>>> give,
>>> /usr/bin/mpirun
>>> /usr/bin/mpif90
>>> /usr/bin/mpicc
>>> respectively.
>>>
>>> A tutorial on how to compile WRF
>>> (http://www.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php)
>>> provides a test program to test MPI. I ran the program and it gave me
>>> the output of a successful run, which is:
>>> ---------------------------------------------
>>> C function called by Fortran
>>> Values are xx = 2.00 and ii = 1
>>> status = 2
>>> SUCCESS test 2 fortran + c + netcdf + mpi
>>> ---------------------------------------------
>>> It uses mpif90 and mpicc for compiling. Below is the output of 'ldd
>>> ./wrf.exe':
>>>
>>>
>>>      linux-vdso.so.1 =>  (0x00007fff584e7000)
>>>      libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
>>> (0x00007f4d160ab000)
>>>      libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3
>>> (0x00007f4d15d94000)
>>>      libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4d15a97000)
>>>      libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1
>>> (0x00007f4d15881000)
>>>      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4d154c1000)
>>>      /lib64/ld-linux-x86-64.so.2 (0x00007f4d162e8000)
>>>      libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0
>>> (0x00007f4d1528a000)
>>>
>>>
>>>
>>> On Mon, Apr 14, 2014 at 4:09 PM, Gus Correa <g...@ldeo.columbia.edu>
>>> wrote:
>>> Djordje
>>>
>>> Your WRF configure file seems to use mpif90 and mpicc (line 115 and
>>> following).
>>> It also seems to have OpenMP (no trailing "I") DISABLED
>>> (lines 109-111, where the OpenMP stuff is commented out).
>>> So, it looks like to me your intent was to compile with MPI.
>>>
>>> Whether it is THIS MPI (OpenMPI) or another MPI (say MPICH, or MVAPICH,
>>> or Intel MPI, or Cray, or ...) only your environment can tell.
>>>
>>> What do you get from these commands:
>>>
>>> which mpirun
>>> which mpif90
>>> which mpicc
>>>
>>> I never built WRF here (but other people here use it).
>>> Which input do you provide to the command that generates the configure
>>> script that you sent before?
>>> Maybe the full command line will shed some light on the problem.
>>>
>>>
>>> I hope this helps,
>>> Gus Correa
>>>
>>>
>>> On 04/14/2014 03:11 PM, Djordje Romanic wrote:
>>> to get help :)
>>>
>>>
>>>
>>> On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djord...@gmail.com
>>> <mailto:djord...@gmail.com>> wrote:
>>>
>>>      Yes, but I was hoping to get. :)
>>>
>>>
>>>      On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres)
>>>      <jsquy...@cisco.com <mailto:jsquy...@cisco.com>> wrote:
>>>
>>>          If you didn't use Open MPI, then this is the wrong mailing list
>>>          for you.  :-)
>>>
>>>          (this is the Open MPI users' support mailing list)
>>>
>>>
>>>          On Apr 14, 2014, at 2:58 PM, Djordje Romanic <
>>> djord...@gmail.com
>>>          <mailto:djord...@gmail.com>> wrote:
>>>
>>>           > I didn't use OpenMPI.
>>>           >
>>>           >
>>>           > On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres)
>>>          <jsquy...@cisco.com <mailto:jsquy...@cisco.com>> wrote:
>>>           > This can also happen when you compile your application with
>>>          one MPI implementation (e.g., Open MPI), but then mistakenly use
>>>          the "mpirun" (or "mpiexec") from a different MPI implementation
>>>          (e.g., MPICH).
>>>           >
>>>           >
>>>           > On Apr 14, 2014, at 2:32 PM, Djordje Romanic
>>>          <djord...@gmail.com <mailto:djord...@gmail.com>> wrote:
>>>           >
>>>           > > I compiled it with: x86_64 Linux, gfortran compiler with
>>>          gcc   (dmpar). dmpar - distributed memory option.
>>>           > >
>>>           > > Attached is the self-generated configuration file. The
>>>          architecture specification settings start at line 107. I didn't
>>>          use Open MPI (shared memory option).
>>>           > >
>>>           > >
>>>           > > On Mon, Apr 14, 2014 at 1:23 PM, Dave Goodell (dgoodell)
>>>          <dgood...@cisco.com <mailto:dgood...@cisco.com>> wrote:
>>>           > > On Apr 14, 2014, at 12:15 PM, Djordje Romanic
>>>          <djord...@gmail.com <mailto:djord...@gmail.com>> wrote:
>>>           > >
>>>           > > > When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
>>>           > > > -------------------------------------------------
>>>           > > >  starting wrf task            0  of            1
>>>           > > >  starting wrf task            0  of            1
>>>           > > >  starting wrf task            0  of            1
>>>           > > >  starting wrf task            0  of            1
>>>           > > > -------------------------------------------------
>>>           > > > This indicates that it is not using 4 processors, but 1.
>>>           > > >
>>>           > > > Any idea what might be the problem?
>>>           > >
>>>           > > It could be that you compiled WRF with a different MPI
>>>          implementation than you are using to run it (e.g., MPICH vs.
>>>          Open MPI).
>>>           > >
>>>           > > -Dave
>>>           > >
>>>           > > <configure.wrf>
>>>           >
>>>           >
>>>           > --
>>>           > Jeff Squyres
>>>           > jsquy...@cisco.com <mailto:jsquy...@cisco.com>
>>>
>>>           > For corporate legal information go to:
>>>          http://www.cisco.com/web/about/doing_business/legal/cri/
>>>           >
>>>
>>>
>>>          --
>>>          Jeff Squyres
>>>          jsquy...@cisco.com <mailto:jsquy...@cisco.com>
>>>
>>>          For corporate legal information go to:
>>>          http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>>
>>>
>>>
>>>
>>>
