... to get help. :)


On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djord...@gmail.com> wrote:

> Yes, but I was hoping to get. :)
>
>
> On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> If you didn't use Open MPI, then this is the wrong mailing list for you. :-)
>>
>> (this is the Open MPI users' support mailing list)
>>
>>
>> On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djord...@gmail.com> wrote:
>>
>> > I didn't use Open MPI.
>> >
>> >
>> > On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>> > This can also happen when you compile your application with one MPI implementation (e.g., Open MPI), but then mistakenly use the "mpirun" (or "mpiexec") from a different MPI implementation (e.g., MPICH).
>> >
>> >
>> > On Apr 14, 2014, at 2:32 PM, Djordje Romanic <djord...@gmail.com> wrote:
>> >
>> > > I compiled it with the "x86_64 Linux, gfortran compiler with gcc (dmpar)" option; dmpar is the distributed-memory option.
>> > >
>> > > Attached is the self-generated configuration file. The architecture specification settings start at line 107. I didn't use Open MPI (shared memory option).
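>> > >
>> > > For a dmpar build, the compiler wrappers the build used are recorded in configure.wrf; a quick way to cross-check them against the launcher (assuming the standard DM_FC/DM_CC variable names) is:
>> > >
>> > >     grep -E 'DM_FC|DM_CC' configure.wrf   # dmpar configurations typically set DM_FC = mpif90 and DM_CC = mpicc
>> > >     which mpif90 mpicc mpirun             # all three should come from the same MPI installation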
>> > >
>> > >
>> > > On Mon, Apr 14, 2014 at 1:23 PM, Dave Goodell (dgoodell) <dgood...@cisco.com> wrote:
>> > > On Apr 14, 2014, at 12:15 PM, Djordje Romanic <djord...@gmail.com> wrote:
>> > >
>> > > > When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
>> > > > -------------------------------------------------
>> > > >  starting wrf task            0  of            1
>> > > >  starting wrf task            0  of            1
>> > > >  starting wrf task            0  of            1
>> > > >  starting wrf task            0  of            1
>> > > > -------------------------------------------------
>> > > > This indicates that it is not running as one 4-task job, but as four independent 1-task jobs: each process reports rank 0 of 1.
>> > > >
>> > > > Any idea what might be the problem?
>> > >
>> > > It could be that you compiled WRF with a different MPI implementation than you are using to run it (e.g., MPICH vs. Open MPI).
>> > >
>> > > -Dave
>> > >
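>> > > A minimal way to test that hypothesis, independent of WRF, is a tiny MPI program built and launched with the same commands as wrf.exe (hello_mpi.f90 is an illustrative name):
>> > >
>> > >     cat > hello_mpi.f90 <<'EOF'
>> > >     program hello_mpi
>> > >       use mpi
>> > >       implicit none
>> > >       integer :: rank, nprocs, ierr
>> > >       call MPI_Init(ierr)
>> > >       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>> > >       call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
>> > >       print *, 'starting task', rank, 'of', nprocs
>> > >       call MPI_Finalize(ierr)
>> > >     end program hello_mpi
>> > >     EOF
>> > >     mpif90 hello_mpi.f90 -o hello_mpi
>> > >     mpirun -np 4 ./hello_mpi
>> > >
>> > > A matched toolchain prints tasks 0 through 3 "of 4"; a mismatched mpirun prints "of 1" four times, exactly like the wrf.exe output above.
>> > >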
>> > > <configure.wrf>
>> >
>> >
>> > --
>> > Jeff Squyres
>> > jsquy...@cisco.com
>> > For corporate legal information go to:
>> > http://www.cisco.com/web/about/doing_business/legal/cri/
>> >
>>
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to:
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>
>>
>
>
