[OMPI users] Error run mpiexec

2008-07-21 Thread mariognu-outside
Hi all,

First, excuse my English; it isn't good :)

Well, I have 2 machines: a Xeon with 2 CPUs (64-bit) and a Pentium 4 with
only one CPU. On both machines I have installed Ubuntu 8 Server and all the
packages for Open MPI and GROMACS.

I use GROMACS for my work.

OK, on both machines, in my user folder, I have a hostfile like this:
machine1 cpu=2
machine2

Machine1 is the Xeon (192.168.0.10) and machine2 is the Pentium 4 (192.168.0.11).

My file /etc/hosts is configured too.
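
For reference, the matching /etc/hosts entries on both machines would look
something like this (a sketch based on the addresses above; the real file
may contain more entries):

  192.168.0.10    machine1
  192.168.0.11    machine2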

When I run mpiexec on machine2, I get this:
mariojose@machine2:~/lam-mpi$ mpiexec -n 3 hostname
machine1
machine2
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
machine1
mpirun failed with exit status 252

When I run it on machine1 I get this:

mariojose@machine1:~/lam-mpi$ mpiexec -n 3 hostname
machine1
machine1
machine2
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
mpirun failed with exit status 252

I don't know why I get this message. I think it is an error.

I tried running with GROMACS; if anybody uses GROMACS and can help me, I
would like that very much :)

mariojose@machine1:~/lam-mpi$ grompp -f run.mdp -p topol.top -c pr.gro -o run.tpr
mariojose@machine1:~/lam-mpi$ mpiexec -n 3 mdrun -v -deffnm run

It works OK. I see that the CPUs of both machines work at 100%. It looks
good to me. But I get an error when I run mdrun_mpi, which is the binary
built to work on a cluster.

mariojose@machine1:~/lam-mpi$ grompp -f run.mdp -p topol.top -c pr.gro -o run.tpr -np 3 -sort -shuffle
mariojose@machine1:~/lam-mpi$ mpiexec -n 3 mdrun_mpi -v -deffnm run
NNODES=3, MYRANK=0, HOSTNAME=machine1
NNODES=3, MYRANK=2, HOSTNAME=machine1
NODEID=0 argc=4
NODEID=2 argc=4
NNODES=3, MYRANK=1, HOSTNAME=machine2
NODEID=1 argc=4
 :-)  G  R  O  M  A  C  S  (-:

 Gyas ROwers Mature At Cryogenic Speed

:-)  VERSION 3.3.3  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun_mpi  (-:

Option    Filename  Type          Description
------------------------------------------------------------
  -s      run.tpr   Input         Generic run input: tpr tpb tpa xml
  -o      run.trr   Output        Full precision trajectory: trr trj
  -x      run.xtc   Output, Opt.  Compressed trajectory (portable xdr format)
  -c      run.gro   Output        Generic structure: gro g96 pdb xml
  -e      run.edr   Output        Generic energy: edr ene
  -g      run.log   Output        Log file
  -dgdl   run.xvg   Output, Opt.  xvgr/xmgr file
  -field  run.xvg   Output, Opt.  xvgr/xmgr file
  -table  run.xvg   Input, Opt.   xvgr/xmgr file
  -tablep run.xvg   Input, Opt.   xvgr/xmgr file
  -rerun  run.xtc   Input, Opt.   Generic trajectory: xtc trr trj gro g96 pdb
  -tpi    run.xvg   Output, Opt.  xvgr/xmgr file
  -ei     run.edi   Input, Opt.   ED sampling input
  -eo     run.edo   Output, Opt.  ED sampling output
  -j      run.gct   Input, Opt.   General coupling stuff
  -jo     run.gct   Output, Opt.  General coupling stuff
  -ffout  run.xvg   Output, Opt.  xvgr/xmgr file
  -devout run.xvg   Output, Opt.  xvgr/xmgr file
  -runav  run.xvg   Output, Opt.  xvgr/xmgr file
  -pi     run.ppa   Input, Opt.   Pull parameters
  -po     run.ppa   Output, Opt.  Pull parameters
  -pd     run.pdo   Output, Opt.  Pull data output
  -pn     run.ndx   Input, Opt.   Index file
  -mtx    run.mtx   Output, Opt.  Hessian matrix
  -dn     run.ndx   O

Re: [OMPI users] Error run mpiexec

2008-07-21 Thread Ralph Castain
If you look closely at the error messages, you will see that you were
executing LAM-MPI, not Open MPI. If you truly wanted to run Open MPI,
I would check your path to ensure that mpiexec is pointing at the Open
MPI binary.
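
One quick way to check which implementation mpiexec actually resolves to
(a sketch; exact paths vary by installation):

  which mpiexec
  ls -l $(which mpiexec)    # follow the symlink to see which binary is behind it
  ompi_info | head -2       # only present if Open MPI is installed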


Ralph

On Jul 21, 2008, at 7:47 AM, mariognu-outs...@yahoo.com.br wrote:


> [quoted text of the original message omitted]

Re: [OMPI users] Error run mpiexec

2008-07-21 Thread mariognu-outside
Hi,

In my /usr/bin I have:
lrwxrwxrwx 1 root root    25 2008-07-17 10:25 /usr/bin/mpiexec -> /etc/alternatives/mpiexec
-rwxr-xr-x 1 root root 19941 2008-03-23 03:36 /usr/bin/mpiexec.lam
lrwxrwxrwx 1 root root     7 2008-07-17 10:25 /usr/bin/mpiexec.openmpi -> orterun
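
Given that listing, /usr/bin/mpiexec can be repointed at Open MPI through
the Debian alternatives system. A sketch, assuming the alternative group is
named mpiexec as the symlink suggests:

  sudo update-alternatives --config mpiexec
  (then select /usr/bin/mpiexec.openmpi from the menu)

Calling mpiexec.openmpi explicitly, as below, sidesteps the symlink instead.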

I tried to run:

mpiexec.openmpi -n 3 mdrun_mpi.openmpi -n 3 -v -deffnm run

But only on machine1 is the CPU working at 100%; machine2 is not.

I ran:

mpiexec.openmpi -n 3 hostname

And I get:

machine1
machine1
machine1

My path:

mariojose@machine1:~/lam-mpi$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

I don't know what is happening.

Thanks

Mario Jose


--- On Mon, 7/21/08, Ralph Castain wrote:

> From: Ralph Castain
> Subject: Re: [OMPI users] Error run mpiexec
> To: mariognu-outs...@yahoo.com.br, "Open MPI Users"
> Date: Monday, July 21, 2008, 11:00
> [quoted text of the earlier reply and original message omitted]

Re: [OMPI users] Error run mpiexec

2008-07-21 Thread jody
Hi
>
> mpiexec.openmpi -n 3 hostname
>
Here you forgot to specify the hosts, so all processes run on the local machine;
see:
http://www.open-mpi.org/faq/?category=running#mpirun-host
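
For example, either of these should spread the three processes across both
machines (a sketch; Open MPI's hostfile syntax uses slots=, not LAM's cpu=,
and ~/ompi_hosts is just an illustrative name):

  mpiexec.openmpi -host machine1,machine1,machine2 -n 3 hostname

or, with a hostfile ~/ompi_hosts containing

  machine1 slots=2
  machine2 slots=1

run:

  mpiexec.openmpi -hostfile ~/ompi_hosts -n 3 hostname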


Jody


[OMPI users] Problem with WRF and pgi-7.2

2008-07-21 Thread Brock Palen

Hi, when compiling WRF with PGI 7.2-1 and openmpi-1.2.6, the file
buf_for_proc.c fails. Nothing special about this file sticks out to me,
but older versions of PGI like it just fine. The errors PGI complains
about have to do with mpi.h though:


[brockp@nyx-login1 RSL_LITE]$ mpicc -DFSEEKO64_OK -w -O3 -DDM_PARALLEL -c buf_for_proc.c
PGC-S-0036-Syntax error: Recovery attempted by inserting identifier .Z before '(' (/home/software/rhel4/openmpi-1.2.6/pgi-7.0/include/mpi.h: 823)
PGC-S-0082-Function returning array not allowed (/home/software/rhel4/openmpi-1.2.6/pgi-7.0/include/mpi.h: 823)
PGC-S-0043-Redefinition of symbol, MPI_Comm (/home/software/rhel4/openmpi-1.2.6/pgi-7.0/include/mpi.h: 837)

PGC/x86-64 Linux 7.2-1: compilation completed with severe errors

Has anyone else seen that kind of problem with mpi.h and PGI? Do I need to
use -c89? I know PGI changed this default a while back, but it does not
appear to help.
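
One way to narrow it down is to see exactly what the wrapper invokes and
what sits at the reported lines (a sketch, not a fix; the sed range is just
a window around line 823):

  mpicc --showme
  sed -n '815,840p' /home/software/rhel4/openmpi-1.2.6/pgi-7.0/include/mpi.h

If the declarations there rely on C99 constructs, trying -c99 instead of
-c89 might be worth a shot.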


Thanks!


Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985