Hi
Have you tried to compile and run the simple examples
that come with Open MPI?
They often tell you right away whether there are problems with your
PATH or your LD_LIBRARY_PATH, or whether the Open MPI software can
be reached by all of your nodes (and not only by your head node), etc.
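A quick sanity check could look something like this (a sketch; node1 is a
placeholder for one of your compute nodes):
  which mpirun            # should point into your Open MPI installation
  echo $LD_LIBRARY_PATH   # should include Open MPI's lib/ directory
  ssh node1 which mpirun  # compute nodes should see the same installation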
The little time this takes is usually worth it.
Given all the back-and-forth here, you should probably try to get some simple
Open MPI applications running, and then try to get Gromacs running.
Try running the simple "hello world" and "ring" applications in the examples/
directory in the source tree of Open MPI (just type "make" in there and it
will build them for you).
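For example, from the top of the Open MPI source tree, something like:
  cd examples/
  make
  mpirun -np 4 hello_c   # each process reports its rank
  mpirun -np 4 ring_c    # passes a message around a ring of processes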
One problem with versions or an incompatibility can lead to an error like:
"Unable to start a daemon on the local node"
and
"ompi_mpi_init: orte_init failed"
??
thanks
De: "Addepalli, Srirangam V"
Para: Open MPI Users
Enviadas: Terça-feira, 8 de Junho de 2010
Hi,
all libraries are linked.
What should I be looking for? Anything different?
thanks,
and sorry for all the trouble
De: "Addepalli, Srirangam V"
Para: Open MPI Users
Enviadas: Terça-feira, 8 de Junho de 2010 13:59:08
Assunto: Re: [OMPI users] Res: Res: Res: Gromacs run in parallel
Hello,
ldd `which mdrun_mpi`
should show you which libraries the binary is linked against. What does the
above command report for your build?
I had a user who had a serial mdrun in his PATH and it behaved the same way.
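As an illustration (the install path below is made up), a correctly linked
MPI binary should show a libmpi line in the ldd output, e.g.:
  $ ldd `which mdrun_mpi`
      libmpi.so.0 => /opt/openmpi/lib/libmpi.so.0 (0x...)
If no libmpi line shows up at all, the binary was built without MPI support.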
Rangam
From: users-boun...@open-mpi.org
Which version of OMPI are you running, and which OS version?
Can you try replacing the rankfile specification with --bind-to-core
and tell me if that works any better?
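For example (appname stands in for your executable, as in the original
command):
  mpirun -np 1 --bind-to-core appname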
--td
Chamila Janath wrote:
_rankfile_
rank 0=10.16.71.1 slot=0
I launched my mpi app using,
$ mpirun -np 1 -rf rankfile
Hi,
I did it and they match.
mdrun and mpiexec are in the same place.
Seems OK...
One more suggestion?
thank you,
From: Carsten Kutzner
To: Open MPI Users
Sent: Tuesday, June 8, 2010 13:12:35
Subject: Re: [OMPI users] Res: Res: Gromacs run in parallel
Hi all,
Please verify: when using the openib BTL, is MPI_THREAD_SINGLE the only
supported threading model?
Is there a timeline for full support of MPI_THREAD_MULTIPLE
in Open MPI's openib BTL?
Thanks!
--
Best regards,
David Turner
User Services Group          email: dptur...@lbl.gov
NERSC Division
Ok,
1. Type which mdrun to see where the mdrun executable resides.
2. Type ldd `which mdrun` to find out against which MPI library it is linked.
3. Type which mpirun (or which mpiexec, whatever you use) to verify that
this is the right MPI launcher for your mdrun.
4. If the MPIs do not match, either recompile mdrun against the MPI whose
launcher you use, or switch to the matching mpirun (steps 1-3 are sketched
below).
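A combined sketch of steps 1-3:
  which mdrun
  ldd `which mdrun` | grep -i mpi   # look for a libmpi line
  which mpirun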
I saw
Host: pid: nodeid: 0 nnodes: 1
so it really is running on 1 node,
and all of you really understood my problem, thanks.
But how can I fix it?
How can I run 1 job on 4 nodes...?
I really need help.
I took a look at my files and fixed all the errors, and the implementation
seems correct.
From the...
No, I'm sorry -- I wasn't clear. What I meant was that if you run:
mpirun -np 4 my_mpi_application
1. If you see a single, 4-process MPI job (regardless of how many nodes/servers
it's spread across), then all is good. This is what you want.
2. But if you're seeing 4 independent 1-process MPI jobs, then something is
wrong.
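A hedged way to tell the two cases apart, using the hello_c example from Open
MPI's examples/ directory (each process prints its rank and the job size):
  mpirun -np 4 hello_c
  # one 4-process job:   processes report "0 of 4" through "3 of 4"
  # four 1-process jobs: "0 of 1" is printed four times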
On Jun 8, 2010, at 3:06 PM, Jeff Squyres wrote:
> I know nothing about Gromacs, but you might want to ensure that your Gromacs
> was compiled with Open MPI. A common symptom of "mpirun -np 4
> my_mpi_application" running 4 1-process MPI jobs (instead of 1 4-process MPI
> job) is that you compiled my_mpi_application with one MPI implementation, but
> ran it with the mpirun from a different MPI implementation.
Oh! OK,
so I installed MPI on a server with 4 nodes.
Do I have to install one copy for each node?
How do I do that?
What's the first step in this case, when I want to run 1 job on 4 nodes (the
same server)?
Because right now all of them are running the same job again.
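A minimal sketch of the usual approach, assuming a hostfile named hosts that
lists your node names (the names below are placeholders):
  $ cat hosts
  node1
  node2
  node3
  node4
  $ mpirun -np 4 --hostfile hosts my_mpi_application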
sorry for all the trouble...
From: Jeff Squyres
I know nothing about Gromacs, but you might want to ensure that your Gromacs
was compiled with Open MPI. A common symptom of "mpirun -np 4
my_mpi_application" running 4 1-process MPI jobs (instead of 1 4-process MPI
job) is that you compiled my_mpi_application with one MPI implementation, but
ran it with the mpirun from a different MPI implementation.
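One way to confirm which MPI a build used: Open MPI's wrapper compilers
accept -showme, which prints the underlying compile/link command line:
  which mpicc mpirun   # both should live in the same MPI installation
  mpicc -showme        # shows the real compiler command and MPI link flags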
The version of Gromacs is 4.0.7.
This is the first time I am using Gromacs, so excuse me if I'm talking nonsense.
Which part of the md.log output should I post?
The part after or before the input description?
thanks for everything,
and sorry
From: Carsten Kutzner
To: Open MPI Users
*rankfile*
rank 0=10.16.71.1 slot=0
I launched my mpi app using,
$ mpirun -np 1 -rf rankfile appname
I can run the application nicely on an Intel dual-core machine with a
Linux-based OS, but I can't run it on a single-core machine (P4).
The execution terminates, reporting a problem with the slot number. What could
be the reason?
Hi, I have 3 questions to ask:
1. How does Open MPI find a faulty node?
2. If one node is dead, can the program continue running? What about when two
or even more nodes are dead?
3. How do you recover a faulty (dead) node? Is there any possibility of
recovering without checkpointing?