Hi Peter,

I first tried with 2 nodes, but it was the same problem: it hung. Then I tried with 1 node and copied that output into my previous mail. When I checked the process status (with 2 nodes), one of the migrate processes was using 100% CPU and the other was sleeping, but after 15 minutes there was still no change in the output.

Andy

Peter Beerli wrote:

Dear Andy,

you wrote that with openmpi:
avierstr@muscorum:~> mpiexec --hostfile hostfile -np 1  migrate-n


it does not work, but with lam-mpi
avierstr@muscorum:~/thomas> mpiexec -np 2  migrate-n

you started openmpi with only _one_ node; migrate needs at least _two_ nodes to run
(as you did in lam-mpi).
migrate actually aborts when running on only one node, and it should show an error message like this:

zork>which mpirun
/usr/local/openmpi/bin/mpirun
zork>mpirun -machinefile ~/onehost -np 1 migrate-n
migrate-n
  =============================================
  MIGRATION RATE AND POPULATION SIZE ESTIMATION
  using Markov Chain Monte Carlo simulation
  =============================================
  compiled for a PARALLEL COMPUTER ARCHITECTURE
  Version debug 2.1.3   [x]
  Program started at   Mon Feb 13 09:03:45 2006



Reading N ...
Reading S ...

In file main.c on line 697
This program was compiled to use a parallel computer
and you tried to run it on only a single node.
This will not work because it uses a
"single_master-many_worker" architecture
and needs at least TWO nodes
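So with Open MPI the invocation should request at least two processes, mirroring what you already did under lam-mpi (assuming your hostfile lists enough slots to place both processes):

```shell
# one master plus at least one worker: -np must be >= 2,
# otherwise migrate-n aborts with the message shown above
mpiexec --hostfile hostfile -np 2 migrate-n
```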


Peter

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
*********************************************************************
* Youth is a wonderful thing. What a crime to waste it on children. *
*                                             (George Bernard Shaw) *
*********************************************************************


Andy Vierstraete
Department of Biology
University of Ghent
K. L. Ledeganckstraat 35
B-9000 Gent
Belgium
phone : 09-264.52.66
fax : 09-264.87.93
http://allserv.UGent.be/~avierstr/
