Hello Duke,

Welcome to the forum. The way Open MPI schedules by default is to fill all the slots on one host before moving on to the next host.
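So with fantomfs40a listed first (slots=2) and -np 2, both ranks land on fantomfs40a. If you want round-robin placement across nodes instead, try the by-node mapping option (in the 1.5 series the flag is --bynode, if I remember right); this is just a sketch using your own paths, untested here:

  mpirun -np 2 --bynode --machinefile /home/mpiuser/.mpi_hostfile ./test/mpihello

That should put one rank on fantomfs40a and one on hp430a. If you want to skip the head node entirely, drop fantomfs40a from the hostfile (or list it last).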
Check this link for some info:
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling

-- Jingcha

On Thu, Jun 7, 2012 at 2:11 AM, Duke <duke.li...@gmx.com> wrote:
> Hi folks,
>
> Please be gentle to the newest member of openMPI, I am totally new to this
> field. I just built a test cluster with 3 boxes on Scientific Linux 6.2 and
> openMPI (Open MPI 1.5.3), and I wanted to test how the cluster works but I
> can't figure out what was/is happening. On my master node, I have the
> hostfile:
>
> [mpiuser@fantomfs40a ~]$ cat .mpi_hostfile
> # The Hostfile for Open MPI
> fantomfs40a slots=2
> hp430a slots=4 max-slots=4
> hp430b slots=4 max-slots=4
>
> To test, I used the following C code:
>
> [mpiuser@fantomfs40a ~]$ cat test/mpihello.c
> /* program hello */
> /* Adapted from mpihello.f by drs */
>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char **argv)
> {
>     int *buf, i, rank, nints, len;
>     char hostname[256];
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     gethostname(hostname, 255);
>     printf("Hello world! I am process number: %d on host %s\n", rank, hostname);
>     MPI_Finalize();
>     return 0;
> }
>
> and then compiled and ran:
>
> [mpiuser@fantomfs40a ~]$ mpicc -o test/mpihello test/mpihello.c
> [mpiuser@fantomfs40a ~]$ mpirun -np 2 --machinefile /home/mpiuser/.mpi_hostfile ./test/mpihello
> Hello world! I am process number: 0 on host fantomfs40a
> Hello world! I am process number: 1 on host fantomfs40a
>
> Unfortunately the result did not show what I wanted. I expected to see
> something like:
>
> Hello world! I am process number: 0 on host hp430a
> Hello world! I am process number: 1 on host hp430b
>
> Anybody has any idea what I am doing wrong?
>
> Thank you in advance,
>
> D.