Thanks a lot for your reply.

Now mpiBlast runs on only one node, both inside and outside a Torque job.
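
For reference, this is roughly the shape of the Torque submission script being used — a sketch only; the queue directives, node/ppn counts, and install paths are assumptions based on the setup described below:

```shell
# Sketch of a Torque/PBS job script (directives and paths are assumptions)
cat > mpiblast.pbs <<'EOF'
#PBS -N mpiblast
#PBS -l nodes=15:ppn=2
cd $PBS_O_WORKDIR
# With tm support built in, mpirun reads the allocated node list
# directly from Torque, so no -np or hostfile is needed here
/data/open-mpi/bin/mpirun /data/mpiblast/bin/mpiblast \
    -p blastp -d nr -i test.faa -o test.out
EOF

# qsub mpiblast.pbs   # submit from a directory shared by all nodes
```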

How can I set up a host list for Open MPI? I haven't found this in the Open
MPI FAQ. Thanks.
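
Outside of a Torque job, a host list can be given to mpirun explicitly via a hostfile. A minimal sketch — the node names and slot counts here are placeholders, not from this cluster's actual configuration:

```shell
# Hypothetical hostfile; node names and slot counts are placeholders
cat > hosts.txt <<'EOF'
node1 slots=4
node2 slots=4
EOF

# Then point mpirun at it explicitly, e.g.:
#   /data/open-mpi/bin/mpirun --hostfile hosts.txt -np 8 hostname
```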

The "ompi_info | grep tm" output is:


              MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA ras: tm (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA plm: tm (MCA v2.0, API v2.0, Component v1.4.1)


I have also attached the "ompi_info --all" output to this email. Maybe you
could help me check it.

I have added openmpi/bin to PATH and openmpi/lib to LD_LIBRARY_PATH, and I
think the mpiBlast build picked up the right mpicc.
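
The environment setup looks like this (the /data/open-mpi prefix is taken from the install described below; adjust for another layout):

```shell
# Install prefix assumed from this cluster's setup
export PATH=/data/open-mpi/bin:$PATH
export LD_LIBRARY_PATH=/data/open-mpi/lib:$LD_LIBRARY_PATH

# Sanity check: both wrappers should resolve under /data/open-mpi/bin
which mpicc  || echo "mpicc not found on PATH"
which mpirun || echo "mpirun not found on PATH"
```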


HZ Liu


2010/4/1 Jeff Squyres (jsquyres) <jsquy...@cisco.com>

> Are you running your job inside a Torque job? If you aren't, Open MPI will
> not have a host list and will assume that it should launch everything on the
> localhost.
>
> If you are launching inside a Torque job, ensure that OMPI was built with
> Torque support properly - run
>
> ompi_info | grep tm
>
> If you see 1 or more tm plugins listed, ompi has torque support.
>
> Finally, make sure you're using the right mpicc and mpirun, etc.
>
> -jms
> Sent from my PDA. No type good.
>
> ------------------------------
>  *From*: users-boun...@open-mpi.org <users-boun...@open-mpi.org>
> *To*: us...@open-mpi.org <us...@open-mpi.org>
> *Sent*: Thu Apr 01 02:07:08 2010
> *Subject*: [OMPI users] mpiblast only run in one node
>
> Hi,
>
> I've installed Torque/Maui, Open MPI 1.4.1, and mpiBlast 1.6.0-beta1 on a
> Linux Beowulf cluster with 15 nodes (node1~15).
>
> Open MPI, mpiBlast, and my home directory all live under a directory
> "/data", which is shared by all nodes.
>
> Open MPI was compiled with "--with-tm" to support Torque, and mpiBlast was
> compiled with "--with-mpi" pointing to the directory where Open MPI is
> installed.
>
> When I run mpiBlast via mpirun on the command line, like
>
> node1 $ /data/open-mpi/bin/mpirun -np 34 /data/mpiblast/bin/mpiblast -p
> blastp -d nr -i test.faa -o test.out
>
> I noticed that all 34 mpiBlast processes ran on node1 only.
>
> This does not change if I submit the job to Torque.
>
> I've searched the mailing list archives, but nothing I found there helped.
>
> How can I resolve this problem?
>
> Any suggestion will be appreciated!
>
>
> HZ Liu
>
>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>

Attachment: ompi_info.all
Description: Binary data
