For things like these, I usually use the "dot file" mca parameter file in my home directory:

    http://www.open-mpi.org/faq/?category=tuning#setting-mca-params

That way, I don't accidentally forget to set the parameters on a given run ;).
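For reference, a minimal sketch of such a file, assuming the per-user location $HOME/.openmpi/mca-params.conf described in that FAQ entry and reusing the parameters from the thread below:

    # $HOME/.openmpi/mca-params.conf
    # Keep Open MPI's out-of-band and TCP BTL traffic on eth0
    oob_tcp_include = eth0
    btl_tcp_if_include = eth0

With that in place, a plain "mpirun --hostfile ~/work/openmpi_hostfile -np 4 hostname" picks the parameters up automatically.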

Brian

On Feb 8, 2007, at 6:15 PM, Mark Kosmowski wrote:

I have a style question related to this issue that I think is resolved.

I have added the following line to my .bashrc:

    export OMPIFLAGS="-mca oob_tcp_include eth0 -mca btl_tcp_if_include eth0 --hostfile ~/work/openmpi_hostfile"

and have verified that mpirun $OMPIFLAGS -np 4 hostname works.

Is there a better way of accomplishing this, or is this a matter of
there being more than one way to skin the proverbial cat?
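For comparison, and assuming Open MPI also honors OMPI_MCA_-prefixed environment variables for MCA parameters, a rough sketch of the same setup without a shell variable would be:

    # In .bashrc: set the MCA parameters via environment variables
    export OMPI_MCA_oob_tcp_include=eth0
    export OMPI_MCA_btl_tcp_if_include=eth0

    # Then only the hostfile still needs to be given on the command line:
    mpirun --hostfile ~/work/openmpi_hostfile -np 4 hostname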

Thanks,

Mark Kosmowski

On 2/8/07, Mark Kosmowski <mark.kosmow...@gmail.com> wrote:
I think I fixed the problem.  I at least have mpirun ... hostname
working over the cluster.

The first thing I needed to do was to make the gigabit network an
internal zone in YaST ... firewall (which essentially turns off the
firewall on this interface).
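Roughly speaking, making the interface an "internal" zone amounts to accepting all traffic on it. A hedged iptables sketch of the same effect, assuming eth0 is the cluster-facing interface (this bypasses YaST and is not persistent across reboots), would be:

    # Accept all incoming traffic on the cluster interface (assumed to be eth0)
    iptables -A INPUT -i eth0 -j ACCEPT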

Next I needed to add the -mca options as follows:

    mpirun --prefix /opt/openmpi -mca oob_tcp_include eth0 \
        -mca btl_tcp_if_include eth0 \
        --hostfile ~/work/openmpi_hostfile -np 4 hostname

The above command also works without the --prefix option, which
verifies that my PATH and LD_LIBRARY_PATH variables are set up
correctly.
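For completeness, the .bashrc entries those variables typically need, assuming the /opt/openmpi prefix used above, would look something like:

    # Assuming Open MPI is installed under /opt/openmpi
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH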

Unfortunately, I have jobs running on each machine in SMP mode that
will take the better part of this coming week to complete, so it will
be a while before I can do more than just mpirun ... hostname.

Could a section be added to the FAQ mentioning that the firewall
service should be shut down on the MPI interface and that the two
-mca switches should be used?  This could perhaps be most useful to a
beginner, in either the 'Running MPI Jobs' or 'Troubleshooting'
sections of the FAQ.

Thanks,

Mark Kosmowski

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

--
  Brian Barrett
  Open MPI Team, CCS-1
  Los Alamos National Laboratory

