Pete,
I don't know why the behavior on an 8-processor machine would differ
with the machine file format/syntax. You don't need to specify a
machine file on a single multiprocessor machine.
On your Torque-scheduled cluster you shouldn't need a machine file for
Open MPI either; Open MPI should just use the hosts that Torque
allocates to your job.
I use the following: mpirun -machinefile machine.file -np 8 ./mpi-program
and the machine file has the following:
t01
t01
t01
t01
t01
t01
t01
t01
I get the following error:
rm_12992: (0.632812) net_send: could not write to fd=4, errno = 32
rm_13053: (0.421875) net_send: could not write to fd=4
Hello boys and girls. I just wanted to drop a line and give you an update.
First of all, my simple question:
In what files can I find the source code for "mca_oob.oob_send" and
"mca_oob.oob_recv"? I'm having a hard time following the initialization
code that populates the struct of callbacks.
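For what it's worth, the pattern you're describing is a struct of
function pointers that a component fills in at initialization time.
Here is a minimal sketch of that pattern in C; the names oob_module_t,
tcp_oob_send, and tcp_oob_recv are hypothetical, not Open MPI's real
definitions:

#include <stddef.h>

/* The "struct of callbacks": one slot per operation. */
typedef struct {
    int (*oob_send)(int peer, const void *buf, size_t len);
    int (*oob_recv)(int peer, void *buf, size_t len);
} oob_module_t;

/* One transport's concrete implementations (stubs here). */
static int tcp_oob_send(int peer, const void *buf, size_t len)
{
    (void)peer; (void)buf; (void)len;
    return 0;
}

static int tcp_oob_recv(int peer, void *buf, size_t len)
{
    (void)peer; (void)buf; (void)len;
    return 0;
}

/* Initialization populates the struct; afterwards callers invoke
 * mca_oob.oob_send(...) without knowing which transport is behind it. */
oob_module_t mca_oob = { tcp_oob_send, tcp_oob_recv };

That indirection is why grepping for a direct definition of
mca_oob.oob_send turns up nothing: the function behind the pointer
lives in whichever component was selected at init time.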
Sorry for the delay - this one slipped past me.
There was an internal change to the way we handle hostfiles in Open
MPI, so the parameter changed. The correct way to specify a system-wide
default hostfile is indeed to set OMPI_MCA_orte_default_hostfile in
your environment, or to set orte_default_hostfile in an MCA parameter
file.
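For example, assuming a bash-style shell and a hostfile at
/opt/ompi/default-hostfile (the path is only an illustration):
export OMPI_MCA_orte_default_hostfile=/opt/ompi/default-hostfile
mpirun -np 8 ./mpi-program
or, equivalently, put a line such as
orte_default_hostfile = /opt/ompi/default-hostfile
in an MCA parameter file like $HOME/.openmpi/mca-params.conf.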
Open MPI uses GNU Libtool to build itself. I suspect that perhaps
Libtool doesn't know the Right Mojo to understand the Lahey compilers,
and that's why you're seeing this issue. As such, it might well be
that your workaround is the best one.
Ralf -- we build the OMPI 1.2 series with that
On Aug 1, 2008, at 6:07 PM, James Philbin wrote:
I'm just using TCP so this isn't a problem for me. Any ideas what
could be causing this segfault?
This is not really enough information to diagnose your problem. Can
you please send all the information listed here:
http://www.op