Hi,
I am facing problems running OpenMPI-1.0.1 on a heterogeneous cluster.
I have a Linux machine and a SunOS machine in this cluster.
linux$ uname -a
Linux pg1cluster01 2.6.8-1.521smp #1 SMP Mon Aug 16 09:25:06 EDT 2004
i686 i686 i386 GNU/Linux
OpenMPI-1.0.1 is installed using
./configure -
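(The configure line above is truncated in the archive. For a mixed Linux/SunOS cluster, a common approach, sketched here with hypothetical paths, is to build Open MPI natively on each machine and install it under the same prefix on both, so the installations line up at run time:)

linux$ ./configure --prefix=/opt/openmpi-1.0.1
linux$ make all install
sunos$ ./configure --prefix=/opt/openmpi-1.0.1
sunos$ make all install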
On Mar 9, 2006, at 9:18 PM, Brian Barrett wrote:
On Mar 9, 2006, at 6:41 PM, Troy Telford wrote:
I've got a machine that has the following config:
Each node has two InfiniBand ports:
* The first port is on fabric 'a' with switches for 'a'
* The second port is on fabric 'b' with separate switches for 'b'
On Mar 9, 2006, at 12:18 PM, Pierre Valiron wrote:
- However compiling the mpi.f90 takes over 35 *minutes* with -O1.
This seems a bit excessive... I tried removing the -O option entirely and
things are just as slow. Is this behaviour related to Open MPI or
to some problem with the Studio11 compilers?
Jeff Squyres wrote:
Please note that I replied to your original post:
http://www.open-mpi.org/community/lists/users/2006/02/0712.php
Was that not sufficient? If not, please provide more details on what
you are attempting to do and what is occurring. Thanks.
I have a simple program
Cezary Sliwa wrote:
Jeff Squyres wrote:
Please note that I replied to your original post:
http://www.open-mpi.org/community/lists/users/2006/02/0712.php
Was that not sufficient? If not, please provide more details on what
you are attempting to do and what is occurring. Thanks.
On Mar 10, 2006, at 6:01 AM, Cezary Sliwa wrote:
http://www.open-mpi.org/community/lists/users/2006/02/0712.php
I have a simple program in which the rank 0 task dispatches compute
tasks to other processes. It works fine on one 4-way SMP machine, but
when I try to run it on two nodes, the
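(A minimal sketch of such a rank-0 dispatcher, assuming a simple integer task/result protocol; this is illustrative only, not the actual program from the thread:)

/* Minimal master/worker sketch: rank 0 dispatches integer tasks,
 * workers square them and send the result back. */
#include <mpi.h>
#include <stdio.h>

#define NTASKS   16
#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* dispatcher */
        int next = 0, active = 0, result, dummy = 0;
        MPI_Status st;
        /* give every worker either a first task or an immediate stop */
        for (int w = 1; w < size; w++) {
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                next++;
                active++;
            } else {
                MPI_Send(&dummy, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }
        /* collect results and hand out the remaining tasks */
        while (active > 0) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            printf("result from rank %d: %d\n", st.MPI_SOURCE, result);
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&dummy, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;
            }
        }
    } else {                                  /* worker */
        int task, result;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = task * task;             /* stand-in for real work */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}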
Jeff Squyres wrote:
One additional question: are you using TCP as your communications
network, and if so, do either of the nodes that you are running on
have more than one TCP NIC? We recently fixed a bug for situations
where at least one node is on multiple TCP networks
Yes, precisely.
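(Until a release carries that fix, the usual workaround for the multi-NIC case is to restrict the TCP BTL to a single interface via the btl_tcp_if_include MCA parameter; the interface name eth0 and the program name below are assumptions:)

linux$ mpirun --mca btl_tcp_if_include eth0 -np 2 ./my_program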
On Mar 10, 2006, at 2:24 AM, Troy Telford wrote:
On Mar 9, 2006, at 9:18 PM, Brian Barrett wrote:
On Mar 9, 2006, at 6:41 PM, Troy Telford wrote:
I've got a machine that has the following config:
Each node has two InfiniBand ports:
* The first port is on fabric 'a' with switches for 'a'
On Mar 10, 2006, at 12:09 AM, Ravi Manumachu wrote:
I am facing problems running OpenMPI-1.0.1 on a heterogeneous cluster.
I have a Linux machine and a SunOS machine in this cluster.
linux$ uname -a
Linux pg1cluster01 2.6.8-1.521smp #1 SMP Mon Aug 16 09:25:06 EDT 2004
i686 i686 i386 GNU/Linux
On Mar 9, 2006, at 11:37 PM, Tom Rosmond wrote:
Attached are output files from a build with the adjustments you
suggested.
setenv FC pgf90
setenv F77 pgf90
setenv CCPFLAGS -I/usr/include/gm
./configure --prefix=/users/rosmond/ompi --with-gm
The results are the same.
Yes, I figured the
On Mar 10, 2006, at 8:35 AM, Brian Barrett wrote:
On Mar 9, 2006, at 11:37 PM, Tom Rosmond wrote:
Attached are output files from a build with the adjustments you
suggested.
setenv FC pgf90
setenv F77 pgf90
setenv CCPFLAGS -I/usr/include/gm
./configure --prefix=/users/rosmond/ompi --with-gm
On Mar 9, 2006, at 12:18 PM, Pierre Valiron wrote:
- 'mpirun --help' no longer crashes.
Improvement :)
- standard output seems messy:
a) 'mpirun -np 4 pwd' randomly returns one or two lines, never four. The
same behaviour occurs if the output is redirected to a file.
b) When running some si
Jeff Squyres wrote:
One additional question: are you using TCP as your communications
network, and if so, do either of the nodes that you are running on
have more than one TCP NIC? We recently fixed a bug for situations
where at least one node is on multiple TCP networks, not all of which
Attached are the two library files you requested, along with the output
from ompi_info.
I tried the work-around procedure you suggested, and it worked. I had to
also use it in 'ompi/mca/mpool/gm' and 'ompi/mca/ptl/gm', but I got a
successful make. Then, on a hunch, I went back and added
setenv LDFLAGS