[OMPI users] Compile fail on 64-bit AMD cluster with Intel compilers

2008-07-10 Thread Tod A. Charles-Pascal
Hello all: While attempting to compile openmpi-1.2.6 on a 64-bit AMD cluster with Intel compilers (v 10.1.015 and v 9.1.038), make fails with the following error: Making all in datatype make[2]: Entering directory `/home/test/openmpi-1.2.7rc2/ompi/datatype' depbase=`echo dt_args.lo | sed 's|[^/]
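
Builds of this vintage select the Intel compilers at configure time; a minimal sketch of such an invocation (the install prefix is an assumption, not taken from the thread):

    # Point configure at the Intel C/C++/Fortran compilers
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --prefix=/opt/openmpi-1.2.6
    make all install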

Re: [OMPI users] Gridengine + Open MPI

2008-07-10 Thread Romaric David
Hello, I just made a fix for the problem I've shown below in r18844. I think it is essentially the same problem that you are running into here. Please let me know if you still see the problem where the SGE tight-integration job errors out. And I'll look at the suspend/resume feature later on
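
The tight integration being tested here runs through an SGE parallel environment; a minimal sketch of such a job (the PE name "orte", the slot count, and the application name are assumptions):

    #!/bin/sh
    # Submit with: qsub -pe orte 8 job.sh
    # Under tight integration, Open MPI reads the SGE slot
    # allocation itself, so mpirun needs no -np or hostfile.
    mpirun ./my_mpi_app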

[OMPI users] Open MPI on LSF 6.3

2008-07-10 Thread lam...@lucullo.it
Hi, The Open MPI FAQ says that LSF is not natively supported... --- 5. Does Open MPI support LSF? Not natively -- yet. We're working on it. It is anticipated that native LSF support will be included in the Open MPI v1.3 series. Platform has released a script-based integration in t
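
Pending native support, the usual workaround is to hand mpirun a hostfile built from LSF's environment inside the batch script; a rough sketch (the application name is illustrative; LSB_HOSTS is LSF's per-slot host list):

    #!/bin/sh
    # Submit with: bsub -n 8 ./job.sh
    # LSB_HOSTS holds one hostname per allocated slot; convert
    # it into a hostfile that mpirun can consume.
    for h in $LSB_HOSTS; do echo "$h"; done > hosts.$LSB_JOBID
    mpirun -np 8 --hostfile hosts.$LSB_JOBID ./my_mpi_app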

[OMPI users] Number of file handles limiting the number of processes?

2008-07-10 Thread Samuel Sarholz
Hi, mpiexec seems to need a file handle per started process. By default the number of file handles is set to 1024 here, so I can start about 900-something processes. With higher numbers I get mca_oob_tcp_accept: accept() failed: Too many open files (24). If I decrease the file handles on th
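
The limit in question is the shell's per-process open-file limit, which can be inspected and raised before launching (the value 4096 is illustrative and still requires a sufficiently high hard limit or a limits.conf entry):

    # Show the current soft limit on open files
    ulimit -n
    # Raise it for this shell, then launch
    ulimit -n 4096
    mpiexec -np 2000 ./my_mpi_app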

Re: [OMPI users] Number of file handles limiting the number of processes?

2008-07-10 Thread Ralph Castain
Unfortunately, that is indeed true for the 1.2 series. It is significantly better in 1.3, but still not perfect. Basically, in 1.2, every process has to "call home" to mpirun (or mpiexec - same thing) when it starts up. This is required in order to exchange connection info so that the MPI comm sub
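
Because each launched process holds a connection back to mpirun in the 1.2 series, mpirun's descriptor count grows with the job size; on Linux this can be watched directly (a sketch):

    # Count the file descriptors held by the running mpirun
    ls /proc/$(pgrep -n mpirun)/fd | wc -l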

[OMPI users] Strange problem with 1.2.6

2008-07-10 Thread Joe Landman
Hi folks: I am running into a strange problem with Open MPI 1.2.6, built using gcc/g++ and Intel ifort 10.1.015, atop an OFED stack (1.1-ish). The problem appears to be that if I run using the tcp btl, disabling sm and openib, the run completes successfully (on several different platforms),
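
The transport selection described there is controlled with MCA parameters on the mpirun command line; for example (self is the mandatory loopback BTL; the application name is illustrative):

    # Run over TCP only, excluding the sm and openib transports
    mpirun --mca btl tcp,self -np 4 ./my_mpi_app
    # Conversely, exercise the openib + shared-memory path
    mpirun --mca btl openib,sm,self -np 4 ./my_mpi_app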