[OMPI users] mpicc Segmentation Fault with Intel Compiler

2007-11-06 Thread Michael Schulz
Hi, I have the same problem described by some other users: I can't compile anything if I'm using Open MPI built with the Intel compiler. Running "ompi_info --all" ends in a segmentation fault. OpenSUSE 10.3, kernel 2.6.22.9-0.4-default, Intel P4. Configure flags: CC=icc, CXX=icpc, F77=ifort, F90=ifort

Re: [OMPI users] mpicc Segmentation Fault with Intel Compiler

2007-11-06 Thread Åke Sandgren
On Tue, 2007-11-06 at 10:28 +0100, Michael Schulz wrote: > Hi, > > I've the same problem described by some other users, that I can't > compile anything if I'm using the open-mpi compiled with the Intel- > Compiler. > > > ompi_info --all > Segmentation fault > > OpenSUSE 10.3 > Kernel: 2.6.22.9

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-11-06 Thread hpe...@infonie.fr
Just a thought: behaviour can be unpredictable if you use MPI_Bsend or MPI_Ibsend ... on your sender side, because nothing is checked with regard to the allocated buffer. MPI_Send or MPI_Isend should be used instead. Regards, Herve
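
A minimal sketch of the pattern Herve suggests, assuming a simple two-rank job (ranks, tag, and payload are illustrative): the sender uses a standard-mode MPI_Send, and the receiver probes for the message before posting the matching MPI_Recv.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, tag = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int payload[4] = {1, 2, 3, 4};
            /* Standard-mode send: the library handles any buffering itself,
               unlike MPI_Bsend, which depends on a user-attached buffer. */
            MPI_Send(payload, 4, MPI_INT, 1, tag, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            int count;
            /* Probe first to learn the incoming message size ... */
            MPI_Probe(0, tag, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_INT, &count);
            int *buf = malloc(count * sizeof(int));
            /* ... then the matching receive should complete immediately. */
            MPI_Recv(buf, count, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received %d ints\n", count);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }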

[OMPI users] OpenMPI - can you switch off threads?

2007-11-06 Thread Mostyn Lewis
I'm trying to build a uDAPL Open MPI from last Friday's SVN, using the Qlogic/QuickSilver/SilverStorm 4.1.0.0.1 software. I can get it built and it works within a single machine. With IB between 2 machines it fails near termination of a job. Qlogic says they don't have a threaded uDAPL (libpthread is in the trace

Re: [OMPI users] OpenMPI - can you switch off threads?

2007-11-06 Thread Andrew Friedley
All thread support is disabled by default in Open MPI; the uDAPL BTL is not thread safe and does not make use of a threaded uDAPL implementation. For completeness, thread support is controlled by the --enable-mpi-threads and --enable-progress-threads options to the configure script. The refer
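
If it helps to double-check what a particular build provides, an application can request a thread level at initialization and inspect what comes back; a minimal sketch (not specific to the uDAPL BTL):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for full multi-threading; the library reports what it can
           actually provide, which depends on how it was configured
           (e.g. --enable-mpi-threads). */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE)
            printf("thread support limited: provided level = %d\n", provided);
        else
            printf("MPI_THREAD_MULTIPLE available\n");

        MPI_Finalize();
        return 0;
    }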

Re: [OMPI users] OpenMPI - can you switch off threads?

2007-11-06 Thread Mostyn Lewis
Andrew, Failure looks like:
> + mpirun --prefix /tools/openmpi/1.3a1r16632_svn/infinicon/gcc64/4.1.2/udapl/suse_sles_10/x86_64/opteron -np 8 -machinefile H ./a.out
> Process 0 of 8 on s1470
> Process 1 of 8 on s1470
> Process 4 of 8 on s1469
> Process 2 of 8 on s1470
> Process 7 of

Re: [OMPI users] OpenMPI - can you switch off threads?

2007-11-06 Thread Andrew Friedley
Mostyn Lewis wrote: Andrew, Failure looks like:
+ mpirun --prefix /tools/openmpi/1.3a1r16632_svn/infinicon/gcc64/4.1.2/udapl/suse_sles_10/x86_64/opteron -np 8 -machinefile H ./a.out
Process 0 of 8 on s1470
Process 1 of 8 on s1470
Process 4 of 8 on s1469
Process 2 of 8 on s1470
Proces

Re: [OMPI users] OpenMPI - can you switch off threads?

2007-11-06 Thread Mostyn Lewis
Andrew, Thanks for looking. These machines are SUN X2200s and, looking at the OUI of the card, it's a generic SUN Mellanox HCA. This is SuSE SLES10 SP1 and the QuickSilver (SilverStorm) 4.1.0.0.1 software release.
02:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor compatibil

Re: [OMPI users] mpicc Segmentation Fault with Intel Compiler

2007-11-06 Thread Jeff Squyres
On Nov 6, 2007, at 4:28 AM, Michael Schulz wrote: I've the same problem described by some other users, that I can't compile anything if I'm using the open-mpi compiled with the Intel- Compiler. ompi_info --all Segmentation fault OpenSUSE 10.3 Kernel: 2.6.22.9-0.4-default Intel P4 Configure-

Re: [OMPI users] mpicc Segmentation Fault with Intel Compiler

2007-11-06 Thread Jeff Squyres
On Nov 6, 2007, at 4:42 AM, Åke Sandgren wrote: I had the same problem with pathscale. There is a known outstanding problem with the Pathscale compiler. I am still waiting for a solution from their engineers (we don't know yet whether it's an OMPI issue or a Pathscale issue, but my [biased

[OMPI users] Job does not quit even when the simulation dies

2007-11-06 Thread Teng Lin
Hi, Just realized I have a job that has been running for a long time while some of the nodes have already died. Is there any way to ask the other nodes to quit?
[kyla-0-1.local:09741] mca_btl_tcp_frag_send: writev failed with errno=104
[kyla-0-1.local:09742] mca_btl_tcp_frag_send: writev failed with errno=104
T
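
The runtime-side answer is truncated here, but one application-side sketch is: switch MPI_COMM_WORLD to MPI_ERRORS_RETURN and call MPI_Abort when a communication call fails, which asks the runtime to terminate every process in the job (the neighbour exchange below is purely illustrative, assuming two ranks):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, rc, data = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Have MPI return error codes instead of aborting only the local
           rank, so the application can decide what to do. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Illustrative exchange with a neighbour; the peer choice assumes
           exactly two ranks. */
        rc = MPI_Sendrecv_replace(&data, 1, MPI_INT, (rank + 1) % 2, 0,
                                  (rank + 1) % 2, 0, MPI_COMM_WORLD,
                                  MPI_STATUS_IGNORE);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "rank %d: communication failed, aborting job\n", rank);
            /* MPI_Abort asks the runtime to tear down every process in the
               job, not just this one. */
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Finalize();
        return 0;
    }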

Re: [OMPI users] machinefile and rank

2007-11-06 Thread Jeff Squyres
Unfortunately, not yet. I believe that this kind of functionality is slated for the v1.3 series -- is that right Ralph/Voltaire? On Nov 5, 2007, at 11:22 AM, Karsten Bolding wrote: Hello I'm using a machinefile like: n03 n04 n03 n03 n03 n02 n01 .. .. .. the order of the entries is determi
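
In the meantime, a quick way to check where each rank actually ended up is to have every rank print its processor name; a small diagnostic sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* Reports the node this rank actually landed on, so the mapping
           produced by the machinefile can be verified directly. */
        MPI_Get_processor_name(host, &len);
        printf("rank %d of %d runs on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }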

Re: [OMPI users] Slightly OT: mpi job terminates

2007-11-06 Thread Jeff Squyres
You might want to run your app through a memory-checking debugger to see if anything obvious shows up. Also, check to see if your core limit size is greater than zero (i.e., make it "unlimited"). Then run again and see if you can get core files, to see if your app is silently dumping core, an
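
For the core-limit part, the shell equivalent is "ulimit -c unlimited"; the same check can also be done from inside the application with the standard getrlimit/setrlimit interface, as in this sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Read the current core-file size limit. */
        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("core limit: soft=%lld hard=%lld\n",
               (long long)rl.rlim_cur, (long long)rl.rlim_max);

        /* Raise the soft limit to the hard limit (ideally "unlimited")
           so a crashing process can actually leave a core file behind. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }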

Re: [OMPI users] Node assignment using openmpi for multiple simulations in the same submission script in PBS (GROMACS)

2007-11-06 Thread Jeff Squyres
On Nov 2, 2007, at 11:02 AM, himanshu khandelia wrote: This question is about the use of a simulation package called GROMACS. PS: On our cluster (quad-core nodes), GROMACS does not scale well beyond 4 cpus. So, I wish to run two different simulations, while requesting 2 nodes (1 simulation on e