Re: [OMPI users] problem with mpirun

2010-06-25 Thread Nifty Tom Mitchell
On Fri, Jun 11, 2010 at 11:03:03AM +0200, asmae.elbahlo...@mpsa.com wrote:
> Sender: users-boun...@open-mpi.org
>
> hello,
> i'm doing a tutorial on OpenFoam, but when i run in parallel by typing
> "mpirun -np 30 foamProMesh -parallel | tee 2>&1 log/FPM.log" .
>
> [1] i
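One note on the quoted command: placing `2>&1` after `| tee` redirects the stderr of tee itself, so the MPI job's own stderr never reaches the log file. A minimal sketch of the usual ordering (merge stderr into stdout *before* the pipe), using a stand-in command in place of the real foamProMesh run:

```shell
# Stand-in for "mpirun -np 30 foamProMesh -parallel": emits one line on
# stdout and one on stderr. Merging with 2>&1 before the pipe sends both
# streams through tee, so the log captures errors as well.
sh -c 'echo out; echo err >&2' 2>&1 | tee /tmp/demo.log
# The real command would follow the same shape:
#   mpirun -np 30 foamProMesh -parallel 2>&1 | tee log/FPM.log
cat /tmp/demo.log   # both lines are in the log
```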

Re: [OMPI users] ipo: warning #11009: file format not recognized for /Libraries_intel/openmpi/lib/libmpi.so

2009-11-10 Thread Nifty Tom Mitchell
On Tue, Nov 10, 2009 at 03:44:59PM +0200, vasilis gkanis wrote:
> I am trying to compile openmpi-1.3.3 with intel Fortran and gcc compiler.
>
> In order to compile openmpi I run configure with the following options:
>
> ./configure --prefix=/Libraries/openmpi FC=ifort --enable-mpi-f90
>
> Ope
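A sketch of a mixed-compiler build along the lines quoted above. The explicit CC/CXX settings are an assumption added for illustration (the post only shows FC=ifort); the ipo warning in the subject line typically appears when Intel's -ipo link stage inspects a shared library built by a different compiler, and is usually harmless.

```shell
# Hedged sketch, not the poster's exact recipe: Open MPI 1.3.3 with
# Intel Fortran and the GNU C/C++ compilers. CC/CXX are assumptions.
./configure --prefix=/Libraries/openmpi CC=gcc CXX=g++ FC=ifort --enable-mpi-f90
make -j4
make install
```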

Re: [OMPI users] MPI-3 Fortran feedback

2009-10-27 Thread Nifty Tom Mitchell
On Mon, Oct 26, 2009 at 09:12:24AM -0400, Jeff Squyres wrote:
> On Oct 25, 2009, at 11:38 PM, Steve Kargl wrote:
>
>> There is currently a semi-heated debate in comp.lang.fortran
>> concerning co-arrays and the upcoming Fortran 2008. Don't
>> waste your time trying to decipher the thread; however,

Re: [OMPI users] explicit routing for multiple network interfaces

2009-08-27 Thread Nifty Tom Mitchell
On Tue, Aug 25, 2009 at 09:44:29PM +0530, Jayanta Roy wrote:
>
> Hi,
> I am using Openmpi (version 1.2.2) for MPI data transfer using
> non-blocking MPI calls like MPI_Isend, MPI_Irecv etc. I am using "--mca
> btl_tcp_if_include eth0,eth1" to use both the eth link for data
> transfe
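The setting quoted above can be sketched as a full mpirun invocation. Restricting the BTL list to tcp,self alongside btl_tcp_if_include is an assumption for clarity, and ./a.out stands in for the user's program:

```shell
# Hedged sketch: limit Open MPI's TCP BTL to two interfaces so traffic
# can be striped across eth0 and eth1. "./a.out" is a placeholder.
mpirun -np 4 --mca btl tcp,self --mca btl_tcp_if_include eth0,eth1 ./a.out
```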

Re: [OMPI users] 2 to 1 oversubscription

2009-08-06 Thread Nifty Tom Mitchell
On Mon, Jul 13, 2009 at 01:24:54PM -0400, Mark Borgerding wrote:
>
> Here's my advice: Don't trust anyone's advice. Benchmark it yourself and
> see.
>
> The problems vary so wildly that only you can tell if your problem will
> benefit from over-subscription. It really depends on too many facto

Re: [OMPI users] Test works with 3 computers, but not 4?

2009-07-29 Thread Nifty Tom Mitchell
On Wed, Jul 29, 2009 at 01:42:39PM -0600, Ralph Castain wrote:
>
> It sounds like perhaps IOF messages aren't getting relayed along the
> daemons. Note that the daemon on each node does have to be able to send
> TCP messages to all other nodes, not just mpirun.
>
> Couple of things you can do t

[OMPI users] Profiling performance by forcing transport choice.

2009-07-20 Thread Nifty Tom Mitchell
On Thu, Jun 25, 2009 at 08:37:21PM -0400, Jeff Squyres wrote:
> Subject: Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing
> all MPI traffic over Ethernet instead of using Infiniband

While the previous thread on "performance reduction" went left, right, forward and
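One way to profile by transport, in the spirit of this thread's subject, is to pin the BTL list explicitly and run the same benchmark over each fabric. A hedged sketch: the component names match Open MPI 1.3-era builds, and ./bench is a placeholder benchmark:

```shell
# Hedged sketch: compare fabrics by forcing the transport per run.
mpirun -np 16 --mca btl openib,sm,self ./bench   # InfiniBand + shared memory
mpirun -np 16 --mca btl tcp,sm,self    ./bench   # force TCP over Ethernet
```

Comparing the two timings isolates the interconnect's contribution from everything else in the application.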

Re: [OMPI users] Problem with qlogic cards InfiniPath_QLE7240 and AlltoAll call

2009-06-26 Thread Nifty Tom Mitchell
On Thu, Jun 25, 2009 at 10:29:39AM -0700, D'Auria, Raffaella wrote:
>
> Dear All,
> I have been encountering a fatal type "error polling LP CQ with status
> RETRY EXCEEDED ERROR status number 12" whenever I try to run a simple
> MPI code (see below) that performs an AlltoAll call.