[OMPI users] Continuous poll/select using btl sm (svn 1.4a1r18899)

2008-07-23 Thread Mostyn Lewis
Hello, using a very recent svn version (1.4a1r18899) I'm getting a non-terminating condition if I use the sm btl with tcp,self or with openib,self. The program is not finishing a reduce operation. It works if the sm btl is left out. Using two 4-core nodes. Program is: -
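
A minimal reproducer along these lines might look like the following sketch (the actual program from the report is not shown in the preview; the hostfile and binary name in the run commands are assumptions):

/* Minimal MPI_Reduce test -- hypothetical reproducer sketch.
 *
 * Run with and without the sm btl, e.g.:
 *   mpirun -np 8 --hostfile hosts --mca btl sm,tcp,self ./reduce_test
 *   mpirun -np 8 --hostfile hosts --mca btl tcp,self    ./reduce_test
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank contributes its rank number; rank 0 collects the sum. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}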

Re: [OMPI users] Can't use tcp instead of openib/infinipath

2008-07-23 Thread Jeff Squyres
On Jul 23, 2008, at 5:35 PM, Bill Broadley wrote: My understanding is that -mca btl foo should fail since there isn't a transport layer called foo. It should, but it's getting trumped. See below. So OFED-1.3.1 (or an openmpi build from source) ./install.pl works with TCP, but not infinipa

Re: [OMPI users] Can't use tcp instead of openib/infinipath

2008-07-23 Thread Bill Broadley
Jeff Squyres wrote: Sorry for the delay in replying. What exactly is the relay program timing? Can you run a standard benchmark like NetPIPE, perchance? (http://www.scl.ameslab.gov/netpipe/) It gives very similar numbers to osu_latency. Turns out the mca btl seems to be completely ignor

Re: [OMPI users] runtime warnings with MPI_File_write_ordered

2008-07-23 Thread Jeff Squyres
I forwarded this on to the ROMIO maintainers; let's see what they say... On Jul 18, 2008, at 11:38 AM, Edgar Gabriel wrote: here is a patch that we use on our development version to silence that warning; you have to apply it to ompi/ompi/mca/io/romio/romio/mpi-io/io_romio_close.c. I would n

Re: [OMPI users] problems with MPI_Waitsome/MPI_Allstart and OpenMPI on gigabit and IB networks

2008-07-23 Thread Jeff Squyres
On Jul 20, 2008, at 11:55 AM, Joe Landman wrote: update 2: (it's like I am talking to myself ... :) must start using decaf ...) Joe Landman wrote: Joe Landman wrote: [...] ok, fixed this. Turns out we have ipoib going, and one adapter needed to be brought down and back up. Now the tcp

Re: [OMPI users] Can't use tcp instead of openib/infinipath

2008-07-23 Thread Jeff Squyres
On Jul 19, 2008, at 7:06 AM, Bill Broadley wrote: I built openib-1.2.6 on centos-5.2 with gcc-4.3.1. I did a tar xvzf, cd openib-1.2.6, mkdir obj, cd obj: (I put gcc-4.3.1/bin first in my path) ../configure --prefix=/opt/pkg/openmpi-1.2.6 --enable-shared --enable-debug If I look in config.l

Re: [OMPI users] Problem with WRF and pgi-7.2

2008-07-23 Thread Brian Dobbins
Hi Brock, Just to add my two cents now, I finally got around to building WRF with PGI 7.2 as well. I noticed that in the configure script there isn't an option specifically for PGI (Fortran) + PGI (C), and when I try that combination I do get the same error you have - I'm doing this on RHEL5.2,

Re: [OMPI users] Problem with WRF and pgi-7.2

2008-07-23 Thread Brock Palen
Not yet; if you have no ideas I will open a ticket. Brock Palen www.umich.edu/~brockp Center for Advanced Computing bro...@umich.edu (734)936-1985 On Jul 23, 2008, at 12:05 PM, Jeff Squyres wrote: Hmm; I haven't seen this kind of problem before. Have you contacted PGI? On Jul 21, 2008, a

Re: [OMPI users] Problem with WRF and pgi-7.2

2008-07-23 Thread Jeff Squyres
Hmm; I haven't seen this kind of problem before. Have you contacted PGI? On Jul 21, 2008, at 2:08 PM, Brock Palen wrote: Hi, when compiling WRF with PGI-7.2-1 and openmpi-1.2.6, the file buf_for_proc.c fails. Nothing special about this file sticks out to me. But older versions of PGI

Re: [OMPI users] openmpi on linux-ia64

2008-07-23 Thread Eloi Gaudry
No, our code is supposed to call MPI_Init prior to any further MPI_* call. Anyway, I finally found the reason for this error (sorry I cluttered the list, being unable to find my own mistakes...) and corrected our build system. For different reasons, we generate a sequential and a parallel binary

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Robert Kubrick
HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I think the API is easier than direct MPI-I/O, maybe even easier than raw read/writes given its support for hierarchical objects and metadata. HDF5 supports multiple storage models and it supports MPI-IO. HDF5 has an open inter
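
For reference, a minimal sketch of opening an HDF5 file for parallel access through the MPI-IO driver looks roughly like this (assumes an HDF5 build with parallel support enabled; the file name is made up for illustration):

#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Tell HDF5 to do its I/O through MPI-IO on MPI_COMM_WORLD
       (requires a parallel-enabled HDF5 library). */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* All ranks create/open the same file collectively. */
    hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* ... create datasets and write per-rank hyperslabs here ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}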

Re: [OMPI users] openmpi on linux-ia64

2008-07-23 Thread Jeff Squyres
On Jul 23, 2008, at 8:33 AM, Eloi Gaudry wrote: I've been encountering some issues with openmpi on a linux-ia64 platform (centos-4.6 with gcc-4.3.1) within a call to MPI_Query_thread (in a fake single process run): An error occurred in MPI_Query_thread *** before MPI was initialized *** MPI

[OMPI users] openmpi on linux-ia64

2008-07-23 Thread Eloi Gaudry
Hi there, I've been encountering some issues with openmpi on a linux-ia64 platform (centos-4.6 with gcc-4.3.1) within a call to MPI_Query_thread (in a fake single-process run): An error occurred in MPI_Query_thread *** before MPI was initialized *** MPI_ERRORS_ARE_FATAL (goodbye) I'd like to
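
For reference, MPI_Query_thread is only valid between MPI_Init and MPI_Finalize; a minimal correct ordering looks like this sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided = MPI_THREAD_SINGLE;

    /* MPI_Query_thread must not be called before MPI_Init. */
    MPI_Init(&argc, &argv);
    MPI_Query_thread(&provided);
    printf("provided thread level: %d\n", provided);
    MPI_Finalize();
    return 0;
}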

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Neil Storer
Jeff, In general NFS servers run a file-locking daemon that should enable clients to lock files. However, in Unix there are two flavours of file locking: flock() from BSD and lockf() from System V. It varies from system to system which of these mechanisms works with NFS. In Solaris, lockf() works
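
For comparison, the two interfaces look like this (a sketch with made-up helper names; whether either actually locks across NFS clients depends on the server and the lock daemon, as noted above):

#include <sys/file.h>
#include <unistd.h>

int lock_with_flock(int fd)          /* BSD style: locks the whole file */
{
    return flock(fd, LOCK_EX);       /* release with flock(fd, LOCK_UN) */
}

int lock_with_lockf(int fd)          /* System V style: locks a byte range */
{
    lseek(fd, 0, SEEK_SET);
    return lockf(fd, F_LOCK, 0);     /* 0 = from current offset to EOF;
                                        release with lockf(fd, F_ULOCK, 0) */
}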

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Jeff Squyres
On Jul 23, 2008, at 8:24 AM, Gabriele Fatigati wrote: >You could always effect your own parallel IO (e.g., use MPI sends and receives to coordinate parallel reads and writes), but >why? It's already done in the MPI-IO implementation. Just a moment: you're saying that I can do fwrite withou

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Gabriele Fatigati
>You could always effect your own parallel IO (e.g., use MPI sends and receives to coordinate parallel reads and writes), but >why? It's already done in the MPI-IO implementation. Just a moment: you're saying that I can do fwrite without any lock? OpenMPI does this? And what is ROMIO? Where can

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Jeff Squyres
On Jul 23, 2008, at 6:35 AM, Gabriele Fatigati wrote: >There is a whole chapter in the MPI standard about file I/O operations. I'm quite confident you will find whatever you're looking for there :) Hi George, I know this chapter :) But I'm using MPI-1, not MPI-2. I would like to know meth

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Gabriele Fatigati
>There is a whole chapter in the MPI standard about file I/O operations. I'm quite confident you will find whatever you're looking for there :) Hi George, I know this chapter :) But I'm using MPI-1, not MPI-2. I would like to know methods for I/O with MPI-1. 2008/7/23 George Bosilca : > There is

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread George Bosilca
There is a whole chapter in the MPI standard about file I/O operations. I'm quite confident you will find whatever you're looking for there :) Open MPI uses ROMIO for file operations, and normally this is compiled in by default. You should not have any troubles using MPI I/O with Open MPI.
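
A minimal MPI-IO sketch along those lines, where each rank writes its own block of a shared file with no application-level locking (file name and buffer size are illustrative):

#include <mpi.h>

#define N 1024   /* ints written per rank (illustrative) */

int main(int argc, char **argv)
{
    int rank, i, buf[N];
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N; i++)
        buf[i] = rank;

    /* All ranks open the same file; the MPI-IO layer (ROMIO) coordinates access. */
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes to its own, non-overlapping offset. */
    offset = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_write_at(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}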

[OMPI users] Parallel I/O with MPI-1

2008-07-23 Thread Gabriele Fatigati
Hi, I have a question about parallel I/O. In my application I have implemented a file lock with C system calls, like flock. But is this the right way to do concurrent writes? In this cluster, every node has its own operating system, so the file lock works only on the processors of that