Hello,
Using a very recent svn version (1.4a1r18899), I'm getting a hang (a
non-terminating condition) if I use the sm btl together with tcp,self or
with openib,self: the program never finishes a reduce operation. It works
if the sm btl is left out.
I'm using two 4-core nodes.
Program is:
-
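The program itself is truncated in the archive. Purely as a hypothetical
illustration (not the poster's actual code), a reduce of the shape described,
with every rank contributing to a sum collected on rank 0, might look like the
sketch below; per the report it hangs when the sm btl is enabled (e.g.
sm,tcp,self) and completes when sm is left out (tcp,self).

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical reproducer sketch, not the original (truncated) program:
     * every rank contributes one int, rank 0 receives the sum. */
    int main(int argc, char **argv)
    {
        int rank, value, sum = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;
        MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %d\n", sum);
        MPI_Finalize();
        return 0;
    }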
On Jul 23, 2008, at 5:35 PM, Bill Broadley wrote:
My understanding is that -mca btl foo should fail since there isn't
a transport layer called foo.
It should, but it's getting trumped. See below.
So OFED-1.3.1 (or an openmpi build from source) ./install.pl works
with TCP, but not infinipa
Jeff Squyres wrote:
Sorry for the delay in replying.
What exactly is the relay program timing? Can you run a standard
benchmark like NetPIPE, perchance? (http://www.scl.ameslab.gov/netpipe/)
It gives very similar numbers to osu_latency. Turns out the mca btl seems to
be completely ignor
I forwarded this on to the ROMIO maintainers; let's see what they say...
On Jul 18, 2008, at 11:38 AM, Edgar Gabriel wrote:
here is a patch that we use on our development version to silence
that warning; you have to apply it to
ompi/ompi/mca/io/romio/romio/mpi-io/io_romio_close.c
I would n
On Jul 20, 2008, at 11:55 AM, Joe Landman wrote:
update 2: (it's like I am talking to myself ... :) must start using
decaf ...)
Joe Landman wrote:
Joe Landman wrote:
[...]
ok, fixed this. Turns out we have ipoib going, and one adapter
needed to be brought down and back up. Now the tcp
On Jul 19, 2008, at 7:06 AM, Bill Broadley wrote:
I built openmpi-1.2.6 on centos-5.2 with gcc-4.3.1.
I did a tar xvzf, cd openmpi-1.2.6, mkdir obj, cd obj:
(I put gcc-4.3.1/bin first in my path)
../configure --prefix=/opt/pkg/openmpi-1.2.6 --enable-shared --
enable-debug
If I look in config.l
Hi Brock,
Just to add my two cents now, I finally got around to building WRF with
PGI 7.2 as well. I noticed that in the configure script there isn't an
option specifically for PGI (Fortran) + PGI (C), and when I try that
combination I do get the same error you have - I'm doing this on RHEL5.2,
Not yet; if you have no ideas, I will open a ticket.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jul 23, 2008, at 12:05 PM, Jeff Squyres wrote:
Hmm; I haven't seen this kind of problem before. Have you
contacted PGI?
On Jul 21, 2008, a
Hmm; I haven't seen this kind of problem before. Have you contacted
PGI?
On Jul 21, 2008, at 2:08 PM, Brock Palen wrote:
Hi, when compiling WRF with PGI-7.2-1 and openmpi-1.2.6, the file
buf_for_proc.c fails. Nothing special about this file sticks out to me.
But older versions of PGI
No, our code is supposed to call MPI_Init prior to any further MPI_* call.
Anyway, I finally found the reason for this error (sorry for cluttering the
list while being unable to find my own mistakes...) and corrected our build
system.
For different reasons, we generate a sequential and a parallel binary
HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
think the API is easier than direct MPI-I/O, maybe even easier than
raw read/writes given its support for hierarchical objects and metadata.
HDF5 supports multiple storage models and it supports MPI-IO.
HDF5 has an open inter
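As a rough sketch of what that layering looks like (assuming an HDF5 build
configured with parallel support; the file name is a placeholder), collectively
creating an HDF5 file on top of MPI-IO is roughly:

    #include <mpi.h>
    #include <hdf5.h>

    /* Sketch: collectively create an HDF5 file backed by MPI-IO.
     * Assumes HDF5 was configured with parallel (MPI) support. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* "example.h5" is a placeholder file name. */
        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* ... create groups/datasets and write collectively here ... */

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }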
On Jul 23, 2008, at 8:33 AM, Eloi Gaudry wrote:
I've been encountering some issues with openmpi on a linux-ia64
platform
(centos-4.6 with gcc-4.3.1) within a call to MPI_Query_thread (in a
fake
single process run):
An error occurred in MPI_Query_thread
*** before MPI was initialized
*** MPI
Hi there,
I've been encountering some issues with openmpi on a linux-ia64 platform
(centos-4.6 with gcc-4.3.1) within a call to MPI_Query_thread (in a fake
single process run):
An error occurred in MPI_Query_thread
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
I'd like to
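For reference, MPI_Query_thread is only valid after MPI has been initialized;
a minimal sketch of the correct ordering (generic MPI usage, not the poster's
code):

    #include <mpi.h>
    #include <stdio.h>

    /* MPI_Query_thread must not be called before MPI_Init / MPI_Init_thread;
     * calling it earlier triggers the MPI_ERRORS_ARE_FATAL abort shown above. */
    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init(&argc, &argv);        /* or MPI_Init_thread(...) */
        MPI_Query_thread(&provided);   /* safe only after initialization */
        printf("thread support level: %d\n", provided);
        MPI_Finalize();
        return 0;
    }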
Jeff,
In general NFS servers run a file-locking daemon that should enable
clients to lock files.
However, in Unix there are two flavours of file locking: flock() from
BSD and lockf() from System V. It varies from system to system which of
these mechanisms works with NFS. In Solaris lockf() works
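For illustration only (a sketch of the System V flavour mentioned above, not a
recommendation for NFS), a lockf()-guarded write of one record might look like:

    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>

    /* Sketch: lock a byte range at the current offset with lockf(), write it,
     * then unlock the bytes just written. Whether this is honoured over NFS
     * depends on the client, the server, and the lock daemon, as noted above. */
    static void locked_write(int fd, const void *buf, size_t len)
    {
        lockf(fd, F_LOCK, (off_t)len);     /* block until the range is ours */
        ssize_t n = write(fd, buf, len);   /* offset advances by n */
        (void)n;                           /* error handling omitted in sketch */
        lockf(fd, F_ULOCK, -(off_t)len);   /* negative len: unlock the bytes
                                              behind the (now advanced) offset */
    }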
On Jul 23, 2008, at 8:24 AM, Gabriele Fatigati wrote:
> You could always effect your own parallel IO (e.g., use MPI sends
> and receives to coordinate parallel reads and writes), but why?
> It's already done in the MPI-IO implementation.
Just a moment: you're saying that I can do fwrite withou
> You could always effect your own parallel IO (e.g., use MPI sends and
> receives to coordinate parallel reads and writes), but why? It's already
> done in the MPI-IO implementation.
Just a moment: you're saying that I can do fwrite without any lock? Open MPI
does this?
And, what is ROMIO? Where can
On Jul 23, 2008, at 6:35 AM, Gabriele Fatigati wrote:
> There is a whole chapter in the MPI standard about file I/O
> operations. I'm quite confident you will find whatever you're
> looking for there :)
Hi George, I know this chapter :) But I'm using MPI-1, not MPI-2. I
would like to know meth
> There is a whole chapter in the MPI standard about file I/O operations. I'm
> quite confident you will find whatever you're looking for there :)
Hi George, I know this chapter :) But I'm using MPI-1, not MPI-2. I would
like to know methods for I/O with MPI-1.
2008/7/23 George Bosilca:
> There is
There is a whole chapter in the MPI standard about file I/O
operations. I'm quite confident you will find whatever you're looking
for there :)
Open MPI uses ROMIO for file operations, and normally this is compiled
in by default. You should not have any trouble using MPI I/O with
Open MPI.
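As a hedged sketch of what "already done in the MPI-IO implementation" means in
practice: each rank writes its own block of one shared file at an explicit
offset, and ROMIO does the coordination, with no flock()/lockf() in user code
(the file name below is a placeholder):

    #include <mpi.h>

    /* Sketch: every rank writes COUNT ints to its own region of one shared
     * file via MPI-IO (ROMIO underneath); no explicit file locking needed. */
    #define COUNT 1024

    int main(int argc, char **argv)
    {
        int rank, buf[COUNT];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < COUNT; i++)
            buf[i] = rank;

        /* "out.dat" is a placeholder file name. */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
        MPI_File_write_at(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }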
Hi,
I have a question about parallel I/O. In my application I have currently
implemented a file lock with C system calls, like flock. But is this the
right way to do concurrent writes?
In this cluster, every node has its own operating system, so the file lock
works only on the processors of that
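For comparison, the flock()-based approach described here, an exclusive
advisory lock taken around each write, looks roughly like the sketch below; as
the poster notes, such a lock only coordinates processes on the same node:

    #include <stdio.h>
    #include <sys/file.h>

    /* Sketch of the flock()-guarded write described above: take an exclusive
     * advisory lock on the whole file around each stdio write. flock() does
     * not coordinate writers running on other nodes. */
    static void locked_fwrite(FILE *fp, const void *buf, size_t size, size_t nmemb)
    {
        int fd = fileno(fp);
        flock(fd, LOCK_EX);            /* block until we hold the exclusive lock */
        fwrite(buf, size, nmemb, fp);
        fflush(fp);                    /* flush stdio buffers before releasing */
        flock(fd, LOCK_UN);
    }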