It sounds like we have hit a corner case where the OMPI run-time does
not clean up properly. I know one case like this came up recently (if an
app calls exit() without calling MPI_FINALIZE, OMPI v1.2.x hangs), and
Ralph is working on it.
This could well be what is happening here...?
Do you know
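To illustrate the pattern (the report is about an app calling the C
exit() routine; the Fortran sketch below simply ends the program without
MPI_FINALIZE, and the names are made up):

program early_exit
  use mpi
  implicit none
  integer :: ierr
  call MPI_INIT(ierr)
  ! ... application work ...
  ! The program terminates here without ever calling MPI_FINALIZE;
  ! according to the report above, Open MPI v1.2.x can hang at this
  ! point instead of cleaning up the job.
  stop 1
end program early_exit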
Hi,
Can someone suggest or provide a link to a visual simulation that would
run without problems on SLES 10 and utilize Open MPI version 2?
Thanks
On Sep 9, 2007, at 10:28 AM, Foster, John T wrote:
I'm having trouble configuring Open MPI 1.2.4 with the Intel C++
Compiler v. 10. I have Mac OS X 10.4.10. I have successfully
configured and built OMPI with the gcc compilers and a combination
of gcc/ifort. When I try to configure with icc
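For what it's worth, the usual way to point Open MPI's configure at the
Intel compilers is to set the standard compiler variables (this assumes
icpc and ifort are also the intended C++ and Fortran compilers):
  ./configure CC=icc CXX=icpc F77=ifort FC=ifort
followed by the usual "make all install".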
On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
When launching an MPI program with mpirun on an xgrid cluster, is
there a way to cause the program being run to be temporarily copied to
the compute nodes in the cluster when executed (i.e., similar to what
the xgrid command line tool does)? Or is
Jeff, thanks a lot for taking the time,
I looked into this some more and this could very well be a side effect
of a problem in my code, maybe a memory violation that messes things up;
I'm going to valgrind this thing and see what comes up. Most of the time
the app runs just fine, so I'm not su
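In case it is useful, one way to run an MPI job under valgrind is to put
valgrind between mpirun and the application (the executable name here is
just a placeholder):
  mpirun -np 2 valgrind --leak-check=full ./my_app
Each rank then runs inside its own valgrind instance.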
Hello all,
I am a newbie at much of MPI application programming and try to provide
support for various users in the HPC community.
I have run into something that I don't quite understand. I have some
code that is meant to open a file for reading, but at compile time I get
"Could not resolve generic procedure
On Sep 17, 2007, at 4:34 PM, Brian Barrett wrote:
On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
When launching an MPI program with mpirun on an xgrid cluster, is
there a way to cause the program being run to be temporarily copied to
the compute nodes in the cluster when executed (i.e., similar
Sorry for catching up to this thread a bit late.
In the Open MPI trunk there is an mpirun option, '--preload-binary'
(or '-s'), that preloads a binary onto the remote nodes before execution.
It has been tested with many of the resource managers (and should be
fairly resource manager agnostic), b
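For example, assuming a trunk build, a hypothetical invocation could
look like this (the hostfile and executable names are placeholders):
  mpirun -np 4 --hostfile myhosts --preload-binary ./my_app
The binary is then staged on the remote nodes before it is started.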
Josh:
On Sep 17, 2007, at 9:33 PM, Josh Hursey wrote:
Sorry for catching up to this thread a bit late.
In the Open MPI trunk there is an mpirun option, '--preload-binary'
(or '-s'), that preloads a binary onto the remote nodes before execution.
It has been tested with many of the resource managers (and
What version of Open MPI are you using?
This feature is not in any release at the moment, but is scheduled
for v1.3. It is available in the subversion development trunk which
can be downloaded either from subversion or from a nightly snapshot
tarball on the Open MPI website:
http://www.ope
Are you using the MPI F90 bindings perchance?
If so, the issue could be that the prototype for MPI_FILE_SET_VIEW is:
interface MPI_File_set_view
  subroutine MPI_File_set_view(fh, disp, etype, filetype, datarep, &
      info, ierr)
    include 'mpif-config.h'
    integer, intent(in) :: fh
    integer(kind=MPI_OFFSET_KIND), intent(in) :: disp
    integer, intent(in) :: etype, filetype, info
    character(len=*), intent(in) :: datarep
    integer, intent(out) :: ierr
  end subroutine MPI_File_set_view
end interface
Note that disp is an INTEGER(KIND=MPI_OFFSET_KIND); if the actual
argument you pass is a default INTEGER (or a bare integer literal), the
compiler cannot match this specific routine and reports exactly the kind
of "Could not resolve generic procedure" error you are seeing.
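If it helps, here is a minimal sketch of a call that matches this
interface (the file name and variable names are just placeholders); the
key point is that disp is declared with MPI_OFFSET_KIND:

program set_view_example
  use mpi
  implicit none
  integer :: fh, ierr
  integer(kind=MPI_OFFSET_KIND) :: disp
  call MPI_INIT(ierr)
  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'data.bin', &
      MPI_MODE_RDONLY, MPI_INFO_NULL, fh, ierr)
  ! disp must be an INTEGER(KIND=MPI_OFFSET_KIND); a default INTEGER or
  ! a bare literal 0 does not match the interface above, so the generic
  ! call cannot be resolved.
  disp = 0
  call MPI_FILE_SET_VIEW(fh, disp, MPI_INTEGER, MPI_INTEGER, 'native', &
      MPI_INFO_NULL, ierr)
  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)
end program set_view_example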