I'm interested in getting Open MPI working with a multi-threaded
application (MPI_THREAD_MULTIPLE is required). I'm trying the trunk
from a couple weeks ago (1.3a1r14001) compiled for multi-threading and
threaded progress, and have had success with some small cases. Larger
cases with the same algo
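For anyone following along: the standard way to request that level of
thread support is MPI_Init_thread, checking the level the library
actually grants. A minimal sketch; nothing here is specific to the
trunk build:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for full thread support; the library may grant less. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "got thread level %d, not MPI_THREAD_MULTIPLE\n",
                    provided);
        }

        MPI_Finalize();
        return 0;
    }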
Hi,
I am trying to run an MPICH2 application over 2 processors on a dual
processor x64 Linux box (SuSE 10). I am getting the following error
message:
--
Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(242)..: MPI_Waitall(
Hello Curtis,
yes, done with ompi-trunk:
Apart from --enable-mpi-threads --enable-progress-threads, you need to compile
Open MPI with --enable-mca-no-build=memory-ptmalloc2; and of course the
usual options for debugging (--enable-debug) and the options for
icc/ifort/icpc:
CFLAGS='-debug all -in
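Putting those flags together, a complete configure invocation would look
roughly like this (the compiler variables are my assumption for an
icc/ifort/icpc build; the debug CFLAGS go in as above):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --enable-mpi-threads --enable-progress-threads \
        --enable-mca-no-build=memory-ptmalloc2 \
        --enable-debug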
Steve,
This list is for supporting Open MPI, not MPICH2 (MPICH2 is an
entirely different software package). You should probably redirect
your question to their support lists.
Thanks,
Tim
On Mar 23, 2007, at 12:46 AM, Jeffrey Stephen wrote:
Hi,
I am trying to run an MPICH2 application
Hi guys
I'm having problems compiling Open MPI 1.2 under AIX 5.2. Here are the
configure parameters:
./configure --disable-shared --enable-static \
CC=xlc CXX=xlc++ F77=xlf FC=xlf95
To get it to work I have to make two changes:
diff -r openmpi-1.2/ompi/mpi/cxx/mpicxx.cc openm
Nicolas Niclausse wrote on 21.03.2007 at 16:45:
> I'm trying to use NetPIPE with Open MPI on my system (RHEL 3, dual Opteron,
> Myrinet 2G with MX drivers).
>
> Everything is fine when I use a 64-bit binary, but it segfaults when I use a
> 32-bit binary:
I rebuilt everything with PGI 6.2 instead
I can volunteer myself as a beta-tester if that's OK. If there is
anything specific you want help with either drop me a mail directly or
mail supp...@quadrics.com
We are not aware of any other current project of this nature.
Ashley,
On Mon, 2007-03-19 at 18:48 -0400, George Bosilca wrote:
> UT
Marcus G. Daniels wrote:
Mike Houston wrote:
The main issue with this, and addressed at the end
of the report, is that the code size is going to be a problem as data
and code must live in the same 256KB in each SPE. They mention dynamic
overlay loading, which is also how we deal with large
I am presently trying to get Open MPI up and running on a small cluster
of MacPros (dual dual-core Xeons) using TCP. Open MPI was compiled using
the Intel Fortran Compiler (9.1) and gcc. When I try to launch a job on
a remote node, orted starts on the remote node but then times out. I am
guessing
Marcus G. Daniels wrote:
Marcus G. Daniels wrote:
Mike Houston wrote:
The main issue with this, and addressed at the end
of the report, is that the code size is going to be a problem as data
and code must live in the same 256KB in each SPE. They mention dynamic
overlay loading,
Todd:
I assume the system time is being consumed by
the calls to send and receive data over the TCP sockets.
As the number of processes in the job increases, more
time is spent waiting for data from one of the other processes.
I did a little experiment on a single node to see the differenc
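One quick way to run that kind of experiment is to sample getrusage()
around a blocking receive and compare user against system time. A rough
sketch, not the exact test described above:

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    /* Print the user/system CPU time this process has consumed so far. */
    static void report(const char *label)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("%s: user %ld.%06lds  sys %ld.%06lds\n", label,
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    }

    int main(int argc, char **argv)
    {
        int rank, buf = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Rank 1 delays its send, so this receive spends its time
               polling the TCP socket; that polling shows up as system
               time. */
            MPI_Recv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            report("rank 0 after recv");
        } else if (rank == 1) {
            sleep(5);
            MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }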
Rolf,
> Is it possible that everything is working just as it should?
That's what I'm afraid of :-). But I did not expect to see such
communication overhead due to blocking from mpiBLAST, which is very
coarse-grained. I then tried HPL, which is computation-heavy, and found the
same thing. Also, th
To ALL
I am getting the following error while attempting to install Open MPI
on a Linux system, as follows:
Linux utahwtm.hydropoint.com 2.6.9-42.0.2.ELsmp #1 SMP Wed Aug 23
13:38:27 BST 2006 x86_64 x86_64 x86_64 GNU/Linux
with the Intel compilers, the latest versions of 9.1
th
So far the described behavior seems normal and expected. As Open
MPI never goes into blocking mode, the processes will always spin
between active and sleep mode. More processes on the same node lead
to more time in the system mode (because of the empty polls). There
is a trick in the trunk
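For the released 1.2 series, one knob along those lines already exists:
the mpi_yield_when_idle MCA parameter, which makes the progress engine
give up the CPU on empty polls. Whether this is the same trick as in
the trunk I can't say, but it is worth trying:

    # Yield the processor instead of spinning hard on empty polls
    # (./my_app stands in for the real application):
    mpirun --mca mpi_yield_when_idle 1 -np 4 ./my_app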
The main problem with MPI is the huge number of functions in the API.
Even if we implement only the 1.0 standard, we still have several
hundred functions around. Moreover, an MPI library is far from
being a simple self-sufficient library: it requires a way to start
and monitor processes,
George Bosilca wrote:
All in all we end up with a multi-hundred-KB library of which most
applications will use only about 10%.
Seems like it ought to be possible to do some coverage analysis for a
particular application and figure out what parts of the library (and
user code) to ma
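That sort of measurement is easy to get from gcc's coverage tooling. A
rough sketch of the workflow, with illustrative paths and an assumed
./my_app binary:

    # Build library and application with coverage instrumentation:
    CFLAGS='-fprofile-arcs -ftest-coverage' ./configure ...
    make all install

    # Run the application once to collect counters, then see which
    # source files (and functions) were ever executed:
    mpirun -np 4 ./my_app
    gcov ompi/mpi/c/*.c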