Hi,
Sorry it took so long to respond - recompiling everything across the cluster
took a while. Without the --with-threads config flag, it seems to work a
little better - the limit still exists, there is still the same segfault,
but now it's up around 21,000,000 characters, instead of 16,000,000.
A
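For reference, here is a minimal C sketch of the kind of test that exercises such a limit. The thread does not say which MPI call triggers the segfault, so the use of a plain MPI_Send/MPI_Recv of a character buffer is only an assumption:

/* Hedged sketch (not from the thread): send an n-character buffer from
 * rank 0 to rank 1 to probe the size at which the failure appears. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? atol(argv[1]) : 21000000L;  /* message length in chars */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = calloc(n, 1);
    if (rank == 0) {
        MPI_Send(buf, (int)n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %ld chars\n", n);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}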
On Tue, Sep 08, 2009 at 08:32:47AM -0700, Warner Yuen wrote:
> I also had the same problem with IFORT and ICC with OMPI 1.3.3 on Mac OS X
> v10.6. However, I was successfully able to use 10.6 Server with IFORT
> 11.1.058 and GCC.
That is an interesting result, in light of question #14 of:
http
Jeff Squyres wrote:
...
Two questions then...
1. If the request has already completed, does it mean that since
opal_progress() is not called, no further progress is made?
Correct. It's a latency thing; if your request has already completed,
we just tell you without further delay (i.e., without calling opal_progress()).
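To make the behavior concrete, here is a small hedged sketch (mine, not from the thread): a test on a request that has already completed is answered from local state, while a test on a still-pending request enters the progress engine and lets unrelated communication advance.

/* Hedged sketch: MPI_Test on an already-completed request returns
 * immediately without calling opal_progress(); progress on other
 * outstanding requests is only made by calls that do enter the
 * progress engine (a test on an incomplete request, MPI_Iprobe,
 * MPI_Wait, ...). */
#include <mpi.h>

void poll_requests(MPI_Request *done_req, MPI_Request *pending_req)
{
    int flag;

    /* Already complete: answered locally, no progress engine call. */
    MPI_Test(done_req, &flag, MPI_STATUS_IGNORE);

    /* Still in flight: this call drives the progress engine, so other
     * pending communication can advance here as a side effect. */
    MPI_Test(pending_req, &flag, MPI_STATUS_IGNORE);
}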
I also had the same problem with IFORT and ICC with OMPI 1.3.3 on Mac
OS X v10.6. However, I was successfully able to use 10.6 Server with
IFORT 11.1.058 and GCC.
Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859
On Sep 8, 2009, at 7
Christophe,
the 11.1.058 compilers are not (yet) compatible with Snow Leopard. See
the Intel compiler forums. The GNU compilers, however, work.
Marcus
On Sep 8, 2009, at 1:42 AM, Christophe Peyret wrote:
Hello,
After installing Snow Leopard, I rebuilt Open MPI 1.3.3 wrapping the
Intel Compilers v11.1.058.
Though I would not recommend your technique for initiating a
checkpoint from an application, it may work. Since ompi-checkpoint
will need to contact and interact with every MPI process, this could
cause problems if the application is blocking in system() while ompi-
checkpoint is trying to interact with it.
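For concreteness, a hedged sketch of the kind of in-application trigger being discussed; the exact command line and the helper name are assumptions, not taken from the thread. The caveat above is that the process is stuck inside system() at the very moment ompi-checkpoint needs to interact with it.

/* Hedged sketch: trigger a checkpoint from inside the application by
 * shelling out to ompi-checkpoint (which takes the PID of mpirun).
 * While system() blocks here, this rank cannot respond to checkpoint
 * coordination traffic, which is the problem noted above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

void request_checkpoint(pid_t mpirun_pid)   /* hypothetical helper */
{
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "ompi-checkpoint %d", (int)mpirun_pid);
    system(cmd);   /* blocks until ompi-checkpoint returns */
}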
Did you configure Open MPI with the appropriate checkpoint/restart
options? Did you remember to add the '-am ft-enable-cr' parameter to
mpirun? Is BLCR loaded properly on your machines? These are the common
problems that people usually hit when getting started.
There is a C/R Fault Toleranc
Hi Everyone,
I'd like to compute a singular value decomposition with MPI. I have heard
about the Lanczos algorithm and some other algorithms for SVD, but I need
some help with this topic. Does anybody know of usable code or a tutorial on
parallel SVD?
Best Regards,
Attila
Charles,
The listen is always posted on each MPI process. It will fire when a
remote peer connects, so we will set up the connection even if you
haven't posted the receive yet.
So, yes, the first MPI_Send to each peer will always call connect(). On
the remote process, the accept is called when the incoming connection
is detected.
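A hedged illustration (mine, not from the thread) of what this lazy wire-up means on the application side: only the first message to each peer pays the connection-setup cost, and later sends reuse the established socket.

/* Hedged sketch: the first send to each peer triggers the underlying
 * connect(); subsequent sends to the same peer reuse the socket, so
 * only the first message pays the connection-setup latency. */
#include <mpi.h>

void warm_up_connections(MPI_Comm comm)
{
    int rank, size, dummy = 0;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == 0) {
        for (int peer = 1; peer < size; peer++)
            MPI_Send(&dummy, 1, MPI_INT, peer, 0, comm);   /* connect() on first contact */
    } else {
        MPI_Recv(&dummy, 1, MPI_INT, 0, 0, comm, MPI_STATUS_IGNORE);
    }
}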
According to the Open MPI FAQ, Open MPI creates point-to-point socket
connections "lazily", i.e. only when needed.
I have a few questions about this and how it affects program performance.
1) Does this mean that MPI_Send will call connect() if necessary, and
MPI_Recv will call accept()?
2) If so,
Hello,
After installing Snow Leopard, I rebuilt Open MPI 1.3.3 wrapping the Intel
Compilers v11.1.058. To do this, I used this small shell script:
#!/bin/bash
./configure --prefix=/usr/local/openmpi-1.3.3-i64 \
    CFLAGS="-arch x86_64" CXXFLAGS="-arch x86_64" \
    ...