Re: [OMPI users] Heterogeneous SLURM cluster segfaults on large transfers

2009-09-08 Thread James
Hi, Sorry it took so long to respond - recompiling everything across the cluster took a while. Without the --with-threads config flag, it seems to work a little better: the limit still exists and the segfault is the same, but it now occurs at around 21,000,000 characters instead of 16,000,000. A
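A minimal probe for this kind of threshold (a hypothetical sketch, not the poster's code) is a two-rank program that ships an N-byte char buffer, with N taken from the command line so the failing size can be bisected:

    /* Sketch: rank 0 sends an N-byte char buffer to rank 1.  The size
     * is taken from argv, so the failing length (~21,000,000 in the
     * report above) can be bracketed from the command line. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        long n = (argc > 1) ? atol(argv[1]) : 21000000L;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char *buf = malloc(n);
        if (buf == NULL) {
            fprintf(stderr, "rank %d: malloc(%ld) failed\n", rank, n);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        if (rank == 0) {
            MPI_Send(buf, (int)n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("received %ld chars OK\n", n);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Run with, e.g., "mpirun -np 2 ./probe 21000000" and vary the size up and down to bracket the failure point.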

Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Douglas Guptill
On Tue, Sep 08, 2009 at 08:32:47AM -0700, Warner Yuen wrote: > I also had the same problem with IFORT and ICC with OMPI-1.3.3 on Mac OS X > v10.6. However, I was successfully able to use 10.6 Server with IFORT > 11.1.058 and GCC. That is an interesting result, in light of question #14 of: http

Re: [OMPI users] mca_pml_ob1_send blocks

2009-09-08 Thread Shaun Jackman
Jeff Squyres wrote: ... Two questions then... 1. If the request has already completed, does it mean that since opal_progress() is not called, no further progress is made? Correct. It's a latency thing; if your request has already completed, we just tell you without further delay (i.e., wit
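A sketch of the usage pattern under discussion (assumed, not code from the thread): in a test loop, each MPI_Test on a still-pending request drives the progress engine, while a test on an already-completed request returns immediately without entering it.

    #include <mpi.h>

    /* Poll a request while overlapping application work. */
    void wait_with_useful_work(MPI_Request *req)
    {
        int done = 0;
        while (!done) {
            /* While the request is pending, this call progresses
             * outstanding communication. */
            MPI_Test(req, &done, MPI_STATUS_IGNORE);
            if (!done) {
                /* overlap: do a slice of application work here */
            }
        }
        /* Per the explanation above, a further MPI_Test on the
         * now-completed request just reports the cached completion
         * and returns without calling the progress engine. */
    }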

Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Warner Yuen
I also had the same problem with IFORT and ICC with OMPI-1.3.3 on Mac OS X v10.6. However, I was able to use 10.6 Server successfully with IFORT 11.1.058 and GCC. Warner Yuen Scientific Computing Consulting Engineer Apple, Inc. email: wy...@apple.com Tel: 408.718.2859 On Sep 8, 2009, at 7

Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Marcus Herrmann
Christophe, the 11.1.058 compilers are not (yet) compatible with Snow Leopard. See the Intel compiler forums. The GNU compilers, however, work. Marcus On Sep 8, 2009, at 1:42 AM, Christophe Peyret wrote: Hello, After installing Snow Leopard, I rebuilt openMPI 1.3.3 and wrapped Intel Compi

Re: [OMPI users] checkpointing 2 or more processes running in parallel

2009-09-08 Thread Josh Hursey
Though I would not recommend your technique for initiating a checkpoint from an application, it may work. Since ompi-checkpoint will need to contact and interact with every MPI process, this could cause problems if the application is blocking in system() while ompi-checkpoint is trying to i
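For concreteness, a hedged sketch of the discouraged pattern: one rank shells out to ompi-checkpoint while the rest of the job keeps running. Passing the mpirun PID in argv[1] is hypothetical plumbing, not part of the thread.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0 && argc > 1) {
            char cmd[128];
            /* argv[1] is assumed to hold the PID of mpirun */
            snprintf(cmd, sizeof(cmd), "ompi-checkpoint %s", argv[1]);
            /* Hazard described above: this rank blocks inside system()
             * at the very moment ompi-checkpoint needs to interact
             * with every MPI process, including this one. */
            system(cmd);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }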

Re: [OMPI users] problem in using blcr

2009-09-08 Thread Josh Hursey
Did you configure Open MPI with the appropriate checkpoint/restart options? Did you remember to add the '-am ft-enable-cr' parameter to mpirun? Is BLCR loaded properly on your machines? These are the common problems that people usually hit when getting started. There is a C/R Fault Tolerance
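In practice those three checks look roughly like this (the install path and process count are assumptions; only the '-am ft-enable-cr' parameter comes from the mail above):

    # 1. Build Open MPI with checkpoint/restart support against BLCR:
    ./configure --with-ft=cr --with-blcr=/usr/local/blcr ...
    make all install

    # 2. Verify the BLCR kernel modules are loaded on every node:
    lsmod | grep blcr

    # 3. Run with the C/R AMCA parameter:
    mpirun -am ft-enable-cr -np 4 ./my_app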

[OMPI users] SVD with mpi

2009-09-08 Thread Attila Börcs
Hi Everyone, I'd like to compute a singular value decomposition with MPI. I have heard about the Lanczos algorithm and some other algorithms for SVD, but I need some help on this topic. Does anybody know of usable code or a tutorial on parallel SVD? Best Regards, Attila
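Short of a full SVD library, one building block is straightforward to parallelize: power iteration on A^T A yields the largest singular value and right singular vector. Below is a hedged sketch assuming A is distributed by rows across ranks and the length-n vector v is replicated everywhere; a complete SVD would instead use Lanczos with reorthogonalization, or a library such as ScaLAPACK or SLEPc.

    #include <mpi.h>
    #include <math.h>
    #include <stdlib.h>

    /* Estimate the largest singular value of A.
     * A_local is the m_local x n block of rows owned by this rank,
     * stored row-major; v (length n) is replicated on every rank. */
    double largest_sv(const double *A_local, int m_local, int n, int iters)
    {
        double *v  = malloc(n * sizeof(double));
        double *y  = malloc(m_local * sizeof(double));
        double *w  = malloc(n * sizeof(double));  /* w = A^T A v (global) */
        double *wl = malloc(n * sizeof(double));  /* local partial of w   */
        double sigma = 0.0;

        for (int j = 0; j < n; j++) v[j] = 1.0 / sqrt((double)n);

        for (int it = 0; it < iters; it++) {
            /* y = A_local * v : purely local, since v is replicated */
            for (int i = 0; i < m_local; i++) {
                y[i] = 0.0;
                for (int j = 0; j < n; j++)
                    y[i] += A_local[i * n + j] * v[j];
            }
            /* w = A^T y : local partial products, then a global sum */
            for (int j = 0; j < n; j++) {
                wl[j] = 0.0;
                for (int i = 0; i < m_local; i++)
                    wl[j] += A_local[i * n + j] * y[i];
            }
            MPI_Allreduce(wl, w, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            /* Normalize: v = w / ||w||.  Since w = A^T A v with
             * ||v|| = 1, ||w|| converges to sigma^2. */
            double norm = 0.0;
            for (int j = 0; j < n; j++) norm += w[j] * w[j];
            norm = sqrt(norm);
            for (int j = 0; j < n; j++) v[j] = w[j] / norm;
            sigma = sqrt(norm);
        }

        free(v); free(y); free(w); free(wl);
        return sigma;
    }

Deflation (projecting out converged singular vectors) extends this to further singular triplets, which is essentially what Lanczos does more efficiently.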

Re: [OMPI users] OMPI Connection Retry Policy

2009-09-08 Thread George Bosilca
Charles, A listen is always posted in each MPI process. It fires when a remote peer connects, so we set up the connection even if you have not yet posted the receive. So, yes, the first MPI_Send to each peer will always call connect(). On the remote process, the accept is called
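A small experiment (hypothetical, timings illustrative only) makes the lazy behavior visible: the first send to a fresh peer pays the connect() handshake, while the second reuses the established socket.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, x = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double t0 = MPI_Wtime();
            MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* connect() here */
            double t1 = MPI_Wtime();
            MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* reuses socket  */
            double t2 = MPI_Wtime();
            printf("first send %g s, second send %g s\n", t1 - t0, t2 - t1);
        } else if (rank == 1) {
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }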

[OMPI users] OMPI Connection Retry Policy

2009-09-08 Thread Charles Salvia
According to the OpenMPI FAQ, OpenMPI creates point-to-point socket connections "lazily", i.e. only when needed. I have a few questions about this and how it affects program performance. 1) Does this mean that MPI_Send will call connect() if necessary, and MPI_Recv will call accept()? 2) If so,

[OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Christophe Peyret
Hello, After installing Snow Leopard, I rebuilt openMPI 1.3.3 to wrap the Intel Compilers v11.1.058. To do this, I used this small shell script:

    #!/bin/bash
    ./configure --prefix=/usr/local/openmpi-1.3.3-i64 \
        CFLAGS="-arch x86_64" CXXFLAGS="-arch x86_64"