[OMPI users] OpenMPI 1.6.5 and IBM-AIX

2013-07-06 Thread Ilias Miroslav
Dear experts, I am trying to build the OpenMPI 1.6.5 package with the AIX compiler suite: ./configure --prefix=/gpfs/home/ilias/bin/openmpi_xl CXX=xlC CC=xlc F77=xlf FC=xlf90 XL Fortran is version 13.01; xlc/xlC is 11.01. Configuration goes well, but the compilation fails. Any help, please?

Re: [OMPI users] OpenMPI 1.6.5 and IBM-AIX

2013-07-06 Thread Ilias Miroslav
Hi again, even for GNU compilers the OpenMPI compilation fails on AIX: . . . Making all in mca/timer/aix make[2]: Entering directory `/gpfs/home/ilias/bin/openmpi_gnu/openmpi-1.6.5/opal/mca/timer/aix' CC timer_aix_component.lo timer_aix_component.c: In function 'opal_timer_aix_open': tim

Re: [OMPI users] OpenMPI 1.6.5 and IBM-AIX

2013-07-06 Thread Ralph Castain
We haven't had access to an AIX machine in quite some time, so it isn't a big surprise that things have bit-rotted. If you're willing to debug, we can try to provide fixes. Just may take a bit to complete. On Jul 6, 2013, at 9:49 AM, Ilias Miroslav wrote: > Hi again, > > even for GNU compile

Re: [OMPI users] openmpi 1.6.3 fails to identify local host if its IP is 127.0.1.1

2013-07-06 Thread Ralph Castain
On Jul 3, 2013, at 1:00 PM, Riccardo Murri wrote: > Hi Jeff, Ralph, > > first of all: thanks for your work on this! > > On 3 July 2013 21:09, Jeff Squyres (jsquyres) wrote: >> 1. The root cause of the issue is that you are assigning a >> non-existent IP address to a name. I.e., maps to 127.
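The underlying issue in this thread is the Debian/Ubuntu convention of mapping the hostname to 127.0.1.1 in /etc/hosts, an address no interface actually carries, so Open MPI cannot match the name to a local interface. A sketch of the problematic default and one common workaround (hostnames and the 192.168.1.10 address are placeholders, not taken from the thread):

```
# /etc/hosts as shipped by Debian/Ubuntu installers (triggers the issue):
127.0.0.1    localhost
127.0.1.1    mynode        # hostname bound to an address no interface has

# workaround: bind the hostname to a real interface address instead
127.0.0.1    localhost
192.168.1.10 mynode        # placeholder for the node's actual IP
```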

[OMPI users] Trouble with MPI_Recv not filling buffer

2013-07-06 Thread Patrick Brückner
Hello, I am currently learning MPI and there's this problem that I have been dealing with for a very long time now. I am trying to receive a struct, and it fails in some very specific cases (when I run with 2/3/4 instances, each calculating exactly the same amount of data). For some weird reason it seems to w

[OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Michael Thomadakis
Hello OpenMPI, I am wondering what level of support there is for CUDA and GPUdirect on OpenMPI 1.6.5 and 1.7.2. I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it seems that with configure v1.6.5 it was ignored. Can you identify GPU memory and send messages from it directl
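For reference, the FAQ-documented build and the usual way to confirm CUDA-aware support afterwards look roughly like this (the CUDA path is a placeholder; the ompi_info check applies to the 1.7.x series, which is where the CUDA code lives per the reply below in this thread):

```
# build against a CUDA installation (path is a placeholder)
./configure --with-cuda=/usr/local/cuda --prefix=$HOME/openmpi-cuda
make all install

# after installing, check whether the build is CUDA-aware
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```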

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Ralph Castain
Rolf will have to answer the question on level of support. The CUDA code is not in the 1.6 series as it was developed after that series went "stable". It is in the 1.7 series, although the level of support will likely be incrementally increasing as that "feature" series continues to evolve. On

[OMPI users] Question on handling of memory for communications

2013-07-06 Thread Michael Thomadakis
Hello OpenMPI, When your stack runs on Sandy Bridge nodes attached to HCAs over PCIe *gen 3*, do you pay any special attention to the memory buffers according to which socket/memory controller their physical memory belongs to? For instance, if the HCA is attached to the PCIe gen 3 lanes of Socket 1, do yo
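On Linux, the NUMA locality the question is asking about can be inspected directly; a sketch (mlx4_0 is a placeholder HCA device name, and the mpirun binding flag shown is the 1.6-series spelling):

```
# which NUMA node the HCA is local to (-1 means the kernel doesn't know)
cat /sys/class/infiniband/mlx4_0/device/numa_node

# hwloc's lstopo draws the machine topology, including PCI devices,
# so you can see which socket the HCA hangs off
lstopo

# binding ranks to cores keeps their buffers on a predictable socket
mpirun --bind-to-core -np 16 ./a.out
```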

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Michael Thomadakis
thanks, Do you guys have any plan to support Intel Phi in the future? That is, running MPI code on the Phi cards, or across the multicore host and the Phi, as Intel MPI does? thanks... Michael On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain wrote: > Rolf will have to answer the question on level of suppo

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Ralph Castain
There was discussion of this on a prior email thread on the OMPI devel mailing list: http://www.open-mpi.org/community/lists/devel/2013/05/12354.php On Jul 6, 2013, at 2:01 PM, Michael Thomadakis wrote: > thanks, > > Do you guys have any plan to support Intel Phi in the future? That is, > r