Re: [OMPI users] [EXTERNAL] Re: Using shmem_int_fadd() in OpenMPI's SHMEM

2017-11-21 Thread Hammond, Simon David
Hi Howard/OpenMPI Users, I have had a similar seg-fault this week using OpenMPI 2.1.1 with GCC 4.9.3, so I tried to compile the example code in the email below. I see similar behavior to a small benchmark we have in house (but using inc, not finc). When I run on a single node (both PE’s on the sa
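For reference, a minimal fetch-and-add test along the lines discussed in this thread might look like the sketch below. This is a hypothetical reconstruction, not the example code from the original email; the counter placement and PE counts are illustrative only.

    /* Hypothetical minimal shmem_int_fadd reproducer; not the original test code. */
    #include <stdio.h>
    #include <shmem.h>

    int main(void) {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric counter that every PE atomically increments on PE 0. */
        int *counter = (int *)shmem_malloc(sizeof(int));
        *counter = 0;
        shmem_barrier_all();

        int old = shmem_int_fadd(counter, 1, 0);   /* fetch-and-add targeting PE 0 */
        shmem_barrier_all();

        if (me == 0)
            printf("PE 0: final counter = %d (expected %d)\n", *counter, npes);
        printf("PE %d saw old value %d\n", me, old);

        shmem_free(counter);
        shmem_finalize();
        return 0;
    }

A test of this shape would typically be compiled with oshcc and launched with oshrun across two or more PEs.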

Re: [OMPI users] [EXTERNAL] OpenMPI 3.0.1 and Power9

2018-04-10 Thread Hammond, Simon David
Steve, We have been able to get the OpenMPI 2.1 and 3.0 series working on our POWER9 systems. It is my understanding that the IBM offering has performance improvements over the open-source variant. I would add the caveat that we are still in fairly early testing of the platform. S. -- Si Hammond Scalab

[OMPI users] Compiling OpenMPI 1.8.1 for Cray XC30

2014-06-05 Thread Hammond, Simon David (-EXP)
Hi OpenMPI developers/users, Does anyone have a working configure line for OpenMPI 1.8.1 on a Cray XC30? When we compile the code, ALPS is located, but when we run the compiled binaries using aprun we get n * 1 ranks rather than one job of n ranks. Thank you. S. -- Simon Hammond Scalable Computer Arc

[OMPI users] Errors on POWER8 Ubuntu 14.04u2

2015-03-26 Thread Hammond, Simon David (-EXP)
Hi everyone, We are trying to compile custom installs of OpenMPI 1.8.4 on our POWER8 Ubuntu system. We can configure and build correctly but when running ompi_info we see many errors like those listed below. It appears that all of the libraries in the ./lib are static (.a) files. It appears tha

Re: [OMPI users] [EXTERNAL] Re: Errors on POWER8 Ubuntu 14.04u2

2015-03-27 Thread Hammond, Simon David (-EXP)
/community/help/ > On Mar 26, 2015, at 10:55 PM, Ralph Castain wrote: > > Could you please send us your configure line? > >> On Mar 26, 2015, at 4:47 PM, Hammond, Simon David (-EXP) >> wrote: >> >> Hi everyone, >> >> We are trying to compile custom

[OMPI users] Segmentation Fault when using OpenMPI 1.10.6 and PGI 17.1.0 on POWER8

2017-02-21 Thread Hammond, Simon David (-EXP)
Hi OpenMPI Users, Has anyone successfully tested OpenMPI 1.10.6 with PGI 17.1.0 on POWER8 with the LSF scheduler (--with-lsf=..)? I am getting this error when the code hits MPI_Finalize. It causes the job to abort (i.e. exit the LSF session) when I am running interactively. Are there any materi

[OMPI users] Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Hammond, Simon David (-EXP)
Hey everyone, I am trying to build OpenMPI 1.7.2 with CUDA enabled. OpenMPI will configure successfully, but I am seeing a build error relating to the inclusion of the CUDA options (at least I think so). Do you guys know if this is a bug or whether something is wrong with how we are configuring Ope

Re: [OMPI users] [EXTERNAL] Re: Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Hammond, Simon David (-EXP)
ng, you could try configuring with >this additional flag: > >--enable-mca-no-build=pml-bfo > >Rolf > >>-Original Message- >>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Hammond, >>Simon David (-EXP) >>Sent: Monday, October 07, 2013

Re: [OMPI users] [EXTERNAL] Re: open-mpi on Mac OS 10.9 (Mavericks)

2013-11-25 Thread Hammond, Simon David (-EXP)
We have occasionally had a problem like this when we set LD_LIBRARY_PATH only. On OSX you may need to set DYLD_LIBRARY_PATH instead (set it to the same lib directory). Can you try that and see if it resolves the problem? Si Hammond Sandia National Laboratories Remote Connection -Origin

Re: [OMPI users] [EXTERNAL] Re: Planned support for Intel Phis

2014-02-02 Thread Hammond, Simon David (-EXP)
Will this support native execution? I.e. MIC only, no host involvement? S -- Si Hammond Sandia National Laboratories Remote Connection -Original Message- From: Ralph Castain [r...@open-mpi.org] Sent: Sunday, February 02, 2014 09:02 AM Mountain Standard Time T

Re: [OMPI users] [EXTERNAL] MPI-Checker - Static Analyzer

2015-05-31 Thread Hammond, Simon David (-EXP)
Alex, Do you have a paper on the tool we could look at? Thanks S -- Si Hammond Scalable Computer Architectures Sandia National Laboratories, NM [Sent remotely, please excuse typing errors] From: users on behalf of Alexander Droste Sent: Saturday, May 30, 20

[OMPI users] OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-06-16 Thread Hammond, Simon David via users
Hi OpenMPI Team, We have recently updated an install of OpenMPI on a POWER9 system (configuration details below). We migrated from OpenMPI 2.1 to OpenMPI 3.1. We seem to have a symptom where code that ran before is now locking up and making no progress, getting stuck in wait-all operations. While
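A minimal sketch of the kind of nonblocking-exchange-plus-waitall pattern described here is shown below. It is illustrative only and not the application code from this report; the ring exchange and message size are assumptions.

    /* Hypothetical sketch of a nonblocking exchange completed by MPI_Waitall,
     * the kind of operation reported to stall; not the actual application code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;
        double sendbuf = (double)rank, recvbuf = 0.0;

        MPI_Request reqs[2];
        MPI_Irecv(&recvbuf, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* The reported hang occurred in calls of this form. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received %f from rank %d\n", rank, recvbuf, left);
        MPI_Finalize();
        return 0;
    }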

Re: [OMPI users] OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-06-16 Thread Hammond, Simon David via users
] On 6/16/18, 5:45 PM, "Hammond, Simon David" wrote: Hi OpenMPI Team, We have recently updated an install of OpenMPI on a POWER9 system (configuration details below). We migrated from OpenMPI 2.1 to OpenMPI 3.1. We seem to have a symptom where code that ran before is now lock

Re: [OMPI users] [EXTERNAL] Re: OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-07-01 Thread Hammond, Simon David via users
ly tarball for v3.1.x. Should be fixed. > On Jun 16, 2018, at 5:48 PM, Hammond, Simon David via users wrote: > > The output from the test in question is: > > Single thread test. Time: 0 s 10182 us 10 nsec/poppush > Atomics thread finished. Time: 0

Re: [OMPI users] [EXTERNAL] Re: OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-07-02 Thread Hammond, Simon David via users
> On Jun 30, 2018, at 3:18 PM, Hammond, Simon David via users > mailto:users@lists.open-mpi.org>> wrote: > > Nathan, > > Same issue with OpenMPI 3.1.1 on POWER9 with GCC 7.2.0 and CUDA9.2. > > S. > > -- > Si Hammond > Scalable Computer Architectures >

[OMPI users] ARM HPC Compiler 18.4.0 / OpenMPI 2.1.4 Hang for IMB All Reduce Test on 4 Ranks

2018-08-15 Thread Hammond, Simon David via users
Hi OpenMPI Users, I am compiling OpenMPI 2.1.4 with the ARM 18.4.0 HPC Compiler on our ARM ThunderX2 system. Configuration options below. For now, I am using the simplest configuration test we can use on our system. If I use the OpenMPI 2.1.4 which I have compiled and run a simple 4 rank run of
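For context, the core of the IMB Allreduce test amounts to repeated MPI_Allreduce calls over a float buffer. A simplified stand-alone sketch is given below; this is not the IMB source, and the buffer size and iteration count are arbitrary assumptions.

    /* Hypothetical stand-in for the IMB Allreduce kernel on 4 ranks. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int count = 1024, iters = 1000;
        float *in  = malloc(count * sizeof(float));
        float *out = malloc(count * sizeof(float));
        for (int i = 0; i < count; i++) in[i] = (float)rank;

        /* Repeated reductions, as in the benchmark's timing loop. */
        for (int it = 0; it < iters; it++)
            MPI_Allreduce(in, out, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) printf("completed %d allreduce iterations\n", iters);
        free(in); free(out);
        MPI_Finalize();
        return 0;
    }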

[OMPI users] Providing an Initial CPU Affinity List to mpirun

2018-11-20 Thread Hammond, Simon David via users
Hi OpenMPI Users, I wonder if you can help us with a problem we are having when trying to force OpenMPI to use specific cores. We want to supply an initial CPU affinity list to mpirun and then have it select its appropriate binding from within that set. So for instance, to provide it with two c
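One way to see what binding each rank actually received is to have the ranks print their Linux affinity masks. A small hypothetical helper for that is sketched below; it assumes a Linux system (sched_getaffinity) and is not part of the original question.

    /* Hypothetical helper: each rank reports the CPU affinity mask it ended up with. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cpu_set_t mask;
        CPU_ZERO(&mask);
        sched_getaffinity(0, sizeof(mask), &mask);   /* 0 = calling process */

        printf("rank %d bound to cores:", rank);
        for (int c = 0; c < CPU_SETSIZE; c++)
            if (CPU_ISSET(c, &mask)) printf(" %d", c);
        printf("\n");

        MPI_Finalize();
        return 0;
    }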

[OMPI users] MPI_Reduce_Scatter Segmentation Fault with Intel 2019 Update 1 Compilers on OPA-1

2018-12-03 Thread Hammond, Simon David via users
Hi Open MPI Users, Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0 when using the Intel 2019 Update 1 compilers on our Skylake/OmniPath-1 cluster. The bug occurs when running the Github master src_c variant of the Intel MPI Benchmarks. Configuration: ./configure --prefi
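The failing operation is MPI_Reduce_scatter. A minimal stand-alone call of the same shape is sketched below; it is a simplified illustration, not the IMB src_c code, and the element counts are arbitrary assumptions.

    /* Hypothetical minimal MPI_Reduce_scatter call of the kind exercised by the benchmark. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int chunk = 256;                      /* elements delivered to each rank */
        int *recvcounts = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) recvcounts[i] = chunk;

        float *sendbuf = malloc(size * chunk * sizeof(float));
        float *recvbuf = malloc(chunk * sizeof(float));
        for (int i = 0; i < size * chunk; i++) sendbuf[i] = 1.0f;

        MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_FLOAT,
                           MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) printf("recvbuf[0] = %f (expected %d)\n", recvbuf[0], size);
        free(sendbuf); free(recvbuf); free(recvcounts);
        MPI_Finalize();
        return 0;
    }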

Re: [OMPI users] [EXTERNAL] Re: MPI_Reduce_Scatter Segmentation Fault with Intel 2019 Update 1 Compilers on OPA-1

2018-12-05 Thread Hammond, Simon David via users
ks/pull/11.patch Cheers, Gilles On 12/4/2018 4:41 AM, Hammond, Simon David via users wrote: > Hi Open MPI Users, > > Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0 when using the Intel 2019 Update 1 compi

Re: [OMPI users] [EXTERNAL] hwloc support for Power9/IBM AC922 servers

2019-04-16 Thread Hammond, Simon David via users
Hi Prentice, We are using OpenMPI and HWLOC on POWER9 servers. The topology information looks good from our initial use. Let me know if you need anything specifically. S. -- Si Hammond Scalable Computer Architectures Sandia National Laboratories, NM > On Apr 16, 2019, at 11:28 AM, Prentice Bis