[OMPI users] Problems building OpenMPI 2.1.1 on Intel KNL

2017-11-20 Thread Åke Sandgren
Hi! When the xppsl-libmemkind-dev package, version 1.5.3, is installed, building OpenMPI fails: opal/mca/mpool/memkind uses the macro MEMKIND_NUM_BASE_KIND, which has been moved to memkind/internal/memkind_private.h. Current master also uses that macro, so I think it will fail as well. Is there anyone …
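For context, a minimal sketch of the kind of local workaround the report implies, not an official Open MPI or memkind fix: if the installed development package still ships the private header named above, the mpool component could pull the macro from there; the HAVE_MEMKIND_INTERNAL_MEMKIND_PRIVATE_H guard is a hypothetical configure-time check, not something Open MPI's build system is known to define.

```c
/* Hedged sketch of a local workaround, assuming the memkind dev package
 * installs the private header mentioned in the report. Including a private
 * header is not a supported memkind interface. */
#include <memkind.h>

#ifndef MEMKIND_NUM_BASE_KIND
#  if defined(HAVE_MEMKIND_INTERNAL_MEMKIND_PRIVATE_H) /* hypothetical guard */
     /* memkind >= 1.5 moved the macro into this private header. */
#    include <memkind/internal/memkind_private.h>
#  else
#    error "MEMKIND_NUM_BASE_KIND not available; installed memkind is too new for this mpool component"
#  endif
#endif
```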

Re: [OMPI users] Problems building OpenMPI 2.1.1 on Intel KNL

2017-11-20 Thread Howard Pritchard
Hello Ake, Would you mind opening an issue on GitHub so we can track this? https://github.com/open-mpi/ompi/issues There's a template showing what info we need to fix this. Thanks very much for reporting this, Howard

Re: [OMPI users] Problems building OpenMPI 2.1.1 on Intel KNL

2017-11-20 Thread Åke Sandgren
Done, issue 4519.

[OMPI users] Using shmem_int_fadd() in OpenMPI's SHMEM

2017-11-20 Thread Benjamin Brock
What's the proper way to use shmem_int_fadd() in OpenMPI's SHMEM? A minimal example seems to seg fault: #include <…> #include <…> #include <…> int main(int argc, char **argv) { shmem_init(); const size_t shared_segment_size = 1024; void *shared_segment = shmem_malloc(shared_segment_size); int * …
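The archived snippet cuts off mid-program and the header names were stripped, so here is a hedged reconstruction of the kind of minimal fetch-and-add test the post describes; it is not the poster's exact code. One common cause of segfaults with SHMEM atomics is a target that is not remotely accessible, so this sketch allocates the counter on the symmetric heap with shmem_malloc().

```c
/* Hedged reconstruction of a minimal shmem_int_fadd() test, not the
 * poster's original program. */
#include <stdio.h>
#include <shmem.h>

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* The target of a remote atomic must be remotely accessible, so it is
     * allocated on the symmetric heap rather than on the stack or via malloc(). */
    int *counter = shmem_malloc(sizeof(int));
    *counter = 0;
    shmem_barrier_all();

    /* Every PE atomically fetches-and-adds 1 to the counter living on PE 0. */
    int old = shmem_int_fadd(counter, 1, 0);
    shmem_barrier_all();

    if (me == 0)
        printf("counter on PE 0 = %d (expected %d); old value fetched here = %d\n",
               *counter, npes, old);

    shmem_free(counter);
    shmem_finalize();
    return 0;
}
```

Assuming a standard Open MPI SHMEM install, this would be built with the oshcc wrapper and launched with oshrun (e.g. `oshrun -n 4 ./a.out`).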

Re: [OMPI users] Using shmem_int_fadd() in OpenMPI's SHMEM

2017-11-20 Thread Howard Pritchard
Hi Ben, What version of Open MPI are you trying to use? Also, could you describe something about your system? If it's a cluster, what sort of interconnect is being used? Howard

Re: [OMPI users] --map-by

2017-11-20 Thread r...@open-mpi.org
So there are two options here that will work and hopefully provide you with the desired pattern:
* if you want the procs to go in different NUMA regions:
$ mpirun --map-by numa:PE=2 --report-bindings -n 2 /bin/true
[rhc001:131460] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hw …