Re: [OMPI users] Is gridengine integration broken in openmpi 2.0.2?

2017-02-06 Thread Mark Dixon

On Fri, 3 Feb 2017, r...@open-mpi.org wrote:

I do see a diff between 2.0.1 and 2.0.2 that might have a related 
impact. The way we handled the MCA param that specifies the launch agent 
(ssh, rsh, or whatever) was modified, and I don’t think the change is 
correct. It basically says that we don’t look for qrsh unless the MCA 
param has been changed from the coded default, which means we are not 
detecting SGE by default.


Try setting "-mca plm_rsh_agent foo" on your cmd line - that will get 
past the test, and then we should auto-detect SGE again

...

Ah-ha! "-mca plm_rsh_agent foo" fixes it!

Thanks very much - presumably I can stick that in the system-wide 
openmpi-mca-params.conf for now.
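(For the record, the system-wide equivalent would be a single line in <prefix>/etc/openmpi-mca-params.conf, something like:

plm_rsh_agent = foo

where "foo" only needs to differ from the built-in default so that the check described above is satisfied.)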


Cheers,

Mark

Re: [OMPI users] OpenMPI not running any job on Mac OS X 10.12

2017-02-06 Thread Howard Pritchard
Hi Michel,

Could you try running the app with

export TMPDIR=/tmp

set in the shell you are using?
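That is, something like this in the same shell before launching (reusing the test case from the quoted message below):

export TMPDIR=/tmp
mpirun -np 2 ./mpitest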

Howard


2017-02-02 13:46 GMT-07:00 Michel Lesoinne :

Howard,

First, thanks to you and Jeff for looking into this with me. 👍
I tried ../configure --disable-shared --enable-static --prefix ~/.local
The result is the same as without --disable-shared, i.e. I get the
following error:

[Michels-MacBook-Pro.local:92780] [[46617,0],0] ORTE_ERROR_LOG: Bad
parameter in file ../../orte/orted/pmix/pmix_server.c at line 262

[Michels-MacBook-Pro.local:92780] [[46617,0],0] ORTE_ERROR_LOG: Bad
parameter in file ../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line
666

--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  pmix server init failed
  --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
--

On Thu, Feb 2, 2017 at 12:29 PM, Howard Pritchard wrote:

Hi Michel

Try adding --enable-static to the configure.
That fixed the problem for me.

Howard

Michel Lesoinne wrote on Wed, 1 Feb 2017 at 19:07:

I have compiled OpenMPI 2.0.2 on a new MacBook running OS X 10.12 and have
been trying to run a simple program.
I configured openmpi with
../configure --disable-shared --prefix ~/.local
make all install

Then I have a simple code containing only a call to MPI_Init.
I run it with:
mpirun -np 2 ./mpitest

The output is:

[Michels-MacBook-Pro.local:45101] mca_base_component_repository_open:
unable to open mca_patcher_overwrite: File not found (ignored)

[Michels-MacBook-Pro.local:45101] mca_base_component_repository_open:
unable to open mca_shmem_mmap: File not found (ignored)

[Michels-MacBook-Pro.local:45101] mca_base_component_repository_open:
unable to open mca_shmem_posix: File not found (ignored)

[Michels-MacBook-Pro.local:45101] mca_base_component_repository_open:
unable to open mca_shmem_sysv: File not found (ignored)

--
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--

Without --disable-shared in the configuration, I get:


[Michels-MacBook-Pro.local:68818] [[53415,0],0] ORTE_ERROR_LOG: Bad
parameter in file ../../orte/orted/pmix/pmix_server.c at line 264

[Michels-MacBook-Pro.local:68818] [[53415,0],0] ORTE_ERROR_LOG: Bad
parameter in file ../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line
666

--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  pmix server init failed
  --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
--




Has anyone seen this? What am I missing?

Re: [OMPI users] Performance Issues on SMP Workstation

2017-02-06 Thread Andy Witzig
Hi all,

My apologies for not replying sooner on this issue - I’ve been swamped with 
other tasking.  Here’s my latest:

1.) I have looked deep into bindings on both systems (used the --report-bindings 
option; the commands used are sketched after this list) and nothing came to light.  
I’ve tried multiple variations on binding settings and only minor improvements 
were made on the workstation.

2.) I used the mpirun --tag-output grep Cpus_allowed_list /proc/self/status 
command and everything was in order on both systems.

3.) I used ompi_info -c (per recommendation of Penguin Computing support staff) 
and looked at the differences in configuration.  I’m pasting the output below 
for reference.  The only settings in the cluster configuration that were not 
present in the workstation configuration were: --enable-__cxa_atexit, 
--disable-libunwind-exceptions, and --disable-dssi.  There were several 
settings present in the workstation configuration that were not set in the 
cluster configuration.  Any reason why the same version of OpenMPI would have 
such different settings?

4.) I used hwloc and lstopo to compare system hardware and confirmed that the 
workstation has either equivalent or superior specs to the cluster node setup.

5.) Primary differences I can see right now are:
	a.) OpenMPI 1.6.4 was compiled using gcc 4.4.7 on the cluster and I am 
compiling with gcc 5.4.0 on the workstation;
	b.) OpenMPI compile configurations are different;
	c.) the cluster uses Torque/PBS to submit the jobs;
	d.) the workstation is hyper-threaded and the cluster is not;
	e.) the workstation runs on Ubuntu while the cluster runs on CentOS.
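(For reference, the checks in items 1.) through 3.) boil down to commands along these lines, run on both machines and compared; ./my_app stands in for the actual solver and the process count is just an example:

mpirun -np 8 --report-bindings ./my_app
mpirun -np 8 --tag-output grep Cpus_allowed_list /proc/self/status
ompi_info -c
)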

My next steps will be to compile/install gcc 4.4.7 on the Workstation and 
recompile OpenMPI 1.6.4 to ensure the software configuration is equivalent, and 
do my best to replicate the cluster configuration settings.  I will also look 
into the profiling tools that Christoph mentioned and see if any details come 
to light.

Thanks much,
Andy

---WORKSTATION OMPI_INFO -C OUTPUT---
Using built-in specs.
COLLECT_GCC=/usr/bin/gfortran
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/5/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v 
--with-pkgversion='Ubuntu 5.4.0-6ubuntu1~16.04.4' 
--with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ 
--prefix=/usr 
--program-suffix=-5 
--enable-shared 
--enable-linker-build-id 
--libexecdir=/usr/lib 
--without-included-gettext 
--enable-threads=posix 
--libdir=/usr/lib 
--enable-nls 
--with-sysroot=/ 
--enable-clocale=gnu 
--enable-libstdcxx-debug 
--enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new 
--enable-gnu-unique-object 
--disable-vtable-verify
--enable-libmpx
--enable-plugin
--with-system-zlib
--disable-browser-plugin
--enable-java-awt=gtk
--enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-amd64/jre 
--enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-amd64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-amd64 
--with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-objc-gc 
--enable-multiarch 
--disable-werror 
--with-arch-32=i686 
--with-abi=m64 
--with-multilib-list=m32,m64,mx32 
--enable-multilib 
--with-tune=generic 
--enable-checking=release 
--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu 
--target=x86_64-linux-gnu
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)

---CLUSTER OMPI_INFO -C OUTPUT---
Using built-in specs. 
Target: x86_64-redhat-linux 
Configured with: ./configure 

--prefix=/public/apps/gcc/4.4.7 
--enable-shared 
--enable-threads=posix 
--enable-checking=release 
--with-system-zlib 
--enable-__cxa_atexit 
--disable-libunwind-exceptions 
--enable-gnu-unique-object 
--disable-dssi 
--with-arch_32=i686 
--build=x86_64-redhat-linux build_alias=x86_64-redhat-linux 
--enable-languages=c,c++,fortran,objc,obj-c++ 
Thread model: posix 
gcc version 4.4.7 (GCC)


On Feb 2, 2017, at 5:28 AM, Gilles Gouaillardet wrote:

I cannot remember what the default binding (if any) is on Open MPI 1.6,
nor whether the default is the same with or without PBS.

You can simply run
mpirun --tag-output grep Cpus_allowed_list /proc/self/status
and see if you note any discrepancy between your systems.

You might also consider upgrading to the latest Open MPI 2.0.2, and see how 
things go.

Cheers,

Gilles

On Thursday, February 2, 2017, nietham...@hlrs.de wrote:
Hello Andy,

You can also use the --report-bindings option of mpirun to check which cores
your program will use and to which cores the processes are bound.


Are you using the same backend compiler on both systems?

Do you have performance tools available on the systems where you can see in
which part of the program the time is lost?  Common tools would be Score-P/
Vampir/CUBE, TAU, Extrae/Paraver.

Best
Christoph


Re: [OMPI users] Performance Issues on SMP Workstation

2017-02-06 Thread Elken, Tom
“c.) the workstation is hyper threaded and cluster is not”

You might turn off hyperthreading (HT) on the workstation, and re-run.
I’ve seen some OS’s on some systems get confused and assign multiple OS “cpus” 
to the same HW core/thread.

In any case, if you turn HT off, and top shows you that tasks are running on 
different ‘cpus’, you can be sure they are running on different cores, and less 
likely to interfere with each other.
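On Linux (the workstation here runs Ubuntu), a quick way to check the current state and to take HT siblings offline temporarily is something along these lines (a sketch only; the exact cpu numbers depend on the machine):

lscpu | grep -i 'thread'        # "Thread(s) per core: 2" means HT is on
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
echo 0 | sudo tee /sys/devices/system/cpu/cpu8/online   # offline one HT sibling, e.g. cpu8

Disabling HT in the BIOS is the cleaner permanent option.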

-Tom

From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Andy Witzig
Sent: Monday, February 06, 2017 8:25 AM
To: Open MPI Users 
Subject: Re: [OMPI users] Performance Issues on SMP Workstation

...

Re: [OMPI users] Is gridengine integration broken in openmpi 2.0.2?

2017-02-06 Thread Mark Dixon

On Mon, 6 Feb 2017, Mark Dixon wrote:
...

Ah-ha! "-mca plm_rsh_agent foo" fixes it!

Thanks very much - presumably I can stick that in the system-wide 
openmpi-mca-params.conf for now.

...

Except if I do that, it means running ompi outside of the SGE environment 
no longer works :(


Should I just revert the following commit?

Cheers,

Mark

commit d51c2af76b0c011177aca8e08a5a5fcf9f5e67db
Author: Jeff Squyres 
Date:   Tue Aug 16 06:58:20 2016 -0500

rsh: robustify the check for plm_rsh_agent default value

Don't strcmp against the default value -- the default value may change
over time.  Instead, check to see if the MCA var source is not
DEFAULT.

Signed-off-by: Jeff Squyres 

(cherry picked from commit 
open-mpi/ompi@71ec5cfb436977ea9ad409ba634d27e6addf6fae)
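(For reference, backing that commit out locally in a git checkout of the source would be something along the lines of "git revert d51c2af76b0c011177aca8e08a5a5fcf9f5e67db" followed by a rebuild, assuming a git-based build; whether that is the right long-term fix is the question above.)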



Re: [OMPI users] Performance Issues on SMP Workstation

2017-02-06 Thread Andy Witzig
Thanks, Tom.  I did try using the mpirun --bind-to-core option and confirmed 
that individual MPI processes were placed on unique cores (also without other 
interfering MPI runs); however, it did not make a significant difference.  That 
said, I do agree that turning off hyper-threading is an important test to rule 
out any fundamental differences that may be at play.  I’ll turn off 
hyper-threading and let you know what I find.
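(Presumably something along the lines of

mpirun -np 8 --bind-to-core --report-bindings ./my_app

with ./my_app standing in for the actual solver and the process count adjusted to the machine; both options exist in the 1.6 series.)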

Best regards,
Andy

On Feb 6, 2017, at 10:44 AM, Elken, Tom  wrote:

“c.) the workstation is hyper threaded and cluster is not”
 
You might turn off hyperthreading (HT) on the workstation, and re-run.
I’ve seen some OS’s on some systems get confused and assign multiple OS “cpus” 
to the same HW core/thread.
 
In any case, if you turn HT off, and top shows you that tasks are running on 
different ‘cpus’, you can be sure they are running on different cores, and less 
likely to interfere with each other.
 
-Tom
 
...

[OMPI users] openmpi single node jobs using btl openib

2017-02-06 Thread Jingchao Zhang
Hi,


We recently noticed openmpi is using btl openib over self,sm for single-node 
jobs, which has caused performance degradation for some applications, e.g. 
'cp2k'. For openmpi version 2.0.1, our test shows a single-node 'cp2k' job using 
openib is ~25% slower than using self,sm. We advise users to add '--mca 
btl_base_exclude openib' as a temporary fix. I need to point out that not all 
applications are affected by this: many of them have the same single-node 
performance with or without openib. Why doesn't openmpi use self,sm by default 
for single-node jobs? Is this the intended behavior?
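Concretely, the temporary workaround can be applied per run or system-wide; a sketch, with ./your_app standing in for the affected application:

mpirun --mca btl_base_exclude openib -np 16 ./your_app

or a line "btl_base_exclude = openib" in the system-wide openmpi-mca-params.conf.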


Thanks,

Jingchao

Re: [OMPI users] OpenMPI not running any job on Mac OS X 10.12

2017-02-06 Thread Michel Lesoinne
Hi Howard!
Good news: this finally makes things work.
Thanks.
I think, however, that it should not be necessary; but if I have to do that
every time, I guess I will 👍

On Mon, Feb 6, 2017 at 8:09 AM, Howard Pritchard 
wrote:

> Hi Michel,
>
> Could you try running the app with
>
> export TMPDIR=/tmp
>
> set in the shell you are using?
>
> Howard
>
>
> ...