Hi Gilles and Ralph,

I was able to sort out my mess. In my last email I compared the
files from "SunOS_sparc/openmpi-2.0.0_64_gcc/lib64/openmpi" in
the attachment of my email to Ralph with the files from
"SunOS_sparc/openmpi-2.0.0_64_cc/lib64/openmpi" on my current
file system. That's why I saw different timestamps. The other
problem was that Ralph didn't notice that "mca_pmix_pmix112.so"
wasn't built on Solaris with the Sun C compiler. I've removed
most of the files from the attachment of my email so that it is
easier to see the relevant ones. Below I try to give you more
information that may help to track down the problem. I still get
an error running one of my small test programs when I use my gcc
version of Open MPI. "mca_pmix_pmix112.so" is a 64-bit library.

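(Just for reference, this is roughly how I check that; the path is
the libdir of my gcc installation from the listings below, and on
Sparc "file" should report something like "ELF 64-bit MSB ... SPARCV9".)

# check whether the PMIx DSO was really built as a 64-bit object
file /usr/local/openmpi-2.0.0_64_gcc/lib64/openmpi/mca_pmix_pmix112.so
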
Linux_x86_64/openmpi-2.0.0_64_cc/lib64/openmpi:
...
-rwxr-xr-x 1 root root  261327 Apr 19 16:46 mca_plm_slurm.so
-rwxr-xr-x 1 root root    1002 Apr 19 16:45 mca_pmix_pmix112.la
-rwxr-xr-x 1 root root 3906526 Apr 19 16:45 mca_pmix_pmix112.so
-rwxr-xr-x 1 root root     966 Apr 19 16:51 mca_pml_cm.la
-rwxr-xr-x 1 root root 1574265 Apr 19 16:51 mca_pml_cm.so
...

Linux_x86_64/openmpi-2.0.0_64_gcc/lib64/openmpi:
...
-rwxr-xr-x 1 root root   70371 Apr 19 16:43 mca_plm_slurm.so
-rwxr-xr-x 1 root root    1008 Apr 19 16:42 mca_pmix_pmix112.la
-rwxr-xr-x 1 root root 1029005 Apr 19 16:42 mca_pmix_pmix112.so
-rwxr-xr-x 1 root root     972 Apr 19 16:46 mca_pml_cm.la
-rwxr-xr-x 1 root root  284858 Apr 19 16:46 mca_pml_cm.so
...

SunOS_sparc/openmpi-2.0.0_64_cc/lib64/openmpi:
...
-rwxr-xr-x 1 root root  319816 Apr 19 19:58 mca_plm_rsh.so
-rwxr-xr-x 1 root root     970 Apr 19 20:00 mca_pml_cm.la
-rwxr-xr-x 1 root root 1507440 Apr 19 20:00 mca_pml_cm.so
...

SunOS_sparc/openmpi-2.0.0_64_gcc/lib64/openmpi:
...
-rwxr-xr-x 1 root root  153280 Apr 19 19:49 mca_plm_rsh.so
-rwxr-xr-x 1 root root    1007 Apr 19 19:47 mca_pmix_pmix112.la
-rwxr-xr-x 1 root root 1400512 Apr 19 19:47 mca_pmix_pmix112.so
-rwxr-xr-x 1 root root     971 Apr 19 19:52 mca_pml_cm.la
-rwxr-xr-x 1 root root  342440 Apr 19 19:52 mca_pml_cm.so
...

SunOS_x86_64/openmpi-2.0.0_64_cc/lib64/openmpi:
...
-rwxr-xr-x 1 root root  300096 Apr 19 17:18 mca_plm_rsh.so
-rwxr-xr-x 1 root root     970 Apr 19 17:23 mca_pml_cm.la
-rwxr-xr-x 1 root root 1458816 Apr 19 17:23 mca_pml_cm.so
...

SunOS_x86_64/openmpi-2.0.0_64_gcc/lib64/openmpi:
...
-rwxr-xr-x 1 root root  133096 Apr 19 17:42 mca_plm_rsh.so
-rwxr-xr-x 1 root root    1007 Apr 19 17:41 mca_pmix_pmix112.la
-rwxr-xr-x 1 root root 1320240 Apr 19 17:41 mca_pmix_pmix112.so
-rwxr-xr-x 1 root root     971 Apr 19 17:46 mca_pml_cm.la
-rwxr-xr-x 1 root root  419848 Apr 19 17:46 mca_pml_cm.so
...


Yesterday I installed openmpi-v2.x-dev-1290-gbd0e4e1 so that we
have a current version for investigating the problem. Once again
mca_pmix_pmix112.so isn't available on Solaris if I use the
Sun C compiler.
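
(A quick way to see whether the component is picked up at all,
without looking at the file system, is to run ompi_info from the
respective installation; I would expect no pmix112 line for the
Sun C build. The paths are the prefixes from my configure commands
quoted below.)

/usr/local/openmpi-2.0.0_64_cc/bin/ompi_info  | grep -i pmix
/usr/local/openmpi-2.0.0_64_gcc/bin/ompi_info | grep -i pmix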

"config.log" for gcc-5.1.0 shows the following.

...
configure:127799: /bin/bash '../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/
pmix/configure' succeeded for opal/mca/pmix/pmix112/pmix
configure:127916: checking if MCA component pmix:pmix112 can compile
configure:127918: result: yes
configure:5637: --- MCA component pmix:external (m4 configuration macro)
configure:128523: checking for MCA component pmix:external compile mode
configure:128529: result: dso
configure:129054: checking if MCA component pmix:external can compile
configure:129056: result: no
...
config.status:3897: creating opal/mca/pmix/Makefile
config.status:3897: creating opal/mca/pmix/s1/Makefile
config.status:3897: creating opal/mca/pmix/cray/Makefile
config.status:3897: creating opal/mca/pmix/s2/Makefile
config.status:3897: creating opal/mca/pmix/pmix112/Makefile
config.status:3897: creating opal/mca/pmix/external/Makefile
...
MCA_BUILD_opal_pmix_cray_DSO_FALSE='#'
MCA_BUILD_opal_pmix_cray_DSO_TRUE=''
MCA_BUILD_opal_pmix_external_DSO_FALSE='#'
MCA_BUILD_opal_pmix_external_DSO_TRUE=''
MCA_BUILD_opal_pmix_pmix112_DSO_FALSE='#'
MCA_BUILD_opal_pmix_pmix112_DSO_TRUE=''
MCA_BUILD_opal_pmix_s1_DSO_FALSE='#'
MCA_BUILD_opal_pmix_s1_DSO_TRUE=''
MCA_BUILD_opal_pmix_s2_DSO_FALSE='#'
MCA_BUILD_opal_pmix_s2_DSO_TRUE=''
...
MCA_opal_FRAMEWORKS='common allocator backtrace btl dl event hwloc if installdirs memchecker memcpy memory mpool pmix pstat rcache sec shmem timer' MCA_opal_FRAMEWORKS_SUBDIRS='mca/common mca/allocator mca/backtrace mca/btl mca/dl mca/event mca/hwloc mca/if mca/installdirs mca/memchecker mca/memcpy mca/memory mca/mpool mca/pmix mca/pstat mca/rcache mca/sec mca/shmem mca/timer' MCA_opal_FRAMEWORK_COMPONENT_ALL_SUBDIRS='$(MCA_opal_common_ALL_SUBDIRS) $(MCA_opal_allocator_ALL_SUBDIRS) $(MCA_opal_backtrace_ALL_SUBDIRS) $(MCA_opal_btl_ALL_SUBDIRS) $(MCA_opal_dl_ALL_SUBDIRS) $(MCA_opal_event_ALL_SUBDIRS) $(MCA_opal_hwloc_ALL_SUBDIRS) $(MCA_opal_if_ALL_SUBDIRS) $(MCA_opal_installdirs_ALL_SUBDIRS) $(MCA_opal_memchecker_ALL_SUBDIRS) $(MCA_opal_memcpy_ALL_SUBDIRS) $(MCA_opal_memory_ALL_SUBDIRS) $(MCA_opal_mpool_ALL_SUBDIRS) $(MCA_opal_pmix_ALL_SUBDIRS) $(MCA_opal_pstat_ALL_SUBDIRS) $(MCA_opal_rcache_ALL_SUBDIRS) $(MCA_opal_sec_ALL_SUBDIRS) $(MCA_opal_shmem_ALL_SUBDIRS) $(MCA_opal_timer_ALL_SUBDIRS)' MCA_opal_FRAMEWORK_COMPONENT_DSO_SUBDIRS='$(MCA_opal_common_DSO_SUBDIRS) $(MCA_opal_allocator_DSO_SUBDIRS) $(MCA_opal_backtrace_DSO_SUBDIRS) $(MCA_opal_btl_DSO_SUBDIRS) $(MCA_opal_dl_DSO_SUBDIRS) $(MCA_opal_event_DSO_SUBDIRS) $(MCA_opal_hwloc_DSO_SUBDIRS) $(MCA_opal_if_DSO_SUBDIRS) $(MCA_opal_installdirs_DSO_SUBDIRS) $(MCA_opal_memchecker_DSO_SUBDIRS) $(MCA_opal_memcpy_DSO_SUBDIRS) $(MCA_opal_memory_DSO_SUBDIRS) $(MCA_opal_mpool_DSO_SUBDIRS) $(MCA_opal_pmix_DSO_SUBDIRS) $(MCA_opal_pstat_DSO_SUBDIRS) $(MCA_opal_rcache_DSO_SUBDIRS) $(MCA_opal_sec_DSO_SUBDIRS) $(MCA_opal_shmem_DSO_SUBDIRS) $(MCA_opal_timer_DSO_SUBDIRS)' MCA_opal_FRAMEWORK_COMPONENT_STATIC_SUBDIRS='$(MCA_opal_common_STATIC_SUBDIRS) $(MCA_opal_allocator_STATIC_SUBDIRS) $(MCA_opal_backtrace_STATIC_SUBDIRS) $(MCA_opal_btl_STATIC_SUBDIRS) $(MCA_opal_dl_STATIC_SUBDIRS) $(MCA_opal_event_STATIC_SUBDIRS) $(MCA_opal_hwloc_STATIC_SUBDIRS) $(MCA_opal_if_STATIC_SUBDIRS) $(MCA_opal_installdirs_STATIC_SUBDIRS) $(MCA_opal_memchecker_STATIC_SUBDIRS) $(MCA_opal_memcpy_STATIC_SUBDIRS) $(MCA_opal_memory_STATIC_SUBDIRS) $(MCA_opal_mpool_STATIC_SUBDIRS) $(MCA_opal_pmix_STATIC_SUBDIRS) $(MCA_opal_pstat_STATIC_SUBDIRS) $(MCA_opal_rcache_STATIC_SUBDIRS) $(MCA_opal_sec_STATIC_SUBDIRS) $(MCA_opal_shmem_STATIC_SUBDIRS) $(MCA_opal_timer_STATIC_SUBDIRS)' MCA_opal_FRAMEWORK_LIBS=' $(MCA_opal_common_STATIC_LTLIBS) mca/allocator/libmca_allocator.la $(MCA_opal_allocator_STATIC_LTLIBS) mca/backtrace/libmca_backtrace.la $(MCA_opal_backtrace_STATIC_LTLIBS) mca/btl/libmca_btl.la $(MCA_opal_btl_STATIC_LTLIBS) mca/dl/libmca_dl.la $(MCA_opal_dl_STATIC_LTLIBS) mca/event/libmca_event.la $(MCA_opal_event_STATIC_LTLIBS) mca/hwloc/libmca_hwloc.la $(MCA_opal_hwloc_STATIC_LTLIBS) mca/if/libmca_if.la $(MCA_opal_if_STATIC_LTLIBS) mca/installdirs/libmca_installdirs.la $(MCA_opal_installdirs_STATIC_LTLIBS) mca/memchecker/libmca_memchecker.la $(MCA_opal_memchecker_STATIC_LTLIBS) mca/memcpy/libmca_memcpy.la $(MCA_opal_memcpy_STATIC_LTLIBS) mca/memory/libmca_memory.la $(MCA_opal_memory_STATIC_LTLIBS) mca/mpool/libmca_mpool.la $(MCA_opal_mpool_STATIC_LTLIBS) mca/pmix/libmca_pmix.la $(MCA_opal_pmix_STATIC_LTLIBS) mca/pstat/libmca_pstat.la $(MCA_opal_pstat_STATIC_LTLIBS) mca/rcache/libmca_rcache.la $(MCA_opal_rcache_STATIC_LTLIBS) mca/sec/libmca_sec.la $(MCA_opal_sec_STATIC_LTLIBS) mca/shmem/libmca_shmem.la $(MCA_opal_shmem_STATIC_LTLIBS) mca/timer/libmca_timer.la $(MCA_opal_timer_STATIC_LTLIBS)'
...
MCA_opal_pmix_ALL_COMPONENTS=' s1 cray s2 pmix112 external'
MCA_opal_pmix_ALL_SUBDIRS=' mca/pmix/s1 mca/pmix/cray mca/pmix/s2 mca/pmix/pmix112 mca/pmix/external'
MCA_opal_pmix_DSO_COMPONENTS=' pmix112'
MCA_opal_pmix_DSO_SUBDIRS=' mca/pmix/pmix112'
MCA_opal_pmix_STATIC_COMPONENTS=''
MCA_opal_pmix_STATIC_LTLIBS=''
MCA_opal_pmix_STATIC_SUBDIRS=''
...
opal_pmix_ext_CPPFLAGS=''
opal_pmix_ext_LDFLAGS=''
opal_pmix_ext_LIBS=''
opal_pmix_pmix112_CPPFLAGS='-I$(OPAL_TOP_BUILDDIR)/opal/mca/pmix/pmix112/pmix/include/pmix -I$(OPAL_TOP_BUILDDIR)/opal/mca/pmix/pmix112/pmix/include -I$(OPAL_TOP_BUILDDIR)/opal/mca/pmix/pmix112/pmix -I$(OPAL_TOP_SRCDIR)/opal/mca/pmix/pmix112/pmix'
opal_pmix_pmix112_LIBS='$(OPAL_TOP_BUILDDIR)/opal/mca/pmix/pmix112/pmix/libpmix.la'
...



"config.log" for Sun C 5.13 shows the following.

...
configure:127803: /bin/bash '../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/
pmix/configure' *failed* for opal/mca/pmix/pmix112/pmix
configure:128379: checking if MCA component pmix:pmix112 can compile
configure:128381: result: no
configure:5637: --- MCA component pmix:external (m4 configuration macro)
configure:128523: checking for MCA component pmix:external compile mode
configure:128529: result: dso
configure:129054: checking if MCA component pmix:external can compile
configure:129056: result: no
...
config.status:3887: creating opal/mca/pmix/Makefile
config.status:3887: creating opal/mca/pmix/s1/Makefile
config.status:3887: creating opal/mca/pmix/cray/Makefile
config.status:3887: creating opal/mca/pmix/s2/Makefile
config.status:3887: creating opal/mca/pmix/pmix112/Makefile
config.status:3887: creating opal/mca/pmix/external/Makefile
...
MCA_BUILD_opal_pmix_cray_DSO_FALSE='#'
MCA_BUILD_opal_pmix_cray_DSO_TRUE=''
MCA_BUILD_opal_pmix_external_DSO_FALSE='#'
MCA_BUILD_opal_pmix_external_DSO_TRUE=''
MCA_BUILD_opal_pmix_pmix112_DSO_FALSE='#'
MCA_BUILD_opal_pmix_pmix112_DSO_TRUE=''
MCA_BUILD_opal_pmix_s1_DSO_FALSE='#'
MCA_BUILD_opal_pmix_s1_DSO_TRUE=''
MCA_BUILD_opal_pmix_s2_DSO_FALSE='#'
MCA_BUILD_opal_pmix_s2_DSO_TRUE=''
...
MCA_opal_FRAMEWORKS='common allocator backtrace btl dl event hwloc if installdirs memchecker memcpy memory mpool pmix pstat rcache sec shmem timer' MCA_opal_FRAMEWORKS_SUBDIRS='mca/common mca/allocator mca/backtrace mca/btl mca/dl mca/event mca/hwloc mca/if mca/installdirs mca/memchecker mca/memcpy mca/memory mca/mpool mca/pmix mca/pstat mca/rcache mca/sec mca/shmem mca/timer' MCA_opal_FRAMEWORK_COMPONENT_ALL_SUBDIRS='$(MCA_opal_common_ALL_SUBDIRS) $(MCA_opal_allocator_ALL_SUBDIRS) $(MCA_opal_backtrace_ALL_SUBDIRS) $(MCA_opal_btl_ALL_SUBDIRS) $(MCA_opal_dl_ALL_SUBDIRS) $(MCA_opal_event_ALL_SUBDIRS) $(MCA_opal_hwloc_ALL_SUBDIRS) $(MCA_opal_if_ALL_SUBDIRS) $(MCA_opal_installdirs_ALL_SUBDIRS) $(MCA_opal_memchecker_ALL_SUBDIRS) $(MCA_opal_memcpy_ALL_SUBDIRS) $(MCA_opal_memory_ALL_SUBDIRS) $(MCA_opal_mpool_ALL_SUBDIRS) $(MCA_opal_pmix_ALL_SUBDIRS) $(MCA_opal_pstat_ALL_SUBDIRS) $(MCA_opal_rcache_ALL_SUBDIRS) $(MCA_opal_sec_ALL_SUBDIRS) $(MCA_opal_shmem_ALL_SUBDIRS) $(MCA_opal_timer_ALL_SUBDIRS)' MCA_opal_FRAMEWORK_COMPONENT_DSO_SUBDIRS='$(MCA_opal_common_DSO_SUBDIRS) $(MCA_opal_allocator_DSO_SUBDIRS) $(MCA_opal_backtrace_DSO_SUBDIRS) $(MCA_opal_btl_DSO_SUBDIRS) $(MCA_opal_dl_DSO_SUBDIRS) $(MCA_opal_event_DSO_SUBDIRS) $(MCA_opal_hwloc_DSO_SUBDIRS) $(MCA_opal_if_DSO_SUBDIRS) $(MCA_opal_installdirs_DSO_SUBDIRS) $(MCA_opal_memchecker_DSO_SUBDIRS) $(MCA_opal_memcpy_DSO_SUBDIRS) $(MCA_opal_memory_DSO_SUBDIRS) $(MCA_opal_mpool_DSO_SUBDIRS) $(MCA_opal_pmix_DSO_SUBDIRS) $(MCA_opal_pstat_DSO_SUBDIRS) $(MCA_opal_rcache_DSO_SUBDIRS) $(MCA_opal_sec_DSO_SUBDIRS) $(MCA_opal_shmem_DSO_SUBDIRS) $(MCA_opal_timer_DSO_SUBDIRS)' MCA_opal_FRAMEWORK_COMPONENT_STATIC_SUBDIRS='$(MCA_opal_common_STATIC_SUBDIRS) $(MCA_opal_allocator_STATIC_SUBDIRS) $(MCA_opal_backtrace_STATIC_SUBDIRS) $(MCA_opal_btl_STATIC_SUBDIRS) $(MCA_opal_dl_STATIC_SUBDIRS) $(MCA_opal_event_STATIC_SUBDIRS) $(MCA_opal_hwloc_STATIC_SUBDIRS) $(MCA_opal_if_STATIC_SUBDIRS) $(MCA_opal_installdirs_STATIC_SUBDIRS) $(MCA_opal_memchecker_STATIC_SUBDIRS) $(MCA_opal_memcpy_STATIC_SUBDIRS) $(MCA_opal_memory_STATIC_SUBDIRS) $(MCA_opal_mpool_STATIC_SUBDIRS) $(MCA_opal_pmix_STATIC_SUBDIRS) $(MCA_opal_pstat_STATIC_SUBDIRS) $(MCA_opal_rcache_STATIC_SUBDIRS) $(MCA_opal_sec_STATIC_SUBDIRS) $(MCA_opal_shmem_STATIC_SUBDIRS) $(MCA_opal_timer_STATIC_SUBDIRS)' MCA_opal_FRAMEWORK_LIBS=' $(MCA_opal_common_STATIC_LTLIBS) mca/allocator/libmca_allocator.la $(MCA_opal_allocator_STATIC_LTLIBS) mca/backtrace/libmca_backtrace.la $(MCA_opal_backtrace_STATIC_LTLIBS) mca/btl/libmca_btl.la $(MCA_opal_btl_STATIC_LTLIBS) mca/dl/libmca_dl.la $(MCA_opal_dl_STATIC_LTLIBS) mca/event/libmca_event.la $(MCA_opal_event_STATIC_LTLIBS) mca/hwloc/libmca_hwloc.la $(MCA_opal_hwloc_STATIC_LTLIBS) mca/if/libmca_if.la $(MCA_opal_if_STATIC_LTLIBS) mca/installdirs/libmca_installdirs.la $(MCA_opal_installdirs_STATIC_LTLIBS) mca/memchecker/libmca_memchecker.la $(MCA_opal_memchecker_STATIC_LTLIBS) mca/memcpy/libmca_memcpy.la $(MCA_opal_memcpy_STATIC_LTLIBS) mca/memory/libmca_memory.la $(MCA_opal_memory_STATIC_LTLIBS) mca/mpool/libmca_mpool.la $(MCA_opal_mpool_STATIC_LTLIBS) mca/pmix/libmca_pmix.la $(MCA_opal_pmix_STATIC_LTLIBS) mca/pstat/libmca_pstat.la $(MCA_opal_pstat_STATIC_LTLIBS) mca/rcache/libmca_rcache.la $(MCA_opal_rcache_STATIC_LTLIBS) mca/sec/libmca_sec.la $(MCA_opal_sec_STATIC_LTLIBS) mca/shmem/libmca_shmem.la $(MCA_opal_shmem_STATIC_LTLIBS) mca/timer/libmca_timer.la $(MCA_opal_timer_STATIC_LTLIBS)'
...
MCA_opal_pmix_ALL_COMPONENTS=' s1 cray s2 pmix112 external'
MCA_opal_pmix_ALL_SUBDIRS=' mca/pmix/s1 mca/pmix/cray mca/pmix/s2 mca/pmix/pmix112 mca/pmix/external'
MCA_opal_pmix_DSO_COMPONENTS=''
MCA_opal_pmix_DSO_SUBDIRS=''
MCA_opal_pmix_STATIC_COMPONENTS=''
MCA_opal_pmix_STATIC_LTLIBS=''
MCA_opal_pmix_STATIC_SUBDIRS=''
...
opal_pmix_ext_CPPFLAGS=''
opal_pmix_ext_LDFLAGS=''
opal_pmix_ext_LIBS=''
opal_pmix_pmix112_CPPFLAGS=''
opal_pmix_pmix112_LIBS=''
...
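
(To compare the two builds quickly, one can also diff just the
pmix-related variables from the two top-level config.log files;
something like this in my build directories:)

grep "MCA_opal_pmix" openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_gcc/config.log > pmix_vars_gcc.txt
grep "MCA_opal_pmix" openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_cc/config.log > pmix_vars_cc.txt
diff pmix_vars_gcc.txt pmix_vars_cc.txt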




I've attached the config.log files for pmix.

tyr openmpi-2.0.0 142 tar zvft pmix_config.log.tar.gz
-rw-r--r-- root/root 136291 2016-04-25 08:05:34 openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_cc/opal/mca/pmix/pmix112/pmix/config.log
-rw-r--r-- root/root 528808 2016-04-25 08:07:54 openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_gcc/opal/mca/pmix/pmix112/pmix/config.log
tyr openmpi-2.0.0 143
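
(To find out why the embedded pmix configure fails with Sun C, the
first failing test in the attached cc config.log should show it;
something like the following is what I would look at:)

grep -n "error" openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_cc/opal/mca/pmix/pmix112/pmix/config.log | head
tail -40 openmpi-v2.x-dev-1290-gbd0e4e1-SunOS.sparc.64_cc/opal/mca/pmix/pmix112/pmix/config.log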



I've also attached the output from the broken execution of
"spawn_multiple_master" with my gcc version of Open MPI.
"spawn_master" works as expected with that version.

Hopefully you can fix the problem.


Kind regards and thank you very much for your help

Siegmar



On 23.04.2016 at 21:34, Siegmar Gross wrote:
Hi Gilles,

I don't know what happened, but the files are not available now,
although they were definitely available when I answered the email
from Ralph. The remaining files also have different timestamps now.
This is an extract from my email to Ralph for Solaris Sparc.

-rwxr-xr-x 1 root root     977 Apr 19 19:49 mca_plm_rsh.la
-rwxr-xr-x 1 root root  153280 Apr 19 19:49 mca_plm_rsh.so
-rwxr-xr-x 1 root root    1007 Apr 19 19:47 mca_pmix_pmix112.la
-rwxr-xr-x 1 root root 1400512 Apr 19 19:47 mca_pmix_pmix112.so
-rwxr-xr-x 1 root root     971 Apr 19 19:52 mca_pml_cm.la
-rwxr-xr-x 1 root root  342440 Apr 19 19:52 mca_pml_cm.so

Now I have the following output for these files.

-rwxr-xr-x 1 root root     976 Apr 19 19:58 mca_plm_rsh.la
-rwxr-xr-x 1 root root  319816 Apr 19 19:58 mca_plm_rsh.so
-rwxr-xr-x 1 root root     970 Apr 19 20:00 mca_pml_cm.la
-rwxr-xr-x 1 root root 1507440 Apr 19 20:00 mca_pml_cm.so

I'll try to find out what happened next week when I'm back in
my office.


Kind regards

Siegmar





On 23.04.16 at 02:12, Gilles Gouaillardet wrote:
Siegmar,

I will try to reproduce this on my solaris11 x86_64 vm.

In the meantime, can you please double-check that mca_pmix_pmix112.so
is a 64-bit library?
(E.g., confirm "-m64" was correctly passed to pmix.)

Cheers,

Gilles

On Friday, April 22, 2016, Siegmar Gross
<siegmar.gr...@informatik.hs-fulda.de> wrote:

    Hi Ralph,

    I've already used "-enable-debug". "SYSTEM_ENV" is "SunOS" or
    "Linux" and "MACHINE_ENV" is "sparc" or "x86_64".

    mkdir openmpi-v2.x-dev-1280-gc110ae8-${SYSTEM_ENV}.${MACHINE_ENV}.64_gcc
    cd openmpi-v2.x-dev-1280-gc110ae8-${SYSTEM_ENV}.${MACHINE_ENV}.64_gcc

    ../openmpi-v2.x-dev-1280-gc110ae8/configure \
      --prefix=/usr/local/openmpi-2.0.0_64_gcc \
      --libdir=/usr/local/openmpi-2.0.0_64_gcc/lib64 \
      --with-jdk-bindir=/usr/local/jdk1.8.0/bin \
      --with-jdk-headers=/usr/local/jdk1.8.0/include \
      JAVA_HOME=/usr/local/jdk1.8.0 \
      LDFLAGS="-m64" CC="gcc" CXX="g++" FC="gfortran" \
      CFLAGS="-m64" CXXFLAGS="-m64" FCFLAGS="-m64" \
      CPP="cpp" CXXCPP="cpp" \
      --enable-mpi-cxx \
      --enable-cxx-exceptions \
      --enable-mpi-java \
      --enable-heterogeneous \
      --enable-mpi-thread-multiple \
      --with-hwloc=internal \
      --without-verbs \
      --with-wrapper-cflags="-std=c11 -m64" \
      --with-wrapper-cxxflags="-m64" \
      --with-wrapper-fcflags="-m64" \
      --enable-debug \
      |& tee log.configure.$SYSTEM_ENV.$MACHINE_ENV.64_gcc


    mkdir openmpi-v2.x-dev-1280-gc110ae8-${SYSTEM_ENV}.${MACHINE_ENV}.64_cc
    cd openmpi-v2.x-dev-1280-gc110ae8-${SYSTEM_ENV}.${MACHINE_ENV}.64_cc

    ../openmpi-v2.x-dev-1280-gc110ae8/configure \
      --prefix=/usr/local/openmpi-2.0.0_64_cc \
      --libdir=/usr/local/openmpi-2.0.0_64_cc/lib64 \
      --with-jdk-bindir=/usr/local/jdk1.8.0/bin \
      --with-jdk-headers=/usr/local/jdk1.8.0/include \
      JAVA_HOME=/usr/local/jdk1.8.0 \
      LDFLAGS="-m64" CC="cc" CXX="CC" FC="f95" \
      CFLAGS="-m64" CXXFLAGS="-m64 -library=stlport4" FCFLAGS="-m64" \
      CPP="cpp" CXXCPP="cpp" \
      --enable-mpi-cxx \
      --enable-cxx-exceptions \
      --enable-mpi-java \
      --enable-heterogeneous \
      --enable-mpi-thread-multiple \
      --with-hwloc=internal \
      --without-verbs \
      --with-wrapper-cflags="-m64" \
      --with-wrapper-cxxflags="-m64 -library=stlport4" \
      --with-wrapper-fcflags="-m64" \
      --with-wrapper-ldflags="" \
      --enable-debug \
      |& tee log.configure.$SYSTEM_ENV.$MACHINE_ENV.64_cc


    Kind regards

    Siegmar

    On 21.04.2016 at 18:18, Ralph Castain wrote:

        Can you please rebuild OMPI with -enable-debug in the configure
        cmd? It will let us see more error output


            On Apr 21, 2016, at 8:52 AM, Siegmar Gross
            <siegmar.gr...@informatik.hs-fulda.de> wrote:

            Hi Ralph,

            I don't see any additional information.

            tyr hello_1 108 mpiexec -np 4 --host
            tyr,sunpc1,linpc1,ruester -mca
            mca_base_component_show_load_errors 1 hello_1_mpi
            [tyr.informatik.hs-fulda.de:06211] [[48741,0],0]
            ORTE_ERROR_LOG: Not found in file

../../../../../openmpi-v2.x-dev-1280-gc110ae8/orte/mca/ess/hnp/ess_hnp_module.c
            at line 638

--------------------------------------------------------------------------
            It looks like orte_init failed for some reason; your
            parallel process is
            likely to abort.  There are many reasons that a parallel
            process can
            fail during orte_init; some of which are due to configuration or
            environment problems.  This failure appears to be an
            internal failure;
            here's some additional information (which may only be
            relevant to an
            Open MPI developer):

             opal_pmix_base_select failed
             --> Returned value Not found (-13) instead of ORTE_SUCCESS

--------------------------------------------------------------------------


            tyr hello_1 109 mpiexec -np 4 --host
            tyr,sunpc1,linpc1,ruester -mca
            mca_base_component_show_load_errors 1 -mca pmix_base_verbose
            10 -mca pmix_server_verbose 5 hello_1_mpi
            [tyr.informatik.hs-fulda.de:06212] mca: base:
            components_register: registering framework pmix components
            [tyr.informatik.hs-fulda.de:06212] mca: base:
            components_open: opening pmix components
            [tyr.informatik.hs-fulda.de:06212] mca:base:select:
            Auto-selecting pmix components
            [tyr.informatik.hs-fulda.de:06212] mca:base:select:(
            pmix) No component selected!
            [tyr.informatik.hs-fulda.de:06212] [[48738,0],0]
            ORTE_ERROR_LOG: Not found in file

../../../../../openmpi-v2.x-dev-1280-gc110ae8/orte/mca/ess/hnp/ess_hnp_module.c
            at line 638

--------------------------------------------------------------------------
            It looks like orte_init failed for some reason; your
            parallel process is
            likely to abort.  There are many reasons that a parallel
            process can
            fail during orte_init; some of which are due to configuration or
            environment problems.  This failure appears to be an
            internal failure;
            here's some additional information (which may only be
            relevant to an
            Open MPI developer):

             opal_pmix_base_select failed
             --> Returned value Not found (-13) instead of ORTE_SUCCESS

--------------------------------------------------------------------------
            tyr hello_1 110


            Kind regards

            Siegmar


            On 21.04.2016 at 17:24, Ralph Castain wrote:

                Hmmm…it looks like you built the right components, but
                they are not being picked up. Can you run your mpiexec
                command again, adding “-mca
                mca_base_component_show_load_errors 1” to the cmd line?


                    On Apr 21, 2016, at 8:16 AM, Siegmar Gross
                    <siegmar.gr...@informatik.hs-fulda.de> wrote:

                    Hi Ralph,

                    I have attached ompi_info output for both compilers
                    from my
                    sparc machine and the listings for both compilers
                    from the
                    <prefix>/lib/openmpi directories. Hopefully that
                    helps to
                    find the problem.

                    hermes tmp 3 tar zvft openmpi-2.x_info.tar.gz
                    -rw-r--r-- root/root     10969 2016-04-21 17:06
                    ompi_info_SunOS_sparc_cc.txt
                    -rw-r--r-- root/root     11044 2016-04-21 17:06
                    ompi_info_SunOS_sparc_gcc.txt
                    -rw-r--r-- root/root     71252 2016-04-21 17:02
                    lib64_openmpi.txt
                    hermes tmp 4


                    Kind regards and thank you very much once more for
                    your help

                    Siegmar


                    On 21.04.2016 at 15:54, Ralph Castain wrote:

                        Odd - it would appear that none of the pmix
                        components built? Can you send
                        along the output from ompi_info? Or just send a
                        listing of the files in the
                        <prefix>/lib/openmpi directory?


                            On Apr 21, 2016, at 1:27 AM, Siegmar Gross
                            <siegmar.gr...@informatik.hs-fulda.de> wrote:

                            Hi Ralph,

                            On 21.04.2016 at 00:18, Ralph Castain wrote:

                                Could you please rerun these test and
                                add “-mca pmix_base_verbose 10
                                -mca pmix_server_verbose 5” to your cmd
                                line? I need to see why the
                                pmix components failed.



                            tyr spawn 111 mpiexec -np 1 --host
                            tyr,sunpc1,linpc1,ruester -mca
                            pmix_base_verbose 10 -mca
                            pmix_server_verbose 5 spawn_multiple_master
                            [tyr.informatik.hs-fulda.de:26652] mca:
                            base: components_register: registering
                            framework pmix components
                            [tyr.informatik.hs-fulda.de:26652] mca:
                            base: components_open: opening pmix components
                            [tyr.informatik.hs-fulda.de:26652]
                            mca:base:select: Auto-selecting pmix components
                            [tyr.informatik.hs-fulda.de:26652]
                            mca:base:select:( pmix) No component selected!
                            [tyr.informatik.hs-fulda.de:26652]
                            [[52794,0],0] ORTE_ERROR_LOG: Not found in file

../../../../../openmpi-v2.x-dev-1280-gc110ae8/orte/mca/ess/hnp/ess_hnp_module.c
                            at line 638

--------------------------------------------------------------------------
                            It looks like orte_init failed for some
                            reason; your parallel process is
                            likely to abort.  There are many reasons
                            that a parallel process can
                            fail during orte_init; some of which are due
                            to configuration or
                            environment problems.  This failure appears
                            to be an internal failure;
                            here's some additional information (which
                            may only be relevant to an
                            Open MPI developer):

                            opal_pmix_base_select failed
                            --> Returned value Not found (-13) instead
                            of ORTE_SUCCESS

--------------------------------------------------------------------------
                            tyr spawn 112




                            tyr hello_1 116 mpiexec -np 1 --host
                            tyr,sunpc1,linpc1,ruester -mca
                            pmix_base_verbose 10 -mca
                            pmix_server_verbose 5 hello_1_mpi
                            [tyr.informatik.hs-fulda.de:27261] mca:
                            base: components_register: registering
                            framework pmix components
                            [tyr.informatik.hs-fulda.de:27261] mca:
                            base: components_open: opening pmix components
                            [tyr.informatik.hs-fulda.de:27261]
                            mca:base:select: Auto-selecting pmix components
                            [tyr.informatik.hs-fulda.de:27261]
                            mca:base:select:( pmix) No component selected!
                            [tyr.informatik.hs-fulda.de:27261]
                            [[52315,0],0] ORTE_ERROR_LOG: Not found in file

../../../../../openmpi-v2.x-dev-1280-gc110ae8/orte/mca/ess/hnp/ess_hnp_module.c
                            at line 638

--------------------------------------------------------------------------
                            It looks like orte_init failed for some
                            reason; your parallel process is
                            likely to abort.  There are many reasons
                            that a parallel process can
                            fail during orte_init; some of which are due
                            to configuration or
                            environment problems.  This failure appears
                            to be an internal failure;
                            here's some additional information (which
                            may only be relevant to an
                            Open MPI developer):

                            opal_pmix_base_select failed
                            --> Returned value Not found (-13) instead
                            of ORTE_SUCCESS

--------------------------------------------------------------------------
                            tyr hello_1 117



                            Thank you very much for your help.


                            Kind regards

                            Siegmar




                                Thanks
                                Ralph

                                    On Apr 20, 2016, at 10:12 AM,
                                    Siegmar Gross
                                    <siegmar.gr...@informatik.hs-fulda.de> wrote:

                                    Hi,

                                    I have built
                                    openmpi-v2.x-dev-1280-gc110ae8 on my
                                    machines
                                    (Solaris 10 Sparc, Solaris 10
                                    x86_64, and openSUSE Linux
                                    12.1 x86_64) with gcc-5.1.0 and Sun
                                    C 5.13. Unfortunately I get
                                    runtime errors for some programs.


                                    Sun C 5.13:
                                    ===========

                                    For all my test programs I get the
                                    same error on Solaris Sparc and
                                    Solaris x86_64, while the programs
                                    work fine on Linux.

                                    tyr hello_1 115 mpiexec -np 2
                                    hello_1_mpi
                                    [tyr.informatik.hs-fulda.de:22373]
                                    [[61763,0],0] ORTE_ERROR_LOG: Not
                                    found in file

../../../../../openmpi-v2.x-dev-1280-gc110ae8/orte/mca/ess/hnp/ess_hnp_module.c
                                    at line 638

--------------------------------------------------------------------------
                                    It looks like orte_init failed for
                                    some reason; your parallel process is
                                    likely to abort.  There are many
                                    reasons that a parallel process can
                                    fail during orte_init; some of which
                                    are due to configuration or
                                    environment problems.  This failure
                                    appears to be an internal failure;
                                    here's some additional information
                                    (which may only be relevant to an
                                    Open MPI developer):

                                    opal_pmix_base_select failed
                                    --> Returned value Not found (-13)
                                    instead of ORTE_SUCCESS

--------------------------------------------------------------------------
                                    tyr hello_1 116




                                    GCC-5.1.0:
                                    ==========

                                    tyr spawn 121 mpiexec -np 1 --host
                                    tyr,sunpc1,linpc1,ruester
                                    spawn_multiple_master

                                    Parent process 0 running on
                                    tyr.informatik.hs-fulda.de
                                    I create 3 slave processes.

                                    [tyr.informatik.hs-fulda.de:25366]
                                    PMIX ERROR: UNPACK-PAST-END in file

../../../../../../openmpi-v2.x-dev-1280-gc110ae8/opal/mca/pmix/pmix112/pmix/src/server/pmix_server_ops.c

                                    at line 829
                                    [tyr.informatik.hs-fulda.de:25366]
                                    PMIX ERROR: UNPACK-PAST-END in file

../../../../../../openmpi-v2.x-dev-1280-gc110ae8/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c

                                    at line 2176
                                    [tyr:25377] *** An error occurred in
                                    MPI_Comm_spawn_multiple
                                    [tyr:25377] *** reported by process
                                    [3308257281,0]
                                    [tyr:25377] *** on communicator
                                    MPI_COMM_WORLD
                                    [tyr:25377] *** MPI_ERR_SPAWN: could
                                    not spawn processes
                                    [tyr:25377] *** MPI_ERRORS_ARE_FATAL
                                    (processes in this communicator will
                                    now abort,
                                    [tyr:25377] ***    and potentially
                                    your MPI job)
                                    tyr spawn 122


                                    I would be grateful if somebody can
                                    fix the problems. Thank you very
                                    much for any help in advance.


                                    Kind regards

                                    Siegmar

Attachment: pmix_config.log.tar.gz
Description: application/gzip

tyr spawn 133 mpiexec -np 1 --host tyr,sunpc1,linpc1,ruester 
spawn_multiple_master

Parent process 0 running on tyr.informatik.hs-fulda.de
  I create 3 slave processes.

[tyr.informatik.hs-fulda.de:21766] PMIX ERROR: UNPACK-PAST-END in file 
../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server_ops.c
 at line 829
[tyr.informatik.hs-fulda.de:21766] PMIX ERROR: UNPACK-PAST-END in file 
../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c
 at line 2176
[tyr:21777] *** An error occurred in MPI_Comm_spawn_multiple
[tyr:21777] *** reported by process [4078960641,0]
[tyr:21777] *** on communicator MPI_COMM_WORLD
[tyr:21777] *** MPI_ERR_SPAWN: could not spawn processes
[tyr:21777] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now 
abort,
[tyr:21777] ***    and potentially your MPI job)






tyr spawn 134 mpiexec -np 1 --host tyr,sunpc1,linpc1,ruester --mca 
pmix_base_verbose 5 --mca pmix_server_verbose 10 --mca 
mca_base_component_show_load_errors 1 spawn_multiple_master

[tyr.informatik.hs-fulda.de:21779] pmix:server init called
[tyr.informatik.hs-fulda.de:21779] sec: native init
[tyr.informatik.hs-fulda.de:21779] sec: SPC native active
[tyr.informatik.hs-fulda.de:21779] pmix:server constructed uri 
pmix-server:21779:/tmp/openmpi-sessions-1026@tyr_0/62261/0/0/pmix-21779
[tyr.informatik.hs-fulda.de:21779] PMIX server errreg_cbfunc - error handler 
registered status=0, reference=1
[sunpc1:09930] pmix:server init called
[sunpc1:09930] sec: native init
[sunpc1:09930] sec: SPC native active
[ruester.informatik.hs-fulda.de:26007] pmix:server init called
[sunpc1:09930] pmix:server constructed uri 
pmix-server:9930:/tmp/openmpi-sessions-1026@sunpc1_0/62261/0/1/pmix-9930
[sunpc1:09930] PMIX server errreg_cbfunc - error handler registered status=0, 
reference=1
[ruester.informatik.hs-fulda.de:26007] sec: native init
[ruester.informatik.hs-fulda.de:26007] sec: SPC native active
[ruester.informatik.hs-fulda.de:26007] pmix:server constructed uri 
pmix-server:26007:/tmp/openmpi-sessions-1026@ruester_0/62261/0/3/pmix-26007
[ruester.informatik.hs-fulda.de:26007] PMIX server errreg_cbfunc - error 
handler registered status=0, reference=1
[linpc1:13422] pmix:server init called
[linpc1:13422] sec: native init
[linpc1:13422] sec: SPC native active
[linpc1:13422] pmix:server constructed uri 
pmix-server:13422:/tmp/openmpi-sessions-1026@linpc1_0/62261/0/2/pmix-13422
[linpc1:13422] PMIX server errreg_cbfunc - error handler registered status=0, 
reference=1
[tyr.informatik.hs-fulda.de:21779] [[62261,0],0] register nspace for [62261,1]
[tyr.informatik.hs-fulda.de:21779] pmix:server register client 4080336897:0
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_client for nspace 
4080336897 rank 0
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.jobid
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.offset
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.nmap
[tyr.informatik.hs-fulda.de:21779] pmix:extract:nodes: checking list: tyr
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.pmap
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.nodeid
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.node.size
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.lpeers
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.lcpus
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.lldr
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.univ.size
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.job.size
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.local.size
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.max.size
[tyr.informatik.hs-fulda.de:21779] pmix:server _register_nspace recording 
pmix.pdata
[sunpc1:09930] [[62261,0],1] register nspace for [62261,1]
[linpc1:13422] [[62261,0],2] register nspace for [62261,1]
[linpc1:13422] pmix:server _register_nspace
[ruester.informatik.hs-fulda.de:26007] [[62261,0],3] register nspace for 
[62261,1]
[linpc1:13422] pmix:server _register_nspace recording pmix.jobid
[linpc1:13422] pmix:server _register_nspace recording pmix.offset
[linpc1:13422] pmix:server _register_nspace recording pmix.nmap
[linpc1:13422] pmix:extract:nodes: checking list: tyr
[linpc1:13422] pmix:server _register_nspace recording pmix.pmap
[linpc1:13422] pmix:server _register_nspace recording pmix.nodeid
[linpc1:13422] pmix:server _register_nspace recording pmix.univ.size
[linpc1:13422] pmix:server _register_nspace recording pmix.job.size
[linpc1:13422] pmix:server _register_nspace recording pmix.local.size
[linpc1:13422] pmix:server _register_nspace recording pmix.max.size
[linpc1:13422] pmix:server _register_nspace recording pmix.pdata
[tyr.informatik.hs-fulda.de:21779] pmix:server setup_fork for nspace 4080336897 
rank 0
[tyr.informatik.hs-fulda.de:21790] pmix: init called
[tyr.informatik.hs-fulda.de:21790] posting notification recv on tag 0
[tyr.informatik.hs-fulda.de:21790] sec: native init
[tyr.informatik.hs-fulda.de:21790] sec: SPC native active
[tyr.informatik.hs-fulda.de:21790] PMIx_client initialized
[tyr.informatik.hs-fulda.de:21790] PMIx_client init
[tyr.informatik.hs-fulda.de:21790] usock_peer_try_connect: attempting to 
connect to server
[tyr.informatik.hs-fulda.de:21790] usock_peer_try_connect: attempting to 
connect to server on socket 15
[tyr.informatik.hs-fulda.de:21790] pmix: SEND CONNECT ACK
[tyr.informatik.hs-fulda.de:21790] sec: native create_cred
[tyr.informatik.hs-fulda.de:21790] sec: using credential 1026:100
[tyr.informatik.hs-fulda.de:21790] pmix: RECV CONNECT ACK FROM SERVER
[tyr.informatik.hs-fulda.de:21779] RECV CONNECT ACK FROM PEER ON SOCKET 32
[tyr.informatik.hs-fulda.de:21779] connect-ack recvd from peer 4080336897:0
[tyr.informatik.hs-fulda.de:21779] sec: native validate_cred 1026:100
[tyr.informatik.hs-fulda.de:21779] sec: native credential valid
[tyr.informatik.hs-fulda.de:21779] client credential validated
[tyr.informatik.hs-fulda.de:21779] connect-ack from client completed
[tyr.informatik.hs-fulda.de:21779] pmix:server client 4080336897:0 has 
connected on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21790] pmix: RECV CONNECT CONFIRMATION
[tyr.informatik.hs-fulda.de:21790] sock_peer_try_connect: Connection across to 
server succeeded
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client.c:214]
 post send to server
[tyr.informatik.hs-fulda.de:21790] posting recv on tag 1
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 16
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 16 
BYTES FOR TAG 1 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 16 bytes for tag 1 on 
socket 32
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 1 for tag 4294967295
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 0 from 4080336897:0
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:2081]
 queue reply to 4080336897:0 on tag 1
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:147]
 queue callback called: reply to 4080336897:0 on tag 1
[tyr.informatik.hs-fulda.de:21779] sock:send_handler SENDING TO PEER 
4080336897:0 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21779] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler called with peer 
pmix-server:21779
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler read hdr on socket 15
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate data region of 
size 4121
[tyr.informatik.hs-fulda.de:21790] RECVD COMPLETE MESSAGE FROM SERVER OF 4121 
BYTES FOR TAG 1 ON PEER SOCKET 15
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21790] message received 4121 bytes for tag 1 on 
socket 15
[tyr.informatik.hs-fulda.de:21790] checking msg on tag 1 for tag 1
[tyr.informatik.hs-fulda.de:21790] pmix: PROCESSING BLOB FOR NSPACE 4080336897
[tyr.informatik.hs-fulda.de:21790] PMIX client errreg_cbfunc - error handler 
registered status=0, reference=1
[tyr.informatik.hs-fulda.de:21790] PMIx_client initialized
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:120]
 MODEX RECV VALUE FOR PROC [[62261,1],0] KEY pmix.lrank
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.lrank
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.lrank
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.lrank
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.lrank
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:129]
 MODEX RECV VALUE FOR PROC [[62261,1],0] KEY pmix.nrank
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.nrank
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.nrank
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.nrank
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.nrank
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:138]
 MODEX RECV VALUE FOR PROC [[62261,1],0] KEY pmix.univ.size
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.univ.size
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.univ.size
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.univ.size
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.univ.size
[sunpc1:09930] pmix:server _register_nspace
[sunpc1:09930] pmix:server _register_nspace recording pmix.jobid
[sunpc1:09930] pmix:server _register_nspace recording pmix.offset
[sunpc1:09930] pmix:server _register_nspace recording pmix.nmap
[sunpc1:09930] pmix:extract:nodes: checking list: tyr
[sunpc1:09930] pmix:server _register_nspace recording pmix.pmap
[sunpc1:09930] pmix:server _register_nspace recording pmix.nodeid
[sunpc1:09930] pmix:server _register_nspace recording pmix.univ.size
[sunpc1:09930] pmix:server _register_nspace recording pmix.job.size
[sunpc1:09930] pmix:server _register_nspace recording pmix.local.size
[sunpc1:09930] pmix:server _register_nspace recording pmix.max.size
[sunpc1:09930] pmix:server _register_nspace recording pmix.pdata
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:161]
 MODEX RECV VALUE OPTIONAL FOR PROC [[62261,1],0] KEY pmix.appnum
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.appnum
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.appnum
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.appnum
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.appnum
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.jobid
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.offset
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.nmap
[ruester.informatik.hs-fulda.de:26007] pmix:extract:nodes: checking list: tyr
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.pmap
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.nodeid
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.univ.size
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.job.size
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.local.size
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.max.size
[ruester.informatik.hs-fulda.de:26007] pmix:server _register_nspace recording 
pmix.pdata
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.local.size
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.local.size
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:171]
 MODEX RECV VALUE FOR PROC [[62261,1],0] KEY pmix.local.size
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.local.size
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.local.size
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] 
[[62261,1],0][../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/orte/mca/ess/pmi/ess_pmi_module.c:199]
 MODEX RECV VALUE OPTIONAL FOR PROC [[62261,1],0] KEY pmix.ltopo
[tyr.informatik.hs-fulda.de:21790] [[62261,1],0] PMIx_client get on proc 
[[62261,1],0] key pmix.ltopo
[tyr.informatik.hs-fulda.de:21790] pmix: 4080336897:0 getting value for proc 
4080336897:0 key pmix.ltopo
[tyr.informatik.hs-fulda.de:21790] pmix: get_nb value for proc 4080336897:0 key 
pmix.ltopo
[tyr.informatik.hs-fulda.de:21790] pmix: getnbfn value for proc 4080336897:0 
key pmix.ltopo
[tyr.informatik.hs-fulda.de:21790] pmix:client get completed
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key pmix.cpuset type 
3
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.cpuset data for key global in 
local cache
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.cpuset data for key global in 
remote cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key pmix.puri type 3
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.puri data for key global in 
local cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.puri data for key global in 
remote cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key pmix.hname type 3
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.hname data for key global in 
local cache
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.hname data for key global in 
remote cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key MPI_THREAD_LEVEL 
type 28
[tyr.informatik.hs-fulda.de:21790] pmix: put MPI_THREAD_LEVEL data for key 
global in local cache
[tyr.informatik.hs-fulda.de:21790] pmix: put MPI_THREAD_LEVEL data for key 
global in remote cache
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key pmix.arch type 14
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.arch data for key global in 
local cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: put pmix.arch data for key global in 
remote cache
[tyr.informatik.hs-fulda.de:21790] PMIx_client put
[tyr.informatik.hs-fulda.de:21790] pmix: executing put for key btl.tcp.2.0 type 
28
[tyr.informatik.hs-fulda.de:21790] pmix: put btl.tcp.2.0 data for key global in 
local cache
[tyr.informatik.hs-fulda.de:21790] pmix: put btl.tcp.2.0 data for key global in 
remote cache
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 886
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 886 
BYTES FOR TAG 2 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 886 bytes for tag 2 on 
socket 32
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 2 for tag 4294967295
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client.c:676]
 post send to server
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 2 from 4080336897:0
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] PMIx_client fence
[tyr.informatik.hs-fulda.de:21790] pmix: executing fence
[tyr.informatik.hs-fulda.de:21790] pmix: fence_nb called
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client_fence.c:163]
 post send to server
[tyr.informatik.hs-fulda.de:21790] posting recv on tag 3
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 157
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 157 
BYTES FOR TAG 3 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 157 bytes for tag 3 on 
socket 32
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 3 for tag 4294967295
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 3 from 4080336897:0
[tyr.informatik.hs-fulda.de:21779] recvd FENCE
[tyr.informatik.hs-fulda.de:21779] recvd fence with 1 procs
[tyr.informatik.hs-fulda.de:21779] get_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] new_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] get_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] adding new tracker with 1 procs
[tyr.informatik.hs-fulda.de:21779] adding local proc 4080336897.0 to tracker
[tyr.informatik.hs-fulda.de:21779] fence complete
[tyr.informatik.hs-fulda.de:21779] fence - assembling data
[tyr.informatik.hs-fulda.de:21779] server:modex_cbfunc called with 531 bytes
[tyr.informatik.hs-fulda.de:21779] server:modex_cbfunc unpacked blob for npsace 
4080336897
[tyr.informatik.hs-fulda.de:21779] client:unpack fence received blob for rank 0
[tyr.informatik.hs-fulda.de:21779] server:modex_cbfunc reply being sent to 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:1867]
 queue reply to 4080336897:0 on tag 3
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:147]
 queue callback called: reply to 4080336897:0 on tag 3
[tyr.informatik.hs-fulda.de:21779] sock:send_handler SENDING TO PEER 
4080336897:0 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21779] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler called with peer 
pmix-server:21779
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler read hdr on socket 15
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate data region of 
size 24
[tyr.informatik.hs-fulda.de:21790] RECVD COMPLETE MESSAGE FROM SERVER OF 24 
BYTES FOR TAG 3 ON PEER SOCKET 15
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21790] message received 24 bytes for tag 3 on 
socket 15
[tyr.informatik.hs-fulda.de:21790] checking msg on tag 3 for tag 3
[tyr.informatik.hs-fulda.de:21790] pmix: fence_nb callback recvd
[tyr.informatik.hs-fulda.de:21790] client:unpack fence called
[tyr.informatik.hs-fulda.de:21790] client:unpack fence received status 0
[tyr.informatik.hs-fulda.de:21790] pmix: fence released
[tyr.informatik.hs-fulda.de:21790] PMIx_client fence
[tyr.informatik.hs-fulda.de:21790] pmix: executing fence
[tyr.informatik.hs-fulda.de:21790] pmix: fence_nb called
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client_fence.c:163]
 post send to server
[tyr.informatik.hs-fulda.de:21790] posting recv on tag 4
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 111
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 111 
BYTES FOR TAG 4 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 111 bytes for tag 4 on 
socket 32
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 4 for tag 4294967295
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 3 from 4080336897:0
[tyr.informatik.hs-fulda.de:21779] recvd FENCE
[tyr.informatik.hs-fulda.de:21779] recvd fence with 1 procs
[tyr.informatik.hs-fulda.de:21779] get_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] new_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] get_tracker called with 1 procs
[tyr.informatik.hs-fulda.de:21779] adding new tracker with 1 procs
[tyr.informatik.hs-fulda.de:21779] adding local proc 4080336897.0 to tracker
[tyr.informatik.hs-fulda.de:21779] fence complete
[tyr.informatik.hs-fulda.de:21779] server:modex_cbfunc called with 13 bytes
[tyr.informatik.hs-fulda.de:21779] server:modex_cbfunc reply being sent to 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:1867]
 queue reply to 4080336897:0 on tag 4
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:147]
 queue callback called: reply to 4080336897:0 on tag 4
[tyr.informatik.hs-fulda.de:21779] sock:send_handler SENDING TO PEER 
4080336897:0 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21779] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler called with peer 
pmix-server:21779
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler read hdr on socket 15
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate data region of 
size 24
[tyr.informatik.hs-fulda.de:21790] RECVD COMPLETE MESSAGE FROM SERVER OF 24 
BYTES FOR TAG 4 ON PEER SOCKET 15
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21790] message received 24 bytes for tag 4 on 
socket 15
[tyr.informatik.hs-fulda.de:21790] checking msg on tag 4 for tag 4
[tyr.informatik.hs-fulda.de:21790] pmix: fence_nb callback recvd
[tyr.informatik.hs-fulda.de:21790] client:unpack fence called
[tyr.informatik.hs-fulda.de:21790] client:unpack fence received status 0
[tyr.informatik.hs-fulda.de:21790] pmix: fence released

Parent process 0 running on tyr.informatik.hs-fulda.de
  I create 3 slave processes.

[tyr.informatik.hs-fulda.de:21790] pmix: spawn called
[tyr.informatik.hs-fulda.de:21790] pmix: spawn called
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client_spawn.c:167]
 post send to server
[tyr.informatik.hs-fulda.de:21790] posting recv on tag 5
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 2299
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 2299 
BYTES FOR TAG 5 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 2299 bytes for tag 5 on 
socket 32
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 5 for tag 4294967295
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 9 from 4080336897:0
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21779] recvd SPAWN
[tyr.informatik.hs-fulda.de:21779] PMIX ERROR: UNPACK-PAST-END in file 
../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server_ops.c
 at line 829
[tyr.informatik.hs-fulda.de:21779] PMIX ERROR: UNPACK-PAST-END in file 
../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c
 at line 2176
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:2221]
 queue reply to 4080336897:0 on tag 5
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:147]
 queue callback called: reply to 4080336897:0 on tag 5
[tyr.informatik.hs-fulda.de:21779] sock:send_handler SENDING TO PEER 
4080336897:0 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21779] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler called with peer 
pmix-server:21779
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler read hdr on socket 15
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate data region of 
size 24
[tyr.informatik.hs-fulda.de:21790] RECVD COMPLETE MESSAGE FROM SERVER OF 24 
BYTES FOR TAG 5 ON PEER SOCKET 15
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21790] message received 24 bytes for tag 5 on 
socket 15
[tyr.informatik.hs-fulda.de:21790] checking msg on tag 5 for tag 5
[tyr.informatik.hs-fulda.de:21790] pmix:client recv callback activated with 24 
bytes
[tyr:21790] *** An error occurred in MPI_Comm_spawn_multiple
[tyr:21790] *** reported by process [4080336897,0]
[tyr:21790] *** on communicator MPI_COMM_WORLD
[tyr:21790] *** MPI_ERR_SPAWN: could not spawn processes
[tyr:21790] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now 
abort,
[tyr:21790] ***    and potentially your MPI job)
[tyr.informatik.hs-fulda.de:21790] PMIx_client abort
[tyr.informatik.hs-fulda.de:21790] pmix:client abort called
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/client/pmix_client.c:529]
 post send to server
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate data region of 
size 88
[tyr.informatik.hs-fulda.de:21790] posting recv on tag 6
[tyr.informatik.hs-fulda.de:21779] RECVD COMPLETE MESSAGE FROM SERVER OF 88 
BYTES FOR TAG 6 ON PEER SOCKET 32
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21779] message received 88 bytes for tag 6 on 
socket 32
[tyr.informatik.hs-fulda.de:21790] sock:send_handler SENDING TO PEER 
pmix-server:21779 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21790] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21790] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21790] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21779] checking msg on tag 6 for tag 4294967295
[tyr.informatik.hs-fulda.de:21779] SWITCHYARD for 4080336897:0:32
[tyr.informatik.hs-fulda.de:21779] recvd pmix cmd 1 from 4080336897:0
[tyr.informatik.hs-fulda.de:21779] recvd ABORT
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:1612]
 queue reply to 4080336897:0 on tag 6
[tyr.informatik.hs-fulda.de:21779] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/server/pmix_server.c:147]
 queue callback called: reply to 4080336897:0 on tag 6
[tyr.informatik.hs-fulda.de:21779] sock:send_handler SENDING TO PEER 
4080336897:0 with NON-NULL msg
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING HEADER
[tyr.informatik.hs-fulda.de:21779] usock:send_handler HEADER SENT
[tyr.informatik.hs-fulda.de:21779] usock:send_handler SENDING BODY OF MSG
[tyr.informatik.hs-fulda.de:21779] usock:send_handler BODY SENT
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler called with peer 
pmix-server:21779
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler read hdr on socket 15
[tyr.informatik.hs-fulda.de:21790] usock:recv:handler allocate data region of 
size 24
[tyr.informatik.hs-fulda.de:21790] RECVD COMPLETE MESSAGE FROM SERVER OF 24 
BYTES FOR TAG 6 ON PEER SOCKET 15
[tyr.informatik.hs-fulda.de:21790] 
[../../../../../../openmpi-v2.x-dev-1290-gbd0e4e1/opal/mca/pmix/pmix112/pmix/src/usock/usock_sendrecv.c:328]
 post msg
[tyr.informatik.hs-fulda.de:21790] message received 24 bytes for tag 6 on 
socket 15
[tyr.informatik.hs-fulda.de:21790] checking msg on tag 6 for tag 6
[tyr.informatik.hs-fulda.de:21790] pmix:client recv callback activated with 24 
bytes
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler called with peer 
4080336897:0
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler allocate new recv msg
[tyr.informatik.hs-fulda.de:21779] usock:recv:handler read hdr on socket 32
[tyr.informatik.hs-fulda.de:21779] pmix_usock_msg_recv: peer closed connection
[sunpc1:09930] [[62261,0],1] Finalizing PMIX server
[ruester.informatik.hs-fulda.de:26007] [[62261,0],3] Finalizing PMIX server
[linpc1:13422] [[62261,0],2] Finalizing PMIX server
[linpc1:13422] pmix:server finalize called
[linpc1:13422] sec: native finalize
[linpc1:13422] pmix:server finalize complete
[tyr.informatik.hs-fulda.de:21779] [[62261,0],0] Finalizing PMIX server
[tyr.informatik.hs-fulda.de:21779] pmix:server finalize called
[tyr.informatik.hs-fulda.de:21779] sec: native finalize
[tyr.informatik.hs-fulda.de:21779] pmix:server finalize complete
[sunpc1:09930] pmix:server finalize called
[sunpc1:09930] sec: native finalize
[sunpc1:09930] pmix:server finalize complete
[ruester.informatik.hs-fulda.de:26007] pmix:server finalize called
[ruester.informatik.hs-fulda.de:26007] sec: native finalize
[ruester.informatik.hs-fulda.de:26007] pmix:server finalize complete
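
For reference, the failing step is the MPI_Comm_spawn_multiple call that the
parent process issues after printing "I create 3 slave processes". The
following is only an illustrative sketch of such a spawn test (program names
"slave_a"/"slave_b" and the maxprocs values are placeholders, not necessarily
what my actual test program uses):

  /* Sketch of a spawn test of the kind that triggers the log above:
   * rank 0 of MPI_COMM_WORLD creates three slave processes with
   * MPI_Comm_spawn_multiple.  Command names are placeholders.
   */
  #include <stdio.h>
  #include "mpi.h"

  #define NUM_SLAVES 3

  int main (int argc, char *argv[])
  {
    int      rank;
    char    *commands[2] = { "slave_a", "slave_b" };
    int      maxprocs[2] = { 1, 2 };                  /* 1 + 2 = 3 slaves */
    MPI_Info infos[2]    = { MPI_INFO_NULL, MPI_INFO_NULL };
    MPI_Comm intercomm;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    if (rank == 0)
    {
      printf ("Parent process 0\n  I create %d slave processes.\n",
              NUM_SLAVES);
    }
    /* this is the call that fails with MPI_ERR_SPAWN in the log above */
    MPI_Comm_spawn_multiple (2, commands, MPI_ARGVS_NULL, maxprocs,
                             infos, 0, MPI_COMM_WORLD, &intercomm,
                             MPI_ERRCODES_IGNORE);
    MPI_Finalize ();
    return 0;
  }

With the gcc-built installation this call leads to the PMIX
"UNPACK-PAST-END" errors in pmix_server_ops.c and pmix_server.c shown
above, and MPI_Comm_spawn_multiple then aborts with MPI_ERR_SPAWN.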
