Hello, Bennet.
One odd thing I see in the error output you provided is that
pmix2x_client.c is active.
Looking into the v3.1.x branch
(https://github.com/open-mpi/ompi/tree/v3.1.x/opal/mca/pmix) I see the
following components:
* ext1x
* ext2x
...
* pmix2x
pmix2x_client.c is in the internal pmix2x component.
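If the goal is to use the external PMIx that Slurm was built against, pointing Open MPI's configure at it should select the ext2x component instead of the internal pmix2x one. A rough sketch, where the prefix is only a placeholder for wherever the PMIx 2.0.2 rpm landed, and the remaining flags are elided:

  ./configure --with-pmix=/usr --with-libevent=external ...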
I rebuilt and examined the logs more closely. There was a warning
about a failure with the external hwloc, and that led to finding that
the CentOS hwloc-devel package was not installed.
I also added the options that we have been using for a while,
--disable-dlopen and --enable-shared, to the configure line.
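Concretely, the extra steps on the CentOS build host were roughly along these lines (installing hwloc-devel is the fix implied by the warning above; any other configure flags are elided):

  sudo yum install hwloc-devel
  ./configure ... --disable-dlopen --enable-shared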
Odd - Artem, do you have any suggestions?
> On Jun 7, 2018, at 7:41 AM, Bennet Fauber wrote:
>
> Thanks, Ralph,
>
> I just tried it with
>
> srun --mpi=pmix_v2 ./test_mpi
>
> and got these messages
>
>
> srun: Step created for job 89
> [cav02.arc-ts.umich.edu:92286] PMIX ERROR: OUT-OF-RESOURCE in file
> client/pmix_client.c at line 234
Thanks, Ralph,
I just tried it with
srun --mpi=pmix_v2 ./test_mpi
and got these messages
srun: Step created for job 89
[cav02.arc-ts.umich.edu:92286] PMIX ERROR: OUT-OF-RESOURCE in file
client/pmix_client.c at line 234
[cav02.arc-ts.umich.edu:92286] OPAL ERROR: Error in file
pmix2x_client.c
I think you need to set your MPIDefault to pmix_v2 since you are using a PMIx
v2 library.
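For example, in slurm.conf that would look something like the line below (alternatively, keep passing --mpi=pmix_v2 explicitly on every srun):

  MpiDefault=pmix_v2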
> On Jun 7, 2018, at 6:25 AM, Bennet Fauber wrote:
>
> Hi, Ralph,
>
> Thanks for the reply, and sorry for the missing information. I hope
> this fills in the picture better.
>
> $ srun --version
> slurm 17.11.7
Hi, Ralph,
Thanks for the reply, and sorry for the missing information. I hope
this fills in the picture better.
$ srun --version
slurm 17.11.7
$ srun --mpi=list
srun: MPI types are...
srun: pmix_v2
srun: openmpi
srun: none
srun: pmi2
srun: pmix
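For what it's worth, checking which PMIx library that pmix_v2 plugin actually links against can be done with something like the following; the plugin path is a guess based on a typical rpm layout:

  ldd /usr/lib64/slurm/mpi_pmix_v2.so | grep -i pmix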
We have pmix configured as the default in /opt/s
You didn’t show your srun direct launch cmd line or what version of Slurm is
being used (and how it was configured), so I can only provide some advice. If
you want to use PMIx, then you have to do two things:
1. Slurm must be configured to use PMIx - depending on the version, that might
be there by default.
We are trying out MPI on an aarch64 cluster.
Our system administrators installed SLURM and PMIx 2.0.2 from .rpm.
I compiled OpenMPI using the ARM distributed gcc/7.1.0 using the
configure flags shown in this snippet from the top of config.log
It was created by Open MPI configure 3.1.0, which was
Hi,
it seems that the problem results from a compiler bug, because today I was
able to build the package with the compilers from "Intel Parallel Studio XE
2018" and "Portland Group Community Edition 2018". Unfortunately, I have no
contact person at Oracle to report the bug, so that it can be fixed.