Hi
Currently I am approaching a similar problem/workflow with Spack and AWS
S3 shared storage. Mounting the storage from a laptop gives you the same
layout as on each node of my AWS EC2 cluster.
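As a rough sketch of that workflow (the bucket name, mount point, and mount tool are placeholders, not my actual setup; s3fs is just one option for mounting S3):

```shell
# Mount the shared bucket at the same path on the laptop and on every
# EC2 node, so paths baked into the installs resolve everywhere:
s3fs my-spack-bucket /shared/spack

# With Spack's install tree located under /shared/spack (configured via
# install_tree in Spack's config.yaml), installing a package once makes
# it visible on every node that mounts the bucket:
spack install openmpi
```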
As others mentioned before: you still have to recompile your work to take
advantage of the Xeon class c
As far as I know you have to wire up the connections among the MPI clients,
allocate resources, etc. PMIx is a library that sets up all the processes,
and it ships with Open MPI.
The standard HPC method for launching tasks is through a job scheduler such
as SLURM or Grid Engine. SLURM's srun is very similar to mpirun:
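For example (the rank count and program name are made up), the two launchers look almost interchangeable:

```shell
# Launch 32 ranks of an MPI program with Open MPI's mpirun:
mpirun -np 32 ./my_mpi_app

# The near-equivalent under SLURM, where srun asks the scheduler for the
# resources and wires up the ranks (via PMIx) itself:
srun -n 32 ./my_mpi_app
```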
Hi,
this is Steven. I am building custom clusters on AWS EC2 and had some
problems in the past. I am getting good results with an external PMIx 3.1.3:
./autogen.sh && ./configure --prefix=/usr/local/ --with-platform=optimized
--with-hwloc=/usr/local --with-libevent=/usr/local --enable-pmix-binaries
--en
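For reference, once that external PMIx is installed, Open MPI can be pointed at it at configure time. This is a hedged sketch reusing the /usr/local paths from the command above, not the exact line from my build:

```shell
# Configure Open MPI against the externally built PMIx, hwloc, and
# libevent installed under /usr/local:
./configure --prefix=/usr/local \
    --with-pmix=/usr/local \
    --with-hwloc=/usr/local \
    --with-libevent=/usr/local
make -j && make install
```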
Hi, I am fighting a similar issue. Did you try updating PMIx to the most
recent release of the 3.1 series (3.1.3)?
On Wed, Jul 10, 2019, 12:24 Raymond Arter via users, <
users@lists.open-mpi.org> wrote:
> Hi,
>
> I have the following issue with version 4.0.1 when running on a node with
> two 16 core CPUs (Intel Xeon Gol