Hello,
I'm working as a research intern in a lab where we're studying virtualization.
I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP, GPAW,
and Incompact3d from the Phoronix Test Suite).
To briefly explain my experiments, I'm running those benchmarks on several
virtual machines.
Lucas,
the number of MPI tasks started by mpirun is either
- explicitly passed via the command line (e.g. mpirun -np 2306 ...)
- equal to the number of available slots, and this value is either
  a) retrieved from the resource manager (such as a SLURM allocation)
  b) explicitly set in a hostfile (see the example below)
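
For illustration, a minimal sketch of option b) (the hostnames and the
application name here are made up):

    # hostfile: one line per node, "slots" = tasks allowed on that node
    node01 slots=8
    node02 slots=8

    # with no -np given, mpirun fills all 16 slots
    mpirun --hostfile hostfile ./my_app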
Let me add a little more color on what Gilles stated.
First, you should probably upgrade to the latest v4.1.x release: v4.1.4. It
has a bunch of bug fixes compared to v4.1.0.
Second, you should know that it is relatively uncommon to run HPC/MPI apps
inside VMs, because the virtualization infrastructure typically adds overhead
to latency-sensitive network communication.
Hello over there.
We have a very strange issue when the program tries to send a non-blocking
message with MPI_Isend() and packed data: if we run this send after some
unnecessary code (see details below), it works, but without it, it does not.
This program uses dynamic spawning to launch processes. Below are the details.
Hi Martin,
Your code seems to have several issues in inform_my_completion: comm is used
uninitialized in the my_pack macro.
If the intention is that the isend is executed by spawned processes,
MPI_COMM_WORLD is probably the wrong communicator to use; spawned processes
talk to their parent over the intercommunicator returned by
MPI_Comm_get_parent() (see the sketch below).
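
For illustration, a minimal sketch of what the spawned side could look like
(the function name and buffer size are made up, and this assumes the parent
posts a matching receive on the intercommunicator):

    #include <mpi.h>

    /* Runs in the spawned (child) processes. */
    static void inform_completion_sketch(void)
    {
        MPI_Comm parent;
        char buf[64];
        int pos = 0, value = 42;
        MPI_Request req;

        /* Intercommunicator to the parent; MPI_COMM_NULL if we were
           not spawned. */
        MPI_Comm_get_parent(&parent);

        /* Pack against the same communicator the send will use. */
        MPI_Pack(&value, 1, MPI_INT, buf, sizeof(buf), &pos, parent);

        /* dest = rank 0 in the parent's group of the intercomm. */
        MPI_Isend(buf, pos, MPI_PACKED, 0, 0, parent, &req);

        /* buf must stay valid until the send completes. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }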
Best
Joachim