[OMPI users] Hardware topology influence

2022-09-13 Thread Lucas Chaloyard via users
Hello, I'm working as a research intern in a lab where we're studying virtualization, and I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP, GPAW and Incompact3d from the Phoronix Test Suite). To briefly explain my experiments, I'm running those benchmarks on several virtual

Re: [OMPI users] Hardware topology influence

2022-09-13 Thread Gilles Gouaillardet via users
Lucas, the number of MPI tasks started by mpirun is either - explicitly passed via the command line (e.g. mpirun -np 2306 ...) - equal to the number of available slots, and this value is either a) retrieved from the resource manager (such as a SLURM allocation) b) explicitly set in a
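As a sketch of the hostfile case Gilles alludes to (hostnames here are hypothetical), a hostfile can set the slot count per node explicitly:

```
# hostfile: one line per node; slots= caps the tasks launched there
node0 slots=8
node1 slots=8
```

With `mpirun --hostfile hostfile ./app` and no `-np`, Open MPI would then start one task per available slot, i.e. 16 tasks in this sketch.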

Re: [OMPI users] Hardware topology influence

2022-09-13 Thread Jeff Squyres (jsquyres) via users
Let me add a little more color on what Gilles stated. First, you should probably upgrade to the latest v4.1.x release: v4.1.4. It has a bunch of bug fixes compared to v4.1.0. Second, you should know that it is relatively uncommon to run HPC/MPI apps inside VMs because the virtualization infras

[OMPI users] Cygwin. Strange issue with MPI_Isend() and packed data

2022-09-13 Thread Martín Morales via users
Hello over there. We have a very strange issue when the program tries to send a non-blocking message with MPI_Isend() and packed data: if we run this send after some unnecessary code (see details below), it works, but without it, it does not. This program uses dynamic spawning to launch processes. Bel

Re: [OMPI users] Cygwin. Strange issue with MPI_Isend() and packed data

2022-09-13 Thread Protze, Joachim via users
Hi Martin, Your code seems to have several issues in inform_my_completion: comm is used uninitialized in the my_pack macro. If the intention is that isend is executed by spawned processes, MPI_COMM_WORLD is probably the wrong communicator to use. Best Joachim Fr
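A minimal sketch of the fix Joachim suggests for the spawned side, assuming the child was started with MPI_Comm_spawn (the function name inform_my_completion is from Martín's program and is not reproduced here; everything below is standard MPI):

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);      /* inter-communicator to the spawner */

    if (parent != MPI_COMM_NULL) {     /* non-NULL only in spawned processes */
        char buf[64];
        int pos = 0, value = 42;
        MPI_Request req;

        /* pack against the communicator the message will actually travel on,
           and initialize it before use -- the original bug was an
           uninitialized comm inside the my_pack macro */
        MPI_Pack(&value, 1, MPI_INT, buf, (int)sizeof buf, &pos, parent);
        MPI_Isend(buf, pos, MPI_PACKED, /*dest=*/0, /*tag=*/0, parent, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

The parent would post a matching MPI_Recv with MPI_PACKED on the inter-communicator returned by MPI_Comm_spawn; using MPI_COMM_WORLD on either side cannot reach the other side, since parent and children have separate worlds.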