Hello,

Make sure the tag value being passed to your MPI calls is not becoming 
negative.
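
In Open MPI, a tag outside the range [0, MPI_TAG_UB] (other than MPI_ANY_TAG on 
a receive) is rejected with MPI_ERR_TAG, and tags built from grid indices can 
leave that range once the resolution grows. Below is a minimal sketch, not your 
code: the tag formula and the grid sizes are assumptions for illustration. It 
shows how to query MPI_TAG_UB and validate a tag before calling 
MPI_Send/MPI_Recv:

/* tag_check.c: validate an MPI tag before using it in point-to-point calls.
   Compile with: mpicc tag_check.c -o tag_check */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Query the largest tag this implementation accepts; the MPI
       standard only guarantees MPI_TAG_UB >= 32767. */
    int *tag_ub_ptr = NULL, flag = 0;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub_ptr, &flag);
    const int tag_ub = (flag && tag_ub_ptr) ? *tag_ub_ptr : 32767;

    /* Hypothetical tag built from 3-D grid indices (ix, iy, iz): with a
       larger grid such a product can exceed tag_ub, and further
       arithmetic can overflow int and turn the tag negative. */
    const int Ny = 131, Nz = 192;
    const int ix = 255, iy = 130, iz = 191;
    const int tag = (ix * Ny + iy) * Nz + iz;

    if (tag < 0 || tag > tag_ub) {
        fprintf(stderr, "rank %d: tag %d outside valid range [0, %d]\n",
                rank, tag, tag_ub);
        MPI_Abort(MPI_COMM_WORLD, 1);  /* would otherwise fail with MPI_ERR_TAG */
    }

    MPI_Finalize();
    return 0;
}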

Regards,

Érico 

> On 19 Sep. 2021, at 12:45, Feng Wade via users 
> <users@lists.open-mpi.org> wrote:
> 
> Hi,
> 
> Good morning.
> 
> I am using openmpi/4.0.3 on Compute Canada to run a 3D flow simulation. My grid 
> size is Lx*Ly*Lz=700*169*500. It worked quite well at lower resolutions. 
> However, after increasing the resolution from Nx*Ny*Nz=64*109*62 to 
> 256*131*192, Open MPI reported the errors shown below:
> 
> [gra541:21749] *** An error occurred in MPI_Recv
> [gra541:21749] *** reported by process [2068774913,140]
> [gra541:21749] *** on communicator MPI COMMUNICATOR 13 DUP FROM 0
> [gra541:21749] *** MPI_ERR_TAG: invalid tag
> [gra541:21749] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will 
> now abort,
> [gra541:21749] ***    and potentially your MPI job)
> [gra529:07588] 210 more processes have sent help message help-mpi-errors.txt 
> / mpi_errors_are_fatal
> [gra529:07588] Set MCA parameter "orte_base_help_aggregate" to 0 to see all 
> help / error messages
> 
> These are my computation parameters and the command used to run Open MPI:
> #!/bin/bash
> #SBATCH --time=0-10:00:00
> #SBATCH --job-name=3D_EIT_Wi64
> #SBATCH --output=log-%j
> #SBATCH --ntasks=128
> #SBATCH --nodes=4
> #SBATCH --mem-per-cpu=4000M
> mpirun ./vepoiseuilleFD_5.x
> 
> I guess the PATH and LD_LIBRARY_PATH environment variables are all set 
> correctly, because my simulation worked at lower resolutions.
> 
> Thank you for your time.
> 
> Sincerely
> 
> Wade
