[OMPI users] Open MPI 5.0.0rc8 failure but 4.1.4 works well

2022-10-30 Thread mrlong via users
Two machines. A: 192.168.180.48, B: 192.168.60.203. The hostfile content is "192.168.60.203 slots=2". 1. Using Open MPI 4.1.4, executing "mpirun -n 2 --machinefile hostfile hostname" on machine A prints the hostname of B correctly. 2. However, using Open MPI 5.0.0rc8, the result on machine A is $m
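For reference, a minimal sketch of the setup described in this post (the hostfile contents and the mpirun invocation are taken from the message; the guard around mpirun is added here so the sketch is a no-op on machines without Open MPI installed):

```shell
# Recreate the hostfile described in the post: machine B (192.168.60.203)
# offers two process slots.
cat > hostfile <<'EOF'
192.168.60.203 slots=2
EOF

# Launch two copies of `hostname` across the slots in the hostfile.
# Under 4.1.4 this reportedly prints B's hostname; under 5.0.0rc8 it fails.
if command -v mpirun >/dev/null 2>&1; then
  mpirun -n 2 --machinefile hostfile hostname
fi
```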

[OMPI users] --mca btl_base_verbose 30 not working in version 5.0

2022-10-30 Thread mrlong via users
mpirun --mca btl self,sm,tcp --mca btl_base_verbose 30 -np 2 --machinefile hostfile hostname — why does this command not print the "IP addresses are routable" verbose messages in Open MPI 5.0.0rc9?
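As an aside, the same MCA parameters can be set in Open MPI's per-user parameter file instead of on the command line; a sketch, assuming the standard `$HOME/.openmpi/mca-params.conf` location:

```
# $HOME/.openmpi/mca-params.conf — equivalent to passing
# --mca btl self,sm,tcp --mca btl_base_verbose 30 on the mpirun line
btl = self,sm,tcp
btl_base_verbose = 30
```

This only changes where the parameters are read from; it does not by itself explain why the verbose reachability output disappeared between 4.x and the 5.0 release candidates.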

[OMPI users] OFI, destroy_vni_context(1137).......: OFI domain close failed (ofi_init.c:1137:destroy_vni_context:Device or resource busy)

2022-11-01 Thread mrlong via users
Hi, teachers. Code:

import mpi4py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
print("rank", rank)

if __name__ == '__main__':
    if rank == 0:
        mem = np.array([0], dtype='i')
        win = MPI.Win.Create(mem, comm=comm)
    else:

Re: [OMPI users] [EXTERNAL] OFI, destroy_vni_context(1137).......: OFI domain close failed (ofi_init.c:1137:destroy_vni_context:Device or resource busy)

2022-11-01 Thread mrlong via users
mpich users/help mail list. Howard

[OMPI users] [LOG_CAT_ML] component basesmuma is not available but requested in hierarchy: basesmuma, basesmuma, ucx_p2p:basesmsocket, basesmuma, p2p

2022-11-07 Thread mrlong via users
The execution of Open MPI 5.0.0rc9 results in the following:

(py3.9) [user@machine01 share]$ mpirun -n 2 python test.py
[LOG_CAT_ML] component basesmuma is not available but requested in hierarchy: basesmuma,basesmuma,ucx_p2p:basesmsocket,basesmuma,p2p
[LOG_CAT_ML] ml_discover_hierarchy exited

[OMPI users] There are not enough slots available in the system to satisfy the 2 slots that were requested by the application

2022-11-07 Thread mrlong via users
Two machines, each with 64 cores. The contents of the hosts file are:

192.168.180.48 slots=1
192.168.60.203 slots=1

Why does the following error occur when running with Open MPI 5.0.0rc9?

(py3.9) [user@machine01 share]$ mpirun -n 2 --machinefile hosts hostname --
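The hostfile above advertises exactly two slots in total, so "-n 2" should fit. As a diagnostic sketch (slot values below are illustrative, not from the post), raising the advertised slot counts to match the 64-core machines rules out an undercounting problem on the scheduler side:

```
# hosts — one line per machine; each 64-core machine can advertise up to
# 64 process slots (illustrative values, not from the original report)
192.168.180.48 slots=64
192.168.60.203 slots=64
```

If "mpirun -n 2 --machinefile hosts hostname" still reports too few slots with this file, the problem is likely in the release candidate's slot accounting rather than in the hostfile itself.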