Dear Patrick,

Thanks so much for your reply.

Yes, we use ssh to log on to the nodes, and from the frontend we can ssh to each node without a password. "mpirun --version" is identical on all 3 nodes (Open MPI 2.1.1), and "whereis mpirun" reports the same location on each node. So could there be a problem with mpirun itself that prevents it from launching on the other nodes?
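To rule that out, here is the check we plan to run from the frontend (a minimal sketch using the IPs from our machinefile): it tests whether Open MPI is also visible to the *non-interactive* shell that mpirun uses when it starts its orted daemon on the remote nodes, since a non-interactive ssh sources only a minimal environment.

    # Run from the frontend. These succeed only if Open MPI is on the
    # default (non-interactive) PATH of the remote nodes.
    ssh 10.1.85.254 which orted
    ssh 10.1.85.253 mpirun --version

If those commands fail even though an interactive login works, one common workaround is to point mpirun at the installation prefix explicitly. The path below is an assumption; on our system we would substitute the output of "dirname $(dirname $(which mpirun))":

    mpirun --prefix /usr/lib64/openmpi -np 12 --machinefile machinefile.txt --allow-run-as-root ./wrf.exe

Does that sound like the right direction?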
Regards,
HaChi

On Thu, 4 Jun 2020 at 14:35, Patrick Bégou via users <users@lists.open-mpi.org> wrote:

> Hi Ha Chi,
>
> do you use a batch scheduler with Rocks Cluster, or do you log on to the
> nodes with ssh? If ssh, can you check that you can ssh from one node to
> the other without a password?
> Ping just says the network is alive, not that you can connect.
>
> Patrick
>
> On 04/06/2020 at 09:06, Hà Chi Nguyễn Nhật via users wrote:
>
> Dear Open MPI users,
>
> Please help me find a solution to a problem using mpirun on a Rocks
> cluster with 3 nodes. I use the command:
>
>     mpirun -np 12 --machinefile machinefile.txt --allow-run-as-root ./wrf.exe
>
> but mpirun was unable to access the other nodes (see the screenshot
> below). I did check the connectivity of the three nodes with
> "ping <node's IP>", and they are well connected.
>
> [image: 2.png]
>
> My machinefile.txt contains the IPs of the three nodes (the frontend and
> 2 compute nodes), like this:
>
>     10.1.85.1 slots=4
>     10.1.85.254 slots=4
>     10.1.85.253 slots=4
>
> My cluster is built with Rocks, with 3 nodes and 8 CPUs per node.
> My question is: how can I connect the 3 nodes so they run together?
>
> Please advise.
> Thanks,
> Ha Chi
>
> --
> Ms. Nguyen Nhat Ha Chi
> PhD student
> Environmental Engineering and Management
> Asian Institute of Technology (AIT)
> Thailand

--
Ms. Nguyen Nhat Ha Chi
PhD student
Environmental Engineering and Management
Asian Institute of Technology (AIT)
Thailand