Hi Ralph
Thank you.
I switched back to memlock unlimited, rebooted the nodes,
and after that OpenMPI is working correctly with InfiniBand.
As for why the problem happened in the first place,
I can only think that somehow the InfiniBand kernel modules and
driver didn't like my reducing the memlock limit,
and
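For reference, restoring an unlimited locked-memory limit is typically done through `/etc/security/limits.conf` (the `*` domain and exact file are assumptions about this cluster's PAM setup; some sites use a drop-in under `/etc/security/limits.d/` instead). InfiniBand drivers pin (register) memory for RDMA, which is why a low memlock limit can break them:

```shell
# /etc/security/limits.conf -- allow unlimited locked memory,
# needed so the InfiniBand stack can register/pin communication buffers
*  soft  memlock  unlimited
*  hard  memlock  unlimited
```

After editing, log in again (or reboot, as above) and verify on each node with `ulimit -l`, which should report `unlimited`.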
Greetings,
I am using OpenMPI 1.4.3-1.1.el6 on RedHawk Linux 6.0.1 (Glacier) / RedHat
Enterprise Linux Workstation Release 6.1 (Santiago). I am currently working
through some issues that I encountered after upgrading from RedHawk 5.2 / RHEL
5.2 and OpenMPI 1.4.3-1 (openmpi-gcc_1.4.3-1). It se
Hi Chris
As you said, pending prior communication,
is a candidate.
Another that I saw is MPI_Finalize inside a conditional,
where the condition may or may not be met by all ranks:
    if (condition) {
        MPI_Finalize();
    }
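To illustrate the hazard, here is a minimal sketch (the even/odd `condition` and the program skeleton are my invention, not Chris's actual code): MPI_Finalize must be called by every process, so if `condition` differs across ranks, the ranks that call it can block while the others never do. Moving the call outside the conditional restores that invariant:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* hypothetical rank-dependent condition: only even ranks branch */
    int condition = (rank % 2 == 0);
    if (condition) {
        /* do rank-specific work here, but do NOT finalize here:
         * odd ranks would never call MPI_Finalize, and the even
         * ranks could hang inside it with communication pending */
        printf("rank %d took the branch\n", rank);
    }

    /* every rank reaches this point, so finalizing is safe */
    MPI_Finalize();
    return 0;
}
```

Compiling and running it under `mpirun` with the MPI_Finalize moved back inside the `if` is an easy way to reproduce the hang on a test cluster.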
Regardless of the cause,
to check the ranks that reach MPI_Finalize,
did you try