>I just re-read the thread. I think there's a little confusion between the
>terms "processor" and "MPI process" here. You said "As a pre-processing
>step, each processor must figure out which other processors it must
>communicate with by virtue of sharing neighboring grid points." Did you
>mean "MPI process"?
> Any chance you could upgrade to Open MPI 1.5.5? It has a better version
> of the processor affinity stuff than the 1.4 series.
Did this and recompiled everything that depended on OMPI. No difference
whatsoever. It still tells me, if I specify -np 2 for example, that "There
are not enough slots available in the system [...]".
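
For anyone reading along: that message usually means mpirun thinks it has
fewer slots than -np is asking for. If no slot count is given, declaring
one explicitly in the hostfile is worth a try (the hostnames, slot counts,
and program name below are made up for illustration):

    # hostfile "myhosts": one line per node, slots = cores on that node
    node01 slots=4
    node02 slots=4

    mpirun --hostfile myhosts -np 2 ./my_app
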
I tried this and got the same result. Is there anything else I might be
missing?

>Did you tell it --bind-to-core? If not, then the procs would be unbound to
>any particular core - so your code might well think they are "sharing"
>cores.
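
Concretely, the invocation being suggested would look something like this
(./my_app is a placeholder; --report-bindings additionally makes mpirun
print which core each rank was bound to, so the binding can be verified):

    mpirun -np 2 --bind-to-core --report-bindings ./my_app
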
Right, I tried using a hostfile, and it made no difference. This is running
Open MPI 1.4.4 on CentOS 5.x machines. The original issue was an error trap
built into my code, where it said one of the cores was asking for
information it already owned. I'm sorry to be vague, but I can't share
anything from the code.
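
The shape of that trap, for anyone trying to follow along, is roughly the
following (a minimal sketch; the ownership rule and all names here are
invented for illustration and are not the poster's code):

    #include <mpi.h>
    #include <stdio.h>

    /* Invented ownership rule: grid point i belongs to rank i % nprocs. */
    static int owner_rank(int point, int nprocs) { return point % nprocs; }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int point = rank + 1;              /* a point this rank wants */
        /* The trap: a rank should never request data it already owns.
           (In this toy setup it only fires when nprocs == 1, since then
           every point is locally owned.) */
        if (owner_rank(point, nprocs) == rank) {
            fprintf(stderr, "rank %d: asking for a point it already owns\n",
                    rank);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Finalize();
        return 0;
    }
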
I'm having a problem trying to use Open MPI on some multicore machines I
have. The code I am running was giving me errors which suggested that MPI
was assigning multiple processes to the same core (which I do not want).
So, I tried launching my job using the -nooversubscribe option, and I get
the "not enough slots" error quoted earlier in the thread.
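
One way to check directly whether two ranks really are landing on the same
core is to have each rank report the CPU it is executing on (a minimal
sketch; sched_getcpu() needs glibc 2.6 or newer, so it may not build on
CentOS 5, and without binding the reported core can change between calls):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* Without --bind-to-core this is only where the scheduler
           happens to have placed the process at this instant. */
        printf("rank %d is currently on core %d\n", rank, sched_getcpu());
        MPI_Finalize();
        return 0;
    }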