Question,
If we are using Torque with TM and cpusets enabled for pinning, should we
avoid enabling numactl? Would the two conflict with each other?
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
Hi Jeff
Thank you for your suggestion!
You're right.
It works with '-tp=shanghai-64', which is great.
A snippet of the make log where I had errors before is included below,
the crucial line being:
libtool: link: pgcc -shared -fpic -DPIC .libs/dummy.o -lnsl -lutil -lc
-tp=shanghai-64 -Wl,-
Hello,
We did more tests concerning the latency using 512 MPI ranks
on our supercomputer (64 machines * 8 cores per machine).
By default in Ray, any rank can communicate directly with any other.
Thus we have a complete graph with 512 vertices and 130816 edges (512*511/2)
where vertices are ran
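As a sanity check on the arithmetic above, the edge count of a complete graph on n vertices is n*(n-1)/2; a quick shell sketch:

```shell
# Every pair of the 512 ranks has a direct channel, so the
# communication graph is complete: n*(n-1)/2 edges.
n=512
edges=$(( n * (n - 1) / 2 ))
echo "$edges"   # 130816
```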
On Nov 9, 2011, at 12:16 PM, amosl...@gmail.com wrote:
>The file was the output to the command:
>"mpicc hello_cc.c -o hello_cc"
> and lists files which do not appear to be present. I checked the permissions
> and they seem to be correct so I am stumped, I
Hi Jeff,
The file was the output to the command:
"mpicc hello_cc.c -o hello_cc"
and lists files which do not appear to be present. I checked the
permissions and they seem to be correct so I am stumped, I did use the
make and install commands and they seemed to
In general, yes, OPAL_PREFIX should be enough.
However, it is certainly easier to configure properly if you have the same
prefix on all nodes, even if it's actually different on one node.
Check out this FAQ entry for more details:
http://www.open-mpi.org/faq/?category=building#where-to-ins
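As a minimal sketch of what OPAL_PREFIX does, assuming an installation configured with --prefix=/opt that is actually mounted at /Network/opt on the compute nodes:

```shell
# Point Open MPI at the real install location at run time,
# overriding the configure-time prefix. (Paths are illustrative.)
export OPAL_PREFIX=/Network/opt
export PATH="$OPAL_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OPAL_PREFIX/lib:$LD_LIBRARY_PATH"
```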
The output in your file appears to be a bit jumbled; I can't quite tell exactly
what command you were trying to execute.
Did you install Open MPI after building it?
On Nov 7, 2011, at 7:50 PM, amosl...@gmail.com wrote:
> Hi all,
> I have been trying to compile and run openmpi-1.4.4 usin
I could swear that we had an FAQ entry about this, but I can't find it.
It is certainly easiest if you can open random TCP ports between MPI processes
in your cluster. Can your admin open all inbound TCP ports from all nodes in
your cluster (this is different than opening up all inbound TCP por
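If the admin can only open a limited range, the TCP BTL can be confined to it. A sketch using the btl_tcp_port_min_v4 and btl_tcp_port_range_v4 MCA parameters (verify the exact names and availability in your Open MPI version with ompi_info; program name is illustrative):

```shell
# Restrict MPI point-to-point TCP connections to ports 10000-10099.
mpirun --mca btl_tcp_port_min_v4 10000 \
       --mca btl_tcp_port_range_v4 100 \
       -np 4 ./my_mpi_program
```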
• A few additional details about my network configuration:
/opt is a share point; it uses NFS.
/Network/opt is the point where /opt can be found across the network.
I declared OPAL_PREFIX because Open MPI was built with prefix /opt but runs
in directory /Network/opt.
• If I copy the directo