You may also want to check with the admin. I know that on the system I use, the
admin will prevent you from using many nodes until you demonstrate that you know
what you are doing.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Jeff Squyres
Sent: Wedn
I run a program with the following command line and get the error message shown below:
mpirun -x LD_LIBRARY_PATH=/home/haoanyi1/socIntel/goto --prefix
/home/haoanyi1/openmpi1.4.1 -np 2 -host intel01,intel02 -rf hosts ./main 62 62
tests/ > newtest_64x64_np2_omp
[btl_tcp_endpoint.c:638:mca_btl_tcp_endpoin
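Errors from btl_tcp_endpoint.c usually mean the TCP BTL could not open or keep a
connection between the nodes; a firewall or Open MPI picking up the wrong network
interface are common causes. As a hedged sketch of one thing to try, the
btl_tcp_if_include MCA parameter restricts Open MPI to a named interface; the
interface name eth0 below is only an example and should match whatever actually
connects intel01 and intel02:
shell$ mpirun --mca btl_tcp_if_include eth0 \
    -x LD_LIBRARY_PATH=/home/haoanyi1/socIntel/goto \
    --prefix /home/haoanyi1/openmpi1.4.1 \
    -np 2 -host intel01,intel02 ./main 62 62 tests/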
Yes, I can do all of these on each node.
On 2010-03-25 04:33:24, "Jeff Squyres" wrote:
>Can you mpirun non-MPI applications, like "hostname"? I frequently run this
>as a first step to debugging a wonky install. For example:
>
>shell$ hostname
>barney
>shell$ mpirun hostname
>barney
>shell$
Can you mpirun non-MPI applications, like "hostname"? I frequently run this as
a first step to debugging a wonky install. For example:
shell$ hostname
barney
shell$ mpirun hostname
barney
shell$ cat hosts
barney
rubble
shell$ mpirun --hostfile hosts hostname
barney
rubble
shell$
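If the hostfile run above fails instead of printing both node names, two quick
generic checks (a sketch only; the orted path below is a placeholder) are whether
passwordless ssh works to each node and whether the remote shell can find Open
MPI's orted daemon:
shell$ ssh rubble hostname
rubble
shell$ ssh rubble which orted
/path/to/openmpi/bin/orted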
On Mar 24, 20
Hi,
I installed Open MPI 1.4.1 as a non-root user on a cluster. Everything works fine
when I run with mpirun or mpiexec on a single node with many processes. However,
when I launch many processes on multiple nodes, I can see that jobs are
distributed to those nodes (using "top"), but all the jobs j
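One common cause of multi-node trouble with a non-root install is that the remote
shells never pick up the Open MPI paths. A hedged sketch of the usual workaround,
assuming the install prefix /home/haoanyi1/openmpi1.4.1 used elsewhere in this
thread (the hostnames and program name are only illustrative), is to export the
paths in the shell startup file on every node, or to let mpirun propagate them
with --prefix:
shell$ export PATH=/home/haoanyi1/openmpi1.4.1/bin:$PATH
shell$ export LD_LIBRARY_PATH=/home/haoanyi1/openmpi1.4.1/lib:$LD_LIBRARY_PATH
shell$ mpirun --prefix /home/haoanyi1/openmpi1.4.1 -np 4 -host intel01,intel02 ./main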
On Mar 23, 2010, at 12:06 PM, Junwei Huang wrote:
> I am still using LAM/MPI on an old cluster and wonder if I can get
> some help from this mail list.
Please upgrade to Open MPI if possible. :-)
> Here is the problem. I am using an 18-node
> cluster; each node has 2 CPUs and each CPU supports up
The description for the MCA parameter "opal_cr_use_thread" is very short at
URL: http://osl.iu.edu/research/ft/ompi-cr/api.php
Can someone explain the usefulness of enabling this parameter vs.
disabling it? In other words, what are the pros/cons of disabling it?
I found that this gets enabled automa
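For context, the mechanics of toggling the parameter are the standard MCA ones;
a minimal sketch (the application name ./app is a placeholder), either on the
command line or in the per-user parameter file:
shell$ mpirun --mca opal_cr_use_thread 0 -np 4 ./app
shell$ cat $HOME/.openmpi/mca-params.conf
opal_cr_use_thread = 0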
Intel compiler 11.0.074
OpenMPI 1.4.1
Two different OSes: CentOS 5.4 (2.6.18 kernel) and Fedora 12 (2.6.32 kernel)
Two different CPUs: Opteron 248 and Opteron 8356.
Same binary for OpenMPI. Same binary for the user code (VASP compiled for an older
arch).
When I supply a rankfile, then depending on the combo
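For reference, a hedged sketch of the rankfile syntax Open MPI 1.4 expects (the
hostnames, slot numbers, and program name below are only placeholders):
shell$ cat rankfile
rank 0=node01 slot=0
rank 1=node02 slot=0
shell$ mpirun -np 2 -rf rankfile ./a.out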