Have you set up SSH keys so that all nodes can access each other?
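Something like this is the usual recipe (a minimal sketch; "node1" and the user name are placeholders for your own hosts):

$ ssh-keygen -t rsa             # accept the defaults; use an empty passphrase
$ ssh-copy-id user@node1        # repeat for every node in the cluster
$ ssh user@node1 hostname       # should run with no password prompt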
On Tue, Apr 5, 2011 at 7:42 AM, mohd naseem wrote:
> Sir,
> I made a Beowulf cluster, but when I try to run the examples that are
> given with MPICH2, I get an error, i.e. "permission denied", on the other node.
>
> Please help me.
>
Warnett, Jason wrote:
Hello
I am running on Linux with the latest version of MPI built, but I've run into a
few issues with a program I am trying to run. It is a widely used
open-source application called LIGGGHTS, so I know the code works and
should compile; I obviously have a setting wrong with MPI. I saw a similar
I am not sure Fedora comes with Open MPI installed by default (at
least my FC13 did not). You may want to look at installing
Open MPI from yum or some other package manager. Or you can download
the source tarball from http://www.open-mpi.org/software/ompi/v1.4/,
build and install it.
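For example (a sketch; package names vary between distributions, and the tarball version is just whatever is current in the 1.4 series):

$ sudo yum install openmpi openmpi-devel

or, building from the source tarball:

$ tar xzf openmpi-1.4.3.tar.gz
$ cd openmpi-1.4.3
$ ./configure --prefix=/opt/openmpi
$ make all install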
It was asked during the community concall whether the issue below may be
related to ticket #2722 (https://svn.open-mpi.org/trac/ompi/ticket/2722).
--td
On 04/04/2011 10:17 PM, David Zhang wrote:
Any error messages? Maybe the nodes ran out of memory? I know MPI
implements some kind of buffering under
Hello Rob, and Open MPI users,
Thank you for your advice.
I understand that the current Open MPI 1.5.3 Win-32bit binary distributed
from this site
does not support MPI-IO on NTFS.
I will try to check this problem using other code.
The code is the following:
http://www.mcs.anl.gov/research/projects
On 04/05/2011 07:37 AM, SLIM H.A. wrote:
Hi Terry
I think the problem may have been caused by our Lustre file system
being sick, so I'll wait until that is fixed.
It worked outside gridengine, but I think I did not include --mca btl
self,sm,ib or the corresponding environment variables with gridengine,
although it usually finds
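On the mpirun command line that would be something like the following (a sketch; note that the InfiniBand BTL in the 1.4 series is named openib, and ./my_app is a placeholder):

$ mpirun --mca btl self,sm,openib -np 16 ./my_app

or the equivalent environment variable, which the gridengine job script would have to export:

$ export OMPI_MCA_btl=self,sm,openib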
Hi Reuti
1.4.2 is still in the same location and I also built 1.4.3 anew. It
appeared that Lustre and IB were not playing along, and it is working
now.
Thanks
henk
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Reuti
> Sent: 0
There are no messages being spit out, but I'm not sure I have all the
correct debug options turned on. I turned on --debug-devel, --debug-daemons,
and mca_verbose, but it appears that the process just hangs.
If it's memory exhaustion, it's not from core memory; these nodes
have 48GB of memory. It could be a
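For reference, the kind of invocation I would expect to surface daemon-level output is (a sketch; odls_base_verbose is one plausible verbosity knob among several, and ./app is a placeholder):

$ mpirun --debug-daemons --debug-devel --mca odls_base_verbose 5 -np 4 ./app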
Hi
On my workstation and the cluster I set up Open MPI (v1.4.2) so that
it works in "text mode":
$ mpirun -np 4 -x DISPLAY -host squid_0 printenv | grep WORLD_RANK
OMPI_COMM_WORLD_RANK=0
OMPI_COMM_WORLD_RANK=1
OMPI_COMM_WORLD_RANK=2
OMPI_COMM_WORLD_RANK=3
but when I use the -xterm
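The failing run is presumably of the form (a sketch; the rank list and the program are placeholders for whatever was actually used):

$ mpirun -np 4 -x DISPLAY -host squid_0 -xterm 0,1,2,3 hostname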
On 05.04.2011 at 11:11, SLIM H.A. wrote:
> After an upgrade of our system I receive the following error message
> (openmpi 1.4.2 with gridengine):
Did you move openmpi 1.4.2 to a new (i.e. different) location?
-- Reuti
> quote
> --
On 04/05/2011 05:11 AM, SLIM H.A. wrote:
After an upgrade of our system I receive the following error message
(openmpi 1.4.2 with gridengine):
quote
--
Sorry! You were supposed to get help about:
orte-odls-default:execv-error
But I couldn't open the help file:
Did you request an allocation from PCM? If not, then PCM will block you from
arbitrarily launching jobs on non-allocated nodes. Print out your environment
and look for any envars from PCM and/or LSF (e.g., LSB_JOBID).
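For example (a sketch; LSB_ and LSF_ are standard LSF prefixes, while a PCM prefix is a guess you should check against your site's docs):

$ env | grep -E 'LSB_|LSF_|PCM'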
I don't know what you mean about "no OMPI application is yet integrated with