Hi Ralph!
On 12.01.13 17:29, Ralph Castain wrote:
> Sadly, we incorrectly removed the required grpcomm component to make that
> work.
> Meantime, you can use the PMI support in its place.
Success. I used the following options; --with-pmi won't accept a path:
./configure ... --with-pmi CFLAGS="-
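(For reference, the complete invocation presumably looked something like the
sketch below; the install prefix and SLURM header/library paths are
illustrative guesses, not the poster's actual values:)

    ./configure --prefix=/opt/openmpi-1.6.3 --with-slurm --with-pmi \
        CFLAGS="-I/usr/include/slurm" LDFLAGS="-L/usr/lib64"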
Sadly, we incorrectly removed the required grpcomm component to make that work.
I'm restoring it this weekend and we will be issuing a 1.6.4 shortly.
Meantime, you can use the PMI support in its place. Just configure OMPI
--with-pmi= and you will be able to direct-launch your
job.
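(A direct launch here means handing the job straight to srun, with no mpirun
involved; a minimal sketch, where the application name is an illustrative
placeholder:)

    # SLURM's PMI does the process wire-up; no mpirun needed
    srun -n 4 ./my_mpi_app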
Sorry for the trouble.
Hello!
I'm currently trying to run Open MPI 1.6.3 binaries directly under SLURM
2.5.1 [1]. Open MPI is built using --with-slurm, and $SLURM_STEP_RESV_PORTS
is successfully set by SLURM. According to the error message, I assume a
shared library couldn't be found; unfortunately, I'm not able to find a
faile
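(For context, "directly" means a launch of roughly the following shape; the
binary name and the port range are illustrative placeholders:)

    $ salloc -N 2
    $ srun -n 2 ./hello_mpi
    # within the step, SLURM sets e.g. SLURM_STEP_RESV_PORTS=12000-12015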
> Sent: Friday, April 21, 2006 12:25 AM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] OpenMPI and SLURM configuration ??
>
>I think that you need to install Open MPI on the other machines as well.
>You might want to set up NFS (network file system) for the master (you are
>saying your local machine) so that your slave nodes can see your
>MPI executable.
>
>> bash: line 1: orted: command not found
>
>This error might go away.
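(A common cause of "orted: command not found" is that non-interactive shells
on the remote nodes don't have Open MPI in their PATH; a sketch of the usual
fix, with /opt/openmpi as an illustrative install prefix:)

    # add to ~/.bashrc (or the shell's non-interactive startup file) on every node
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH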
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of semper
> Sent: Thursday, April 20, 2006 9:50 PM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] OpenMPI and SLURM configuration ??
>
> No, the location of $HOME should not matter.
>
> What happens if you "mpirun -np 2 uptime"? (i.e., use mpirun to launch
> a non-MPI application)
>
Thanks.
It returns the right result! But still only 2 local processes.
I tried again to add a hostfile option "--hostfile $HOME/openmpi/bin/host
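(For reference, a hostfile is a plain list of nodes, one per line; the
hostnames, slot counts, and file name below are illustrative:)

    node1 slots=2
    node2 slots=2

    mpirun -np 4 --hostfile myhosts /tmp/cpi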
> Sent: Thursday, April 20, 2006 10:17 AM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] OpenMPI and SLURM configuration ??
>
>Do you really have a shared /tmp on your cluster? We do not copy the
>file to the nodes; they have to be on a shared file system, or at
>least they have to exist in the same place on all the nodes.
>
> george.
>
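(Where no shared file system is available, one sketch of the
same-place-on-every-node approach, hostnames illustrative:)

    for h in node1 node2 node3 node4; do scp /tmp/cpi $h:/tmp/; done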
No, but I put a copy of cpi in the /tmp directory on each node,
because I wondered
On Apr 20, 2006, at 9:57 AM, 杨科 wrote:
> [semper@IA64_node2] srun -N 2 -A
> [semper@IA64_node2] mpirun -np 2 /tmp/cpi
Do you really have a shared /tmp on your cluster? We do not copy the
file to the nodes; they have to be on a shared file system, or at
least they have to exist in the same place on all the nodes.
Hi, all. Recently I've installed Open MPI on our cluster (4 nodes, one
installation on each node), but I found that it does not work well with the
underlying resource management system, SLURM. That is, after I typed the
following, mpirun seemed to hang there:
[semper@IA64_node2] srun -N 2 -A
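(One sanity check worth doing here, assuming srun -A spawns a shell holding
the allocation as in SLURM of that era, is to confirm the allocation took
before invoking mpirun:)

    [semper@IA64_node2] srun -N 2 -A           # allocate 2 nodes, spawn a shell
    [semper@IA64_node2] echo $SLURM_NNODES     # should print 2
    [semper@IA64_node2] mpirun -np 2 /tmp/cpi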