Federico
>
> -----Original Message-----
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Tuesday, June 17, 2008 1:09 PM
> To: Sacerdoti, Federico; Open MPI Users
> Subject: Re: [OMPI users] SLURM and OpenMPI
>
> I can believe 1.2.x has problems in that regard. Some of that has
> nothing to
> orterun dummy-binary-I-dont-exist
> [hang]
>
> Thanks,
> Federico
>
> -----Original Message-----
> From: Sacerdoti, Federico
> Sent: Friday, March 21, 2008 5:41 PM
> To: 'Open MPI Users'
> Subject: RE: [OMPI users] SLURM and OpenMPI
>
>
> Ralph wrote:
On Thu, 20 Mar 2008 16:40:41 -0600
Ralph Castain wrote:
> I am no slurm expert. However, it is our understanding that
> SLURM_TASKS_PER_NODE means the number of slots allocated to the job,
> not the number of tasks to be executed on each node. So the 4(x2)
> tells us that we have 4 slots on each of two nodes to work with.
On Fri, 21 Mar 2008 17:41:28 -0400
"Sacerdoti, Federico" wrote:
> Ralph, we wrote a launcher for mvapich that uses srun to launch but
> keeps tight control of where processes are started. The way we did it
> was to force srun to launch a single process on a particular node.
>
> The launcher cal
tighter orterun/slurm
integration as you know).
Regards,
Federico
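(For illustration only, a sketch of the single-process-per-node approach
Federico describes above: force srun to start exactly one task on one named
node, so the launcher keeps control of placement. The node name and binary
below are hypothetical, not taken from the original launcher.)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *node = "node0017";      /* hypothetical target node */
    const char *binary = "./a.out";     /* hypothetical application */
    char cmd[512];

    /* -N 1: one node, -n 1: one task, -w: restrict srun to this node */
    snprintf(cmd, sizeof(cmd), "srun -N 1 -n 1 -w %s %s", node, binary);
    return system(cmd);
}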
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Thursday, March 20, 2008 6:41 PM
To: Open MPI Users
Cc: Ralph Castain
Subject: Re: [OMPI users] SLURM
Hi there
I am no slurm expert. However, it is our understanding that
SLURM_TASKS_PER_NODE means the number of slots allocated to the job, not the
number of tasks to be executed on each node. So the 4(x2) tells us that we
have 4 slots on each of two nodes to work with. You got 4 slots on each node
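To make that notation concrete, here is a small sketch (not Open MPI's actual
parser; the function name is illustrative) that expands a
SLURM_TASKS_PER_NODE-style string such as "4(x2)" into per-node slot counts:

#include <stdio.h>
#include <stdlib.h>

/* Expand a string like "4(x2),3" into per-node slot counts: 4, 4, 3. */
static void expand_tasks_per_node(const char *spec)
{
    const char *p = spec;
    while (*p) {
        char *end;
        long count = strtol(p, &end, 10);     /* slots on this node      */
        long repeat = 1;
        if (*end == '(') {                    /* "(xN)" = N nodes in a row */
            repeat = strtol(end + 2, &end, 10);
            end++;                            /* skip the closing ')'    */
        }
        for (long i = 0; i < repeat; i++)
            printf("node gets %ld slots\n", count);
        p = (*end == ',') ? end + 1 : end;
    }
}

int main(void)
{
    expand_tasks_per_node("4(x2)");   /* 4 slots on each of two nodes */
    return 0;
}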
Hi Werner,
Open MPI does things a little bit differently than other MPIs when it
comes to supporting SLURM. See
http://www.open-mpi.org/faq/?category=slurm
for general information about running with Open MPI on SLURM.
After trying the commands you sent, I am actually a bit surprised by the
re
Thanks for the help. I renamed the nodes, and now slurm and openmpi seem
to be playing nicely with each other.
Bob
On 1/19/07, Jeff Squyres wrote:
I think the SLURM code in Open MPI is making an assumption that is
failing in your case: we assume that your nodes will have a specific
naming convention:
I think the SLURM code in Open MPI is making an assumption that is
failing in your case: we assume that your nodes will have a specific
naming convention:
mycluster.example.com --> head node
mycluster01.example.com --> cluster node 1
mycluster02.example.com --> cluster node 2
...etc.
OMPI is
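As an illustration of that naming assumption (a sketch only, not the actual
Open MPI code, and using short host names rather than fully qualified ones):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* A compute node is expected to be the head node's name plus a numeric
 * suffix, e.g. "mycluster" -> "mycluster01". */
static int matches_convention(const char *head, const char *node)
{
    size_t len = strlen(head);
    if (strncmp(node, head, len) != 0)
        return 0;                       /* must start with the head's name  */
    if (node[len] == '\0')
        return 0;                       /* identical name is the head itself */
    for (const char *p = node + len; *p; p++)
        if (!isdigit((unsigned char)*p))
            return 0;                   /* suffix must be purely numeric    */
    return 1;
}

int main(void)
{
    printf("%d\n", matches_convention("mycluster", "mycluster01")); /* 1 */
    printf("%d\n", matches_convention("mycluster", "compute-0-1")); /* 0 */
    return 0;
}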
Thanks for your response. The program that I have been using for testing
purposes is a simple hello:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char name[BUFSIZ];
    int length;
    int rank;
    struct rlimit rlim;
    FILE *output;
    MPI_Init(&argc, &argv);
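A minimal, complete test program in the same spirit, printing each rank and
the host it runs on, might look like this (a sketch for reference, not the
original program):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char name[MPI_MAX_PROCESSOR_NAME];
    int length, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &length);
    printf("Hello from rank %d on %s\n", rank, name);
    MPI_Finalize();
    return 0;
}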
Open MPI and SLURM should work together just fine right out-of-the-box. The
typical command progression is:
srun -n x -A
mpirun -n y .
If you are doing those commands and still see everything running on the head
node, then two things could be happening:
(a) you really aren't getting an allocation