Hi,
I am trying to use a rank file to bind a 4-process MPI job to one socket of a two-socket, 8-core machine. However, I'm getting a strange error.
Commands used:
orterun --hostfile hostlist.1 -n 4 --mca rmaps_rank_file_path ./rankmap.1 desres-netscan -o $OUTDIR
$ cat rankmap.1
rank 0=drdb0235.en
Thanks,
Federico
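The snippet above shows only the first line of the rank file. For reference, a complete file binding all four ranks to the cores of socket 0 on that host might look like the sketch below. This is an illustration on my part, not the poster's actual file: the `slot=socket:core` syntax is assumed, and core numbering varies by platform, so check the rank-file documentation for your Open MPI version before relying on it.

```
rank 0=drdb0235.en slot=0:0
rank 1=drdb0235.en slot=0:1
rank 2=drdb0235.en slot=0:2
rank 3=drdb0235.en slot=0:3
```

With a file like this, orterun should place ranks 0 through 3 on the four cores of socket 0, leaving the second socket free.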
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph H Castain
Sent: Thursday, June 19, 2008 10:24 AM
To: Sacerdoti, Federico; Open MPI Users
Subject: Re: [OMPI users] null characters in output
No, I haven't seen that
Ralph,
Thanks for your reply. Let me know if I can help in any way.
fds
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph H Castain
Sent: Thursday, June 19, 2008 10:24 AM
To: Sacerdoti, Federico; Open MPI Users
Subject: Re
Ralph wrote:
"I don't know if I would say we "interfere" with SLURM - I would say that we are only lightly integrated with SLURM at this time. We use SLURM as a resource manager to assign nodes, and then map processes onto those nodes according to the user's wishes. We chose to do this because sru
: jet...@llnl.gov [mailto:jet...@llnl.gov]
Sent: Wednesday, March 05, 2008 2:21 PM
To: Sacerdoti, Federico; Open MPI Users
Subject: RE: [OMPI users] slurm and all-srun orterun
Slurm and its APIs are available under the GPL license. Since Open MPI is not available under the GPL license, it cannot link with
and perhaps get the SLURM version working sometime this month - but they will need validation before being included in an official release.
I can keep you posted if you like - once this gets into our repository, you are certainly welcome to try it out. I would welcome feedback on it.
Hope that helps
Hi,
We are migrating to openmpi on our large (~1000 node) cluster, and plan to use it exclusively on a multi-thousand-core infiniband cluster in the near future. We had extensive problems with parallel processes not dying after a job crash, which was largely solved by switching to the slurm resource manager
fds
-Original Message-
From: Brightwell, Ronald [mailto:rbbr...@sandia.gov]
Sent: Monday, February 04, 2008 4:35 PM
To: Sacerdoti, Federico
Cc: Open MPI Users
Subject: Re: [OMPI users] openmpi credits for eager messages
On Mon Feb 4, 2008 14:23:13... Sacerdoti, Federico wrote
> To keep
To keep this out of the weeds, I have attached a program called "bug3" that illustrates this problem on openmpi 1.2.5 using the openib BTL. In bug3, the process with rank 0 uses all available memory buffering "unexpected" messages from its neighbors.
Bug3 is a test-case derived from a real, scalable application
Hi,
I am readying an openmpi 1.2.5 software stack for use with a many-thousand-core cluster. I have a question about sending small messages that I hope can be answered on this list.
I was under the impression that if node A wants to send a small MPI message to node B, it must have a credit to do so
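The credit scheme described above can be sketched in a few lines. This is a simplified model of credit-based flow control for eager messages, not Open MPI's actual openib BTL implementation (the `EagerChannel` class and its default credit count are hypothetical): the sender may transmit an eager message only while it holds a credit, and the receiver returns one credit each time it drains a buffered message.

```python
from collections import deque

class EagerChannel:
    """Toy model of a sender->receiver channel with eager-send credits."""

    def __init__(self, credits=4):
        self.credits = credits     # send credits currently held by the sender
        self.unexpected = deque()  # receiver-side buffer of undrained messages

    def send(self, msg):
        """Eager send: consumes one credit; fails (stalls) if none remain."""
        if self.credits == 0:
            return False           # sender must wait for a credit to return
        self.credits -= 1
        self.unexpected.append(msg)
        return True

    def recv(self):
        """Receiver drains one buffered message, returning a credit."""
        if not self.unexpected:
            return None
        msg = self.unexpected.popleft()
        self.credits += 1          # credit flows back to the sender
        return msg

ch = EagerChannel(credits=4)
sent = sum(ch.send(i) for i in range(10))  # only the first 4 sends succeed
assert sent == 4
assert ch.recv() == 0                      # draining returns a credit...
assert ch.send(99)                         # ...so one more send succeeds
```

The point the sketch makes is the same one bug3 exercises: without some bound like this, a receiver that never posts matching receives lets unexpected messages accumulate until memory is exhausted.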