With 2 nodes using MX 1.1.7 with the patch, we measured (using mpich-mx
1.2.7..4):
3.07 us
With mx 1.2.1-rc18 we measure:
3.69 us
And with mpich-mx 1.2.7..4 we see:
3.20 us
Our Open MPI settings:
---
# env | grep OMPI
OMPI_MCA_pml=cm
OMPI_MCA_mpi_keep_hostnames=1
OMPI_MCA_oob_tc
rt the results?
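The MCA settings quoted above can be reproduced in a shell like this (only pml=cm and mpi_keep_hostnames=1 appear in the thread; the OMPI_MCA_oob_tc... entry is truncated there, so it is left out rather than guessed at):

```shell
# Set the same MCA parameters through the environment, as in the
# "env | grep OMPI" output above, then confirm they are visible.
export OMPI_MCA_pml=cm
export OMPI_MCA_mpi_keep_hostnames=1
env | grep '^OMPI_MCA_'
```

The same selection can also be passed per-invocation with `-mca pml cm` on the mpirun command line instead of via the environment.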
>
> Thanks,
>
> Galen
>
>
> On May 18, 2007, at 1:15 PM, Maestas, Christopher Daniel wrote:
>
Hello,
I was wondering why we would see ~ 100MB/s difference between mpich-mx
and Open MPI with SendRecv from the Intel MPI benchmarks. Maybe I'm
missing something that needs to be turned on?
The hardware is:
---
# mx_info -q
MX Version: 1.1.7
MX Build: root@tocc1:/projects/global/SOURCES/myricom/mx-1.1.7 Fri M
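A sketch of how the Sendrecv comparison above is typically run over MX with both stacks (launcher names and benchmark selection follow IMB/mpich-mx conventions; host names and process counts are placeholders):

```shell
# Hypothetical reproduction of the 2-node comparison. Open MPI run,
# forcing the cm PML as in the poster's MCA settings:
mpirun -np 2 --host node1,node2 -mca pml cm ./IMB-MPI1 Sendrecv
# mpich-mx equivalent (mpirun.ch_mx is the mpich-mx launcher):
mpirun.ch_mx -np 2 ./IMB-MPI1 Sendrecv
```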
> To: Open MPI Users
> Subject: Re: [OMPI users] Pernode request
>
>
>
>
> On 12/12/06 9:18 AM, "Maestas, Christopher Daniel"
>
> wrote:
>
> > Ralph,
> >
> > I figured I should have run an MPI program ... here's what it
> do
> If you don't specify "byslot", then we default
> to assigning
> ranks by node.
>
> Make sense? If so, I can probably have that going before the holiday.
>
> Ralph
>
>
>
> On 12/11/06 7:51 PM, "Maestas, Christopher Daniel"
>
> wrote:
>
>
Hello,
Sometimes we have users who like to do the following from within a single
job (think: scheduling within a job scheduler allocation):
"mpiexec -n X myprog"
"mpiexec -n Y myprog2"
Does mpiexec within Open MPI keep track of the node list it is using if
it binds to a particular scheduler?
For
Hello Ralph,
This is great news! Thanks for doing this. I will try and get around
to it soon before the holiday break.
The allocation scheme always seems to get to me. From what you describe,
that is how I would have seen it. As I've gotten to know osc mpiexec
through the years, I think they li
Ralph,
I agree with what you stated in points 1-4. That is what we are looking
for.
I understand your point now about the non-MPI users too. :-)
Thanks,
-cdm
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Wednesda
Ralph,
Thanks for the feedback. Glad we are clearing these things up. :-)
So here's what osc mpiexec is doing now:
---
-pernode : allocate only one process per compute node
-npernode <n> : allocate no more than <n> processes per
compute node
---
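For comparison, the two allocation modes described above look like this on an osc mpiexec command line, alongside the Open MPI mpirun spellings discussed in this thread (treat the mpirun option names as a sketch of what was being requested, not a confirmed interface):

```shell
# osc mpiexec style, per the option summary above:
mpiexec -pernode ./mpi_hello        # exactly one process per node
mpiexec -npernode 2 ./mpi_hello     # at most 2 processes per node
# Requested Open MPI mpirun equivalents (assumed option names):
mpirun --pernode ./mpi_hello
mpirun --npernode 2 ./mpi_hello
```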
> Cdm> I think I originally requested the -pernode
Thanks for the feedback Ralph. Comments below.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Tuesday, November 28, 2006 3:27 PM
To: Open MPI Users
Subject: Re: [OMPI users] Pernode request
We already have --perno
I recently saw this on the mpiexec mailing list and pondered that this
would be a useful feature for Open MPI as well. :-)
I can't seem to enter a trac ticket and seem to be having issues w/ my
browser at the moment, but wanted to get this out there.
---
> > > 1) mpiexec already has "-pernode" but
I believe the quote regarding thunderbird on the following site is not
correct:
http://nowlab.cse.ohio-state.edu/projects/mpi-iba/
We do have mvapich installed on thunderbird, but I believe the quote
misleads people into believing mvapich was used to obtain our
recent top500 nu
Some more background information:
1) the environment is all run inside an initrd with a static pbs_mom.
2) the file we change in the torque distributions is:
torque-2.1.2/src/include/dis.h
---
255 /* NOTE: increase THE_BUF_SIZE to 131072 for systems > 5k nodes */
256
257 /* OLD: #define T
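The change described above amounts to editing one #define in dis.h. A sketch of the edit on a scratch copy (the real file is torque-2.1.2/src/include/dis.h; the pre-change value 65536 below is a placeholder, since the thread truncates the original define):

```shell
# Demonstrate the edit on a scratch copy of dis.h rather than the real tree.
mkdir -p /tmp/dis-demo
printf '#define THE_BUF_SIZE 65536\n' > /tmp/dis-demo/dis.h  # placeholder old value
# Bump THE_BUF_SIZE to 131072, as the NOTE in dis.h recommends for >5k nodes:
sed -i 's/#define THE_BUF_SIZE .*/#define THE_BUF_SIZE 131072/' /tmp/dis-demo/dis.h
grep THE_BUF_SIZE /tmp/dis-demo/dis.h
```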
How fast/well are MPI collectives implemented in Open MPI?
I'm running the Intel MPI 1.1 benchmarks and seeing the need to set
wall clock times > 12 hours for run sizes of 200 and 300 nodes for 1ppn
and 2ppn cases. The collective tests that usually pass in 2ppn cases:
Barrier, Reduce scatter, allred
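A sketch of how such a collectives run is typically invoked (benchmark names follow IMB conventions; the hostfile, node count, and 2 ppn layout are placeholders, and Allreduce is an illustration of IMB naming rather than the poster's truncated list):

```shell
# Hypothetical IMB collectives run: 300 nodes at 2 ppn = 600 ranks,
# selecting individual collective benchmarks by name.
mpirun -np 600 --hostfile nodes ./IMB-MPI1 Barrier Reduce_scatter Allreduce
```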
Has anyone ever seen this?
---
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
base/rmaps_base_node.c at line 153
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
rmaps_rr.c at line 270
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource
Hello,
I was wondering if Open MPI had -pernode-like behavior similar to osc
mpiexec:
mpiexec -pernode mpi_hello
would launch N MPI processes on N nodes ... no more, no less.
Open MPI will already try to run N*2 processes if you don't specify -np:
mpirun mpi_hello
Launches N*2