From: Craig Tierney
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
To: Open MPI Users
Message-ID: <4a7b612c.8070...@noaa.gov>
Content-Type: text/plain; charset=ISO-8859-1

A followup:
Part of the problem was affinity. I had written a script to do processor [...]
Date: Fri, 07 Aug 2009 07:12:45 -0600
From: Craig Tierney
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
To: Open MPI Users
Message-ID: <4a7c284d.3040...@noaa.gov>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Terry Dontje wrote, 08/07/2009 04:43 AM, to Open MPI Users:
> Gus Correa wrote:
>> Hi Craig, list [...]
> [...]t work' in most cases.
> thanks,
> --td

I will try the above options and get back to you.

Craig
From: Terry Dontje
Sent by: users-boun...@open-mpi.org
Date: 08/07/2009 05:15 PM
Please respond to: Open MPI Users
To: us...@open-mpi.org
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB

Hi Neeraj,

Were there specific collectives that were slower? Also, what kind of cluster
were you running on? How many nodes and cores per node?

thanks,
--td
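One way to answer that is to run a collectives microbenchmark under both
stacks on the same nodes. A minimal sketch, assuming the Intel MPI Benchmarks
(IMB-MPI1) have been built once against each MPI; the rank count is only
illustrative:

  # Open MPI build
  mpirun -np 64 ./IMB-MPI1 Allreduce Alltoall Bcast

  # then repeat with the MVAPICH2-built binary, launched with that
  # stack's own mpirun/mpiexec

Comparing the per-message-size timings from the two runs should show whether
the gap sits in one particular collective or across the board.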
Message: 3
Date: Fri, 7 Aug 2009 16:51:05 +0530
From: nee...@crlindia.com
Subject: Re: [OMPI users] Performance question about OpenMPI [...]

Gus Correa wrote:
> Hi Craig, list
>
> I suppose WRF uses MPI collective calls (MPI_Reduce,
> MPI_Bcast, MPI_Alltoall etc), [...]
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB

Craig,

Did your affinity script bind the processes per socket or linearly to
cores? If the former, you'll want to look at using rankfiles and placing the
ranks based on sockets. We have found this especially useful [...]
[...] 100 --mca coll_sm_priority 100". We
would be very interested in any results you get (failures, improvements,
non-improvements).

thanks,
--td
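For reference, a rankfile that spreads the ranks round-robin across the two
sockets of each node might look roughly like the following (hostnames,
socket/core numbering, and the application name are only illustrative; the
exact syntax is described on the Open MPI mpirun man page):

  rank 0=node01 slot=0:0
  rank 1=node01 slot=1:0
  rank 2=node01 slot=0:1
  rank 3=node01 slot=1:1
  rank 4=node02 slot=0:0
  ...

launched together with the collective settings mentioned above, e.g.:

  mpirun -np 64 -rf myrankfile --mca coll_sm_priority 100 ./wrf.exe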
Message: 4
Date: Thu, 06 Aug 2009 17:03:08 -0600
From: Craig Tierney
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB

Gus Correa wrote:
> Hi Craig, list
>
> I suppose WRF uses MPI collective calls (MPI_Reduce,
> MPI_Bcast, MPI_Alltoall etc),
> just like the climate models we run here do.
> A recursive grep on the source [...]
Craig,

Let me look at your script, if you'd like... I may be able to help
there. I've also been seeing some "interesting" results for WRF on
OpenMPI, and we may want to see if we're taking complementary approaches...

Gerry

Craig Tierney wrote:
> A followup:
> Part of the problem was affinity.
Gus Correa wrote:
> Hi Craig, list
>
> I suppose WRF uses MPI collective calls (MPI_Reduce,
> MPI_Bcast, MPI_Alltoall etc),
> just like the climate models we run here do.
> A recursive grep on the source code will tell.
>
I will check this out. I am not the WRF expert, but
I was under the impression [...]
A followup:
Part of the problem was affinity. I had written a script to do processor
and memory affinity (which works fine with MVAPICH2). It is an
idea that I got from TACC. However, the script didn't seem to
work correctly with OpenMPI (or I still have bugs).
Setting --mca mpi_paffinity_alone [...]
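For what it's worth, the parameter Craig mentions is the simplest way to let
Open MPI do the binding itself instead of an external wrapper script. A
hedged example (the rank count and binary name are made up):

  mpirun -np 64 --mca mpi_paffinity_alone 1 ./wrf.exe

With mpi_paffinity_alone set to 1, Open MPI binds each local rank to a
processor on its own, so a taskset/numactl wrapper should not be layered on
top of it at the same time.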
Hi Craig, list
I suppose WRF uses MPI collective calls (MPI_Reduce,
MPI_Bcast, MPI_Alltoall etc),
just like the climate models we run here do.
A recursive grep on the source code will tell.
If that is the case, you may need to tune the collectives dynamically.
We are experimenting with tuned collectives [...]
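To check the first point, a recursive grep over the WRF source tree along
the lines Gus suggests is enough (the source directory name is illustrative):

  grep -ril 'mpi_\(allreduce\|alltoall\|bcast\|reduce\|gather\)' WRFV3/

and the tuned-collective experiments he mentions are usually driven through
the coll_tuned MCA parameters, for example (the rules file name is made up):

  mpirun -np 64 \
      --mca coll_tuned_use_dynamic_rules 1 \
      --mca coll_tuned_dynamic_rules_filename ./coll_rules.conf \
      ./wrf.exe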
I am running openmpi-1.3.3 on my cluster which is using
OFED-1.4.1 for Infiniband support. I am comparing performance
between this version of OpenMPI and MVAPICH2, and seeing a
very large difference in performance.
The code I am testing is WRF v3.0.1. I am running the
12km benchmark.
The two bu[...]
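When chasing a gap this large it is also worth confirming that the Open MPI
run really is using the openib BTL rather than falling back to TCP. One
quick, if blunt, check (rank count and binary name are illustrative):

  mpirun -np 64 --mca btl openib,sm,self ./wrf.exe

With the BTL list restricted this way, the job errors out instead of
silently running over Ethernet if the InfiniBand ports cannot be used.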
On Feb 19, 2007, at 1:53 PM, Mark Kosmowski wrote:
[snipped good description of cluster]
Sorry for the delay in replying -- traveling for a week-long OMPI
developer meeting and trying to get v1.2 out the door has sucked up
all of our time recently. :-(
For just the one system with two [...]
Dear OMPI Community:

I have a modest personal cluster (3 nodes, 6 Opteron processors - all
single core; two are 242's and four are 844's - each machine has 4 GB of
RAM) over gigabit (unmanaged switch) that I put together to run
computational chemistry projects for my doctoral studies. I'm using
the [...]
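For a small cluster like that, the usual starting point is a hostfile that
lists each box with its slot count, for example (hostnames are made up, and
two CPUs per box is an assumption):

  node1 slots=2
  node2 slots=2
  node3 slots=2

  mpirun -np 6 --hostfile myhosts ./my_chem_app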