Re: [OMPI users] ifort and gfortran module

2009-07-17 Thread Jim Kress
Why not generate an ifort version with a prefix of _intel and the gfortran version with a prefix of _gcc? That's what I do and then use mpi-selector to switch between versions as required. Jim -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Be
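A minimal sketch of that approach (the install prefixes, compiler names, and mpi-selector registration name below are examples, not Jim's actual setup):

    # Build one Open MPI tree per compiler, each under its own prefix
    ./configure --prefix=/opt/openmpi_intel CC=icc CXX=icpc F77=ifort FC=ifort
    make all install
    ./configure --prefix=/opt/openmpi_gcc CC=gcc CXX=g++ F77=gfortran FC=gfortran
    make all install

    # Switch between registered installations with mpi-selector (ships with OFED)
    mpi-selector --list
    mpi-selector --set openmpi_intel
    mpi-selector --query    # confirm which installation is currently the default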

Re: [OMPI users] [Open MPI Announce] Open MPI v1.3.3 released

2009-07-15 Thread Jim Kress
s > Subject: Re: [OMPI users] [Open MPI Announce] Open MPI v1.3.3 released > > I believe that was the intent, per other emails on that subject. > > However, I am not personally aware of people who have tested > it - though they may well exist. > > > > On Wed,

Re: [OMPI users] [Open MPI Announce] Open MPI v1.3.3 released

2009-07-15 Thread Jim Kress
> Does use of 1.3.3 require recompilation of applications that > were compiled using 1.3.2? > -Original Message- > From: users-boun...@open-mpi.org > [mailto:users-boun...@open-mpi.org] On Behalf Of jimkress_58 > Sent: Tuesday, July 14, 2009 3:05 PM > To: us...@open-mpi.org > Subject: R

[OMPI users] Infiniband requirements

2009-06-25 Thread Jim Kress
Is it correct to assume that, when one is configuring openmpi v1.3.2 and if one leaves out the --with-openib=/dir from the ./configure command line, that InfiniBand support will NOT be built into openmpi v1.3.2? Then, if an Ethernet network is present that connects all the nodes, openmpi will us
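As a side note, my understanding (worth checking against the README for your version) is that omitting --with-openib does not by itself disable InfiniBand: configure still probes the default system paths and builds the openib BTL if OFED is found. The flag mainly points configure at a non-default OFED location, or forces an error if support cannot be built. A hedged example, with placeholder paths:

    # Build against an OFED tree installed in a non-default location
    ./configure --prefix=/opt/openmpi-1.3.2 --with-openib=/usr/local/ofed
    # Explicitly disable OpenFabrics support
    ./configure --prefix=/opt/openmpi-1.3.2 --without-openib
    # After installing, check what was actually built
    ompi_info | grep openib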

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Jim Kress ORG
sed nor have I asked about the use of --enable-static for their 1.3.2 configurations. I will have to follow up on that. Jim On Wed, 2009-06-24 at 19:30 -0400, Gus Correa wrote: > Hi Jim > > Jim Kress ORG wrote: > > Hey Gus. I was correct. > > > > If I did: >

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Jim Kress ORG
stery solved. Thanks for your help. Jim On Wed, 2009-06-24 at 17:22 -0400, Gus Correa wrote: > Hi Jim > > > Jim Kress wrote: > > Noam, Gus and List, > > > > Did you statically link your openmpi when you built it? If you did (the > > default is NO

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Jim Kress ORG
ly, I have forgotten what I do with all the RPMs OFED generates. Do I install them all on my compute nodes or just a subset? Thanks for the help. Jim On Wed, 2009-06-24 at 17:22 -0400, Gus Correa wrote: > Hi Jim > > > Jim Kress wrote: > > Noam, Gus and List, > > >

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Jim Kress
Noam, Gus and List, Did you statically link your openmpi when you built it? If you did (the default is NOT to do this) then that could explain the discrepancy. Jim > -Original Message- > From: users-boun...@open-mpi.org > [mailto:users-boun...@open-mpi.org] On Behalf Of Noam Bernstein
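For anyone reproducing this check: Open MPI builds shared libraries by default, and static libraries only appear if requested at configure time. A sketch (the prefix is a placeholder, and this is not necessarily how the original builds were configured):

    # Default build: shared libraries, dynamically loaded components
    ./configure --prefix=/opt/openmpi-1.3.2
    # Static build: also produce static libraries, optionally dropping shared ones
    ./configure --prefix=/opt/openmpi-1.3.2 --enable-static --disable-shared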

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
until v1.3.2 -- if the > executable was compiled/linked against any version prior to that, it's > pure luck that it works with the 1.3.2 shared libraries at all. > > > On Jun 23, 2009, at 7:25 PM, Jim Kress ORG wrote: > > > This is what I get >

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
This is what I get [root@master ~]# ompi_info | grep openib MCA btl: openib (MCA v2.0, API v2.0, Component v1.3.2) [root@master ~]# Jim On Tue, 2009-06-23 at 18:51 -0400, Jeff Squyres wrote: > openib (OpenFabrics) plugin is installed > and at least marginally opera
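Having the openib component listed by ompi_info only shows it is installed, not that it is being used. One way to test (a sketch; the executable name and process count are placeholders) is to restrict the allowed BTLs on the mpirun command line and compare:

    # Abort unless InfiniBand (plus loopback and shared memory) can actually be used
    mpirun -np 8 --mca btl openib,self,sm ./my_app
    # Force TCP for comparison
    mpirun -np 8 --mca btl tcp,self,sm ./my_app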

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
.3.2, or > 1.2.8? I see a 1.2.8 in your app name, hence the question. > > This option only works with 1.3.2, I'm afraid - it was a new feature. > > Ralph > > On Jun 23, 2009, at 2:31 PM, Jim Kress ORG wrote: > > > Ralph, > > > > I did the following

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
. > > This option only works with 1.3.2, I'm afraid - it was a new feature. > > Ralph > > On Jun 23, 2009, at 2:31 PM, Jim Kress ORG wrote: > > > Ralph, > > > > I did the following: > > > > export OMPI_MCA_mpi_show_mca_params="

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
he MCA param that you think it is. Try adding: > > -mca mpi_show_mca_params file,env > > to your cmd line. This will cause rank=0 to output the MCA params it > thinks were set via the default files and/or environment (including > cmd line). > > Ralph > > On Jun
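In practice that looks like the following (application name and process count are placeholders):

    mpirun -np 4 -mca mpi_show_mca_params file,env ./my_app
    # rank 0 then reports every MCA parameter it picked up from
    # openmpi-mca-params.conf files and OMPI_MCA_* environment variables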

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress ORG
tting the MCA param that you think it is. Try adding: > > -mca mpi_show_mca_params file,env > > to your cmd line. This will cause rank=0 to output the MCA params it > thinks were set via the default files and/or environment (including > cmd line). > > Ralph > >

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress
hernet instead > of using Infiniband > > Assuming you aren't oversubscribing your nodes, set > mpi_paffinity_alone=1. > > BTW: did you set that mpi_show_mca_params option to ensure > the app is actually seeing these params? > > > > On Tue, Jun 23, 2009 at 1
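For reference, that parameter can be set per run or made persistent (a sketch, assuming the nodes are not oversubscribed; the application name is a placeholder):

    # Per run, on the mpirun command line
    mpirun -np 8 -mca mpi_paffinity_alone 1 ./my_app

    # Or persistently, in <prefix>/etc/openmpi-mca-params.conf
    mpi_paffinity_alone = 1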

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress
> > > Pavel Shamis (Pasha) wrote: > > Jim, > > Can you please share with us your mca conf file. > > > > Pasha. > > Jim Kress ORG wrote: > >> For the app I am using, ORCA (a Quantum Chemis

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Jim Kress
el Shamis (Pasha) > Sent: Tuesday, June 23, 2009 7:24 AM > To: Open MPI Users > Subject: Re: [OMPI users] 50% performance reduction due to > OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead > of using Infiniband > > Jim, > Can you please share with us your mca c

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
> -mca mpi_show_mca_params file,env > > to your cmd line. This will cause rank=0 to output the MCA params it > thinks were set via the default files and/or environment (including > cmd line). > > Ralph > > On Jun 22, 2009, at 6:14 PM, Jim Kress ORG wrote: >

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
gt; > to your cmd line. This will cause rank=0 to output the MCA params it > thinks were set via the default files and/or environment (including > cmd line). > > Ralph > > On Jun 22, 2009, at 6:14 PM, Jim Kress ORG wrote: > > > For the app I am using, ORCA (a Qu

[OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
For the app I am using, ORCA (a Quantum Chemistry program), when it was compiled using openMPI 1.2.8 and run under 1.2.8 with the following in the openmpi-mca-params.conf file: btl=self,openib the app ran fine with no traffic over my Ethernet network and all traffic over my Infiniband network. H
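For context, the configuration described above amounts to the following (the parameter value is taken from the post; the file lives under the install prefix by default):

    # <prefix>/etc/openmpi-mca-params.conf
    btl = self,openib

    # Equivalent one-off setting via the environment
    export OMPI_MCA_btl=self,openib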

Re: [OMPI users] single data / multiple processes

2009-01-03 Thread Jim Kress
Never mind. I figured it out for myself. Jim On Sat, 2009-01-03 at 13:51 -0500, Jim Kress wrote: > Hi Jody, > > I did not explain my problem very well. > > I have an application called mdrun. It was compiled and linked using > openMPI. I want to run mdrun on 8 node

Re: [OMPI users] single data / multiple processes

2009-01-03 Thread Jim Kress
ccess the data there. > > Jody > > > On Sat, Jan 3, 2009 at 4:13 PM, Jim Kress wrote: > > I need to use openMPI in a mode where the input and output data reside > > on one node of my cluster while all the other nodes are just used for > > computation and send dat

[OMPI users] single data / multiple processes

2009-01-03 Thread Jim Kress
I need to use openMPI in a mode where the input and output data reside on one node of my cluster while all the other nodes are just used for computation and send data to/from the head node. All I can find in the documentation are cases showing how to use openMPI for cases where input and output da
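The thread doesn't spell out the eventual fix, but one common pattern, assuming the application (mdrun here) reads and writes its files only from rank 0, is to list the node holding the data first in the hostfile so that rank 0 is placed there under Open MPI's default by-slot mapping. A sketch with placeholder hostnames and slot counts:

    # myhosts
    headnode  slots=1
    node01    slots=8
    node02    slots=8

    mpirun -np 17 --hostfile myhosts mdrun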