[OMPI users] OpenMPI 1.10.5 oversubscribing cores

2017-09-08 Thread twurgl
I posted this question last year and we ended up not upgrading to the newer Open MPI. Now I need to move to Open MPI 1.10.5 and have the same issue. Specifically, with 1.4.2 I can run two 12-core jobs on a 24-core node and the processes bind to cores with only one process per core, i.e.
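
A minimal sketch of one way to keep two independent jobs on disjoint cores in the 1.8/1.10 series; the --cpu-set ranges below are assumptions for a 24-core node, not taken from the original post:

  # job 1: 12 ranks, bound one per core, restricted to cores 0-11
  mpirun -np 12 --bind-to core --cpu-set 0-11 ./app
  # job 2: same binding, restricted to cores 12-23
  mpirun -np 12 --bind-to core --cpu-set 12-23 ./app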

[OMPI users] Open MPI 1.8.8 and affinity

2016-01-15 Thread twurgl
In the past (v1.6.4 and earlier) we used the mpirun args --mca mpi_paffinity_alone 1 --mca btl openib,tcp,sm,self with LSF 7.0.6, and this was enough to keep cores from being oversubscribed when submitting two or more jobs to the same node. Now I am using 1.8.8 and thus far don't have the right combination of ar
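
In the 1.8 series, mpi_paffinity_alone was retired in favor of explicit binding options. A hedged sketch of the roughly equivalent invocation (-np 12 and ./app are placeholders):

  # bind one process per core and print the resulting bindings for verification
  mpirun -np 12 --bind-to core --report-bindings \
         --mca btl openib,tcp,sm,self ./app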

[OMPI users] Why are the static libs different if compiled with or without dynamic switch?

2015-02-24 Thread twurgl
I am setting up Open MPI 1.8.4. The first time I compiled, I had the following:

  version=1.8.4.I1404211913
  ./configure \
    --disable-vt \
    --prefix=/apps/share/openmpi/$version \
    --disable-shared \
    --enable-static \
    --with-verbs \
    --enable-mpirun-prefix-by-default \
    --with
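
One way to see why the archives differ: when shared libraries are also built, the objects that land in libmpi.a are typically compiled with -fPIC, so member lists and sizes change. A sketch, with build-static/ and build-both/ as hypothetical install trees for the two configurations:

  # compare the member lists of the two builds' static archives
  ar t build-static/lib/libmpi.a | sort > static.txt
  ar t build-both/lib/libmpi.a   | sort > both.txt
  diff static.txt both.txt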

Re: [OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread twurgl
I see. My one script for all clusters calls mpirun --mca btl openib,mx,gm,tcp,sm,self, so I'd need to add some logic above the mpirun line to figure out which cluster I am on and set up the correct mpirun line. It still seems like I should be able to keep the mpirun line I have and just have it tell me wh
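
A sketch of the kind of logic the script would need; the hostname prefixes are hypothetical stand-ins for the actual cluster naming scheme:

  # pick the BTL list from the node's hostname, then launch as before
  case "$(hostname)" in
    mx*) BTL=mx,tcp,sm,self ;;
    gm*) BTL=gm,tcp,sm,self ;;
    ib*) BTL=openib,tcp,sm,self ;;
    *)   BTL=tcp,sm,self ;;
  esac
  mpirun --mca btl $BTL ./app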

[OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread twurgl
I configure Open MPI (1.3.3, and previous versions as well) so that one executable can run on any cluster we have. I used: ./configure --with-mx --with-openib --with-gm At the end of the day, the same executable does run on any of the clusters. The question I have is: When, for
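
To see which interconnect a run actually picked, raising BTL verbosity is one option. A sketch (the verbosity level and ./app are assumptions):

  # btl_base_verbose makes each process report which BTL components it selects
  mpirun --mca btl openib,mx,gm,tcp,sm,self \
         --mca btl_base_verbose 30 -np 2 ./app 2>&1 | grep btl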

[OMPI users] locked memory problem

2008-06-11 Thread twurgl
I get the locked memory error as follows:

  *** An error occurred in MPI_Init
  *** before MPI was initialized
  *** MPI_ERRORS_ARE_FATAL (goodbye)
  [node10:10395] [0,0,0]-[0,1,6] mca_oob_tcp_msg_recv: readv f
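
The usual fix is to raise the locked-memory (memlock) limit on every compute node, e.g. in /etc/security/limits.conf; a sketch (takes effect on next login, and the daemon that launches the MPI processes must inherit the limit too):

  *  soft  memlock  unlimited
  *  hard  memlock  unlimited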

[OMPI users] ulimit question from video open-fabrics-concepts...

2008-05-29 Thread twurgl
Hi, I am watching one of your MPI instructional videos and have a question. You said to make sure the registered memory ulimit is set to unlimited. I typed the command "ulimit -a" and don't see a registered memory entry. Is this maybe the same as "max locked memory"? Or can you tell me where to check
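
On Linux, "max locked memory" in ulimit -a is indeed the limit in question; the -l flag queries it directly. A quick check:

  # print the current locked-memory limit (in KB), then raise it for this shell
  ulimit -l
  ulimit -l unlimited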

Re: [OMPI users] Open MPI instructional videos

2008-05-28 Thread twurgl
Jeff, I started viewing some of these. I think this is great stuff. thanks!

Re: [OMPI users] Can't get OPENMPI to run parallel job with Myrinet/GM

2008-02-19 Thread twurgl
Would you be able to send me the mpirun command and args that you use? How can I get more output to study? I added "--display-map -d -v" to my mpirun command, which gives more output, but not the reason for the failure.
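
A sketch of how one might get more diagnostic output than --display-map alone; the verbosity level here is an assumption:

  # per-framework verbosity plus component load errors often reveal why a
  # BTL (here gm) fails to initialize
  mpirun --mca btl gm,self --mca btl_base_verbose 20 \
         --mca mca_component_show_load_errors 1 -np 2 ./app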