Re: [OMPI users] Bug MPI_Iscatter

2013-11-22 Thread George Bosilca
Pierre, On Nov 22, 2013, at 02:39, Pierre Jolivet wrote: > George, > I completely agree that the code I sent was a good example of what NOT to do > with collective and non-blocking communications, so I’m sending a better one. > 1. I’m setting MPI_DATATYPE_NULL only on non-root processes. …
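
For orientation, here is a minimal sketch of the pattern under discussion (not Pierre's actual code, which the archive truncates). Non-root ranks pass MPI_DATATYPE_NULL and a NULL buffer for the send arguments of MPI_Iscatter; whether an implementation may reject that, given that those arguments are significant only at the root, is the point of contention in this thread. The buffers are only touched again after MPI_Wait completes the request.

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
      int rank, size, recvbuf = -1;
      int *sendbuf = NULL;
      MPI_Request req;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) {
          /* send arguments are significant only at the root */
          sendbuf = malloc(size * sizeof(int));
          for (int i = 0; i < size; ++i) sendbuf[i] = i;
          MPI_Iscatter(sendbuf, 1, MPI_INT, &recvbuf, 1, MPI_INT,
                       0, MPI_COMM_WORLD, &req);
      } else {
          /* send arguments ignored here; MPI_DATATYPE_NULL is the case at issue */
          MPI_Iscatter(NULL, 0, MPI_DATATYPE_NULL, &recvbuf, 1, MPI_INT,
                       0, MPI_COMM_WORLD, &req);
      }

      MPI_Wait(&req, MPI_STATUS_IGNORE);  /* buffers must not be reused before this */
      printf("rank %d received %d\n", rank, recvbuf);
      free(sendbuf);                      /* free(NULL) is a no-op on non-root ranks */
      MPI_Finalize();
      return 0;
  }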

[OMPI users] CFP: 1st International Workshop on Cloud for Bio (C4Bio 2014)

2013-11-22 Thread Javier Garcia Blas
Dear Sir or Madam, (We apologize if you receive multiple copies of this message.) FIRST INTERNATIONAL WORKSHOP ON CLOUD FOR BIO (C4Bio), to be held as part of IEEE/ACM CCGrid 2014, Chicago, USA, May 26-29, 2014. http://www.arcos.inf.uc3

[OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Gans, Jason D
Hello, I would like to run an instance of my application on every *core* of a small cluster. I am using Torque 2.5.12 to run jobs on the cluster. The cluster in question is a heterogeneous collection of machines that are all past their prime. Specifically, the number of cores per node ranges from 2 to 8. …
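
A sketch of the usual Torque-side setup for this (hostnames, core counts, and the nodes-file path are illustrative, not taken from the report): declaring np equal to each machine's core count makes every host appear once per core in $PBS_NODEFILE, so a full-cluster job gets one slot per core.

  # <torque-spool>/server_priv/nodes (illustrative values)
  n0001 np=8
  n0002 np=4
  n0003 np=2

  # heterogeneous per-host requests can use Torque's '+' syntax, e.g.
  qsub -l nodes=n0001:ppn=8+n0002:ppn=4+n0003:ppn=2 job.sh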

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
Hi, On 22.11.2013 at 17:32, Gans, Jason D wrote: > I would like to run an instance of my application on every *core* of a small > cluster. I am using Torque 2.5.12 to run jobs on the cluster. The cluster in > question is a heterogeneous collection of machines that are all past their > prime. …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Jason Gans
On 11/22/13 10:47 AM, Reuti wrote: Hi, On 22.11.2013 at 17:32, Gans, Jason D wrote: I would like to run an instance of my application on every *core* of a small cluster. I am using Torque 2.5.12 to run jobs on the cluster. The cluster in question is a heterogeneous collection of machines …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
On 22.11.2013 at 18:56, Jason Gans wrote: > On 11/22/13 10:47 AM, Reuti wrote: >> Hi, >> >> On 22.11.2013 at 17:32, Gans, Jason D wrote: >> >>> I would like to run an instance of my application on every *core* of a >>> small cluster. I am using Torque 2.5.12 to run jobs on the cluster. …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Ralph Castain
Really shouldn't matter - this is clearly a bug in OMPI if it is doing the mapping as you describe. Out of curiosity, have you tried the 1.7 series? Does it behave the same? I can take a look at the code later today and try to figure out what happened. On Nov 22, 2013, at 9:56 AM, Jason Gans wrote: …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Gans, Jason D
I have tried the 1.7 series (specifically 1.7.3) and I get the same behavior. When I run "mpirun -oversubscribe -np 24 hostname", three instances of "hostname" are run on each node. The contents of the $PBS_NODEFILE are: n0007 n0006 n0005 n0004 n0003 n0002 n0001 n… but, since I have compiled …
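
If this Open MPI build lacks Torque (tm) support, which the truncated message seems to imply, one common workaround is to hand the PBS machinefile to mpirun explicitly so the per-host entries are honored (a sketch; the application name is a placeholder):

  mpirun --hostfile $PBS_NODEFILE -np 24 ./my_app

Without tm support or an explicit hostfile, mpirun may only know the node names and not how many slots each node provides, which would be consistent with the even three-per-node placement seen above.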

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Ralph Castain
On Nov 22, 2013, at 10:03 AM, Reuti wrote: > On 22.11.2013 at 18:56, Jason Gans wrote: > >> On 11/22/13 10:47 AM, Reuti wrote: >>> Hi, >>> >>> On 22.11.2013 at 17:32, Gans, Jason D wrote: >>> I would like to run an instance of my application on every *core* of a small cluster. …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Lloyd Brown
As far as I understand, mpirun will assign processes to hosts in the hostlist ($PBS_NODEFILE) sequentially, and if it runs out of hosts in the list, it starts over at the top of the file. Theoretically, you should be able to request specific hostnames, and the processor counts per hostname, in …
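
For reference, a hand-written Open MPI hostfile expresses exactly that (hostnames, counts, and the filename are illustrative); mpirun fills the listed slots in order before it starts oversubscribing:

  # myhosts
  n0001 slots=8
  n0002 slots=4
  n0003 slots=2

  mpirun --hostfile myhosts -np 14 ./my_app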

Re: [OMPI users] Request for help/suggestion

2013-11-22 Thread Reuti
Hi, On 20.11.2013 at 21:42, Venkat Reddy wrote: > Hi Team, > > I have compiled the OpenFOAM-1.7.1, OpenFOAM-2.2.1, and OpenFOAM-2.2.2 versions. > All versions show the same problem: sometimes I am able to run simpleFoam on > 8, 16, 32, 64, or 80 cores, but sometimes it hangs and no messages appear. > My …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Jason Gans
On 11/22/13 11:15 AM, Ralph Castain wrote: On Nov 22, 2013, at 10:03 AM, Reuti wrote: On 22.11.2013 at 18:56, Jason Gans wrote: On 11/22/13 10:47 AM, Reuti wrote: Hi, On 22.11.2013 at 17:32, Gans, Jason D wrote: I would like to run an instance of my …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Ralph Castain
On Nov 22, 2013, at 10:26 AM, Jason Gans wrote: > On 11/22/13 11:15 AM, Ralph Castain wrote: >> >> On Nov 22, 2013, at 10:03 AM, Reuti wrote: >> >>> On 22.11.2013 at 18:56, Jason Gans wrote: >>> On 11/22/13 10:47 AM, Reuti wrote: > Hi, > > On 22.11.2013 at 17:32, Gans, Jason D wrote: …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Jason Gans
On 11/22/13 11:18 AM, Lloyd Brown wrote: As far as I understand, mpirun will assign processes to hosts in the hostlist ($PBS_NODEFILE) sequentially, and if it runs out of hosts in the list, it starts over at the top of the file. Theoretically, you should be able to request specific hostnames …

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
On 22.11.2013 at 19:34, Jason Gans wrote: > On 11/22/13 11:18 AM, Lloyd Brown wrote: >> As far as I understand, mpirun will assign processes to hosts in the >> hostlist ($PBS_NODEFILE) sequentially, and if it runs out of hosts in >> the list, it starts over at the top of the file. >> >> Theoretically …

[OMPI users] open-mpi on Mac OS 10.9 (Mavericks)

2013-11-22 Thread Meredith, Karl
I recently upgraded my 2013 MacBook Pro (Retina display) from 10.8 to 10.9. I downloaded and installed openmpi-1.6.5 and compiled it with gcc 4.8 (gcc installed from MacPorts). Open MPI compiled and installed without error. However, when I try to run any of the example test cases, the code …
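
For anyone trying to reproduce this, a typical build-and-test sequence on that setup might look like the following sketch; the MacPorts compiler names (gcc-mp-4.8, g++-mp-4.8) and the install prefix are assumptions, not taken from the report:

  ./configure CC=gcc-mp-4.8 CXX=g++-mp-4.8 --prefix=$HOME/opt/openmpi-1.6.5
  make all && make install
  export PATH=$HOME/opt/openmpi-1.6.5/bin:$PATH
  mpicc examples/hello_c.c -o hello_c   # hello_c.c ships in the tarball's examples/
  mpirun -np 2 ./hello_c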

Re: [OMPI users] Bug MPI_Iscatter

2013-11-22 Thread Pierre Jolivet
George, On Nov 22, 2013, at 5:21 AM, George Bosilca wrote: > Pierre, > > On Nov 22, 2013, at 02:39, Pierre Jolivet wrote: > >> George, >> I completely agree that the code I sent was a good example of what NOT to do >> with collective and non-blocking communications, so I’m sending a better one. …

Re: [OMPI users] Bug MPI_Iscatter

2013-11-22 Thread George Bosilca
On Nov 23, 2013, at 01:18, Pierre Jolivet wrote: > George, > > On Nov 22, 2013, at 5:21 AM, George Bosilca wrote: >> Pierre, >> >> On Nov 22, 2013, at 02:39, Pierre Jolivet wrote: >> >>> George, >>> I completely agree that the code I sent was a good example of what NOT to >>> do with collective and non-blocking communications, …