Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-27 Thread tmishima
Hi, Here is a very simple patch, but Ralph might have a different idea, so I'd like him to decide how to treat it. As far as I have checked, I believe it has no side effects. (See attached file: patch.bind-to-none) Tetsuya > Hi, > > On 27.08.2014 at 09:57, Tetsuya Mishima wrote: > > > Hi Reuti and R

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-27 Thread Reuti
Hi, On 27.08.2014 at 09:57, Tetsuya Mishima wrote: > Hi Reuti and Ralph, > > What do you think about accepting the bind-to none option even when the pe=N option > is provided? > > like this: > mpirun -map-by slot:pe=N -bind-to none ./inverse Yes, this would be OK to cover all cases. -- Reuti > If

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-27 Thread Tetsuya Mishima
Hi Reuti and Ralph, What do you think about accepting the bind-to none option even when the pe=N option is provided? Like this: mpirun -map-by slot:pe=N -bind-to none ./inverse If yes, it's easy for me to make a patch. Tetsuya Tetsuya Mishima tmish...@jcity.maeda.co.jp
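
For context, a minimal sketch of the invocation being proposed here, with hypothetical values (4 OpenMP threads per MPI rank, so pe=4; the counts would have to match the granted allocation):

    # hypothetical: reserve 4 slots per rank, but leave thread placement to the
    # OS / OpenMP runtime instead of binding the rank to those cores
    export OMP_NUM_THREADS=4
    mpirun -map-by slot:pe=4 -bind-to none ./inverse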

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-25 Thread Reuti
On 21.08.2014 at 16:50, Reuti wrote: > On 21.08.2014 at 16:00, Ralph Castain wrote: > >> >> On Aug 21, 2014, at 6:54 AM, Reuti wrote: >> >>> On 21.08.2014 at 15:45, Ralph Castain wrote: >>> On Aug 21, 2014, at 2:51 AM, Reuti wrote: > On 20.08.2014 at 23:16, Ralph Cas

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
On 21.08.2014 at 16:50, Reuti wrote: > On 21.08.2014 at 16:00, Ralph Castain wrote: > >> >> On Aug 21, 2014, at 6:54 AM, Reuti wrote: >> >>> On 21.08.2014 at 15:45, Ralph Castain wrote: >>> On Aug 21, 2014, at 2:51 AM, Reuti wrote: > On 20.08.2014 at 23:16, Ralph Cas

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
On 21.08.2014 at 16:00, Ralph Castain wrote: > > On Aug 21, 2014, at 6:54 AM, Reuti wrote: >> On 21.08.2014 at 15:45, Ralph Castain wrote: >> >>> On Aug 21, 2014, at 2:51 AM, Reuti wrote: >>> On 20.08.2014 at 23:16, Ralph Castain wrote: > > On Aug 20, 2014, at 11:16

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Ralph Castain
On Aug 21, 2014, at 6:54 AM, Reuti wrote: > On 21.08.2014 at 15:45, Ralph Castain wrote: > >> On Aug 21, 2014, at 2:51 AM, Reuti wrote: >> >>> On 20.08.2014 at 23:16, Ralph Castain wrote: >>> On Aug 20, 2014, at 11:16 AM, Reuti wrote: > On 20.08.2014 at 19:05

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
On 21.08.2014 at 15:45, Ralph Castain wrote: > On Aug 21, 2014, at 2:51 AM, Reuti wrote: > >> On 20.08.2014 at 23:16, Ralph Castain wrote: >> >>> >>> On Aug 20, 2014, at 11:16 AM, Reuti wrote: >>> On 20.08.2014 at 19:05, Ralph Castain wrote: >> >> Aha, this is quite in

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Ralph Castain
On Aug 21, 2014, at 2:51 AM, Reuti wrote: > On 20.08.2014 at 23:16, Ralph Castain wrote: > >> >> On Aug 20, 2014, at 11:16 AM, Reuti wrote: >> >>> On 20.08.2014 at 19:05, Ralph Castain wrote: >>> > > Aha, this is quite interesting - how do you do this: scanning the > /proc/<pid>/

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
sheds some light on it. -- Reuti > can I set a maximum number of threads in the queue one.q (e.g. 15) and > change the number in the 'export' for my convenience? > > I feel like a child hearing the adults speaking. > Thanks, I'm learning a lot >

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
>>> -nostdin -V compute-1-12.local PATH=/opt/openmpi/bin:$PATH ; exp >>>>>>>>> 17802 ? Sl 0:00 \_ /opt/gridengine/bin/linux-x64/qrsh -inherit >>> -nostdin -V compute-1-13.local PATH=/opt/openmpi/bin:$PATH ; exp >>>>>>>>> 17803 ? Sl 0

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-21 Thread Reuti
On 20.08.2014 at 23:16, Ralph Castain wrote: > > On Aug 20, 2014, at 11:16 AM, Reuti wrote: >> On 20.08.2014 at 19:05, Ralph Castain wrote: >> Aha, this is quite interesting - how do you do this: scanning the /proc/<pid>/status or the like? What happens if you don't find enough fr

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread tmishima
in the 'export' for my convenience? > > I feel like a child hearing the adults speaking. > Thanks, I'm learning a lot > > > Oscar Fabian Mojica Ladino > Geologist M.S. in Geophysics > > > > From: re...@staff.uni-marburg.de > > Date: Tue, 19 Aug 2014

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread tmishima
threads, i.e. > > OMP_NUM_THREADS must be defined by "qsub -v ..." outside of the jobscript > > (tricky scanning > >>> of the submitted jobscript for OMP_NUM_THREADS would be too nasty) > >>>> - limits to use inside the jobscript calls to libraries behaving i
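
A sketch of the submission style described here, with a hypothetical PE name (orte) and hypothetical slot/thread counts; the point is that OMP_NUM_THREADS is passed on the qsub line rather than set inside the jobscript:

    # 64 slots requested from the queuing system, 8 OpenMP threads per rank
    qsub -pe orte 64 -v OMP_NUM_THREADS=8 job.sh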

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Ralph Castain
On Aug 20, 2014, at 11:16 AM, Reuti wrote: > On 20.08.2014 at 19:05, Ralph Castain wrote: > >>> >>> Aha, this is quite interesting - how do you do this: scanning the >>> /proc/<pid>/status or the like? What happens if you don't find enough free >>> cores as they are used up by other applications al

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Reuti
On 20.08.2014 at 19:05, Ralph Castain wrote: >> >> Aha, this is quite interesting - how do you do this: scanning the >> /proc/<pid>/status or the like? What happens if you don't find enough free >> cores as they are used up by other applications already? >> > > Remember, when you use mpirun to launc

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Oscar Mojica
I'm learning a lot. Oscar Fabian Mojica Ladino Geologist M.S. in Geophysics > From: re...@staff.uni-marburg.de > Date: Tue, 19 Aug 2014 19:51:46 +0200 > To: us...@open-mpi.org > Subject: Re: [OMPI users] Running a hybrid MPI+openMP program > > Hi, > > On 19.08.2014

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Ralph Castain
>>>>>> queuing system should get a proper request for the overall amount of >>>> slots the user needs. For now this will be forwarded to Open MPI and it >>>> will use this >>>>>> information to start the appropriate number of processes (which was an >>

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Reuti
should the generated list of machines be >>> adjusted; there are several options: >>>>>> >>>>>> a) The PE of the queuing system should do it: >>>>>> >>>>>> + a one-time setup for the admin >>>>>> + in SGE the

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Ralph Castain
tricky scanning >>>> of the submitted jobscript for OMP_NUM_THREADS would be too nasty) >>>>> - limits to use inside the jobscript calls to libraries behaving in >> the same way as Open MPI only >>>>> >>>>> >>>>> b) The

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Reuti
having a mixture of > users and jobs a different >>> handling would be necessary to handle this in a proper way IMO: >>>>>> >>>>>> a) having a PE with a fixed allocation rule of 8 >>>>>> >>>>>> b) requesting this PE
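
A rough sketch of option a), a PE with a fixed allocation rule (all names and slot totals here are hypothetical; shown for exactly 8 slots per host as in the example above, remaining fields left at their defaults):

    # hypothetical PE definition, e.g. entered via: qconf -ap hybrid8
    pe_name            hybrid8
    slots              999
    allocation_rule    8
    control_slaves     TRUE
    job_is_first_task  FALSE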

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Ralph Castain
>>>> c) The user should do it >>>> >>>> + no change in the SGE installation >>>> - each and every user must include it in all the jobscripts to adjust > the list and export the pointer to the $PE_HOSTFILE, but he could change it > forth and b
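
A rough sketch of what option c) could look like inside a jobscript, assuming the second column of $PE_HOSTFILE is the slot count per host and divides evenly by the thread count (the rewritten file name is hypothetical):

    # shrink each host's slot count by the per-rank thread count and point the
    # SGE/Open MPI integration at the rewritten copy
    awk -v t="$OMP_NUM_THREADS" '{ $2 = $2 / t; print }' "$PE_HOSTFILE" > "$TMPDIR/pe_hostfile"
    export PE_HOSTFILE="$TMPDIR/pe_hostfile"
    mpirun ./inverse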

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread tmishima
>> d) Open MPI should do it > >> > >> + no change in the SGE installation > >> + no change to the jobscript > >> + OMP_NUM_THREADS can be altered for different steps of the jobscript while staying inside the granted allocation automatically > >> o sh
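
A sketch of the workflow option d) aims at, combined with the pe=N mapping discussed earlier in the thread (the thread counts and step binaries are hypothetical):

    # the granted allocation stays fixed; only the threads-per-rank split changes
    export OMP_NUM_THREADS=8
    mpirun -map-by slot:pe=$OMP_NUM_THREADS ./step1
    export OMP_NUM_THREADS=4
    mpirun -map-by slot:pe=$OMP_NUM_THREADS ./step2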

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Reuti
jobscript >> + OMP_NUM_THREADS can be altered for different steps of the jobscript while >> staying inside the granted allocation automatically >> o should MKL_NUM_THREADS be covered too (does it use OMP_NUM_THREADS >> already)? >> >> -- Reuti >>

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-20 Thread Tetsuya Mishima
automatically > o should MKL_NUM_THREADS be covered too (does it use OMP_NUM_THREADS already)? > > -- Reuti > > >> echo "PE_HOSTFILE:" >> echo $PE_HOSTFILE >> echo >> echo "cat PE_HOSTFILE:" >> cat $PE_HOSTFILE >> >> Thanks fo
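
For reference, each line of the $PE_HOSTFILE printed above has the form "hostname slots queue processor-range"; a hypothetical entry matching the hosts and queue mentioned in this thread might look like:

    compute-1-12.local 16 one.q@compute-1-12.local UNDEFINED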

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-19 Thread Reuti
-inherit -nostdin > > > -V compute-1-13.local PATH=/opt/openmpi/bin:$PATH ; exp > > > 17803 ? Sl 0:00 \_ /opt/gridengine/bin/linux-x64/qrsh -inherit -nostdin > > > -V compute-1-14.local PATH=/opt/openmpi/bin:$PATH ; exp > > > 17804 ? Sl 0:00 \_ /opt/gridengi

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-19 Thread Oscar Mojica
mails, your advice has been very useful. PS: The version of SGE is OGS/GE 2011.11p1 Oscar Fabian Mojica Ladino Geologist M.S. in Geophysics > From: re...@staff.uni-marburg.de > Date: Fri, 15 Aug 2014 20:38:12 +0200 > To: us...@open-mpi.org > Subject: Re: [OMPI users] Running a hybrid M

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-15 Thread Reuti
-inherit -nostdin -V compute-1-4.local > PATH=/opt/openmpi/bin:$PATH ; expo > 17826 ? R 31:36 \_ ./inverse.exe > 3429 ? Ssl 0:00 automount --pid-file /var/run/autofs.pid > > So the job is using the 10 machines; up to here everything is all right. OK.
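
A listing like the one quoted above can be reproduced on a node with something along these lines (exact fields vary by ps version; the grep pattern is an assumption based on the process names shown):

    ps -e f -o pid,stat,time,args | grep -E 'qrsh|inverse'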

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-15 Thread Oscar Mojica
> Date: Thu, 14 Aug 2014 23:54:22 +0200 > To: us...@open-mpi.org > Subject: Re: [OMPI users] Running a hybrid MPI+openMP program > > Hi, > > I think this is a broader issue in case an MPI library is used in conjunction > with threads while running inside a queuing system.

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-14 Thread Reuti
machines trying to gain more time. The > results were the same in both cases. In the last case I could verify that the > processes were distributed to all machines correctly. > > What must I do? > Thanks > > Oscar Fabian Mojica Ladino > Geologist M.S. in Geophysics >

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-14 Thread Oscar Mojica
Aug 2014 10:10:17 -0400 > From: maxime.boissonnea...@calculquebec.ca > To: us...@open-mpi.org > Subject: Re: [OMPI users] Running a hybrid MPI+openMP program > > Hi, > You DEFINITELY need to disable Open MPI's new default binding. Otherwise, > your N threads will run on a s

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-14 Thread Maxime Boissonneault
Hi, You DEFINITELY need to disable Open MPI's new default binding. Otherwise, your N threads will run on a single core. --bind-to socket would be my recommendation for hybrid jobs. Maxime On 2014-08-14 10:04, Jeff Squyres (jsquyres) wrote: I don't know much about OpenMP, but do you need t
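
One way to follow this recommendation, assuming the ./inverse.exe binary from earlier in the thread:

    # each rank is confined to one socket; its OpenMP threads can use all of
    # that socket's cores instead of piling onto a single core
    mpirun --bind-to socket ./inverse.exe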

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-14 Thread Reuti
Hi, On 14.08.2014 at 15:50, Oscar Mojica wrote: > I am trying to run a hybrid MPI + OpenMP program on a cluster. I created a > queue with 14 machines, each one with 16 cores. The program divides the work > among the 14 processors with MPI and within each processor a loop is also > divided in

Re: [OMPI users] Running a hybrid MPI+openMP program

2014-08-14 Thread Jeff Squyres (jsquyres)
I don't know much about OpenMP, but do you need to disable Open MPI's default bind-to-core functionality (I'm assuming you're using Open MPI 1.8.x)? You can try "mpirun --bind-to none ...", which will have Open MPI not bind MPI processes to cores, which might allow OpenMP to think that it can us
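
A minimal sketch of this suggestion, assuming the 14-rank, 16-cores-per-node setup described earlier in the thread:

    # no binding at all: the OS and the OpenMP runtime decide where threads run
    export OMP_NUM_THREADS=16
    mpirun --bind-to none -np 14 ./inverse.exe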