On 25.10.2012 at 10:37, Guillermo Marco Puche wrote:

> Hello!
>
> I found a new version of my tool which supports multi-threading, but also MPI
> or Open MPI for additional processes.
>
> I'm kind of new to MPI with SGE. What would be the right command for qsub, or
> configuration inside a job file, to ask SGE to run with 2 MPI processes?
>
> Will the following line work in an SGE job file?
>
> #$ -pe mpi 2
>
> That's supposed to make the job run with 2 processes instead of 1.
Not out of the box: it will grant 2 slots for the job according to the
allocation rules of the PE. But how you start your application inside the
granted allocation in the job script is up to you. Fortunately, the MPI
libraries nowadays get an (almost) automatic integration into queuing systems
without further user intervention. Which of the MPI libraries mentioned above
do you use when you compile your application?

-- Reuti

> Regards,
> Guillermo.
>
> On 22/10/2012 at 17:19, Reuti wrote:
>> On 22.10.2012 at 16:31, Guillermo Marco Puche wrote:
>>
>>> I'm using a program where I can specify the number of threads I want to use.
>>
>> Only threads and not additional processes? Then you are limited to one node,
>> unless you add something like
>> http://www.kerrighed.org/wiki/index.php/Main_Page or http://www.scalemp.com
>> to get a cluster-wide unique process and memory space.
>>
>> -- Reuti
>>
>>> I'm able to launch multiple instances of that tool on separate nodes.
>>> For example: job_process_00 on compute-0-0, job_process_01 on compute-1,
>>> etc. Each job calls the program, which splits into 8 threads (each
>>> of my nodes has 8 CPUs).
>>>
>>> When I set up 16 threads, I can't split them 8 per node. So I would like
>>> to split them between 2 compute nodes.
>>>
>>> Currently I have 4 compute nodes, and I would like to speed up the process
>>> by setting 16 threads for my program, split across more than one compute
>>> node. At the moment I'm stuck using only 1 compute node per process, with
>>> 8 threads.
>>>
>>> Thank you!
>>>
>>> Best regards,
>>> Guillermo.
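For illustration, a minimal job-script sketch of what "starting the application inside the granted allocation" can look like with a tightly integrated Open MPI. This assumes Open MPI was built with SGE support (configured with `--with-sge`), that a parallel environment named `mpi` exists on the cluster, and the binary name `./my_tool` is a placeholder:

```shell
#!/bin/sh
# Request 2 slots from the PE named "mpi". How the slots are spread
# across nodes depends on the PE's allocation_rule.
#$ -pe mpi 2
#$ -cwd

# With an SGE-aware Open MPI, mpirun picks up the granted slot count
# and host list (via $PE_HOSTFILE) from the environment, so neither
# -np nor a machinefile is strictly needed. The slot count can still
# be passed explicitly via the $NSLOTS variable SGE sets for the job:
mpirun -np "$NSLOTS" ./my_tool
```

With a plain multi-threaded (non-MPI) program, the same mechanism only helps if the PE's allocation_rule keeps all slots on one node (e.g. `$pe_slots`), since threads cannot span nodes.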
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
