If that is what you are trying to do, mpirun will do it just fine too - it 
doesn't have to be an MPI program.
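For instance (program path and hostfile name are placeholders), launching several independent copies of a serial binary looks just like launching an MPI job:

```shell
# Launch 4 copies of a serial program (no MPI calls needed) on the
# machines listed in a hostfile; each copy runs independently.
mpirun -np 4 --hostfile my_hosts /path/to/serial_program

# mpirun can also run ordinary commands, e.g. one `hostname` per slot:
mpirun -np 4 hostname
```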

On Oct 19, 2011, at 3:37 PM, Gus Correa wrote:

> Jorge
> 
> Besides what Reuti and Eugene said, in case what you're looking for
> is a mechanism to launch several copies of a
> serial [non-parallel] program in a cluster,
> you could try these alternatives:
> 
> 1) Launch several jobs to run the same program,
> using a job scheduler like Torque or Grid Engine.
> 
> http://www.adaptivecomputing.com/resources/docs/torque/
> http://www.adaptivecomputing.com/resources/downloads.php
> [Torque may be available through your Linux package manager: yum,
> apt-get, etc.]
> 
> http://gridscheduler.sourceforge.net/
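To sketch option 1, a minimal Torque/PBS job script might look like this (job name, resource request, and program path are made-up examples):

```shell
#!/bin/bash
#PBS -N serial_job        # job name
#PBS -l nodes=1:ppn=1     # one core per copy of the program
#PBS -j oe                # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"       # run from the directory where qsub was called
./serial_program input.dat
```

You can submit many copies with repeated `qsub job.pbs` calls, or as a Torque job array (`qsub -t 1-10 job.pbs`), where each copy can pick its own input based on `$PBS_ARRAYID`.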
> 
> 
> 2) Use a distributed/parallel shell like pdsh, tentakel, etc.,
> to launch many copies of the serial program:
> 
> http://sourceforge.net/projects/pdsh/
> https://computing.llnl.gov/linux/pdsh.html
> http://freshmeat.net/projects/tentakel/
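For option 2, pdsh runs the same command on a set of nodes in parallel (the node names below are invented):

```shell
# Run the program on nodes node01..node08 in parallel;
# -w selects the target hosts, and output is prefixed per node.
pdsh -w node[01-08] /path/to/serial_program

# dshbak (shipped with pdsh) folds identical output per host group:
pdsh -w node[01-08] uname -r | dshbak -c
```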
> 
> Some of these items may already be installed on your cluster.
> 
> My two cents.
> Gus Correa
> 
> Reuti wrote:
>> Hi,
>> Am 19.10.2011 um 17:57 schrieb Jorge Jaramillo:
>>> Hello everyone, I have a question about how to execute a parallel 
>>> application on a cluster. I used 'mpirun' to execute some applications and 
>>> they worked, but I guess this command is only useful with MPI applications.
>> Correct.
>>> My question is, How do I execute a program that has no MPI statements on 
>>> the cluster?
>> "On the cluster" could also mean "how to submit a job to a cluster, which 
>> would then run locally on a granted machine". But from the context I think 
>> you mean that you have just a bunch of machines with only MPI installed.
>>> If it is not possible, how do I change the structure of the program so it 
>>> can be executed as a parallel application?
>> This depends on the application: sometimes you can just parallelize some 
>> loops; in other cases you have to replace the algorithm with one that can 
>> easily be parallelized; maybe the data structure needs to be changed and 
>> you have to think about how to distribute the data to the nodes, ...
>> It might also be that OpenMP (which works only within one and the same 
>> machine) will give you a parallel version faster. http://openmp.org/wp/ 
>> Nowadays many compilers support it. Nevertheless, you have to touch the 
>> application by hand and modify the source.
>> -- Reuti
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users

