On Sep 15, 2006, at 10:36 AM, imran shaik wrote:
Can you elaborate on this?
I have a few doubts as well:
1) Does the Open MPI runtime support SGE? Does it use SGE instead of its own
runtime when it finds SGE running?
It's a difficult question if you expect an answer describing the deep
internals of the Open MPI implementation. Let's just say, from a high-level
point of view, that the MPI runtime detects SGE and uses it to start the
MPI job.
2) Is it possible to check point and run MPI jobs?
Not with the released version. It's still a work in progress.
Eventually it will be one of the features of Open MPI, but not before
SC2006.
3) Is it possible to add and remove processes dynamically from the
MPI communicator?
Open MPI is MPI-2 compliant, so it supports dynamic processes.
There is a FAQ on the web site on how to do it.
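For example, a parent program can spawn extra processes at run time with
MPI_Comm_spawn. Here is a minimal sketch (my own example, not from the FAQ;
the "./worker" executable name is just an assumption):

/* parent.c -- minimal sketch of MPI-2 dynamic process creation. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;

    MPI_Init(&argc, &argv);

    /* Spawn 4 extra processes running the hypothetical "./worker" binary.
     * The resulting intercommunicator connects parent and children. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

    /* ... exchange messages over the intercommunicator ... */

    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}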
5) When do we actually need many different communicators?
It depends on what you plan to do. Usually, from the programmer's point
of view, using multiple communicators makes the code more readable, since
they give you a logical view of the messages in transit. But it is not a
requirement: one can write a million-line MPI application that only uses
MPI_COMM_WORLD.
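As an illustration (my own sketch, not part of the original answer),
MPI_Comm_split can carve MPI_COMM_WORLD into per-row communicators of a
hypothetical 2-D process grid, so that row-wise traffic stays logically
separate:

/* comm_split.c -- split MPI_COMM_WORLD into row communicators. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, row_rank;
    const int ncols = 4;      /* assumed width of the process grid */
    MPI_Comm row_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Processes with the same "color" (row index) land in the same new
     * communicator; collectives on row_comm stay within that row. */
    MPI_Comm_split(MPI_COMM_WORLD, rank / ncols, rank, &row_comm);

    MPI_Comm_rank(row_comm, &row_rank);
    printf("world rank %d is rank %d in its row\n", rank, row_rank);

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
}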
4) Is MPI only suitable for low latency communication in a cluster
environment?
MPI was designed as a programming paradigm. It lets you express
parallel algorithms based on communications between peers. These
communications can be point-to-point or collective. The goal is
wider than just low-latency communication; as an example, the standard
allows you to describe the memory layout of the data involved in a
communication. The MPI Forum has the full documentation about all the
features of the MPI-2 standard.
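To illustrate that point about memory layout (again a sketch of my own,
assuming a row-major 4x4 matrix and at least two processes), a derived
datatype built with MPI_Type_vector sends one strided column without
packing it into a contiguous buffer first:

/* datatype.c -- send one column of a row-major matrix with a derived type. */
#include <mpi.h>

int main(int argc, char **argv)
{
    double a[4][4] = {{0}};   /* column elements are 4 doubles apart */
    MPI_Datatype column;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 4 blocks of 1 double, separated by a stride of 4 doubles: one column. */
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(&a[0][0], 1, column, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}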
george.
Ralph H Castain <r...@lanl.gov> wrote: I can't speak to the Perl
bindings, but Open MPI's runtime already supports SGE, so all you have
to do is "mpirun" like usual and we take care of the rest. You may have
to check your version of Open MPI, as this capability was added in the
more recent releases.
Ralph
On 9/13/06 8:52 AM, "Renato Golin" wrote:
> On 9/13/06, imran shaik wrote:
>> I need to run parallel jobs on a cluster, typically of size 600 nodes,
>> running SGE, but the programmers are good at Perl, not C or C++. So I
>> thought of MPI, but I don't know whether it has Perl support?
>
> Hi Imran,
>
> SGE will dispatch processes among the nodes of your cluster, but it does
> not support interprocess communication, which MPI does. If your
> problem is easily splittable (like parsing a large Apache log or reading
> a large XML list of things) you might be able to split the data and
> spawn as many processes as you can.
>
> I do it using LSF (another dispatcher) and a Makefile that controls
> the dependencies and spawns the processes (using make's -j flag), and it
> works quite well. But if your job needs communication (like
> processing big matrices, or collecting and distributing data among
> processes) you'll need interprocess communication, and that's
> what MPI is best at.
>
> In a nutshell, you'll need the MPI runtime environment to run MPI
> programs, just as you need SGE's runtime environment on every node to
> dispatch jobs and collect information.
>
> About MPI bindings for Perl, there's this module:
> http://search.cpan.org/~josh/Parallel-MPI-0.03/MPI.pm
>
> but it's far too young to be trustworthy, IMHO, and you'll probably
> need the MPI runtime on all nodes as well...
>
> cheers,
> --renato