Thanks, very clear.

I was not aware that Open MPI internally uses shared memory when two
processes reside on the same node, which is perfect.

Very complete explanations, thanks again.
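
As an aside, a quick way to see this from the command line (a sketch
assuming an Open MPI 1.x-era install, where the shared-memory transport
is the "sm" BTL; "./my_program" is a placeholder):

    # list the byte-transfer-layer (BTL) components available
    ompi_info | grep btl

    # explicitly request the loopback, shared-memory, and TCP transports
    mpirun --mca btl self,sm,tcp -np 4 ./my_program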

On Thu, Jul 22, 2010 at 7:11 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
> Hi Cristobal
>
> Cristobal Navarro wrote:
>>
>> Yes, I was aware of the big difference hehe.
>>
>> Now that OpenMP and Open MPI are both in the conversation, I've always
>> wondered whether it is a good idea to model a solution the following
>> way, using both OpenMP and MPI:
>> suppose you have n nodes, each with a quad-core CPU (so you have n*4
>> processors).
>> Launch n processes, one per each of the n available nodes.
>> Set a resource manager like SGE to fill the n*4 slots round-robin.
>> Within each process, use the remaining cores of the node with OpenMP.
>>
>> If this is possible, then each process could use the shared memory
>> model locally on its node, avoiding unnecessary I/O through the
>> network. What do you think?
>>
>
> Yes, it is possible, and many of the atmosphere/oceans/climate codes
> that we run are written with this capability. In other areas of
> science and engineering this is probably the case too.
>
> However, this is not necessarily better/faster/simpler than dedicating
> all the cores to MPI processes.
>
> In my view, this is due to:
>
> 1) OpenMP has a different scope than MPI,
> and to some extent is limited by more stringent requirements than MPI;
>
> 2) Most modern MPI implementations (and OpenMPI is an example) use shared
> memory mechanisms to communicate between processes that reside
> in a single physical node/computer;
>
> 3) Writing hybrid code with MPI and OpenMP requires more effort,
> and much care so as not to let the two forms of parallelism step on
> each other's toes.
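>
> To make point 3 concrete, here is a minimal hybrid sketch of the
> pattern Cristobal describes (an illustration only, not code from any
> of the models mentioned; do_work() is a hypothetical per-thread
> kernel):
>
>    #include <mpi.h>
>    #include <omp.h>
>    #include <stdio.h>
>
>    /* hypothetical placeholder for the real per-thread computation */
>    static double do_work(int tid) { return (double) tid; }
>
>    int main(int argc, char **argv)
>    {
>        int provided, rank;
>
>        /* request thread support; FUNNELED means only the master
>           thread of each process will make MPI calls */
>        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
>        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>        double local = 0.0;
>
>        /* OpenMP fans the work out across the cores of this node */
>        #pragma omp parallel reduction(+:local)
>        {
>            local += do_work(omp_get_thread_num());
>        }
>
>        /* MPI combines the per-node results across the cluster */
>        double total = 0.0;
>        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
>                   MPI_COMM_WORLD);
>        if (rank == 0) printf("total = %g\n", total);
>
>        MPI_Finalize();
>        return 0;
>    }
>
> Launched with something like "mpirun -np 4 -npernode 1 ./hybrid" and
> OMP_NUM_THREADS=4, each node runs one MPI process with four OpenMP
> threads, and the per-node reduction never touches the network.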
>
> OpenMP operates mostly through compiler directives/pragmas interspersed
> in the code.  For instance, you can parallelize inner loops in no time,
> provided there are no data dependencies across the statements within
> the loop.  All it takes is one or two directive/pragma lines.
> OpenMP can do more than loop parallelization, of course,
> although not as much as MPI.
> Still, with OpenMP you are restricted to working in a shared memory
> environment.
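>
> For instance, a minimal sketch of the one-pragma case (the function
> name and arguments here are hypothetical):
>
>    /* y <- a*x + y; iterations are independent, one pragma suffices */
>    void daxpy(int n, double a, const double *x, double *y)
>    {
>        int i;
>        #pragma omp parallel for
>        for (i = 0; i < n; i++)
>            y[i] = a * x[i] + y[i];
>    }
>
> Compiled with the compiler's OpenMP flag (e.g. -fopenmp for gcc), the
> loop spreads across the node's cores; without the flag the pragma is
> ignored and the code stays serial.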
>
> By contrast, MPI requires more effort to program, but it takes advantage
> of shared memory and networked environments
> (and perhaps extended grids too).
> In areas where MPI-based libraries and APIs (like PETSc) have been
> developed, the effort of programming directly with MPI can be reduced
> by using the library facilities.
>
> To answer your question in another email, I think
> in principle you can program with PETSc and MPI together.
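>
> A minimal sketch of that coexistence (based on PETSc's documented
> behavior that PetscInitialize() only calls MPI_Init() if MPI is not
> already initialized):
>
>    #include <petscsys.h>
>
>    int main(int argc, char **argv)
>    {
>        int rank;
>
>        /* your own MPI layer may initialize MPI first ...          */
>        MPI_Init(&argc, &argv);
>
>        /* ... PETSc detects that MPI is already up and reuses it   */
>        PetscInitialize(&argc, &argv, NULL, NULL);
>
>        /* raw MPI calls and PETSc calls share the same MPI world   */
>        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>        PetscPrintf(PETSC_COMM_WORLD, "rank %d: PETSc + MPI\n", rank);
>
>        PetscFinalize();
>        MPI_Finalize();  /* we called MPI_Init, so we finalize too  */
>        return 0;
>    }
>
> So the two do not conflict: the library and your hand-written MPI
> code share one MPI runtime.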
>
> I hope this helps.
> Gus Correa
> ---------------------------------------------------------------------
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> ---------------------------------------------------------------------
>
>>
>>
>> On Thu, Jul 22, 2010 at 5:27 PM, amjad ali <amja...@gmail.com> wrote:
>>>
>>> Hi Cristobal,
>>>
>>> Note that the picture at http://dl.dropbox.com/u/6380744/clusterLibs.png
>>> only shows what ScaLAPACK is built on, i.e., which packages ScaLAPACK
>>> uses; hence OpenMP does not appear there.
>>>
>>> Also be clear about the difference:
>>> "OpenMP" is for shared memory parallel programming, while
>>> "OpenMPI" is an implantation of MPI standard (this list is about OpenMPI
>>> obviously).
>>>
>>> best
>>> AA.
>>>
>>> On Thu, Jul 22, 2010 at 5:06 PM, Cristobal Navarro <axisch...@gmail.com>
>>> wrote:
>>>>
>>>> Thanks.
>>>>
>>>> I'm looking at the manual; it seems good.
>>>> I think the picture is clearer now.
>>>>
>>>> I have a very custom algorithm, a local research problem, and it is
>>>> parallelizable; that's where Open MPI enters.
>>>> Then, at some point in the program, all the computation reduces to
>>>> numeric (double) matrix operations, eigenvalues, and derivatives;
>>>> that's where a library like PETSc makes its appearance. A lower-level
>>>> alternative would be GSL, implementing the parallelism manually with
>>>> MPI.
>>>>
>>>> If someone chooses a high-level library like PETSc plus some
>>>> low-level MPI for their custom algorithms, is there a risk of the
>>>> two conflicting over MPI?
>>>>
>>>> On Thu, Jul 22, 2010 at 3:42 PM, Gus Correa <g...@ldeo.columbia.edu>
>>>> wrote:
>>>>>
>>>>> Hi Cristobal
>>>>>
>>>>> You may want to take a look at PETSc,
>>>>> which has all the machinery for linear algebra that
>>>>> you need, can easily attach a variety of Linear Algebra packages,
>>>>> including those in the diagram you sent and more,
>>>>> builds on top of MPI, and can even build MPI for you, if you prefer.
>>>>> It has C and Fortran interfaces, and if I remember right,
>>>>> you can build it alternatively with a C++ interface.
>>>>> You can choose from real or complex scalars,
>>>>> depending on your target problem (e.g. if you are going to do
>>>>> signal/image processing with FFTs, you want complex scalars).
>>>>> I don't know if it has high level commands to deal with
>>>>> data structures (like trees that you mentioned), but it may.
>>>>>
>>>>> http://www.mcs.anl.gov/petsc/petsc-as/
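>>>>>
>>>>> As a small taste of the interface, a distributed vector norm looks
>>>>> like this (a sketch against recent PETSc versions; in older ones
>>>>> VecDestroy() took the Vec by value rather than by pointer):
>>>>>
>>>>>    #include <petscvec.h>
>>>>>
>>>>>    int main(int argc, char **argv)
>>>>>    {
>>>>>        Vec       x;
>>>>>        PetscReal norm;
>>>>>        PetscInt  n = 100;
>>>>>
>>>>>        PetscInitialize(&argc, &argv, NULL, NULL);
>>>>>
>>>>>        /* PETSc spreads the vector across the MPI processes */
>>>>>        VecCreate(PETSC_COMM_WORLD, &x);
>>>>>        VecSetSizes(x, PETSC_DECIDE, n);
>>>>>        VecSetFromOptions(x);
>>>>>
>>>>>        VecSet(x, 1.0);             /* x_i = 1 for all i      */
>>>>>        VecNorm(x, NORM_2, &norm);  /* collective 2-norm      */
>>>>>        PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)norm);
>>>>>
>>>>>        VecDestroy(&x);
>>>>>        PetscFinalize();
>>>>>        return 0;
>>>>>    }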
>>>>>
>>>>> My $0.02
>>>>> Gus Correa
>>>>> ---------------------------------------------------------------------
>>>>> Gustavo Correa
>>>>> Lamont-Doherty Earth Observatory - Columbia University
>>>>> Palisades, NY, 10964-8000 - USA
>>>>> ---------------------------------------------------------------------
>>>>>
>>>>> Cristobal Navarro wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I am designing a solution for one of my programs, which mixes tree
>>>>>> generation, matrix operations, and eigenvalues, among other tasks.
>>>>>> I have to parallelize all of this for a cluster of 4 nodes (32
>>>>>> cores). My first, blind thought was MPI, but after looking at this
>>>>>> picture
>>>>>>
>>>>>> http://dl.dropbox.com/u/6380744/clusterLibs.png (OpenMP is missing
>>>>>> from the picture.)
>>>>>>
>>>>>> I decided to take a break, sit down, and think about what best
>>>>>> suits my needs.
>>>>>> Additionally, I am not familiar with Fortran, so I am searching for
>>>>>> C/C++ libraries.
>>>>>>
>>>>>> What are your experiences? What aspects of your project do you
>>>>>> consider when choosing? Is it good practice to mix these libraries
>>>>>> in one and the same project?