yes,
i was aware of the big difference hehe.

now that OpenMP and Open MPI are in the conversation, i've always wondered
whether it's a good idea to model a solution in the following way, using both
OpenMP and Open MPI.
suppose you have n nodes, and each node has a quad-core (so you have n*4 processors):
launch n processes, one for each of the n nodes available;
set a resource manager like SGE to fill the n*4 slots using round robin;
on each process, make use of the other cores available on the node
with OpenMP.

if this is possible, then each process could make use of the shared
memory model locally at each node, avoiding unnecessary I/O through the
network. what do you think?
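
something like this rough sketch (untested, just to illustrate the layout i
have in mind; the thread level and mpirun flags are my guesses):

  /* one MPI process per node, OpenMP threads over the node's 4 cores.
     build:  mpicc -fopenmp hybrid.c -o hybrid
     run  :  mpirun -np <n_nodes> --bynode ./hybrid   (one process per node) */
  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided, rank;

      /* FUNNELED should be enough if only the master thread calls MPI */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      #pragma omp parallel num_threads(4)     /* the node's 4 cores */
      {
          /* node-local work in shared memory, no network traffic here */
          printf("node (rank) %d, thread %d\n", rank, omp_get_thread_num());
      }

      /* only the master thread exchanges data with the other nodes via MPI */
      MPI_Finalize();
      return 0;
  }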



On Thu, Jul 22, 2010 at 5:27 PM, amjad ali <amja...@gmail.com> wrote:
> Hi Cristobal,
>
> Note that the pic at http://dl.dropbox.com/u/6380744/clusterLibs.png
> shows what ScaLAPACK is built on; it only shows which packages
> ScaLAPACK uses, hence there is no OpenMP in it.
>
> Also be clear about the difference:
> "OpenMP" is for shared-memory parallel programming, while
> "Open MPI" is an implementation of the MPI standard (which is what this list
> is about, obviously).
>
> best
> AA.
>
> On Thu, Jul 22, 2010 at 5:06 PM, Cristobal Navarro <axisch...@gmail.com>
> wrote:
>>
>> Thanks
>>
>> i'm looking at the manual; it seems good.
>> i think the picture is clearer now.
>>
>> i have a very custom algorithm, a local research problem, and it is
>> parallelizable; that's where Open MPI enters.
>> then, at some point in the program, all the computation reduces to
>> numeric (double) matrix operations, eigenvalues and derivatives. that's
>> where a library like PETSc makes its appearance. or, as a lower-level
>> solution, GSL plus manually implemented parallelism with MPI.
>>
>> in case someone chooses a high-level library like PETSc plus some
>> low-level Open MPI for their custom algorithms, is there a "race for MPI"
>> problem, i.e. could the two layers conflict over the same MPI?
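>>
>> for example, something like this rough (untested) sketch is what i imagine;
>> i'm assuming here, maybe wrongly, that PETSc can attach to an MPI that is
>> already initialized:
>>
>>   /* hypothetical mix of hand-written MPI and PETSc in one program */
>>   #include <mpi.h>
>>   #include <petsc.h>
>>
>>   int main(int argc, char **argv)
>>   {
>>       int rank;
>>
>>       MPI_Init(&argc, &argv);                    /* my low-level MPI part   */
>>       PetscInitialize(&argc, &argv, NULL, NULL); /* i assume this sees that */
>>                                                  /* MPI is already up and   */
>>                                                  /* just attaches to it     */
>>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>
>>       /* ... custom MPI_Send/MPI_Recv code and PETSc Mat/Vec work here ... */
>>
>>       PetscFinalize();
>>       MPI_Finalize();  /* i call this myself since i called MPI_Init myself */
>>       return 0;
>>   }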
>>
>> On Thu, Jul 22, 2010 at 3:42 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
>> > Hi Cristobal
>> >
>> > You may want to take a look at PETSc,
>> > which has all the machinery for linear algebra that
>> > you need, can easily attach a variety of Linear Algebra packages,
>> > including those in the diagram you sent and more,
>> > builds on top of MPI, and can even build MPI for you, if you prefer.
>> > It has C and Fortran interfaces, and if I remember right,
>> > you can build it alternatively with a C++ interface.
>> > You can choose from real or complex scalars,
>> > depending on your target problem (e.g. if you are going to do
>> > signal/image processing with FFTs, you want complex scalars).
>> > I don't know if it has high level commands to deal with
>> > data structures (like trees that you mentioned), but it may.
>> >
>> > http://www.mcs.anl.gov/petsc/petsc-as/
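>> >
>> > For instance, a bare-bones PETSc program looks more or less like this
>> > (written from memory and untested, so check the manual for the exact calls):
>> >
>> >   /* create a distributed vector, fill it, take its 2-norm;
>> >      the distribution over MPI ranks is handled by PETSc */
>> >   #include <petsc.h>
>> >
>> >   int main(int argc, char **argv)
>> >   {
>> >       Vec       x;
>> >       PetscReal norm;
>> >
>> >       PetscInitialize(&argc, &argv, NULL, NULL);
>> >       VecCreate(PETSC_COMM_WORLD, &x);
>> >       VecSetSizes(x, PETSC_DECIDE, 100);  /* global size 100, split by PETSc */
>> >       VecSetFromOptions(x);
>> >       VecSet(x, 1.0);
>> >       VecNorm(x, NORM_2, &norm);
>> >       PetscPrintf(PETSC_COMM_WORLD, "norm = %g\n", (double)norm);
>> >       VecDestroy(&x);
>> >       PetscFinalize();
>> >       return 0;
>> >   }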
>> >
>> > My $0.02
>> > Gus Correa
>> > ---------------------------------------------------------------------
>> > Gustavo Correa
>> > Lamont-Doherty Earth Observatory - Columbia University
>> > Palisades, NY, 10964-8000 - USA
>> > ---------------------------------------------------------------------
>> >
>> > Cristobal Navarro wrote:
>> >>
>> >> Hello,
>> >>
>> >> i am designing a solution to one of my programs, which mixes some tree
>> >> generation, matrix operations, eigenvalues, among other tasks.
>> >> i have to parallelize all of this for a cluster of 4 nodes (32 cores),
>> >> and what i first thought of was MPI as a blind choice, but after looking
>> >> at this picture
>> >>
>> >> http://dl.dropbox.com/u/6380744/clusterLibs.png (in the picture,
>> >> OpenMP is missing.)
>> >>
>> >> i decided to take a break, sit down, and think about what best suits my
>> >> needs.
>> >> Additionally, i am not familiar with Fortran, so i am looking for C/C++
>> >> libraries.
>> >>
>> >> what are your experiences? what aspects of your project do you
>> >> consider when choosing? is it good practice to mix these libraries in
>> >> one and the same project?
