On Tue, Apr 17, 2012 at 2:26 AM, jody <jody....@gmail.com> wrote:
> As to OpenMP: I already make use of OpenMP in some places (for
> instance for the creation of the large data block), but unfortunately
> my main application is not well suited for OpenMP parallelization.
If MPI does not support this kind of programming, you can always write the logic in your application. MPI tasks are normal, real processes just like any other processes in the system. Do something like:

1. Open a file in /tmp exclusively, which means only one MPI task on each machine can get the "lock".
2. The one that gets the "lock" creates a shared memory segment and loads in the fileset.
3. Communicate with the other MPI tasks on the machine (e.g. read from a file, or whatever is easy) and let them know about the memory segment.

It's really 20-50 lines of C or C++ code. It may not be the prettiest architecture, but in the end the MPI library is doing something very similar internally.

Rayson

=================================
Open Grid Scheduler / Grid Engine
http://gridscheduler.sourceforge.net/

Scalable Grid Engine Support Program
http://www.scalablelogic.com/

>
> I guess I'll have to take a more detailed look at my problem to see
> if I can restructure it in a good way...
>
> Thank you,
> Jody
>
> On Mon, Apr 16, 2012 at 11:16 PM, Brian Austin <brianmaus...@gmail.com> wrote:
>> Maybe you meant to search for OpenMP instead of Open-MPI.
>> You can achieve something close to what you want by using OpenMP for
>> on-node parallelism and MPI for inter-node communication.
>> -Brian
>>
>> On Mon, Apr 16, 2012 at 11:02 AM, George Bosilca <bosi...@eecs.utk.edu> wrote:
>>>
>>> No, currently there is no way in MPI (and subsequently in Open MPI)
>>> to achieve this. However, in the next version of the MPI standard
>>> there will be a function allowing processes to share a memory segment
>>> (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/284).
>>>
>>> If you like living on the bleeding edge, you can try Brian's branch
>>> implementing the MPI 3.0 RMA operations (including the shared memory
>>> segment) from http://svn.open-mpi.org/svn/ompi/tmp-public/mpi3-onesided/.
>>>
>>> george.
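[Editor's note: the draft MPI 3.0 shared-memory interface George references above can be sketched roughly as below. The function names follow the MPI 3.0 RMA proposal (MPI_Comm_split_type, MPI_Win_allocate_shared, MPI_Win_shared_query); treat this as pseudocode against a still-moving standard, runnable only with an implementation of the proposal such as Brian's branch.]

```c
/* Sketch against the draft MPI 3.0 shared-memory RMA interface. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Group the ranks that share physical memory on this node. */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int rank;
    MPI_Comm_rank(node, &rank);

    /* Rank 0 on each node allocates the whole block; the others
     * allocate 0 bytes and query rank 0's base address. */
    MPI_Aint size = (rank == 0) ? (MPI_Aint)1 << 20 : 0;
    char *base;
    MPI_Win win;
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, node, &base, &win);

    MPI_Aint qsize;
    int disp;
    MPI_Win_shared_query(win, 0, &qsize, &disp, &base);

    if (rank == 0)
        base[0] = 42;          /* generate the large data block here */
    MPI_Barrier(node);         /* readers wait until it is loaded */
    printf("rank %d sees %d\n", rank, base[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Every rank on the node then reads the same physical memory through `base`, with only one copy resident per machine.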
>>> On Apr 16, 2012, at 09:52, jody wrote:
>>>
>>> > Hi
>>> >
>>> > In my application I have to generate a large block of data (several
>>> > gigs) which subsequently has to be accessed by all processes (read
>>> > only). Because of its size, it would take quite some time to
>>> > serialize and send the data to the different processes.
>>> > Furthermore, I risk running out of memory if this data is
>>> > instantiated more than once on one machine.
>>> >
>>> > Does Open MPI offer some way of sharing data between processes (on
>>> > the same machine) without needing to send (and therefore copy) it?
>>> >
>>> > Or would I have to do this by means of creating shared memory,
>>> > writing to it, and then making it accessible for reading by the
>>> > processes?
>>> >
>>> > Thank you,
>>> > Jody

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

--
==================================================
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/