Dear Sir/Madam,
What are the MPI functions used for taking a checkpoint and restarting
within an application in MPI programs, and where do I get these functions from?
With regards,
Mallikarjuna Shastry
I strongly suggest you take a look at boost::mpi,
http://www.boost.org/doc/libs/1_39_0/doc/html/mpi.html
It handles serialization transparently and has some great natural
extensions to the MPI C interface for C++, e.g.
bool global = all_reduce(comm, local, std::logical_and<bool>());
This sets "global" to the logical AND of every process's "local" value.
Hi,
Do I understand MPI-2 Parallel I/O correctly (C++)?
After opening a file with MPI::File::Open, I can use Read_at on the
returned file object. I give offsets in bytes and I can perform random
access reads from any process at any point of the file without
violating correctness (although the
Dear all,
I have recently started working on a project using Open MPI. Basically,
I have been given some C++ code, a cluster to play with and a deadline
for making the C++ code run faster. The cluster was a bit
crowded, so I started working on my laptop (g++ 4.3.3 -- Ubuntu repos,
OpenMPI 1.3
The MPI standard does not define any functions for taking checkpoints
from the application.
The checkpoint/restart work in Open MPI is a command-line-driven,
transparent solution. So the application does not have to change in any
way, and the user (or scheduler) must initiate the checkpoint fr
Hi,
//Initialize step
MPI_Init(&argc,&argv);
//Here it breaks!!! Memory allocation issue!
MPI_Comm_size(MPI_COMM_WORLD, &pool);
std::cout << "I'm here" << std::endl;
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
and your PATH is also okay? (I see that you use plain mpicxx in the
build) ...
Moreover, I wanted to see if the installation is actually
On Mon, Jul 6, 2009 at 2:14 PM, Dorian Krause wrote:
> Hi,
>
>>
>> //Initialize step
>> MPI_Init(&argc,&argv);
>> //Here it breaks!!! Memory allocation issue!
>> MPI_Comm_size(MPI_COMM_WORLD, &pool);
>> std::cout << "I'm here" << std::endl;
>> MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>>
>> When trying to debug via gdb
Hi
Are you also sure that you have the same version of Open-MPI
on every machine of your cluster, and that it is the mpicxx of this
version that is called when you run your program?
I ask because you mentioned that there was an old version of Open-MPI
present... did you remove this?
Jody
On Mon,
On Mon, Jul 6, 2009 at 3:26 PM, jody wrote:
> Hi
> Are you also sure that you have the same version of Open-MPI
> on every machine of your cluster, and that it is the mpicxx of this
> version that is called when you run your program?
> I ask because you mentioned that there was an old version of Open-MPI
Let total time on my slot 0 process be S+C+B+I
= serial computations + communication + busy wait + idle
Is there a way to find out S?
S+C would probably also be useful, since I assume C is low.
The problem is that I = 0, roughly, and B is big. Since B is big, the
usual process timing methods don'
Just one additional question: if I have
vector< vector<double> > x;
how do I use MPI_Send?
MPI_Send(&x[0][0], x[0].size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
?
On 09-07-05 at 22:20, John Phillips wrote:
Luis Vitorio Cargnini wrote:
Hi,
So, after some explanation I started to use the C bindings inside
Thanks, but I really do not want to use Boost.
Is it easier? Certainly, but I want to do it using only MPI itself,
and not be dependent on a library, or on templates like most of
Boost, a huge set of templates and wrappers for different libraries,
implemented in C, supplying a wrappe
Hi, I am attempting to debug a memory corruption in an MPI program using
valgrind. However, when I run with valgrind I get semi-random segfaults and
valgrind messages within the Open MPI library. Here is an example of such a
segfault:
==6153==
==6153== Invalid read of size 8
==6153==    at 0x19102
Feels like déjà vu: http://www.linux-mag.com/cache/7407/1.html.
Doesn't MapReduce do what MPI has been doing for a lot longer?
Hi Luis,
Luis Vitorio Cargnini wrote:
Thanks, but I really do not want to use Boost.
Is it easier? Certainly, but I want to do it using only MPI itself,
and not be dependent on a library, or on templates like most of
Boost, a huge set of templates and wrappers for different libraries
Hi Raymond, thanks for your answer
On 09-07-06 at 21:16, Raymond Wan wrote:
I've used Boost MPI before and it really isn't that bad and
shouldn't be seen as "just another library". Many parts of Boost
are on their way to being part of the standard and are discussed and
debated on. And
Hi all,
The system I use is a PS3 cluster, with 16 PS3s and a PowerPC as a
headnode, they are connected by a high speed switch.
There are point-to-point communication functions (MPI_Send and
MPI_Recv), the data size is about 40KB, and there is a lot of computation
which will consume a long time (abou
Lin,
Try -np 16 and not running on the head node.
Doug Reeder
On Jul 6, 2009, at 7:08 PM, Zou, Lin (GE, Research, Consultant) wrote:
Hi all,
The system I use is a PS3 cluster, with 16 PS3s and a PowerPC as
a headnode, they are connected by a high speed switch.
There are point-to-poin
Luis Vitorio Cargnini wrote:
just one additional question: if I have
vector< vector<double> > x;
how do I use MPI_Send?
MPI_Send(&x[0][0], x[0].size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
?
Vitorio,
The standard provides no information on where the different parts of
the data will be, relative to
Luis Vitorio Cargnini wrote:
Your suggestion is a great and interesting idea. My only fear is that I would
get used to Boost and could not get rid of it anymore, because one thing
is sure: the abstraction added by Boost is impressive; it makes things
like MPI much less painful to imple
On Mon, 2009-07-06 at 23:09 -0400, John Phillips wrote:
> Luis Vitorio Cargnini wrote:
> >
> > Your suggestion is a great and interesting idea. My only fear is that I would
> > get used to Boost and could not get rid of it anymore, because one thing
> > is sure: the abstraction added by Boost i
Thank you for your suggestion. I tried this solution, but it doesn't work. In
fact, the headnode doesn't participate in the computation and communication; it
only mallocs a large block of memory, and when the loop in every PS3 is over,
the headnode gathers the data from every PS3.
The strange thing is that