On Nov 2, 2011, at 10:34 AM, Durga Choudhury wrote:
> Any particular reason these calls don't nest? In some other HPC-like
> paradigms (e.g. VSIPL) such calls are allowed to nest (i.e. only the
> finalize() that matches the first init() will destroy allocated
> resources.)
I honestly don't remember.
Yes, I called MPI_Init after MPI_Finalize, because I use MPI inside a function
and I call it many times.
Thank you for your help. Now I call MPI_Init and MPI_Finalize outside of the
function and it works.
Thank you.
On 2 November 2011 at 13:29, Jeff Squyres (jsquyres) wrote:
> Did you call MPI_INIT after you called MPI_FINALIZE?
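For reference, the working structure amine describes looks roughly like the
sketch below. It is illustrative only; do_work is a hypothetical stand-in for
the real MPI-using function:

    #include <mpi.h>

    /* Hypothetical function that uses MPI and may be called many times. */
    void do_work(int rank)
    {
        int value = rank;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);          /* once, at program start */
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < 10; ++i)
            do_work(rank);               /* the function can run many times */
        MPI_Finalize();                  /* once, at program end */
        return 0;
    }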
Any particular reason these calls don't nest? In some other HPC-like
paradigms (e.g. VSIPL) such calls are allowed to nest (i.e. only the
finalize() that matches the first init() will destroy allocated
resources.)
Just a curiosity question, doesn't really concern me in any particular way.
Best regards,
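For what it's worth, MPI itself does not reference-count MPI_Init/MPI_Finalize,
but a library that wants VSIPL-style nesting can layer its own counter on top.
A rough, untested sketch with hypothetical names:

    #include <mpi.h>

    static int init_refcount = 0;   /* hypothetical library-level counter */

    /* Only the first nested "init" actually initializes MPI... */
    void lib_init(int *argc, char ***argv)
    {
        if (init_refcount++ == 0)
            MPI_Init(argc, argv);
    }

    /* ...and only the finalize matching that first init tears it down. */
    void lib_finalize(void)
    {
        if (--init_refcount == 0)
            MPI_Finalize();
    }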
Hi,
you could try the following (template):
MPI_Send( &vec[first_element], num_elements*sizeof(T), MPI_BYTE, ..)
MPI_Recv( &vec[first_element], num_elements*sizeof(T), MPI_BYTE, ..)
As far as I know, STL vectors store their values in contiguous memory.
However, I didn't test this.
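Filled in, that template might look like the sketch below. The rank and tag
values are illustrative, and the MPI_BYTE trick is only safe when the element
type is plain old data (trivially copyable):

    #include <mpi.h>
    #include <vector>

    // Illustrative only: rank 0 sends a vector of doubles to rank 1.
    // This works because STL vector storage is contiguous.
    void exchange(std::vector<double>& vec)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int nbytes = static_cast<int>(vec.size() * sizeof(double));
        if (rank == 0)
            MPI_Send(&vec[0], nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&vec[0], nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }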
Did you call MPI_INIT after you called MPI_FINALIZE? If so, you're not allowed
to do that. Call MPI_INIT once and call MPI_FINALIZE once.
Sent from my phone. No type good.
On Nov 1, 2011, at 2:45 PM, "amine mrabet" wrote:
> Hey,
>
> I'm new to MPI. I'm trying to use MPI inside a function, and I call it many times.
You might want to look at boost.mpi.
Sent from my phone. No type good.
On Nov 1, 2011, at 2:58 PM, "Mudassar Majeed" wrote:
> Dear MPI people,
> I have a templated vector class, as follows:
>
> template <class T>
> class Vector
>
> It is a wrapper on the STL vector
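For completeness, a rough sketch of what the Boost.MPI route looks like for a
std::vector. This is untested here, but including the matching serialization
header lets Boost.MPI ship the vector, size and all, in a single call:

    #include <boost/mpi.hpp>
    #include <boost/serialization/vector.hpp>
    #include <vector>

    int main(int argc, char *argv[])
    {
        boost::mpi::environment env(argc, argv);   // replaces MPI_Init/Finalize
        boost::mpi::communicator world;

        std::vector<int> data;
        if (world.rank() == 0) {
            data.assign(100, 42);
            world.send(1, 0, data);    // tag 0; element count handled for you
        } else if (world.rank() == 1) {
            world.recv(0, 0, data);    // vector is resized automatically
        }
        return 0;
    }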