[OMPI users] Problem with MPI_FINALIZE

2011-11-01 Thread amine mrabet
Hey,

I'm new to MPI. I'm trying to use MPI inside of a function and I get this
error message:

An error occurred in MPI_Init
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (goodbye)
[dellam:16806] Abort before MPI_INIT completed successfully; not able to
guarantee that all other processes were killed!

Maybe I can't use MPI inside of a function?
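A minimal sketch (not from the original post) of the usual structure: MPI_Init
and MPI_Finalize are each called exactly once, from main, and helper functions
are free to make MPI calls in between. The error above typically means
MPI_Init was reached a second time after MPI_Finalize had already run, e.g.
when a function initializes and finalizes MPI itself and is then called more
than once.

#include <mpi.h>
#include <cstdio>

/* Helper that uses MPI; it assumes MPI has already been initialized. */
static void do_work()
{
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::printf("hello from rank %d\n", rank);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* initialize exactly once */
    do_work();                /* MPI calls inside functions are fine */
    do_work();                /* calling the helper again is also fine */
    MPI_Finalize();           /* finalize exactly once, after the last MPI call */
    return 0;
}

MPI_Initialized() and MPI_Finalized() can also be used to guard against
initializing twice or after finalization.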

-- 
amine mrabet


[OMPI users] Sending vector elements of type T using MPI_Ssend, please help.

2011-11-01 Thread Mudassar Majeed
Dear MPI people, 

    I have a templated vector class as follows:

template <typename T>
class Vector

It is a wrapper around the STL vector class. The element type T is replaced by
the actual type when the template is instantiated. I have not seen any support
in C++ templates for checking the type of T. I need to send elements of type T
that are in the Vector<T> v using MPI_Ssend; please help me with how I can do
that. How can I send a few elements, say from the 4th element to the 10th
element in the vector?
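A minimal sketch (not from the original post) of one way to do this, assuming
the wrapper gives access to the underlying std::vector and that T is a plain
type such as int or double. The mpi_type trait below is a hypothetical helper,
not part of MPI, that maps the element type to the matching MPI datatype.

#include <mpi.h>
#include <vector>

/* Hypothetical trait mapping an element type to its MPI datatype;
   add a specialization for every T you actually use. */
template <typename T> struct mpi_type;
template <> struct mpi_type<int>    { static MPI_Datatype get() { return MPI_INT; } };
template <> struct mpi_type<double> { static MPI_Datatype get() { return MPI_DOUBLE; } };

/* Synchronously send elements [first, last) of v to rank 'dest'. */
template <typename T>
void ssend_range(const std::vector<T> &v, int first, int last, int dest, int tag)
{
    int count = last - first;
    MPI_Ssend(const_cast<T *>(&v[first]), count, mpi_type<T>::get(),
              dest, tag, MPI_COMM_WORLD);
}

/* Example: the 4th through 10th elements are indices 3..9, i.e. 7 elements:
   ssend_range(v, 3, 10, dest_rank, 0);  */

This only works when T is a contiguous, trivially copyable type; for anything
more complex, an MPI derived datatype (e.g. MPI_Type_create_struct) or explicit
serialization would be needed.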

regards,
Mudassar


[OMPI users] How to set up state-less node /tmp for OpenMPI usage

2011-11-01 Thread Blosch, Edwin L
I'm getting the message below, which correctly observes that /tmp is 
NFS-mounted.  But there is no other directory that has user or group write 
permissions, so I think I'm kind of stuck, and it sounds like a serious issue.

Before I ask the administrators to change their image, i.e. mount this 
partition under /work instead of /tmp, I'd like to ask whether anyone is using 
OpenMPI on a state-less cluster, and whether there are any gotchas regarding 
OpenMPI performance, e.g. the handling of /tmp, that one would need to know about.

Thank you,

Ed

--
WARNING: Open MPI will create a shared memory backing file in a
directory that appears to be mounted on a network filesystem.
Creating the shared memory backup file on a network file system, such
as NFS or Lustre is not recommended -- it may cause excessive network
traffic to your file servers and/or cause shared memory traffic in
Open MPI to be much slower than expected.

You may want to check what the typical temporary directory is on your
compute nodes.  Possible sources of the location of this temporary
directory include the $TEMPDIR, $TEMP, and $TMP environment variables.

Note, too, that system administrators can set a list of filesystems
where Open MPI is disallowed from creating temporary files by setting
the MCA parameter "orte_no_session_dirs".

Local host: e8332
File Name:  
/tmp/159313.1.e8300/openmpi-sessions-bloscel@e8332_0/53301/1/shared_mem_pool.e8332
--
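A sketch of two possible workarounds, assuming a node-local, writable
directory such as /local/scratch exists (the path, process count, and
application name are placeholders, not from the original post): the session
directory base can be set with the orte_tmpdir_base MCA parameter, or via one
of the environment variables the warning mentions.

# Set the session directory base explicitly as an MCA parameter:
mpirun -mca orte_tmpdir_base /local/scratch -np 16 ./my_app

# Or export one of the environment variables named in the warning before launching:
export TMP=/local/scratch
mpirun -np 16 ./my_app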