Glenn,
This will require some more investigation. I have verified that the
udapl BTL is making the proper calls to free registered memory, and
I have seen the free memory as reported by vmstat drop and
come back as well. Additionally, if I run a basic bandwidth test
serially (one
MPI is process-based, not thread-based, so there is no way in MPI
to synchronize several threads. Moreover, all threads in a process
will return the same rank [as you noticed].
Synchronization internal to a process has to be done outside MPI,
with any threading library, such as the pthread libr
Hi Marcus,
Your expectation sounds very reasonable to me. I have filed a bug in our bug
tracker (https://svn.open-mpi.org/trac/ompi/ticket/1124), and you will
receive emails as it is updated.
Unfortunately, this is in a part of the code which has not been touched for a
long time, and is in som
Hi,
Can I use Open MPI API calls like MPI_Reduce or MPI_Gather to synchronize
multiple threads in a process?
In my test on Red Hat Linux, MPI_Comm_rank returns 0 for all threads in
the same process. If I want to use MPI functions like MPI_Reduce or
MPI_Gather, the rank numbers should be different
Hi,
I am doing research on parallel techniques for shared-memory
systems (NUMA). I understand that Open MPI is intelligent enough to
exploit shared-memory systems and that it uses processor affinity. Is the
Open MPI design of MPI_Allreduce the "same" for shared-memory (NUMA)
systems as for distributed systems? Can someon
Early Registration Now Open!
2007 IEEE International Conference on Cluster Computing (Cluster 2007)
September 17-20, 2007
Austin, T