There is nothing MPI-specific in your code snippet. You should try to find
out what is different in your code for node 0. You have mentioned that you
have moved the root node to other nodes, so it's not machine-specific. You
might be setting up the arrays differently on the different nodes. You sh[...]
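(Not from the original message, just a minimal sketch of that check: each rank prints a simple checksum of its buffer right before the collective, so what rank 0 set up can be compared against the other ranks. The names sendbuf, count, and the element type double are placeholders for whatever the actual code uses.)

/* Debugging sketch (placeholder names: sendbuf, count): print a per-rank
 * checksum of the buffer just before the collective call, so the setup on
 * rank 0 can be compared against the other ranks. */
#include <mpi.h>
#include <stdio.h>

static void dump_setup(const double *sendbuf, int count)
{
    int rank;
    double sum = 0.0;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < count; i++)
        sum += sendbuf[i];

    printf("rank %d: count=%d checksum=%g\n", rank, count, sum);
    fflush(stdout);
}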
I have 7 threads and want to tie them to two cores, which means I don't
want the 7 threads to use all four cores on the cluster that we have.
Have you done something similar to this?
Thanks,
Siamak
On 10/23/07, Prasun Ratn <prasu...@ncsu.edu> wrote:
I do this using the h[...]
[...]capable of doing that.
Thanks,
Siamak
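(Not part of the original thread: one common way to do what the question above asks, tying threads to two cores on Linux, is pthread_setaffinity_np with a two-core CPU set. The core numbers below are placeholders for the node's actual layout.)

/* Sketch, assuming Linux/glibc: restrict the calling thread to cores 0
 * and 1 so a process's threads do not spread over all four cores.
 * Call from each thread you want to confine; core numbers are placeholders. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int bind_to_two_cores(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* first allowed core  */
    CPU_SET(1, &set);   /* second allowed core */

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: error %d\n", err);
    return err;
}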
--
Prasun Ratn
Graduate Student, Computer Science,
I'm using version 1.2.3, which I believe is the latest stable release.
Jelena Pjesivac-Grbovic wrote:
Hello,
Which version are you using? This is old code.
The trunk version has new code that does not use any MPI_* functions.
Thanks,
Jelena
On Mon, 27 Aug 2007, Prasun Ratn wrote:
Is there a reason why ompi_coll_tuned_alltoall_intra_bruck calls
MPI_Type_* functions and not PMPI_Type_*? My code traces MPI
calls and I would rather not trace calls made by Open MPI itself.
-Prasun
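(For context, not from the thread: a PMPI-based tracer of the kind described looks roughly like this. The tracing library defines the MPI_ entry point itself and forwards to the PMPI_ version, so anything that calls MPI_Type_commit, including library-internal code that goes through the MPI_ symbol, gets traced, while calls made directly to PMPI_Type_commit bypass the wrapper.)

/* Sketch of a PMPI tracing wrapper: intercept MPI_Type_commit and forward
 * to the underlying PMPI_Type_commit.  Link this object ahead of the MPI
 * library so the wrapper is resolved instead of the library's symbol. */
#include <mpi.h>
#include <stdio.h>

int MPI_Type_commit(MPI_Datatype *datatype)
{
    int rank = -1;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    fprintf(stderr, "[rank %d] MPI_Type_commit intercepted\n", rank);

    /* Forward to the real implementation via the profiling interface. */
    return PMPI_Type_commit(datatype);
}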