Re: [OMPI users] random MPI_UNIVERSE_SIZE and inter-communicator created by MPI_Comm_spawn

2010-02-18 Thread George Bosilca
Mathieu, Your MPI_COMM_UNIVERSE is an inter-communicator, and therefore MPI_Comm_size and MPI_Comm_rank return the size and the rank, respectively, of the local group. There is a special accessor for getting the remote group size (MPI_Comm_remote_size). Now regarding the previous question (abou
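
A minimal sketch (not from the thread) of the distinction George describes: on the inter-communicator returned by MPI_Comm_spawn, MPI_Comm_size/MPI_Comm_rank report the local group, while MPI_Comm_remote_size reports the other side. The child executable name "./child" and the spawn count are assumptions.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm;
    int local_size, remote_size, rank;

    MPI_Init(&argc, &argv);

    /* Parent spawns 4 children; the result is an inter-communicator. */
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Comm_rank(intercomm, &rank);               /* rank in the local group  */
    MPI_Comm_size(intercomm, &local_size);         /* size of the local group  */
    MPI_Comm_remote_size(intercomm, &remote_size); /* size of the spawned group */

    printf("local group: %d, remote group: %d\n", local_size, remote_size);

    MPI_Finalize();
    return 0;
}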

Re: [OMPI users] random MPI_UNIVERSE_SIZE and inter-communicator created by MPI_Comm_spawn

2010-02-18 Thread Mathieu Gontier
Another question on the same example. When I ask for the size of the inter-communicator (MPI_COMM_UNIVERSE in the example) between the spawner/parent and the spawned/children processes, the same number of processes as in MPI_COMM_WORLD is returned. I do not really understand because I expected m

Re: [OMPI users] Bad Infiniband latency with subounce

2010-02-18 Thread Pavel Shamis (Pasha)
Steve, thanks for the details. What is the command line that you use to run the benchmark? Can you try to add the following MCA parameters to your command line: "--mca btl openib,sm,self --mca btl_openib_max_btls 1" Thanks, Pasha Repsher, Stephen J wrote: Thanks for keeping on this Hopefully this
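
As a concrete illustration (the benchmark binary name and process count are assumptions, not from the thread), the suggested MCA parameters would be passed on the launch line like this:

mpirun -np 2 --mca btl openib,sm,self --mca btl_openib_max_btls 1 ./subounce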

[OMPI users] random MPI_UNIVERSE_SIZE

2010-02-18 Thread Mathieu Gontier
Hello, I am trying to use MPI_Comm_spawn (MPI-2 standard only) and I have a problem when I use MPI_UNIVERSE_SIZE. Here is my code: int main( int argc, char *argv[] ) { int wsize=0, wrank=-1 ; int usize=0, urank=-1 ; int ier ; int usize_attr=0, flag=0 ; MPI_Comm MPI_COMM_UNIVERSE; ie
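
The preview cuts off, but the declared variables suggest a pattern like the following minimal sketch (my reconstruction, not Mathieu's original post): query the MPI_UNIVERSE_SIZE attribute on MPI_COMM_WORLD, then spawn children and keep the resulting inter-communicator. The child executable name and the spawn count are assumptions.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int wsize = 0, wrank = -1;
    int usize = 0, urank = -1;
    int ier;
    int usize_attr = 0, flag = 0;
    int *usize_ptr = NULL;
    MPI_Comm MPI_COMM_UNIVERSE;

    ier = MPI_Init(&argc, &argv);
    ier = MPI_Comm_size(MPI_COMM_WORLD, &wsize);
    ier = MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* MPI_UNIVERSE_SIZE is an attribute key; the attribute value is a
       pointer to int, and the attribute may not be set at all. */
    ier = MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                            &usize_ptr, &flag);
    if (flag) usize_attr = *usize_ptr;

    /* Spawn enough children to fill the universe; the result is an
       inter-communicator between the parent and the spawned group. */
    if (flag && usize_attr > wsize) {
        ier = MPI_Comm_spawn("./child", MPI_ARGV_NULL, usize_attr - wsize,
                             MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                             &MPI_COMM_UNIVERSE, MPI_ERRCODES_IGNORE);
        ier = MPI_Comm_size(MPI_COMM_UNIVERSE, &usize);  /* local group size */
        ier = MPI_Comm_rank(MPI_COMM_UNIVERSE, &urank);  /* local group rank */
    }

    ier = MPI_Finalize();
    return 0;
}

Note that the last two calls return the local group's size and rank, which is the behavior George explains in his reply above.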

Re: [OMPI users] Bad Infiniband latency with subounce

2010-02-18 Thread Repsher, Stephen J
Thanks for keeping on this. Hopefully this answers all the questions: The cluster has some blades with XRC, others without. I've tested on both with the same results. For MVAPICH, a flag is set to turn on XRC; I'm not sure how Open MPI handles it, but my build is configured --enable-openib-con

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-18 Thread Jeff Squyres
Thanks George. I assume we need this in 1.4.2 and 1.5, right? On Feb 17, 2010, at 6:15 PM, George Bosilca wrote: > I usually prefer the expanded notation: > > unsigned char ret; > __asm__ __volatile__ ( > "lock; cmpxchgl %3,%4 \n\t" >
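
For context, the expanded-notation compare-and-swap that George's quoted fragment starts looks roughly like the sketch below (reconstructed from the fragment; the function name and constraint strings are assumptions, not necessarily what went into the tree):

#include <stdint.h>

/* Returns non-zero if *addr was equal to oldval and was replaced by newval. */
static inline int cmpset_32(volatile int32_t *addr, int32_t oldval, int32_t newval)
{
    unsigned char ret;
    __asm__ __volatile__ (
        "lock; cmpxchgl %3,%4 \n\t"   /* compare EAX (oldval) with *addr, store newval on match */
        "sete %0              \n\t"   /* ret = ZF, i.e. 1 on success */
        : "=qm" (ret), "=a" (oldval), "=m" (*addr)
        : "q" (newval), "m" (*addr), "1" (oldval)
        : "memory", "cc");
    return (int) ret;
}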

Re: [OMPI users] Bad Infiniband latency with subounce

2010-02-18 Thread Pavel Shamis (Pasha)
Hey, I may only add that XRC and RC have the same latency. What is the command line that you use to run this benchmark? What is the system configuration (one HCA, one active port)? Any additional information about the system configuration, MPI command line, etc. will help to analyze your issue.