Re: [OMPI users] MPI_AllReduce design for shared-memory.

2007-08-14 Thread smairal
Thanks, I understand what you are saying, but my query concerns the design of MPI_Allreduce for shared-memory systems. Is there different logic/design in MPI_Allreduce when Open MPI runs on a shared-memory system? The standard MPI_Allreduce says: 1. Each MPI process sends its value…

Re: [OMPI users] MPI_AllReduce design for shared-memory.

2007-08-14 Thread smairal
Can anyone help with this? -Thanks, Sarang. Quoting smai...@ksu.edu: > Hi, > I am doing research on parallel techniques for shared-memory > systems (NUMA). I understand that Open MPI is intelligent enough to utilize > a shared-memory system and that it uses processor affinity. Is the Open MPI > design of MPI_All…

[OMPI users] MPI_AllReduce design for shared-memory.

2007-08-10 Thread smairal
Hi, I am doing research on parallel techniques for shared-memory systems (NUMA). I understand that Open MPI is intelligent enough to utilize a shared-memory system and that it uses processor affinity. Is the Open MPI design of MPI_Allreduce the "same" for shared-memory (NUMA) systems as for distributed systems? Can someon…
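The question above is about whether the logical steps of allreduce change on shared memory. A minimal sketch in plain Python (no MPI; `allreduce` is a hypothetical stand-in, not an Open MPI function) of the semantics every implementation must deliver — reduce all contributions, then make the result visible to every rank. Open MPI's actual implementation selects among several algorithms at run time (linear, tree-based, recursive doubling, etc.), and on shared memory the "sends" become copies through shared buffers rather than network messages, but each rank observes the same result:

```python
# Conceptual sketch: allreduce modeled as reduce-to-root followed by
# broadcast. This illustrates the semantics, not Open MPI's internals.
from functools import reduce

def allreduce(values, op):
    """Return per-rank results: every rank gets op applied over all values."""
    # Step 1: reduce all contributions to a single value (conceptually at rank 0).
    total = reduce(op, values)
    # Step 2: broadcast the result back so every rank holds it.
    return [total for _ in values]

ranks = [1, 2, 3, 4]                          # one value per MPI process
print(allreduce(ranks, lambda a, b: a + b))   # every rank ends up with the sum, 10
```

The point of the sketch: the interface contract is fixed, so a shared-memory implementation is free to replace message passing with shared-buffer copies as long as every rank ends up with the reduced value.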

Re: [OMPI users] Problem running MPI on a dual-core pentium D

2007-06-10 Thread smairal
I am working on an MD simulation algorithm on a shared-memory system with 4 dual-core AMD Opteron 875 processors. I started with MPICH (1.2.6) and then switched to Open MPI, and I found a very good improvement with Open MPI. I would also be interested in knowing of any other benchmarks with a similar comparis…

[OMPI users] OpenMPI with multiple threads (MPI_THREAD_MULTIPLE)

2007-06-05 Thread smairal
Hi, I am trying a program in which I have 2 MPI nodes, each with 2 threads (a main thread and a receive thread): MPI_Init_thread(MPI_THREAD_MULTIPLE); … LOOP: … THRE…

Re: [OMPI users] forcing MPI to bind all sockets to 127.0.0.1

2007-05-30 Thread smairal
I use a shared-memory system, and for my MPI algorithm I set the IP addresses of all the nodes to 127.0.0.1 in some_hostfile and execute the program with "mpirun --machinefile some_hostfile -np 4 prog-name". I think the sm btl is switched on by default. Will this help in such a case? I am not su…

[OMPI users] OpenMPI on shared memory.

2007-05-29 Thread smairal
Hi, I am doing research on parallel computing on shared memory with a NUMA architecture. The system is a 4-node AMD Opteron, each node being a dual-core. I am testing an Open MPI program with MPI processes <= the MAX cores available on the system (in my case 4*2 = 8). Can someone tell me whether: a) In s…

Re: [OMPI users] Regarding MPI_THREAD_MULTIPLE

2007-05-27 Thread smairal
Thanks a lot, Brian. -Regards, Sarang. Quoting "Brian W. Barrett": > You're right, in v1.2 it will return MPI_THREAD_SINGLE (although it really shouldn't). Some MPI implementations may do something different if you request MPI_THREAD_FUNNELED instead of MPI_THREAD_SINGLE, so you shoul…

Re: [OMPI users] Regarding MPI_THREAD_MULTIPLE

2007-05-27 Thread smairal
I tried MPI_THREAD_FUNNELED, but it still returns MPI_THREAD_SINGLE in "provided". I tried a sample program that has thread_0 doing MPI and thread_1 doing some computation, with thread_0 and thread_1 synchronizing (using pthread condition variables). The program seems to be do…

Re: [OMPI users] Regarding MPI_THREAD_MULTIPLE

2007-05-27 Thread smairal
Thanks a lot, guys, for your help. I am also trying a design where I have one thread doing MPI and the others doing some computation (non-MPI). If only MPI_THREAD_SINGLE is enabled in Open MPI, can I still implement this kind of design, or would there be issues with threads in Open MPI? -Thanks and…
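The design described above — one designated thread making all MPI calls, the rest purely computational — is exactly the usage pattern MPI_THREAD_FUNNELED is meant to cover. A sketch in plain Python threads (MPI calls replaced by a stub list named `sent`; all names here are illustrative, not MPI API) of that structure:

```python
# Sketch of a "funneled" design: worker threads only compute and hand
# results to one communication thread, which is the only thread that
# would ever touch MPI. No actual MPI is used here.
import threading
import queue

results = queue.Queue()

def worker(chunk):
    results.put(sum(chunk))          # pure computation, no MPI here

def comm_thread(n_workers, sent):
    for _ in range(n_workers):
        sent.append(results.get())   # stand-in for an MPI_Send issued from this thread only

chunks = [[1, 2], [3, 4], [5, 6]]
workers = [threading.Thread(target=worker, args=(c,)) for c in chunks]
sent = []
comm = threading.Thread(target=comm_thread, args=(len(chunks), sent))
for t in workers + [comm]:
    t.start()
for t in workers + [comm]:
    t.join()
print(sorted(sent))                  # [3, 7, 11]
```

With a build that only guarantees MPI_THREAD_SINGLE, this pattern often works in practice when the main thread alone calls MPI, but the MPI standard makes no such guarantee; MPI_THREAD_FUNNELED is the level that formally sanctions it.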

[OMPI users] Regarding MPI_THREAD_MULTIPLE

2007-05-26 Thread smairal
Hi, I want to use threads with Open MPI such that the threads can call MPI functions. For this purpose, I am using MPI_Init_thread with the MPI_THREAD_MULTIPLE option, but this call returns MPI_THREAD_SINGLE in the "provided" parameter, indicating that MPI_THREAD_MULTIPLE is…
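The behavior reported above follows the MPI_Init_thread contract: the caller names a *required* level and the library reports the level it actually *provides*, which cannot exceed what the build supports. A conceptual model in plain Python (no MPI; `mpi_init_thread` is a stand-in, and the min-of-the-two policy is a common simplification, not something the standard mandates):

```python
# Conceptual model of the required/provided negotiation in MPI_Init_thread.
# An Open MPI built without thread support caps "provided" at
# MPI_THREAD_SINGLE no matter what level was requested.
MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, MPI_THREAD_MULTIPLE = range(4)

def mpi_init_thread(required, build_supports):
    """Return the 'provided' thread level for a given request (simplified policy)."""
    return min(required, build_supports)

# A build without thread support grants only SINGLE:
assert mpi_init_thread(MPI_THREAD_MULTIPLE, MPI_THREAD_SINGLE) == MPI_THREAD_SINGLE
# A fully threaded build grants what was asked for:
assert mpi_init_thread(MPI_THREAD_FUNNELED, MPI_THREAD_MULTIPLE) == MPI_THREAD_FUNNELED
```

So a program that needs MPI_THREAD_MULTIPLE must check "provided" after the call rather than assume the request was honored; getting MPI_THREAD_SINGLE back typically means the Open MPI installation was built without thread support.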