[OMPI users] MPI for DSP

2006-03-06 Thread 赖俊杰
Hello everyone, I'm a research assistant at Tsinghua University, and I am now beginning to study MPI for DSP. Can anybody tell me something about this field? Thanks. laij...@mails.tsinghua.edu.cn   2006-03-07

Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Graham E Fagg
Hi David, yep they do (reduce the values to a single location), and in a tree topology it would look something like the following:

proc           3   4   5   6
local values  30  40  50  60
partial sums   -   -
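A minimal sketch of that tree pattern, built from explicit point-to-point calls (an illustration of binomial-tree reduction in general, not necessarily Open MPI's actual internal algorithm; in practice you would just call MPI_Reduce):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, step, sum, incoming;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sum = rank * 10;  /* local value: rank 3 holds 30, rank 4 holds 40, ... */

    /* Binomial-tree reduction toward rank 0: at each step, half of the
     * remaining ranks send their partial sum to a partner and drop out. */
    for (step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == step) {
            MPI_Send(&sum, 1, MPI_INT, rank - step, 0, MPI_COMM_WORLD);
            break;  /* this rank has handed off its partial sum */
        } else if (rank % (2 * step) == 0 && rank + step < size) {
            MPI_Recv(&incoming, 1, MPI_INT, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += incoming;  /* fold in the partner's partial sum */
        }
    }

    if (rank == 0)
        printf("total = %d\n", sum);

    MPI_Finalize();
    return 0;
}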

Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Jeff Squyres
On Mar 6, 2006, at 3:38 PM, Xiaoning (David) Yang wrote: I'm not quite sure how collective computation calls work. For example, for an MPI_REDUCE with MPI_SUM, do all the processes collect values from all the processes, calculate the sum, and put the result in recvbuf on root? Sounds strange.
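A minimal sketch of the call in question, assuming one integer per rank: every rank contributes its send buffer, the implementation combines the values (typically along a tree, as described above), and only the root's recvbuf ends up holding the sum:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank * 10;  /* each rank's contribution (illustrative) */

    /* Only rank 0 (the root) receives the reduced result in 'total';
     * 'total' is undefined on all other ranks after this call. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of all local values = %d\n", total);

    MPI_Finalize();
    return 0;
}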

Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Xiaoning (David) Yang
I'm not quite sure how collective computation calls work. For example, for an MPI_REDUCE with MPI_SUM, do all the processes collect values from all the processes, calculate the sum, and put the result in recvbuf on root? Sounds strange. David * Correspondence * > From: Jeff Squyres > Re

Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Jeff Squyres
Generally, yes. There are some corner cases where we have to allocate additional buffers, but that's the main/easiest benefit to describe. :-) On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote: Jeff, Thank you for the reply. In other words, MPI_IN_PLACE only eliminates data mo
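A short sketch of the idiom being discussed, assuming integer data: the root passes MPI_IN_PLACE as its send buffer, so its own contribution is read from (and the result written back to) recvbuf, which is exactly the root-side copy being eliminated:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    value = rank + 1;  /* each rank's contribution (illustrative) */

    if (rank == 0)
        /* Root: its contribution comes from 'value' itself, and the
         * sum overwrites it, with no separate send buffer or copy. */
        MPI_Reduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    else
        /* Non-root ranks: recvbuf is not significant and may be NULL. */
        MPI_Reduce(&value, NULL, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("in-place sum = %d\n", value);

    MPI_Finalize();
    return 0;
}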

Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Xiaoning (David) Yang
Jeff, Thank you for the reply. In other words, MPI_IN_PLACE only eliminates data movement on root, right? David * Correspondence * > From: Jeff Squyres > Reply-To: Open MPI Users > Date: Fri, 3 Mar 2006 19:18:52 -0500 > To: Open MPI Users > Subject: Re: [OMPI users] MPI_IN_PLACE >

Re: [OMPI users] mpif90 problem.

2006-03-06 Thread Benoit Semelin
Second topic: I am using 3 processors and calling a series of MPI_SCATTERs, which work when I send messages of 5 ko to the other processors, fail at the second scatter if I send messages of ~10 ko, and fail at the first scatter for bigger messages. The message is: What is "ko" -- d
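A minimal sketch of that pattern, assuming "ko" means kilobytes (kilooctets) and raw MPI_BYTE payloads; note that the count arguments to MPI_Scatter are per receiving process, not the total buffer size, a common source of failures that appear only at larger message sizes:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define CHUNK 10240  /* ~10 ko per process (illustrative size) */

int main(int argc, char **argv)
{
    int rank, size;
    char *sendbuf = NULL, recvbuf[CHUNK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Only the root needs the full send buffer: one CHUNK per rank. */
    if (rank == 0)
        sendbuf = calloc((size_t)size, CHUNK);

    /* sendcount/recvcount are the amount delivered to EACH rank. */
    MPI_Scatter(sendbuf, CHUNK, MPI_BYTE,
                recvbuf, CHUNK, MPI_BYTE, 0, MPI_COMM_WORLD);

    printf("rank %d received %d bytes\n", rank, CHUNK);

    if (rank == 0)
        free(sendbuf);
    MPI_Finalize();
    return 0;
}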