please refer to the following code, which sends data to root from multiple cores
There is only one receive, so it receives only one message. When you
specify the element count for the receive, you're only specifying the size
of the buffer into which the message will be received. Only after the
message has been received can you find out how much data actually arrived
(e.g., with MPI_Get_count on the receive's status).
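For illustration, a minimal sketch of that pattern: the root posts one
MPI_Recv per expected message, the count argument only sizes the buffer, and
MPI_Get_count reports how much actually arrived. The buffer size MAXVALS and
the two-value payload are made-up placeholders.

#include <mpi.h>
#include <stdio.h>

#define MAXVALS 1024   /* capacity of the receive buffer, not the message length */

int main(int argc, char **argv)
{
    int rank, size, peer, nvals;
    double buf[MAXVALS];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (peer = 1; peer < size; peer++) {          /* one receive per sender */
            MPI_Recv(buf, MAXVALS, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_DOUBLE, &nvals); /* actual message length */
            printf("received %d values from rank %d\n", nvals, status.MPI_SOURCE);
        }
    } else {
        double vals[2] = { rank * 1.0, rank * 2.0 };
        MPI_Send(vals, 2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}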
I got the solution. I just need to set the appropriate tag on the send and
receive. Sorry for asking.
thanks
winthan
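A minimal sketch of that fix, i.e. giving the two messages different tags so
the receives can tell them apart (the tag values 1 and 2 and the variable
names are arbitrary; run with at least two processes):

#include <mpi.h>
#include <stdio.h>

#define TAG_EVEN 1
#define TAG_ODD  2

int main(int argc, char **argv)
{
    int rank;
    double even_sum, odd_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        even_sum = 3.0;
        odd_sum  = 4.0;
        /* same destination, same type and count: only the tag tells them apart */
        MPI_Send(&even_sum, 1, MPI_DOUBLE, 0, TAG_EVEN, MPI_COMM_WORLD);
        MPI_Send(&odd_sum,  1, MPI_DOUBLE, 0, TAG_ODD,  MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Recv(&even_sum, 1, MPI_DOUBLE, 1, TAG_EVEN, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&odd_sum,  1, MPI_DOUBLE, 1, TAG_ODD,  MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("even sum = %g, odd sum = %g\n", even_sum, odd_sum);
    }

    MPI_Finalize();
    return 0;
}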
On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung wrote:
> thanks Eugene for your example, it helps me a lot. I bumped into one more
> problem
> let's say I have the file content
_odd_dat;
if(mpi_my_id == root)
{
    /* open the two output files on the root */
    filepteven = fopen("C:\\fileeven.dat", "w");
    fileptodd  = fopen("C:\\fileodd.dat", "w");
    int peer = 0;
    for(peer = 0; peer
> Win Than Aung wrote:
>
> thanks for your reply jeff
> so i tried following
>
> #
each processor will add the "0th and 2nd" (even values); those values will be
sent to the root processor and saved as "file_even_add.dat", and each
processor will also add the "1st and 3rd" (odd values); those values will be
sent to the root processor (here rank 0) and saved as "file_odd_add.dat".
if(mpi_my_id == root)
{
}
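One possible sketch of that even/odd scheme, assuming each rank already has
its four values in an array: each rank sends its two sums under different
tags and the root writes one line per rank to each file. The sample values
and the plain relative file names are placeholders.

#include <mpi.h>
#include <stdio.h>

#define TAG_EVEN 1   /* sum of the 0th and 2nd values */
#define TAG_ODD  2   /* sum of the 1st and 3rd values */

int main(int argc, char **argv)
{
    int rank, size, peer, root = 0;
    double vals[4], even_sum, odd_sum, e, o;
    FILE *filepteven, *fileptodd;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* stand-in for the four values each rank read from its part of the file */
    vals[0] = rank; vals[1] = rank + 1; vals[2] = rank + 2; vals[3] = rank + 3;
    even_sum = vals[0] + vals[2];
    odd_sum  = vals[1] + vals[3];

    if (rank == root) {
        filepteven = fopen("file_even_add.dat", "w");
        fileptodd  = fopen("file_odd_add.dat",  "w");
        fprintf(filepteven, "%g\n", even_sum);   /* root's own sums */
        fprintf(fileptodd,  "%g\n", odd_sum);
        for (peer = 1; peer < size; peer++) {
            MPI_Recv(&e, 1, MPI_DOUBLE, peer, TAG_EVEN, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&o, 1, MPI_DOUBLE, peer, TAG_ODD,  MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            fprintf(filepteven, "%g\n", e);
            fprintf(fileptodd,  "%g\n", o);
        }
        fclose(filepteven);
        fclose(fileptodd);
    } else {
        MPI_Send(&even_sum, 1, MPI_DOUBLE, root, TAG_EVEN, MPI_COMM_WORLD);
        MPI_Send(&odd_sum,  1, MPI_DOUBLE, root, TAG_ODD,  MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}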
On Tue, Dec 23, 2008
> either Open MPI or MPICH2), but that's where the similarities end.
>
> On Dec 23, 2008, at 2:07 PM, Win Than Aung wrote:
>
> PS: extra question
>> qsub -I -q standby -l select=1:ncpus=8
>> mpirun -np 4 ./hello
>> running mpdallexit on steele-a137.rcac.purdue.edu
>> time for 100 loops = 2.98023223877e-05 seconds
too few entries in machinefile
the MPI program is supposed to print 4 hello msgs since there are four
processes, but for some reason it doesn't print them.
thanks
winthan
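On the "too few entries in machinefile" line: with the MPD-based launcher
shown in this output, a machinefile (when one is supplied) generally needs at
least as many host entries as the number of processes requested with -np.
Purely as an illustration, and assuming the file is passed with -machinefile
(the exact flag and file format depend on the MPICH2 version in use), a
hypothetical ./machines file with one entry per process could look like:

steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu

mpirun -machinefile ./machines -np 4 ./hello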
On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh wrote:
> Win Than Aung wrote:
>
> MPI_Recv() << is it possible to receive the message sent from other
> sources?

is MPI_Recv better for receiving msgs from multiple processors and gathering
them on 1 processor, or is MPI_Gather better?
thanks
winthan
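For comparison, a minimal MPI_Gather sketch, assuming each rank contributes a
single double: the root gets one value per rank from a single collective call
instead of a loop of receives.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i, root = 0;
    double mine, *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank * 10.0;                       /* each rank's contribution */
    if (rank == root)
        all = malloc(size * sizeof(double));  /* receive buffer only needed on root */

    /* one collective call replaces the loop of MPI_Send/MPI_Recv pairs */
    MPI_Gather(&mine, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);

    if (rank == root) {
        for (i = 0; i < size; i++)
            printf("rank %d contributed %g\n", i, all[i]);
        free(all);
    }

    MPI_Finalize();
    return 0;
}

MPI_Gather fits when every rank contributes the same, known amount of data;
a loop of MPI_Recv calls with tags is more flexible when message sizes or
senders vary.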
On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh wrote:
> Win Than Aung wrote:
>
> MPI_Recv() << is it possible to receive the message sent from other sources?
MPI_Recv() << is it possible to receive the message sent from other
sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
thanks
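On the MPI_ANY_SOURCE part: it is accepted as the source argument of
MPI_Recv, and the actual sender can be read back from the status afterwards.
A minimal sketch (the payload and tag values here are made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    double val;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (i = 1; i < size; i++) {   /* take the messages in whatever order they arrive */
            MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %g from rank %d (tag %d)\n",
                   val, status.MPI_SOURCE, status.MPI_TAG);
        }
    } else {
        val = rank * 1.5;
        MPI_Send(&val, 1, MPI_DOUBLE, 0, rank, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}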
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.98023223877e-05 seconds
too few entries in machinefile
i put