FWIW: you might want to take an MPI tutorial; they're really helpful
for learning MPI's capabilities and how to use the primitives. The
NCSA has 2 excellent MPI tutorials (intro and advanced); they both
require free registration:
http://ci-tutor.ncsa.uiuc.edu/login.php
I got the solution. I just need to set the appropriate tag on the send and the
receive. Sorry for asking.
thanks
winthan
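A minimal sketch of the tag-matching fix described above (the ranks, tag value, and payload below are assumptions, not taken from the original code): the tag passed to MPI_Send has to match the tag given to MPI_Recv (or the receive must use MPI_ANY_TAG), otherwise the receive never matches the incoming message.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, rank, value = 42, tag = 7;   /* tag value is arbitrary; it just has to match */
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (np < 2) MPI_Abort(MPI_COMM_WORLD, 1);   /* need a sender and a receiver */

    if (rank == 1) {
        /* sender uses tag 7 */
        MPI_Send(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* receiver must ask for the same tag (or MPI_ANY_TAG) */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, tag,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}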
On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung wrote:
> thanks Eugene for your example, it helps me a lot. I bumped into one more
> problem.
> Let's say I have the file content as follows:
>
thanks Eugene for your example, it helps me a lot. I bumped into one more
problem.
Let's say I have the file content as follows. I have a total of six files,
all of which contain real and imaginary values:
"
1.001212 1.0012121 //0th
1.001212 1.0012121 //1st
1.001212 1.0012121 //2nd
1.001212 1
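The rest of that message is cut off in the archive, but a minimal sketch of reading one such file of real/imaginary pairs (the file name and the array bound are hypothetical; the original post does not show its reading code):

#include <stdio.h>

int main(void) {
    /* read "real imaginary" pairs, one pair per line, into two arrays;
     * anything after the two numbers (e.g. the //0th annotations) is ignored */
    double re[1000], im[1000];           /* 1000 is an assumed upper bound */
    char line[256];
    int n = 0;
    FILE *fp = fopen("data0.txt", "r");  /* hypothetical file name */
    if (fp == NULL) return 1;
    while (n < 1000 && fgets(line, sizeof line, fp) != NULL) {
        if (sscanf(line, "%lf %lf", &re[n], &im[n]) == 2)
            n++;
    }
    fclose(fp);
    printf("read %d complex values\n", n);
    return 0;
}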
Win Than Aung wrote:
Thanks for your reply, Jeff. So I tried the following:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv) {
    int np, me, sbuf = -1, rbuf = -2, mbuf = 1000;
    int data[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    if (np < 2) MPI_Abort(MPI_COMM_WORLD, 1);
This looks like a question for the MPICH2 developers.
Specifically, it looks like you are using MPICH2, not Open MPI. They
are entirely different software packages maintained by different
people -- we're not really qualified to answer questions about
MPICH2. The top-level API is the same
In the example you cite below, it looks like you're mixing MPI_Gather
and MPI_Send.
MPI_Gather is a "collective" routine; it must be called by all
processes in the communicator. All processes will send a buffer/
message to the root; only the root process will receive all the
buffers/messages.
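A hedged sketch of the collective usage being described (the buffer sizes and types here are assumptions): every rank, root included, makes the same MPI_Gather call, and only the root's receive buffer is actually filled.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, me, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    int sendval = me * 10;                   /* arbitrary per-rank value */
    int *recvbuf = NULL;
    if (me == 0)
        recvbuf = malloc(np * sizeof(int));  /* only the root needs a receive buffer */

    /* collective: called by every rank in the communicator, not just the root */
    MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (me == 0) {
        for (i = 0; i < np; i++)
            printf("from rank %d: %d\n", i, recvbuf[i]);
        free(recvbuf);
    }
    MPI_Finalize();
    return 0;
}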
PS: extra question:
qsub -I -q standby -l select=1:ncpus=8
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.9802
Hi, thanks for your reply. Let's say I have 3 processors. I send messages from
the 1st and 2nd processors and want to gather them in processor 0, so I tried
the following; it couldn't receive the messages sent from processors 1 and 2.
http://www.nomorepasting.com/getpaste.php?pasteid=22985
PS: is MPI_Recv is b
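The pasted code isn't reproduced in the archive, but a minimal sketch of the pattern being described (the tag and payloads are assumptions): ranks 1 and 2 each send one message, and rank 0 posts two receives with MPI_ANY_SOURCE, using the status to see which rank each message actually came from.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, me, msg, i;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    if (np < 3) MPI_Abort(MPI_COMM_WORLD, 1);    /* run with at least 3 ranks */

    if (me == 1 || me == 2) {
        msg = me * 100;                          /* arbitrary payload */
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (me == 0) {
        for (i = 0; i < 2; i++) {
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d\n", msg, status.MPI_SOURCE);
        }
    }
    MPI_Finalize();
    return 0;
}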
Win Than Aung wrote:
MPI_Recv() << is it possible to receive the message sent from
other sources? I tried MPI_ANY_SOURCE in place of source but it
doesn't work out
Yes of course. Can you send a short example of what doesn't work? The
example should presumably be less than about 20 lines.
MPI_Recv() << is it possible to receive the message sent from other
sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
thanks