There is nothing MPI-specific in your code snippet.
You should try to find out what is different in your
code for node 0. You have mentioned that you have
moved the root node to other nodes, so it's not machine-specific.
You might be setting up the arrays differently
on the different nodes. You sh
Hi All,
Thanks for the help. I think that I don't have the cache issue,
because all the processes have the same amount of data and it is
accessed in the same fashion. My problem is partially solved, as I
was using 2, 4, 8, 16, 32 and 64 processes for my application
code. Now what I did was use 3 pr
This is not an MPI problem.
Without looking at your code in detail, I'm guessing that you're
accessing memory without any regard to memory layout and/or caching.
Such an access pattern will therefore thrash your L1 and L2 caches
and access memory in a truly horrible pattern that guarantees poor
performance.
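
For example (just a sketch; the array name and size are made up, not
taken from your code), the two loops below touch exactly the same
elements, but the second one ignores C's row-major layout and misses
the cache on almost every access:

#include <stdio.h>

#define N 1024

static double a[N][N];

int main(void)
{
    double sum = 0.0;
    int i, j;

    /* cache-friendly: C stores a[][] row-major, so consecutive j values
       are adjacent in memory and every cache line gets fully used */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];

    /* cache-hostile: each access jumps N * sizeof(double) bytes,
       so nearly every load misses in the L1/L2 caches */
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            sum += a[i][j];

    printf("%f\n", sum);
    return 0;
}

If your inner loop runs over the slowest-varying index, as in the second
loop, reordering the loops (or the data) usually makes a large difference.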
Thanks,
The array bounds are the same on all the nodes, and the compute nodes
are identical, i.e. SunFire V890 nodes. I have also changed the root
process to be on different nodes, but the problem remains the same. I
still don't understand the problem very well and my progress is at a
standstill.
Thanks for your reply,
I used MPI_Wtime for my application, but even then process 0 took
longer to execute the mentioned code segment. I might be wrong, but
what I see is that process 0 takes more time to access the array
elements than the other processes. Now I don't see what to do, because
the mentione
Hi,
I'm not sure if that is a problem,
but in MPI applications you should
use MPI_Wtime() for time measurements.
Jody
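
Something along these lines (only a sketch; the comment marks where
your own code segment would go):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* make sure all ranks start timing together */
    t0 = MPI_Wtime();

    /* ... the code segment you want to measure ... */

    t1 = MPI_Wtime();
    printf("rank %d: %f s\n", rank, t1 - t0);

    MPI_Finalize();
    return 0;
}

The MPI_Barrier before the first MPI_Wtime call is there so that no
rank's measurement includes time spent waiting for the others to arrive.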
On 10/25/07, 42af...@niit.edu.pk <42af...@niit.edu.pk> wrote:
> Hi all,
> I am a research assistant (RA) at NUST Pakistan in the High Performance
> Scientific Computing Lab. I am