On Tue, 2009-12-01 at 05:47 -0800, Tim Prince wrote:
> amjad ali wrote:
> > Hi,
> > thanks T.Prince,
> >
> > Your statement, "I'll just mention that we are well into the era of
> > 3 levels of programming parallelization: vectorization, threaded
> > parallel (e.g. OpenMP), and process parallel (e.g. MPI)," is a great
> > piece of new learning for me. Now I understand the picture better.
> >
> > Can you please explain ...
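
To make the three levels concrete, here is a minimal toy sketch (my own, not
from the thread; the file name and compile line are only typical and will vary
by system). The outer work is split across MPI processes, each process's share
is split across OpenMP threads, and the innermost loop is simple enough that
the compiler can auto-vectorize it at -O3:

/* Build (typical): mpicc -fopenmp -O3 levels.c -o levels
 * Run  (typical): mpirun -np 4 ./levels
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Process parallel: each MPI rank owns one chunk of the index range. */
    int chunk = N / size;
    int start = rank * chunk;
    int end   = (rank == size - 1) ? N : start + chunk;

    static double a[N], b[N];   /* zero-initialized static arrays */
    double local_sum = 0.0;

    /* Threaded parallel: OpenMP threads share this rank's chunk.
     * Vectorization: the stride-1 loop body is a candidate for the
     * compiler's auto-vectorizer (SIMD) at -O3. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = start; i < end; i++) {
        a[i] = 2.0 * b[i] + 1.0;
        local_sum += a[i];
    }

    /* Combine the per-process partial sums on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}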
amjad ali wrote:
Hi,
Suppose we run a parallel MPI code with 64 processes on a cluster of, say,
16 nodes, where each node has a multicore CPU with, say, 4 cores.
Now all 64 cores in the cluster are each running one process. The program is
SPMD, meaning all processes have the same workload.
Now if we had done auto-vectorization ...
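
For the layout described above (64 ranks on 16 nodes with 4 cores each, i.e.
one rank per core), a small sketch like the following can confirm the
placement. It is not from the thread; the program name, host names, hostfile
format and launch line are assumptions (Open MPI-style), so adjust them for
your own MPI and scheduler. Each rank prints the node it landed on; with the
setup above you would expect to see every node name 4 times.

/* Typical launch:  mpirun -np 64 -hostfile hosts ./placement
 * where "hosts" lists the 16 nodes, one per line, e.g. "node01 slots=4"
 * (Open MPI-style hostfile; the node names here are made up).
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* SPMD: every process runs this same code, differing only in its rank
     * (and hence in the data it would work on). */
    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}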