On 12/11/2011 12:16 PM, Andreas Schäfer wrote:
Hey,

on an SMP box, threaded codes CAN always be faster than their MPI
equivalents. One reason why MPI sometimes turns out to be faster is
that with MPI every process actually initializes its own data, so the
data ends up in the NUMA domain to which the core running that process
belongs. A lot of threaded codes instead initialize everything from a
single thread, which places all pages in that one thread's NUMA domain.
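
A minimal sketch of the first-touch initialization Andreas describes,
assuming a C code parallelized with OpenMP; the function name
alloc_first_touch is made up for illustration:

    #include <stdlib.h>

    /* Allocate and initialize an array with the same parallel layout
     * that will later compute on it.  Pages are physically placed on
     * first touch, so each page lands in the NUMA domain of the core
     * whose thread touches it first. */
    static double *alloc_first_touch(size_t n)
    {
        double *a = malloc(n * sizeof *a);
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < (long)n; i++)
            a[i] = 0.0;   /* first touch places this page locally */
        return a;
    }

The later compute loops need to use the same schedule(static)
distribution; otherwise threads mostly end up working on remote pages.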

Hello,

I am trying to use MPI to solve the Fourier equation in 3D. In the
code, I have the following parameters:

number of domains on Ox : x_domains
number of domains on Oy : y_domains
number of domains on Oz : z_domains
size of grid on Ox : size_x
size of grid on Oy : size_y
size of grid on Oz : size_z
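
A minimal sketch of how such a decomposition is often set up with a 3D
Cartesian communicator; the domain counts and grid sizes below are
hard-coded placeholders matching the names above, not values from the
original post:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Placeholders; run with -np x_domains*y_domains*z_domains. */
        int x_domains = 2, y_domains = 2, z_domains = 2;
        int size_x = 64, size_y = 64, size_z = 64;

        int dims[3]    = { x_domains, y_domains, z_domains };
        int periods[3] = { 0, 0, 0 };   /* non-periodic boundaries */
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);

        int rank, coords[3];
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 3, coords);

        /* Local block size, assuming the grid divides evenly. */
        printf("rank %d at (%d,%d,%d) owns a %dx%dx%d block\n",
               rank, coords[0], coords[1], coords[2],
               size_x / x_domains, size_y / y_domains,
               size_z / z_domains);

        MPI_Finalize();
        return 0;
    }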

I guess that on a multicore machine, OpenMP/pthreads code will always
run faster than MPI code on the same box, even if the MPI
implementation is efficient and uses a shared-memory mechanism whereby
the data is actually shared across the different processes, though it
is shared in a different way than it is across threads.
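
For contrast, a minimal sketch of what sharing means in the threaded
case: all OpenMP threads read and write one array in a single address
space, with no copies and no message passing:

    #include <stdio.h>

    #define N 1000000

    static double a[N];   /* one array, visible to every thread */

    int main(void)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;   /* every thread writes the SAME array */
            sum += a[i];
        }
        printf("sum = %f\n", sum);
        return 0;
    }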

Hello all,

If you are using the developer's trunk or a nightly tarball, or are
interested in the new mapping and binding options that will be included
in the next feature series (1.7), then please read on. If not, then
please ignore this.

People have raised the question of "the trunk isn't binding processes …
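
For reference, the 1.7-style options look roughly like the following
(flag names as they appear in the 1.7 series; exact syntax may differ
in the trunk, so check mpirun --help on your build; the executable name
is a placeholder):

    mpirun --map-by socket --bind-to core --report-bindings -np 8 ./heat3d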