Thanks for your answers; I'll use the normal C-style MPI then. I checked
Boost, but it seems it only supplies a shared communication interface
among the nodes, which makes it a little difficult to parallelize the
processes themselves, and Boost also obliges me to have an MPI
installation anyway. Boost is w
I'm sorry. I meant boost.mpi ...
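Since the choice above is to call the plain C MPI API directly from C++, here is a minimal sketch of what that looks like; the only assumption is an installed MPI implementation such as Open MPI:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    // Plain C bindings used from C++; no Boost and no MPI:: namespace.
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::cout << "rank " << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}

Compile with the usual wrapper (e.g. mpicxx hello.cpp -o hello) and launch with mpirun.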
Luis Vitorio Cargnini wrote:
Hi,
Please, I'm writing a C++ application that will use MPI. My problem is
that I want to use the C++ bindings, and that is where my doubts come
from. In all the examples I found, people use them almost exactly as in
C, except for adding the namespace MPI:: before the procedure calls.
Hi,
Luis Vitorio Cargnini wrote:
Hi,
Please, I'm writing a C++ application that will use MPI. My problem is
that I want to use the C++ bindings, and that is where my doubts come
from. In all the examples I found, people use them almost exactly as in
C, except for adding the namespace MPI:: before the procedure calls.
Hi,
Please, I'm writing a C++ application that will use MPI. My problem is
that I want to use the C++ bindings, and that is where my doubts come
from. In all the examples I found, people use them almost exactly as in
C, except for adding the namespace MPI:: before the procedure calls.
For example I want to
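For what it's worth, a minimal sketch of the same kind of hello-world written with the MPI-2 C++ bindings, i.e. the MPI:: namespace the examples use. Note that these bindings were later deprecated and eventually removed from the MPI standard, so the plain C calls remain the more portable choice:

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[]) {
    // MPI-2 C++ bindings: the same operations as the C API, reached
    // through the MPI:: namespace and communicator objects.
    MPI::Init(argc, argv);

    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();

    std::cout << "rank " << rank << " of " << size << std::endl;

    MPI::Finalize();
    return 0;
}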
app1 will have ranks 0-2, app2 will have ranks 3-(2+x), app3 will have
ranks (3+x)-(2+x+y)... you get the picture.
On Jul 3, 2009, at 10:14 AM, Simone Pellegrini wrote:
Ralph Castain wrote:
Sure:
mpirun --np 3 mpi_app1 "app1_args" : -np x mpi_app2 "app2_args" : -np y mpi_app3 "app3_args"
Nice, but what are the implications for the process ranks?
Ralph Castain wrote:
Sure:
mpirun --np 3 mpi_app1 "app1_args" : -np x mpi_app2 "app2_args" : -np y mpi_app3 "app3_args"
Nice, but what are the implications for the process ranks?
Can I assume that app1 will have rank 0, app2 rank 1, and app3 rank 3? Or
are there no assumptions that can be made?
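As the reply above spells out, in an MPMD launch the ranks are handed out in the order the app contexts appear on the mpirun command line, so each application owns a contiguous block of ranks rather than a single fixed rank. If a process needs to know which app context it was started from, it can query the predefined MPI_APPNUM attribute; a minimal sketch:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // MPI_APPNUM is the index of the app context this process was
    // launched from: 0 for the first executable on the mpirun line,
    // 1 for the second, and so on.
    int* appnum = nullptr;
    int flag = 0;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &flag);

    if (flag)
        std::cout << "rank " << rank << " came from app context "
                  << *appnum << std::endl;

    MPI_Finalize();
    return 0;
}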
Sure:
mpirun --np 3 mpi_app1 "app1_args" : -np x mpi_app2 "app2_args" : -np y mpi_app3 "app3_args"
On Jul 3, 2009, at 9:36 AM, Simone Pellegrini wrote:
Dear all,
the current implementation of mpirun starts the same executable on
different nodes. For some reason I need to start different MPI applications
Dear all,
the current implementation of mpirun starts the same executable on
different nodes. For some reason I need to start different MPI
applications across the nodes, and I want to use MPI to communicate among
these applications. In short, I want to break the SPMD model, something
like:
mpirun --np
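Ralph's reply above shows the mpirun syntax for exactly this. As a hedged illustration of the "different applications sharing MPI_COMM_WORLD" idea, two separate programs could look roughly like the following; the names producer/consumer and the launch line at the end are assumptions for the example, not something taken from this thread:

// producer.cpp: launched as the first app context on the mpirun line
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int payload = 42;
        // The last rank in MPI_COMM_WORLD belongs to the second app context.
        MPI_Send(&payload, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

// consumer.cpp: launched as the second app context on the mpirun line
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == size - 1) {
        int payload = 0;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::cout << "consumer (rank " << rank << ") received " << payload << std::endl;
    }

    MPI_Finalize();
    return 0;
}

// Both executables share one MPI_COMM_WORLD when launched together, e.g.:
//   mpirun -np 2 ./producer : -np 2 ./consumer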
Kris,
Using MX_CSUM should _not_ make a difference by itself, but it requires
the debug library, which may alter the timing enough to avoid a race (in
MX, OMPI, or the application).
Correct, if you use the MTL then all messages are handled by MX
(internode, shared memory and self).
Scott
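For reference, the two workarounds discussed in this exchange would be invoked roughly as follows; the executable name and process count are placeholders, and -x is only needed to export the variable to remote nodes:

# Enable MX checksumming (pulls in the MX debug library)
mpirun -x MX_CSUM=1 -np 4 ./my_mpi_app

# Route all messages through the MX MTL via the CM PML
mpirun -mca pml cm -np 4 ./my_mpi_app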
Dear all,
I apologize to the moderator of the mailing list if my message is not
strictly related to the Open MPI library.
I am a PhD student at the University of Innsbruck; my topic is the
optimization of MPI applications. During my research I have collected
several transformations that can impr
Scott,
Thanks for your advice! Good to know about the checksum debug
functionality! Strangely enough, running with either "MX_CSUM=1" or "-mca
pml cm" allows Murasaki to work normally, and makes the test case I
attached in my previous mail work. Very suspicious, but at least this
does make a functi