Hi folks,
I looked for ways to tell "mpiexec" to forward some environment
variables, and I saw a mismatch:
---
http://www.open-mpi.org/faq/?category=running#mpirun-options
...
--x : A comma-delimited list of environment
variables
Dear MPI community,
Please inform me if it is possible to
migrate MPI processes among the nodes or cores. By node I mean a machine having
multiple cores. So the cluster can have several nodes and each node can have
several cores. I want to know if it is t
I'm not sure where the FAQ got its information, but it has always been one
param per -x option.
I'm afraid there isn't any envar to support the setting of multiple -x options.
We didn't expect someone to forward very many, if any, so we didn't create that
capability. It wouldn't be too hard to
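
For what it's worth, a minimal sketch of what the repeated -x usage looks like
in practice (the variable names FOO and BAR and the file name env_check.c are
made up for illustration):

  /* env_check.c - print environment variables forwarded with mpirun -x */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      /* Each forwarded variable can be read back on the remote ranks. */
      printf("rank %d: FOO=%s BAR=%s\n", rank,
             getenv("FOO") ? getenv("FOO") : "(unset)",
             getenv("BAR") ? getenv("BAR") : "(unset)");
      MPI_Finalize();
      return 0;
  }

  /* One -x per variable, not a comma-delimited list:
   *   mpicc env_check.c -o env_check
   *   FOO=1 BAR=2 mpirun -np 4 -x FOO -x BAR ./env_check
   */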
I'm not sure what you mean by "migrate". Are you talking about restarting a
failed process at a different location? Or arbitrarily moving a process to
another location upon command?
On Nov 10, 2011, at 5:18 AM, Mudassar Majeed wrote:
>
> Dear MPI community,
>
Thank you for your reply. I am implementing a load balancing function for MPI
that will balance the computation load and the communication at the same time.
So my algorithm assumes that, in the end, the cores may each get a different
number of processes to run. In the beginning (before that function w
On Nov 10, 2011, at 8:11 AM, Mudassar Majeed wrote:
> Thank you for your reply. I am implementing a load balancing function for
> MPI that will balance the computation load and the communication at the same
> time. So my algorithm assumes that, in the end, the cores may each get a
> different number
On Nov 10, 2011, at 6:02 AM, Paul Kapinos wrote:
> Hi folks,
> I looked for ways to tell "mpiexec" to forward some environment
> variables, and I saw a mismatch:
>
> ---
> http://www.open-mpi.org/faq/?category=running#mpirun-options
The MPI standard does not provide explicit support for process
migration. However, some MPI implementations (including Open MPI) have
integrated such support based on checkpoint/restart functionality. For
more information about the checkpoint/restart process migration
functionality in Open MPI see
Paul,
I'm sure this isn't the response you want to hear, but I'll suggest it
anyway:
Queuing systems can forward the submitter's environment if desired. For
example, in SGE, the -V switch forwards all the environment variables to
the job's environment, so if there's one you can use to launch your
Thank you for your reply. In our previous publication, we found that running
more than one process per core and balancing the computational load
considerably reduces the total execution time. You know the MPI_Graph_create
function; we created another function, MPI_Load_create, that ma
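
As a point of reference, here is a minimal sketch of the standard
MPI_Graph_create call mentioned above; the ring pattern is just an arbitrary
illustration, and MPI_Load_create (the poster's own proposed function) is not
shown:

  /* graph_topo.c - describe a communication pattern to MPI with the
   * standard MPI_Graph_create and let reorder=1 remap ranks to fit it. */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Ring pattern: process i talks to i-1 and i+1 (mod size). */
      int *idx   = malloc(size * sizeof(int));
      int *edges = malloc(2 * size * sizeof(int));
      for (int i = 0; i < size; i++) {
          idx[i] = 2 * (i + 1);                  /* cumulative degrees */
          edges[2 * i]     = (i - 1 + size) % size;
          edges[2 * i + 1] = (i + 1) % size;
      }

      MPI_Comm graph_comm;
      MPI_Graph_create(MPI_COMM_WORLD, size, idx, edges,
                       1 /* reorder: MPI may move ranks to match the graph */,
                       &graph_comm);

      int new_rank;
      MPI_Comm_rank(graph_comm, &new_rank);
      printf("old rank %d -> graph rank %d\n", rank, new_rank);

      MPI_Comm_free(&graph_comm);
      free(idx);
      free(edges);
      MPI_Finalize();
      return 0;
  }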
On 11/10/2011 5:19 AM, Jeff Squyres wrote:
On Nov 10, 2011, at 8:11 AM, Mudassar Majeed wrote:
Thank you for your reply. I am implementing a load balancing function for MPI
that will balance the computation load and the communication at the same time.
So my algorithm assumes that, in the end, the cores
So what you are looking for is an MPI extension API that lets you say "migrate
me from my current node to node <X>"? Or do you have a rank that is the
"master" that would order "move rank N to node <X>"?
Either could be provided, I imagine - just want to ensure I understand what you
need. Can you pa
Note that the "migrate me from my current node to node <X>" scenario
is covered by the migration API exported by the C/R infrastructure, as
I noted earlier.
http://osl.iu.edu/research/ft/ompi-cr/api.php#api-cr_migrate
The "move rank N to node " scenario could probably be added as an
extension of th
For example, there are 10 nodes, and each node contains 20 cores. We will have
200 cores in total, and let's say there are 2000 MPI processes. We start the
application with 10 MPI processes on each core. Let's say Comm(Pi, Pj) denotes
how much communication Pi and Pj make with each other, and let's say each proces
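
Just to make the objective concrete, a hypothetical, scaled-down sketch of the
trade-off being described (the Comm values, the mapping, and the 6-process size
are invented; the real case would use the full 2000-process matrix):

  /* comm_cost.c - given Comm(Pi, Pj) and a process-to-node mapping, count
   * how much of the communication crosses node boundaries. */
  #include <stdio.h>

  #define NPROCS 6   /* scaled down from the 2000-process example */

  int main(void)
  {
      /* comm[i][j]: how much Pi and Pj communicate (arbitrary values). */
      int comm[NPROCS][NPROCS] = {
          {0, 5, 1, 0, 0, 0},
          {5, 0, 4, 0, 1, 0},
          {1, 4, 0, 2, 0, 0},
          {0, 0, 2, 0, 6, 3},
          {0, 1, 0, 6, 0, 7},
          {0, 0, 0, 3, 7, 0},
      };
      /* node[i]: which node process Pi is mapped to. */
      int node[NPROCS] = {0, 0, 0, 1, 1, 1};

      long intra = 0, inter = 0;
      for (int i = 0; i < NPROCS; i++)
          for (int j = i + 1; j < NPROCS; j++) {
              if (node[i] == node[j]) intra += comm[i][j];
              else                    inter += comm[i][j];
          }

      /* A load balancer would move processes to shrink "inter" while
       * keeping the per-node process counts (the compute load) even. */
      printf("intra-node: %ld  inter-node: %ld\n", intra, inter);
      return 0;
  }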
On Nov 10, 2011, at 11:30 AM, Mudassar Majeed wrote:
> For example there are 10 nodes, and each node contains 20 cores. We will have
> 200 cores in total and let say there are 2000 MPI processes. We start the
> application with 10 MPI on each core.
Is this just to be able to simulate very large
Hi Jeff,
In the attached file Compile_out.tar.bz2 I have included the out
files for config, make, and install. I also included another copy of the
out_test file so that it gives you all of the info that I have. Again your
help is much appreciated.
Amos Leffler
On Wed, Nov 9, 2011 at 1
the output to the command:
> > "mpicc hello_cc.c -o hello_cc"
> > and lists files which do not appear to be present. I checked the
> > permissions and they seem to be correct, so I am stumped. I did use the
> > make and install commands and they
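
For reference, a minimal stand-in for the hello program being compiled (the
real hello_cc.c ships with Open MPI and may differ; the commands in the
trailing comment are the usual compile-and-run steps):

  /* hello_min.c - smallest possible MPI check */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();
      return 0;
  }

  /* Compile and run:
   *   mpicc hello_min.c -o hello_min
   *   mpirun -np 2 ./hello_min
   */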