On Tue, Mar 14, 2006 at 12:00:57PM -0600, Edgar Gabriel wrote:
> you are touching on a difficult area of Open MPI here:
I don't doubt it. I haven't found an MPI implementation yet that does
this without any quirks or oddities :>
> - name publishing across independent jobs does unfortunately not work
I think I know what goes wrong. Since they are in different 'universes',
they will have exactly the same 'Open MPI name', and therefore the
algorithm in intercomm_merge cannot determine which process should be
first and which is second.
Practically, all jobs which are connected at a certain p
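For reference, the accept/connect-and-merge sequence at issue looks roughly
like this on the accepting side (a minimal sketch only: how the port name
reaches the other process, and all error checking, are left out):

    /* Sketch: "server" side of an accept/connect pair followed by a merge.
     * The connecting side calls MPI_Comm_connect with the same port name
     * and merges with high = 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter, intra;

        MPI_Init(&argc, &argv);

        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port name: %s\n", port);   /* hand this to the other process somehow */

        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

        /* high = 0 on the accepting side, 1 on the connecting side, so the
         * merge has a defined ordering of the two groups. */
        MPI_Intercomm_merge(inter, 0, &intra);

        MPI_Comm_free(&intra);
        MPI_Comm_free(&inter);
        MPI_Close_port(port);
        MPI_Finalize();
        return 0;
    }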
you are touching on a difficult area of Open MPI here:
- name publishing across independent jobs unfortunately does not work
right now (it does work if all processes have been started by the same
mpirun, or if they have been spawned by a parent process using
MPI_Comm_spawn). Your approach with pass
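The spawn case mentioned above, where parent and children share one
universe, would look roughly like this on the parent side (a sketch;
"child" is just a placeholder executable name, error checking omitted):

    /* Sketch: parent spawns children with MPI_Comm_spawn and merges the
     * resulting intercommunicator; this is the case reported to work. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children, everyone;

        MPI_Init(&argc, &argv);

        MPI_Comm_spawn("child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

        /* Parent side uses high = 0; the children obtain the parent
         * intercommunicator via MPI_Comm_get_parent and merge with high = 1. */
        MPI_Intercomm_merge(children, 0, &everyone);

        MPI_Comm_free(&everyone);
        MPI_Comm_free(&children);
        MPI_Finalize();
        return 0;
    }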
could you provide me with a simple test code for that? Comm_join and
intercomm_merge should work; I would have a look at that...
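A simple test along those lines might look like the sketch below, assuming
the two independently started processes already share a connected TCP
socket (connect_socket is a placeholder for the usual BSD socket setup):

    /* Sketch of a Comm_join + Intercomm_merge test: each process passes its
     * end of an already connected socket to MPI_Comm_join, then merges the
     * resulting intercommunicator.  Error checking omitted. */
    #include <mpi.h>

    extern int connect_socket(void);   /* placeholder: returns a connected fd */

    int main(int argc, char **argv)
    {
        MPI_Comm inter, intra;
        int fd, is_server;

        MPI_Init(&argc, &argv);

        fd = connect_socket();
        is_server = (argc > 1);         /* e.g. pass an argument on one side */

        MPI_Comm_join(fd, &inter);

        /* Give the two sides different 'high' values so the merge can order them. */
        MPI_Intercomm_merge(inter, is_server ? 1 : 0, &intra);

        MPI_Comm_free(&intra);
        MPI_Comm_free(&inter);
        MPI_Finalize();
        return 0;
    }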
(separate answer to your second email is coming soon)
Thanks
Edgar
Robert Latham wrote:
Hi
I've got a bit of an odd bug here. I've been playing around with MPI
process
Hello
In playing around with process management routines, I found another
issue. This one might very well be operator error, or something
implementation specific.
I've got two processes (a and b), linked with openmpi, but started
independently (no mpiexec).
- A starts up and calls MPI_Init
- A
Hi
I've got a bit of an odd bug here. I've been playing around with MPI
process management routines and I noticed the following behavior with
openmpi-1.0.1:
Two processes (a and b), linked with ompi, but started independently
(no mpiexec, just started the programs directly).
- a and b: call MPI_
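The name-publishing variant of this scenario, the one reported above as not
working across independently started jobs, would look roughly like this
(a sketch; "my-service" is an arbitrary service name, error checking omitted):

    /* Sketch: process a publishes a port name under a service name,
     * process b looks it up and connects. */
    #include <mpi.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter;
        int i_am_a = (argc > 1 && strcmp(argv[1], "a") == 0);

        MPI_Init(&argc, &argv);

        if (i_am_a) {
            MPI_Open_port(MPI_INFO_NULL, port);
            MPI_Publish_name("my-service", MPI_INFO_NULL, port);
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
            MPI_Unpublish_name("my-service", MPI_INFO_NULL, port);
            MPI_Close_port(port);
        } else {
            MPI_Lookup_name("my-service", MPI_INFO_NULL, port);
            MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        }

        MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }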
I see responses to noncritical parts of my discussion, but not to the
following. Is it a known issue, a fixed issue, or a
we-don't-want-to-discuss-it issue?
Michael
On Mar 7, 2006, at 4:39 PM, Michael Kluskens wrote:
The following errors/warnings also exist when running my spawn test
on a clean
On Mar 14, 2006, at 4:42 AM, Pierre Valiron wrote:
I am now attempting to tune openmpi-1.1a1r9260 on Solaris Opteron.
I guess I should have pointed this out more clearly earlier. Open
MPI 1.1a1 is a nightly build of an alpha release from our development
trunk. It isn't guaranteed to be stable
> -----Original Message-----
> > [-:13327] mca: base: component_find: unable to open:
> > dlopen(/usr/local/lib/openmpi/mca_pml_teg.so, 9): Symbol not found:
> > _mca_ptl_base_recv_request_t_class
> >   Referenced from: /usr/local/lib/openmpi/mca_pml_teg.so
> >   Expected in: flat namespace
> >
I am now attempting to tune openmpi-1.1a1r9260 on Solaris Opteron.
Each quad-processor node has two Ethernet interfaces, bge0 and bge1.
The bge0 interfaces are dedicated to parallel jobs and correspond to
node names pxx; they use a dedicated gigabit switch.
The bge1 interfaces provide NFS sharing etc. and
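If it helps with the interface question: the usual knob for keeping MPI
point-to-point traffic on a particular interface is the TCP BTL include
list, e.g. adding "--mca btl_tcp_if_include bge0" to the mpirun command
line; whether the out-of-band/startup traffic can be pinned the same way in
1.1a1 would need checking against its MCA parameter list.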