Thanks Federico,
It worked fine, but I have a small issue. The following code demonstrates how
I use mpi::intercommunicator. In the spawned child processes, the
intercommunicator size is the same as the number of spawned processes, but it
should be 1, right?
Because I execute the manager process (manager.cpp
Download the nightly 1.3 release branch snapshot - not the actual
release, but the nightly tarball:
http://www.open-mpi.org/nightly/v1.3/
It is very close to release quality - only waiting for a couple of
things, none of which would impact this issue.
Let me know how this works for you.
Ra
Hi,
Thanks -- I downloaded the latest 1.4 snapshot after I saw your message and
verified that this issue does not seem to occur in it. However, I ran into
other stability issues (not necessarily surprising for a development
snapshot). Is there any idea on when 1.3.4 will be out and if this fix wil
Jeff Squyres wrote:
On Aug 26, 2009, at 10:38 AM, Jeff Squyres (jsquyres) wrote:
Yes, this could cause blocking. Specifically, the receiver may not
advance any other senders until the matching Irecv is posted and is
able to make progress.
I should clarify something else here -- for long mess
After much more work on this problem, and isolating it better, I finally found
a torque user who recognized the problem
and supplied the solution. Thanks to everyone on this list who responded to my
request for help. Here is my revised statement
of the problem and the solution:
On Fri, Aug 28, 2
I'm afraid the rank-file mapper in 1.3.3 has several known problems
that have been described on the list by users. We hopefully have those
fixed in the upcoming 1.3.4 release.
On Aug 31, 2009, at 10:01 AM, Sacerdoti, Federico wrote:
Hi,
I am trying to use the rankmap to bind a 4-proc mpi
Hi,
I am trying to use the rankmap to bind a 4-proc mpi job to one socket of a
two-socket, 8 core machine. However I'm getting a strange error.
CMDS USED
orterun --hostfile hostlist.1 -n 4 --mca rmaps_rank_file_path ./rankmap.1
desres-netscan -o $OUTDIR
$ cat rankmap.1
rank 0=drdb0235.en
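For reference, a rankfile of the shape being described (binding four ranks to the four cores of socket 0 on one node) would look roughly like this; the host name below is a placeholder, not the poster's:

```
rank 0=node01 slot=0:0
rank 1=node01 slot=0:1
rank 2=node01 slot=0:2
rank 3=node01 slot=0:3
```

The slot=socket:core form pins each rank to one core; whether it behaves correctly in 1.3.3 is exactly what the known rank-file mapper bugs affect.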
Look at
http://www.boost.org/doc/libs/1_40_0/doc/html/boost/mpi/intercommunicator.html
for Boost's wrapper around an MPI intercommunicator.
Federico
2009/8/28 Ashika Umanga Umagiliya
> Greetings all,
>
> I wanted to send some complex user-defined types between MPI processes and
> found out that
Hi,
I'm trying to do passive one-sided communication, unlocking a receive
buffer when it is safe and then re-locking it when data has arrived.
Locking also occurs for the duration of a send.
I also tried using post/wait and start/put/complete, but with that I see
hangs on the complete.
What
Dear users,
I'm not sure whether this is the right place to go to with my problem,
but maybe someone can give me some leads. I'm trying to run 'Gadget2'
using OMPI 1.3.3. The installation seems fine; I can run simple programs
on as many machines/nodes I want using a machinefile. I can also run
Gad
You need to check the release notes and compare the differences.
Also check the Open MPI version in both of them.
In general it's not such a good idea to run different versions of the software
for a performance comparison, or at all.
Since both of them are open source, backward compatibility is not alwa
Hi,
I have two machines with RHEL 5.2. I installed the OFED 1.4.1 driver on
the first machine; the second machine is using the OFED 1.3.1 that ships
with RHEL. My question is whether the different OFED driver versions will
affect performance.
Thanks.
Eric Lee