I fear that would be a bad thing to do, as it would disrupt mpirun's operations.
However, I did fix the problem by adding the topology as a param to the
pretty-print functions. Please see:
https://svn.open-mpi.org/trac/ompi/ticket/4356
Thanks for pointing it out
Ralph
On Mar 10, 2014, at 1:15 A
Hi,
On 02.02.2014 at 00:23, Reuti wrote:
>>
>>> Thanks for taking a look. I just learned from PGI that this is a known bug
>>> that will be fixed in the 14.2 release (February 2014).
Just out of curiosity: was there any update on this issue? It looks like PGI
14.3 is still failing.
-- Reuti
Greetings, and thanks for trying out our Java bindings.
Can you provide some more details? E.g., is there a particular program you're
running that incurs these problems? Or is there even a particular MPI function
that you're using that results in this segv (e.g., perhaps we have a specific
bug)?
Hi Ralph, I would like to report one more small thing.
The verbose output in bind_downwards sometimes gives an incorrect binding map
when I use heterogeneous nodes with different topologies. I confirmed that
this patch fixed the problem:
--- rmaps_base_binding.
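For illustration, here is a minimal C sketch of the shape of that fix, using
hypothetical stand-in types and names (the real change is in the rmaps file
named in the truncated diff header above): the verbose pretty-print path
receives the topology of the node actually being bound, rather than reusing a
single cached topology.

#include <stdio.h>

/* Hypothetical stand-ins for the hwloc/OMPI types -- illustration only. */
typedef struct { int ncores; } topo_t;
typedef struct { const char *hostname; topo_t *topology; } node_t;

/* The pretty-printer takes the topology of the node the binding refers
 * to, instead of consulting one local/cached topology. */
static void print_binding(const topo_t *topo, int core)
{
    printf("bound to core %d of %d\n", core, topo->ncores);
}

int main(void)
{
    topo_t quad = { 8 };    /* e.g., a 2 x quad-core node  */
    topo_t big  = { 16 };   /* a node with a different map */
    node_t a = { "nodeA", &quad };
    node_t b = { "nodeB", &big  };

    print_binding(a.topology, 3);   /* reported against nodeA's topology */
    print_binding(b.topology, 11);  /* reported against nodeB's topology */
    return 0;
}

With two different topologies in play, each binding is reported against its
own node's map, which is exactly what goes wrong on heterogeneous clusters
when one topology is reused for every node.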
Hi,
I have 8 nodes, each with 2 quad-core sockets, and the nodes have IB
connectivity. I am trying to run the OMPI Java bindings in OMPI trunk revision
30301 with 8 procs per node, totaling 64 procs. This gives a SIGSEGV error as
shown below.
I wonder if you have any suggestion to resolve this?
Thank you,
S