Daniel,
Are you using threads? I don't think openmpi-1.2.x works with threads.
Doug Reeder
On Oct 3, 2008, at 2:30 PM, Daniel Hansen wrote:
Oh, by the way, here is the segfault:
[m4b-1-8:11481] *** Process received signal ***
[m4b-1-8:11481] Signal: Segmentation fault (11)
[m4b-1-8:11481] Signal code: Address not mapped (1)
[m4b-1-8:11481] Failing at address: 0x2b91c69eed
[m4b-1-8:11483] [ 0] /lib64/libpthread.so.0 [0x33e8c0de70]
[...]
Hi,
I'm trying to get openmpi working over openib partitions. On this cluster,
the partition number is 0x109. The IB interfaces are pingable over the
appropriate ib0.8109 interface:
d2:/opt/openmpi-ib # ifconfig ib0.8109
ib0.8109  Link encap:UNSPEC  HWaddr 80-00-00-4A-FE-80-00-00-00-00-00-00-00-[...]
I have been testing some code against openmpi lately that always causes it
to crash during certain MPI function calls. The code does not seem to be
the problem, as it runs just fine against MPICH. I have tested it against
openmpi 1.2.5, 1.2.6, and 1.2.7, and they all exhibit the same problem.
Also [...]
Ralph Castain wrote:
> Interesting. I ran a loop calling comm_spawn 1000 times without a
> problem. I suspect it is the threading that is causing the trouble here.
I think so! My guess is that at a low level there is some trouble when
handling *concurrent* orted spawning. Maybe [...]
> You are welcome to send me the code. [...]
Interesting. I ran a loop calling comm_spawn 1000 times without a
problem. I suspect it is the threading that is causing the trouble here.
You are welcome to send me the code. You can find my loop code in your
code distribution under orte/test/mpi - look for loop_spawn and
loop_child.
Ralph
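For readers following the spawn discussion, here is a minimal sketch of such a loop (illustrative only, not the actual loop_spawn test from orte/test/mpi; "./loop_child" is a placeholder for a child binary that calls MPI_Init, MPI_Comm_get_parent, MPI_Comm_disconnect, and MPI_Finalize):

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    // Spawn one child per iteration and disconnect immediately, so the
    // intercommunicator is torn down before the next spawn.
    for (int i = 0; i < 1000; ++i) {
        MPI_Comm child;
        MPI_Comm_spawn("./loop_child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
        MPI_Comm_disconnect(&child);
        if (i % 100 == 0)
            printf("spawn iteration %d\n", i);
    }
    MPI_Finalize();
    return 0;
}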
Ralph Castain wrote:
>
> On Oct 3, 2008, at 7:14 AM, Roberto Fichera wrote:
>
>> Ralph Castain wrote:
>>> I committed something to the trunk yesterday. Given the complexity of
>>> the fix, I don't plan to bring it over to the 1.3 branch until
>>> sometime mid-to-end next week so it can be adequately tested. [...]
Eric,
In 1.3 and some of the latest 1.2.x versions, tuned is the default
component for collectives. However, the tuned component currently in the
trunk is optimized for high-performance networks (such as IB or MX), and
it does not deliver the best performance on slower devices such as
Ethernet.
On Oct 3, 2008, at 7:14 AM, Roberto Fichera wrote:
Ralph Castain wrote:
I committed something to the trunk yesterday. Given the complexity of
the fix, I don't plan to bring it over to the 1.3 branch until
sometime mid-to-end next week so it can be adequately tested.
Ok! So it means that I can check out from the SVN trunk to get your fix [...]
Ralph Castain wrote:
> I committed something to the trunk yesterday. Given the complexity of
> the fix, I don't plan to bring it over to the 1.3 branch until
> sometime mid-to-end next week so it can be adequately tested.
Ok! So it means that I can check out from the SVN trunk to get your fix,
r[...]
I committed something to the trunk yesterday. Given the complexity of
the fix, I don't plan to bring it over to the 1.3 branch until
sometime mid-to-end next week so it can be adequately tested.
Ralph
On Oct 3, 2008, at 5:02 AM, Roberto Fichera wrote:
Ralph Castain wrote:
Actually, it just occurred to me [...]
Hello all,
I am currently profiling a simple case where I replace multiple S/R
calls with Allgather calls, and it would _seem_ the simple S/R calls are
faster. Now, *before* I come to any conclusion on this, one of the
pieces I am missing is more detail on how/if/when the tuned coll MCA
[...]
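As a point of reference for this kind of comparison, here is a rough micro-benchmark sketch (the buffer size and the ring-style MPI_Sendrecv exchange are assumptions for illustration; the poster's actual S/R pattern may differ):

#include <mpi.h>
#include <algorithm>
#include <cstdio>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1024;                 // doubles per rank (assumed)
    std::vector<double> mine(count, rank), all(count * size);

    // Collective version.
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Allgather(mine.data(), count, MPI_DOUBLE,
                  all.data(), count, MPI_DOUBLE, MPI_COMM_WORLD);
    double t_coll = MPI_Wtime() - t0;

    // Hand-rolled version: classic deadlock-free ring of MPI_Sendrecv.
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int step = 1; step < size; ++step) {
        int dest = (rank + step) % size;          // send our block there
        int src  = (rank - step + size) % size;   // receive their block
        MPI_Sendrecv(mine.data(), count, MPI_DOUBLE, dest, 0,
                     all.data() + src * count, count, MPI_DOUBLE, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    std::copy(mine.begin(), mine.end(), all.begin() + rank * count);
    double t_p2p = MPI_Wtime() - t0;

    if (rank == 0)
        printf("allgather: %g s, sendrecv ring: %g s\n", t_coll, t_p2p);
    MPI_Finalize();
    return 0;
}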
On 03.10.2008, at 10:46, Jaime Perea wrote:
Hello again.
Since I already had version 6.1 of SGE, I reverted to it
and included the hacks (ssh, sshd -i and qlogin_wrap), and in
this way both the interactive qsh and qrsh and the batch qsub
worked with openmpi.
For me this is a solution, but I'm still curious about what was
going on in 6.2.
Ralph Castain wrote:
> Actually, it just occurred to me that you may be seeing a problem in
> comm_spawn itself that I am currently chasing down. It is in the 1.3
> branch and has to do with comm_spawning procs on subsets of nodes
> (instead of across all nodes). Could be related to this - you [...]
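For what it's worth, spawning onto a subset of nodes can be expressed with the reserved "host" info key of MPI_Comm_spawn; a minimal sketch ("node1" and "./child" are placeholder names, and the child is assumed to call MPI_Comm_get_parent and disconnect):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "node1");    // restrict children to one node
    MPI_Comm child;
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, info,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
    MPI_Comm_disconnect(&child);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}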
Hi,
Maybe you can have a look at
http://www.boost.org/doc/libs/1_36_0/doc/html/mpi.html
On Fri, 03 Oct 2008 09:11:32 +0200, Gabriele Fatigati wrote:
Hi,
you can use STL maps inside an MPI program just as in a serial program,
but you can't pass STL structures directly into MPI calls.
2008/10/2 Shafagh Jafer [...]
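For example, with Boost.MPI (the link above) a std::map can be sent directly, since Boost.MPI falls back on Boost.Serialization for STL containers. A minimal two-rank sketch (build with mpic++ and link boost_mpi and boost_serialization):

#include <boost/mpi.hpp>
#include <boost/serialization/map.hpp>
#include <boost/serialization/string.hpp>
#include <map>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char **argv)
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
        std::map<std::string, int> m = { {"alpha", 1}, {"beta", 2} };
        world.send(1, 0, m);        // serialized automatically
    } else if (world.rank() == 1) {
        std::map<std::string, int> m;
        world.recv(0, 0, m);        // deserialized on arrival
    }
    return 0;
}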
Hello again.
Since I already had version 6.1 of SGE, I reverted to it
and included the hacks (ssh, sshd -i and qlogin_wrap), and in
this way both the interactive qsh and qrsh and the batch qsub
worked with openmpi.
For me this is a solution, but I'm still curious about what was
going on in 6.2.
Hi,
you can use STL maps inside an MPI program just as in a serial program,
but you can't pass STL structures directly into MPI calls.
2008/10/2 Shafagh Jafer
> Hi
> In MPICH there is mpi2c++_map that takes care of mapping MPI onto C++. Is
> there a similar thing in openmpi? Should there be one? Because I am get[...]
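To illustrate Gabriele's point with plain MPI calls (no extra library): an STL map cannot be handed to MPI_Send directly, but its contents can be copied into contiguous buffers first. A hedged two-rank sketch:

#include <mpi.h>
#include <map>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        std::map<int, double> m = { {1, 0.5}, {2, 1.5} };
        std::vector<int> keys;
        std::vector<double> vals;
        for (const auto &kv : m) {          // flatten the map
            keys.push_back(kv.first);
            vals.push_back(kv.second);
        }
        int n = static_cast<int>(keys.size());
        MPI_Send(&n, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(keys.data(), n, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(vals.data(), n, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int n;
        MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::vector<int> keys(n);
        std::vector<double> vals(n);
        MPI_Recv(keys.data(), n, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(vals.data(), n, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::map<int, double> m;            // rebuild the map on the receiver
        for (int i = 0; i < n; ++i)
            m[keys[i]] = vals[i];
    }
    MPI_Finalize();
    return 0;
}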