Not sure I understand. The problem has been fixed in both the trunk and the 1.8
branch now, so you should be able to work with either of those nightly builds.
On Aug 21, 2014, at 12:02 AM, Timur Ismagilov wrote:
> Have I any opportunity to run MPI jobs?
Have I any opportunity to run MPI jobs?

On Wed, 20 Aug 2014 10:48:38 -0700, Ralph Castain wrote:
> yes, i know - it is cmr'd
yes, i know - it is cmr'd
On Aug 20, 2014, at 10:26 AM, Mike Dubman wrote:
> btw, we get same error in v1.8 branch as well.
btw, we get same error in v1.8 branch as well.
On Wed, Aug 20, 2014 at 8:06 PM, Ralph Castain wrote:
> It was not yet fixed - but should be now.
It was not yet fixed - but should be now.
On Aug 20, 2014, at 6:39 AM, Timur Ismagilov wrote:
> Hello!
>
> As I can see, the bug is fixed, but in Open MPI v1.9a1r32516 I still have
> the problem. [...]
Hello!

As I can see, the bug is fixed, but in Open MPI v1.9a1r32516 I still have the
problem:

a)
$ mpirun -np 1 ./hello_c
--------------------------------------------------------------------------
An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back [...]
I filed the following ticket:
https://svn.open-mpi.org/trac/ompi/ticket/4857
On Aug 12, 2014, at 12:39 PM, Jeff Squyres (jsquyres) wrote:
> (please keep the users list CC'ed) [...]
(please keep the users list CC'ed)
We talked about this on the weekly engineering call today. Ralph has an idea
what is happening -- I need to do a little investigation today and file a bug.
I'll make sure you're CC'ed on the bug ticket.
On Aug 12, 2014, at 12:27 PM, Timur Ismagilov wrote: [...]
Are you running any kind of firewall on the node where mpirun is invoked? Open
MPI needs to be able to use arbitrary TCP ports between the servers on which it
runs.
This second mail seems to imply a bug in OMPI's oob_tcp_if_include param
handling, however -- it's supposed to be able to handle [...]
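
(Digest note: the OOB/TCP parameters discussed here can be inspected locally
with ompi_info. The exact invocation below assumes a 1.8-era build and is a
suggestion to verify against your own installation, not a quote from the
thread:)

$ ompi_info --param oob tcp --level 9
$ ompi_info --all | grep oob_tcp_if_include
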
When I add --mca oob_tcp_if_include ib0 (the InfiniBand interface) to mpirun
(as it was here: http://www.open-mpi.org/community/lists/users/2014/07/24857.php ),
I got this output:
[compiler-2:08792] mca:base:select:( plm) Querying component [isolated]
[compiler-2:08792] mca:base:select:( plm) Quer [...]
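
(The mca:base:select lines above are framework-selection debug output; the
full command line is cut off in this snippet, but raising the PLM framework
verbosity alongside the interface selection, roughly as below, produces that
kind of trace -- the verbosity level shown here is illustrative:)

$ mpirun --mca oob_tcp_if_include ib0 --mca plm_base_verbose 10 -np 1 ./hello_c
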
Hello!
I have Open MPI v1.8.2rc4r32485.

When I run hello_c, I get this error message:

$ mpirun -np 2 hello_c
An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back [...]
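
(hello_c is the trivial MPI test program shipped in Open MPI's examples/
directory. A minimal sketch of such a program follows -- an approximation for
readers of the archive, not the exact shipped source:)

/* hello_c (sketch): minimal MPI smoke test */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
    printf("Hello, world, I am %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the runtime down */
    return 0;
}

(Compile with "mpicc hello_c.c -o hello_c" and launch with mpirun as above.
The ORTE daemon failure reported in this thread occurs during launch, before
any of these MPI calls run, which is why the same error appears regardless of
the program's contents.)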