On May 4, 2012, at 6:43 PM, Don Armstrong wrote:
> Even though this might have seemed like a stupid question, it put me
> onto the right track. Apparently, mca_btl_tcp_endpoint_accept (or
> similar) is unable to handle multiple IP addresses on the same
> interface, and rejects the connection.
Yes
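
In practice, the usual way around that finding is to pin the TCP BTL to a
single interface or subnet so it never considers the alias addresses. The
command line below is only a sketch: the executable name and host file are
placeholders, eth0 is assumed to be the primary interface, and whether
if_include/if_exclude fully sidesteps an alias on the same physical NIC
depends on the Open MPI version.

    # restrict MPI's TCP traffic to eth0; other interfaces/aliases are ignored
    mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 \
           -np 10 -hostfile yourhostfile ./your_mpi_program

Recent Open MPI releases also accept a CIDR subnet for btl_tcp_if_include
(for example 172.16.30.0/24), and btl_tcp_if_exclude can be used the other
way around to drop the unwanted addresses.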
On Fri, 04 May 2012, Don Armstrong wrote:
> On Fri, 04 May 2012, TERRY DONTJE wrote:
> Sorry if this is a stupid question, but what is eth0:1 (it's under
> eth0)? Are the 172.16.30.X addresses pingable to each other?
>
> Yes. They're all on the same physical subnet.
Even though this might have seemed like a stupid question, it put me
onto the right track. Apparently, mca_btl_tcp_endpoint_accept (or
similar) is unable to handle multiple IP addresses on the same
interface, and rejects the connection.
On Fri, 04 May 2012, TERRY DONTJE wrote:
> Sorry if this is a stupid question, but what is eth0:1 (it's under
> eth0)? Are the 172.16.30.X addresses pingable to each other?
Yes. They're all on the same physical subnet.
Don Armstrong
On 5/4/2012 1:17 PM, Don Armstrong wrote:
> On Fri, 04 May 2012, Rolf vandeVaart wrote:
> > On Behalf Of Don Armstrong
> > > On Thu, 03 May 2012, Rolf vandeVaart wrote:
> > > > 2. If that works, then you can also run with a debug switch to
> > > > see what connections are being made by MPI.
> You can see the connections being made in the attached log.
On Fri, 04 May 2012, Rolf vandeVaart wrote:
> On Behalf Of Don Armstrong
> >On Thu, 03 May 2012, Rolf vandeVaart wrote:
> >> 2. If that works, then you can also run with a debug switch to
> >> see what connections are being made by MPI.
> >
> >You can see the connections being made in the attached log.
On Fri, 04 May 2012, Jeff Squyres wrote:
> Double check that you have firewalls (e.g., iptables) disabled.
They are. [You can tell that they are by the tcpdump.]
Don Armstrong
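
A couple of quick checks along the lines Jeff suggested; the interface name
is an assumption, and the capture filter is deliberately broad because Open
MPI picks its TCP ports dynamically:

    # confirm no filtering rules are loaded (run as root on each node)
    iptables -L -n
    # watch the MPI connection attempts on the wire
    tcpdump -i eth0 -n 'tcp and net 172.16.30.0/24'

An empty, ACCEPT-policy ruleset plus visible SYN/SYN-ACK exchanges in the
capture is what rules the firewall out, which is what the tcpdump mentioned
above showed.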
On 5/4/2012 8:26 AM, Rolf vandeVaart wrote:
> 2. If that works, then you can also run with a debug switch to see
> what connections are being made by MPI.
You can see the connections being made in the attached log:
[archimedes:29820] btl: tcp: attempting to connect() to [[60576,1],2] address
13
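
The log line above is the sort of output the TCP BTL emits when its
verbosity is raised. A sketch of how such a run is typically invoked; the
program name and host file are placeholders, and the exact verbosity level
is a guess since the thread never names the switch:

    # ask the BTL framework for verbose connection-setup output
    mpirun --mca btl tcp,self --mca btl_base_verbose 30 \
           -np 10 -hostfile yourhostfile ./your_mpi_program 2>&1 | tee mpi-debug.log

Each rank then logs lines like the attempting-to-connect() message quoted
above, which shows exactly which address pair the hang occurs on.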
>-Original Message-
>From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>On Behalf Of Don Armstrong
>Sent: Thursday, May 03, 2012 5:43 PM
>To: us...@open-mpi.org
>Subject: Re: [OMPI users] MPI over tcp
>
>On Thu, 03 May 2012, Rolf vandeVaart wrote:
Double check that you have firewalls (e.g., iptables) disabled.
On May 3, 2012, at 5:42 PM, Don Armstrong wrote:
> On Thu, 03 May 2012, Rolf vandeVaart wrote:
>> I tried your program on a single node and it worked fine.
>
> It works fine on a single node, but deadlocks when it communicates in
> between nodes.
On Thu, 03 May 2012, Rolf vandeVaart wrote:
> I tried your program on a single node and it worked fine.
It works fine on a single node, but deadlocks when it communicates in
between nodes. Single node communication doesn't use tcp by default.
> Yes, TCP message passing in Open MPI has been working well for some time.
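
Because on-node traffic goes over shared memory by default, one way to
reproduce the problem without a second machine is to force the TCP BTL even
between local ranks. A sketch, with the program name assumed:

    # leave out the shared-memory BTL so local ranks must use TCP
    mpirun --mca btl tcp,self -np 2 ./your_mpi_program

If that single-node run also deadlocks, the problem is in the TCP path
itself rather than in anything specific to inter-node routing.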
I tried your program on a single node and it worked fine. Yes, TCP message
passing in Open MPI has been working well for some time.
I have a few suggestions.
1. Can you run something like hostname successfully (mpirun -np 10 -hostfile
yourhostfile hostname)?
2. If that works, then you can also run with a debug switch to see what
connections are being made by MPI.
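
Rolf's first suggestion is a plain plumbing test: if mpirun can start a
non-MPI program on every host, the launcher and host file are fine and any
remaining hang is in MPI communication itself. Spelling it out (the host
names in the file are placeholders):

    # yourhostfile, one entry per node, e.g.:
    #   node1 slots=4
    #   node2 slots=4
    mpirun -np 10 -hostfile yourhostfile hostname

All 10 ranks should print a host name and mpirun should exit immediately;
only then is it worth moving on to the debug-switch run in step 2.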