On Apr 3, 2009, at 3:36 AM, Jerome BENOIT wrote:
> This seems to be a local admin issue as such a line is unlikely to have been
> added by either the Debian Open MPI or slurm packages.

This is clearly an admin issue: maintaining a cluster of clones is quite a challenge :-)
It certain
Hello List,
Dirk Eddelbuettel wrote:
On 3 April 2009 at 06:35, Jerome BENOIT wrote:
| It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
| was the only one in the cluster to contain the line
|
| btl_tcp_port_min_v4 = 49152

Great -- so can we now put your claims of 'the Debian package is broken' to rest
On Apr 2, 2009, at 6:35 PM, Jerome BENOIT wrote:
> It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
> was the only one in the cluster to contain the line
>
> btl_tcp_port_min_v4 = 49152

Great -- glad you found the issue!

> Once this line is commented out, the tests suggested below, and the sbatch
> script previously emailed, work.
Hello List !

It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
was the only one in the cluster to contain the line

btl_tcp_port_min_v4 = 49152

Once this line is commented out, the tests suggested below, and the sbatch
script previously emailed, work.

Now, if I put the
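The fix described above amounts to commenting out one line in the MCA parameter file on the affected node. A minimal sketch, using a scratch copy of the file for illustration (the real path from the thread is /etc/openmpi/openmpi-mca-params.conf, and the `sed -i` invocation assumes GNU sed):

```shell
# Reproduce the offending file in a scratch location for illustration.
conf=/tmp/openmpi-mca-params.conf
printf 'btl_tcp_port_min_v4 = 49152\n' > "$conf"

# Comment the line out so Open MPI falls back to its default TCP port range.
sed -i 's/^btl_tcp_port_min_v4/# btl_tcp_port_min_v4/' "$conf"
cat "$conf"
```

On a cluster of clones, the same one-liner would need to be run on every node (or the file pushed out by whatever mechanism keeps the clones in sync), which is presumably how only one node ended up different in the first place.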
Hi !
Dirk Eddelbuettel wrote:
On 3 April 2009 at 03:33, Jerome BENOIT wrote:
| The above submission works the same on my clusters.
| But in fact, my issue involves interconnection between the nodes of the
| cluster: the above examples involve no connection between nodes.
|
| My cluster is a cluster of quadcore computers:
Hi Again !
Dirk Eddelbuettel wrote:
Works for me (though I prefer salloc), suggesting that you did something to
your network topology or Open MPI configuration:
:~$ cat /tmp/jerome_hw.c
// mpicc -o phello phello.c
// mpirun -np 5 phello
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int narg, char *args[]) {
    char host[256];
    int rank;

    MPI_Init(&narg, &args);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));
    printf("Hello from rank %d on %s\n", rank, host);
    MPI_Finalize();
    return 0;
}
[ It is considered bad form to publicly reply to a private message. What I
had sent you earlier was a private mail. ]
On 3 April 2009 at 02:41, Jerome BENOIT wrote:
|
| Original Message
| Subject: Re: [OMPI users] openmpi 1.3.1: bind() failed: Permission denied (13)
| Date: Fri, 03 Apr 2009 02:41:01 +0800
Original Message
Subject: Re: [OMPI users] openmpi 1.3.1: bind() failed: Permission denied (13)
Date: Fri, 03 Apr 2009 02:41:01 +0800
From: Jerome BENOIT
Reply-To: ml.jgmben...@mailsnare.net
To: Dirk Eddelbuettel
CC: ml.jgmben
Hello List,

I am trying to fix the following issue (with the firewall off):

[green][[32664,1],5][../../../../../../ompi/mca/btl/tcp/btl_tcp_component.c:596:mca_btl_tcp_component_create_listen]
bind() failed: Permission denied (13)

I have tried with a new kernel without security features, but witho
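For what it is worth, one plausible mechanism for this error (an assumption on my part, not something stated in the thread): Open MPI picks its TCP listening port by adding an offset to `btl_tcp_port_min_v4`, and with the minimum raised to 49152 a large enough offset pushes the sum past 65535, where a 16-bit port number wraps around into the privileged range (below 1024), and `bind()` then fails with EACCES (13) for a non-root process. A sketch of the arithmetic, with an invented offset value for illustration:

```shell
min=49152                          # value from the offending config line
off=16500                          # hypothetical offset within a large port range
port=$(( (min + off) % 65536 ))    # 16-bit wrap-around
echo "candidate port: $port"       # lands below 1024, i.e. in the privileged range
```

If this is the cause, lowering `btl_tcp_port_min_v4` or setting a matching `btl_tcp_port_range_v4` (a real Open MPI MCA parameter) so that min + range stays at or below 65535 would avoid the wrap.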