On 3 April 2009 at 06:35, Jerome BENOIT wrote:
| It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
| was the only one in the cluster to contain the line
|
| btl_tcp_port_min_v4 = 49152
Great -- so can we now put your claims of 'the Debian package is broken' to
rest?
On Apr 2, 2009, at 2:45 PM, Gus Correa wrote:
Sorry, I don't have an answer about performance.
You may need to ask somebody else or google around
about the relative performance of 32-bit vs. 64-bit mode.
It is worth trying 64-bit. The performance is going to depend on the
program. Since 6
On Apr 2, 2009, at 6:35 PM, Jerome BENOIT wrote:
It appeared that the file /etc/openmpi/openmpi-mca-params.conf on
node green was the only one in the cluster to contain the line
btl_tcp_port_min_v4 = 49152
Great -- glad you found the issue!
Once this line is commented, the tests sugges
Hello List !
It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
was the only one in the cluster to contain the line
btl_tcp_port_min_v4 = 49152
Once this line is commented out, the tests suggested below, and the sbatch script
previously emailed, work.
Now, if I put the
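The fix described above, commenting out the parameter on the one node that still had it, can be sketched as follows; the scratch file is purely illustrative (the real path is /etc/openmpi/openmpi-mca-params.conf):

```python
# Comment out the offending MCA parameter, demonstrated on a scratch copy
# of the file rather than the real /etc/openmpi/openmpi-mca-params.conf.
import re
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    conf = Path(tmp) / "openmpi-mca-params.conf"
    conf.write_text("btl_tcp_port_min_v4 = 49152\n")
    # Prefix any line setting btl_tcp_port_min_v4 with a comment marker.
    fixed = re.sub(r"(?m)^(\s*btl_tcp_port_min_v4)", r"# \1", conf.read_text())
    conf.write_text(fixed)
    print(conf.read_text().strip())  # -> # btl_tcp_port_min_v4 = 49152
```

After this, Open MPI falls back to its default TCP port selection on that node.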
Hi !
Dirk Eddelbuettel wrote:
On 3 April 2009 at 03:33, Jerome BENOIT wrote:
| The above submission works the same on my clusters.
| But in fact, my issue involves interconnection between the nodes of the
| cluster: the above examples involve no connection between nodes.
|
| My cluster is a
On 3 April 2009 at 03:33, Jerome BENOIT wrote:
| The above submission works the same on my clusters.
| But in fact, my issue involves interconnection between the nodes of the
| cluster: the above examples involve no connection between nodes.
|
| My cluster is a cluster of quadcore computers:
Hi Again !
Dirk Eddelbuettel wrote:
Works for me (though I prefer salloc), suggesting that you did something to
your network topology or Open MPI configuration:
:~$ cat /tmp/jerome_hw.c
// mpicc -o phello phello.c
// mpirun -np 5 phello
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>
int main(int narg, char *args[]) { /* minimal MPI hello-world */
    int rank; char host[256];
    MPI_Init(&narg, &args);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));
    printf("Hello from rank %d on %s\n", rank, host);
    MPI_Finalize(); return 0;
}
I am very sorry for my bad behaviour: I will try to be less confused next
time.
Thanks a lot for the outputs and the hints,
Jerome
Dirk Eddelbuettel wrote:
[ It is considered bad form to publicly reply to a private message. What I
had sent you earlier was a private mail. ]
On 3 April 2
[ It is considered bad form to publicly reply to a private message. What I
had sent you earlier was a private mail. ]
On 3 April 2009 at 02:41, Jerome BENOIT wrote:
|
| Original Message
| Subject: Re: [OMPI users] openmpi 1.3.1: bind() failed: Permission denied (13)
| Date: F
Hi Francesco, list
I was thinking of compatibility (and consistency)
rather than of performance.
I don't know about Debian, but on CentOS 5.2 x86_64
the 64-bit libnuma library lives at /usr/lib64,
and the 32-bit version at /usr/lib.
If you are trying to build 64-bit OpenMPI libraries with libnum
Original Message
Subject: Re: [OMPI users] openmpi 1.3.1: bind() failed: Permission denied (13)
List-Post: users@lists.open-mpi.org
Date: Fri, 03 Apr 2009 02:41:01 +0800
From: Jerome BENOIT
Reply-To: ml.jgmben...@mailsnare.net
To: Dirk Eddelbuettel
CC: ml.jgmben...@mailsnare.
Hello List,
I am trying to fix the following issue (with firewall off):
[green][[32664,1],5][../../../../../../ompi/mca/btl/tcp/btl_tcp_component.c:596:mca_btl_tcp_component_create_listen]
bind() failed: Permission denied (13)
I have tried with new kernel without security features, but witho
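For context, errno 13 is EACCES. The same failure mode can be reproduced with a plain TCP socket, nothing Open MPI-specific, by binding a restricted port as an unprivileged user (a sketch; port 80 is just an example of a port that normally requires privileges):

```python
# Reproduce errno 13 ("Permission denied") with a plain TCP bind:
# ports below 1024 need privileges, so an unprivileged process gets
# EACCES, the same errno the btl_tcp bind() error above reports.
import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("127.0.0.1", 80))  # privileged port
    print("bind succeeded (running with privileges?)")
except OSError as exc:
    print(f"bind() failed: {exc.strerror} ({exc.errno})")
finally:
    sock.close()
```

On the cluster in question the denial came from a security-hardened kernel restricting the configured port range, not from port 80 itself, but the errno is the same.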
Gustavo:
Does this imply that a compilation with "--with-libnuma=/usr/lib64"
may afford better performance?
I am asking because in the meantime I have taken the previous
compilation of openmpi-1.2.6 and used it (one disk of the raid1 died and I
changed both with larger ones). If performance is better I can
Hi Francesco, list
Francesco:
Besides the typo that Jeff found on your
CXX definition, I would guess you want
--with-libnuma=/usr/lib64
(since your machine is amd64,
and you are using the 64-bit Intel compilers: cce, fce).
That is instead of --with-libnuma=/usr/lib (32-bit).
The /usr/lib64 path a
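As an aside, a quick generic way to confirm that a build or interpreter is running in 64-bit mode is to check pointer width (8 bytes on LP64/amd64); a small sketch, not specific to Open MPI or libnuma:

```python
# A 64-bit build has 8-byte pointers; a 32-bit build has 4-byte pointers.
import platform
import struct

bits = struct.calcsize("P") * 8  # "P" is the size of a C void *
print(f"{bits}-bit build on {platform.machine()}")
```

The same idea applies to compiled objects: a 64-bit Open MPI build must link against the 64-bit libnuma, hence /usr/lib64.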
Eugene,
This is a joke, right?
OpenMPI has had that since the 1.2 line.
Eugene Loh wrote:
Ah. George, you should have thought about that. I understand your
eagerness to share this exciting news, but perhaps an April-1st
announcement detracted from the seriousness of this grand development.
Ah. George, you should have thought about that. I understand your
eagerness to share this exciting news, but perhaps an April-1st
announcement detracted from the seriousness of this grand development.
Here's another desirable MPI feature. People talk about "error
detection/correction". We
Well done, well done!
As always your contributions are much appreciated.
Mac McCalla
Houston
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of George Bosilca
Sent: Wednesday, April 01, 2009 5:04 PM
To: Open MPI MPI Developers; Open M
On Wed, Apr 01, 2009 at 06:04:15PM -0400, George Bosilca wrote:
> The Open MPI Team, representing a consortium of bailed-out banks, car
> manufacturers, and insurance companies, is pleased to announce the
> release of the "unbreakable" / bug-free version Open MPI 2009,
> (expected to be available
On Apr 2, 2009, at 7:21 AM, Francesco Pietra wrote:
Hi:
With debian linux amd64 lenny I tried to install openmpi-1.3.1 instead
of using the executables openmpi-1.2.6 of previous disks. I configured
as for 1.2.6 (wrong ?)
CC=/opt/intel/cce/10.1.015/bin/icc CXX=opt/intel/cce/10.1.015/bin/icpc
Josh Hursey writes:
> Thanks. I'll fix this and post a new draft soon (I have a few other
> items to put in there anyway).
One thing to note in the mean time is that building with BLCR failed for
me with the PGI compiler with a link-time message about a bad file
format. I assume it's a libtool
I wrote:
> E.g. on
> 8-core nodes, if you submit a 16-process job, there are four cores left
> over on the relevant nodes which might get something else scheduled on
> them.
Of course, that doesn't make much sense because I thought `12' and typed
`16' for some reason... Thanks to Rolf for off-li
Hi:
With debian linux amd64 lenny I tried to install openmpi-1.3.1 instead
of using the executables openmpi-1.2.6 of previous disks. I configured
as for 1.2.6 (wrong ?)
CC=/opt/intel/cce/10.1.015/bin/icc CXX=opt/intel/cce/10.1.015/bin/icpc
F77=/opt/intel/fce/10.1.015/bin/ifort
FC=/opt/intel/fce/10
Does anyone think of an April Fool's Day joke???
**
Anthony THEVENIN
Research Engineer
PALM project
Global Change And Climate Modelling team
CERFACS
42, Avenue G. Coriolis
31057 Toulouse (FRANCE)
Tel: +33(0)561 19 30 73
Fax: +33(0)561 19 30 00
http://www.cerfacs.fr/
I'm still looking for the new "-11" option that will allow OpenMPI compiled
code to run even faster. I'm hoping that option makes it in soon (isn't it part
of the 11.7 MPI standard?)
Jeff
From: George Bosilca
To: Open MPI MPI Developers ; Open MPI Users
S
Great!
I'll try it as soon as possible :)
2009/4/2 George Bosilca :
> The Open MPI Team, representing a consortium of bailed-out banks, car
> manufacturers, and insurance companies, is pleased to announce the
> release of the "unbreakable" / bug-free version Open MPI 2009,
> (expected to be avail
What a wonderful implementation!
2009/4/2 Damien Hocking
> Outstanding. I'll have two.
>
> Damien
>
>
> George Bosilca wrote:
>
>> The Open MPI Team, representing a consortium of bailed-out banks, car
>> manufacturers, and insurance companies, is pleased to announce the
>> release of the "
Problem resolved: I set ConnectTimeout N in /etc/ssh/ssh_config, and mpirun now
exits after N seconds.
thanks a lot!
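For reference, ConnectTimeout is a client-side OpenSSH option, so it lives in /etc/ssh/ssh_config (or a per-user ~/.ssh/config); a minimal fragment, with an illustrative 10-second value:

```
# /etc/ssh/ssh_config -- give up on unreachable nodes after 10 seconds
Host *
    ConnectTimeout 10
```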
From: buptzh...@hotmail.com
To: us...@open-mpi.org
List-Post: users@lists.open-mpi.org
Date: Thu, 2 Apr 2009 11:05:25 +0800
Subject: Re: [OMPI users] Beginner's question: how to av