With NPmpi compiled against Open MPI I measure approximately 3400
Mbits/s, which is good! It scales linearly: 4 times 900 Mbits/sec.
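As a rough sketch of that kind of measurement (assuming NPmpi is the
NetPIPE MPI binary and a1, a2 are two of the cluster nodes), run one
rank on each node:
mpirun --host a1,a2 -np 2 ./NPmpi
The reported throughput is then that of the network path between the
two machines.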
Thank you,
Allan Menezes
The build was missing IPv6 support, so to fix it I rebuilt after a make
clean with IPv6 enabled, and it works!
This configure line for version 1.3 works on my system:
./configure --prefix=/opt/openmpi128 --enable-mpi-threads
--with-threads=posix
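A rebuild along these lines should reproduce it (the --enable-ipv6 flag
is an assumption here; check ./configure --help for the exact spelling
on your version):
make clean
./configure --prefix=/opt/openmpi128 --enable-mpi-threads --with-threads=posix --enable-ipv6
make all install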
Do you still want the old one or the new one?
It looks like the daemon isn't seeing the other interface address on
host x2. Can you ssh to x2 and send the contents of ifconfig -a?
Ralph
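Something like this should capture it (assuming passwordless ssh to x2
and that ifconfig lives in /sbin):
ssh x2 /sbin/ifconfig -a
That lists every interface and address configured on that host.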
On Oct 31, 2008, at 9:18 AM, Allan Menezes wrote:
users-requ...@open-mpi.org wrote:
Today's Topics:
1. Openmpi ver1.3beta1 (Allan Menezes)
ethernet for eth0
Can somebody advise?
Thank you very much.
Allan Menezes
Hi,
I get segmentation faults and "address not mapped" errors with Open MPI
1.2.7 or the nightly 1.3a9727 with HPL-2.0, but not with HPL-1.0a.
Everything works with HPL-1.0a and Open MPI 1.2.7.
Can somebody enlighten me?
Thank you,
Allan Menezes
Please show me how to do it with NPtcp, because I do not know.
Regards,
Allan Menezes
increase in bandwidth! The MTU was the default of 1500 for all Ethernet
cards in both trials. I am using Fedora Core 8, x86_64, for the
operating system.
Allan Menezes
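If anyone wants to try jumbo frames for comparison, something like this
raises the MTU on one interface (assuming eth1 is the gigabit card and
that both the NICs and the switch support 9000-byte frames; the setting
is lost at reboot unless it goes into the interface config):
/sbin/ifconfig eth1 mtu 9000
All nodes and the switch need to agree on the MTU for this to help
rather than hurt.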
Hi,
I found the problem. It's a bug with Open MPI v1.2.4, I think. As the
tests below confirm (and a big THANKS to George!), the performance
numbers above approximately 200 MBytes are wrong.
Some sort of overflow in v1.2.4.
Thank you,
Regards,
Allan Menezes
Hi George, the following test peaks at 8392 Mbps: mpirun --prefix
/opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi on a1,
and on a2:
mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi
greatly appreciated!
Regards, Allan Menezes
You should run a shared memory test, to see what the maximum memory
bandwidth you can get is.
Thanks,
george.
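A minimal way to do that (assuming the same NPmpi binary) is to put
both ranks on one host and restrict the BTLs to shared memory:
mpirun --host a1,a1 -np 2 --mca btl sm,self ./NPmpi
With only sm,self in play, the curve shows the node's memory bandwidth
rather than the network.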
On Dec 17, 2007, at 7:14 AM, Gleb Natapov wrote:
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
Hi,
How many PCI-Ex
Message: 2
Date: Sun, 16 Dec 2007 18:49:30 -0500
From: Allan Menezes
Subject: [OMPI users] Gigabit ethernet (PCI Express) and openmpi
v1.2.4
To: us...@open-mpi.org
Message-ID: <4765b98a.30...@sympatico.ca>
mca-params.conf file for latency and percentage bandwidth.
Please advise.
Regards,
Allan Menezes
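For what it is worth, the per-user MCA params file is just name = value
lines in $HOME/.openmpi/mca-params.conf; a sketch, with the interface
names as placeholders for your own:
btl = tcp,sm,self
btl_tcp_if_include = eth1,eth2
Anything set there applies to every mpirun without repeating --mca
options on the command line.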
18 heterogeneous nodes, all x86: almost all single-core (14), plus two
dual-core and two hyperthreading CPUs. What should my Ps and Qs be to
benchmark the true performance?
I am guessing P=4 and Q=6. Am I right?
Thank you for your consideration.
Allan Menezes
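For reference, the process grid is set in HPL.dat; a sketch of just
those lines, using the P=4, Q=6 guess above (values first, free-form
description second):
1            # of process grids (P x Q)
4            Ps
6            Qs
P times Q has to equal the number of MPI processes given to mpirun -np,
and HPL generally does best with P and Q close to square, with P <= Q.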
I am running slots=2, as with two CPUs I would expect a performance
increase over experiment 2 of 50-100%.
But I see no difference. Can anybody tell me why this is so?
I have not tried MPICH2.
Thank you,
Regards,
Allan Menezes
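In case it helps anyone reading along, the hostfile syntax for that is
one host per line with an optional slot count; a sketch, with a1 and a2
standing in for real node names:
a1 slots=2
a2 slots=2
mpirun then places up to two ranks per node before it starts
oversubscribing.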
On Wed, 15 Mar 2006, Allan Menezes wrote:
Dear Brian, I have the same setup as Mr. Chakrbarty, with 16 nodes,
OSCAR 4.2.1 beta 4, two Gigabit Ethernet cards, and two switches, one
16-port and one 24-port, one smart and the other managed. I use DHCP to
get the IP addresses for one eth card (the IP
With Open MPI 1.1 (beta) or 1.01 the performance should increase for
the same N and NB in HPL, but instead I get a slight decrease of about
0.5 to 1 gigaflop. The hostfile is simply a1, a2 ... a16, using OSCAR's
DNS to resolve the domain names. Why is there a performance decrease?
Regards, Allan Menezes
Thank you,
Allan Menezes
Hi Jeff,
Here are last night's results of the following command on my 15-node
cluster (one node is down from 16):
mpirun --mca pml teg --mca btl_tcp_if_include eth1,eth0 --hostfile aa
-np 15 ./xhpl
No errors were spewed out to stdout, as per my previous post when using
btl tcp and btl_tcp_if_include.
loopback device).
The second line specifies that the TCP BTL is allowed to use only the
eth0 interface. This line has to reflect your own configuration.
Finally, the third one gives the full path to the hostfile.
Thanks,
george.
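As a rough command-line equivalent of the second and third settings
(eth0, the hostfile path, and the process count are placeholders for
your own configuration):
mpirun --mca btl_tcp_if_include eth0 --hostfile /home/allan/hostfile -np 16 ./xhpl
The same parameters can also go into an mca-params.conf file so they
apply to every run.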
On Mon, 14 Nov 2005, Allan Menezes wrote:
Dear Jeff, Sorry
pml teg to see if there is a difference! Thank you, Allan
Message: 2
Date: Sun, 13 Nov 2005 15:51:30 -0500
From: Jeff Squyres
Subject: Re: [O-MPI users] HPL and TCP
To: Open MPI Users
On Nov 3, 2005, at 8:35 PM, Allan Menezes wrote:
1. No, I have 4 NICs on the head node and two on each of the 15 other
compute nodes. I use the Realtek 8169 gigabit Ethernet cards on the
Hi,
I am using OSCAR 4.2. I have two Ethernet cards on the compute nodes,
eth0 and eth1 (one 10/100 Mbps and one Realtek 8169 gigabit NIC), and
four Ethernet cards on the head node: eth0 10/100 Mbps, eth1 10/100
Mbps, eth2 Realtek 8169 gigabit, and eth3, a built-in 3Com gigabit
Ethernet with the sk98lin driver.
comparable performance. I was not using jumbo MTU frames either, just
1500 bytes. It is not homogeneous (BSquared) but a good test setup.
If you have any advice, please tell me and I could try it out.
Thank you and good luck!
Allan
On Oct 27, 2005, at 10:19 AM, Jeff Squyres wrote:
On Oct 19
Message: 2
Date: Tue, 18 Oct 2005 08:48:45 -0600
From: "Tim S. Woodall"
Subject: Re: [O-MPI users] Hpl Bench mark and Openmpi rc3 (Jeff Squyres)
To: Open MPI Users
Message-ID: <43550b4d.6080...@lanl.gov>
Hi Jeff,
I installed t
16:39 -0400
From: Jeff Squyres
Subject: Re: [O-MPI users] Hpl Bench mark and Openmpi rc3
To: Open MPI Users
Message-ID: <8557a377fe1f131e23274e10e5f6e...@open-mpi.org>
On Oct 13, 2005, at 1:25 AM, Allan Menezes wrote:
Hi,
I have a 16 node cluster of x86 machines with FC3 running on the head
node. I used a beta version of OSCAR 4.2 for putting together the
cluster. It uses /home/allan as the NFS directory.
I tried MPICH2 v1.0.2p1 and got a benchmark of approximately 26 GFlops.
With Open MPI 1.0RC3, having set t