Hi Allan,
This suggests that your chipset is not able to handle the full PCI-E
speed on more than 3 ports. This usually depends on the way the PCI-E
links are wired through the ports and on the capacity of the chipset
itself. As an example, we were never able to reach full-speed
performance with Myrinet 10G on IBM e325 nodes because of chipset
limitations; we had to have the nodes changed to solve the issue.
Running several instances of NPtcp simultaneously should give a rough
idea of the bandwidth limit of the PCI-E bus on your machine.
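As a rough sketch of that (the interface layout and addresses below are hypothetical, not taken from your setup: assume a2's three NICs sit on separate subnets at 192.168.1.2, 192.168.2.2 and 192.168.3.2, with matching routes on a1):

  # on a2: start the NPtcp receivers (one per stream; NetPIPE listens on a
  # fixed default port, so concurrent receivers on one host may collide --
  # if your build cannot work around that, test the NICs two at a time or
  # in opposite directions instead)
  ./NPtcp &

  # on a1: launch one transmitter per NIC at the same time, each aimed at
  # a different a2-side address and writing to its own output file
  ./NPtcp -h 192.168.1.2 -o np-eth0.out &
  ./NPtcp -h 192.168.2.2 -o np-eth1.out &
  ./NPtcp -h 192.168.3.2 -o np-eth2.out &
  wait

If the three streams together stay near 2 Gbps instead of approaching 2.7 Gbps, that points to the chipset/PCI-E limit rather than to Open MPI.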
Aurelien
On Dec 17, 2007, at 9:51 PM, Allan Menezes wrote:
Hi George,

The following test peaks at 8392 Mbps on a1:

mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi

and on a2:

mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi

gives 8565 Mbps.   --(a)

On a1:

mpirun --prefix /opt/opnmpi124b --host a1,a1 -np 2 ./NPmpi

gives 8424 Mbps, and on a2:

mpirun --prefix /opt/opnmpi124b --host a2,a2 -np 2 ./NPmpi

gives 8372 Mbps. So there's enough memory and processor bandwidth to supply 2.7 Gbps (three cards at roughly 900 Mbps each) for three PCI Express Ethernet cards, especially judging from the --(a) results on a1 and a2? Thank you for your help. Any assistance would be greatly appreciated!
Regards,
Allan Menezes

You should run a shared memory test, to see the maximum memory bandwidth you can get.

Thanks,
george.
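(For reference, a pure shared-memory NetPIPE run, i.e. both ranks on one node with only the sm and self BTLs enabled, would look something like this; the prefix path is the one from Allan's commands above:

  mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl sm,self -np 2 ./NPmpi

The peak of that curve is an upper bound on the memory bandwidth available to any multi-NIC TCP transfer.)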
On Dec 17, 2007, at 7:14 AM, Gleb Natapov wrote:
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
Hi,
How many PCI Express Gigabit Ethernet cards does OpenMPI version 1.2.4
support with a corresponding linear increase in bandwidth, measured
with NetPIPE NPmpi and OpenMPI's mpirun?
With two PCI Express cards I get a bandwidth of 1.75 Gbps, at 892 Mbps
each, and with three PCI Express cards (one built into the motherboard)
I get 1.95 Gbps. They are all around 890 Mbps individually, measured
with NetPIPE's NPtcp and with NPmpi over OpenMPI. For two cards there
seems to be a linear increase in bandwidth, but not for three PCI
Express Gigabit Ethernet cards.
I have tuned the cards using NetPIPE and the $HOME/.openmpi/mca-params.conf
file for latency and percentage bandwidth.
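(For illustration, a hypothetical mca-params.conf for a setup like this might contain something like the lines below; the interface names and numbers are placeholders, and the exact parameter names available in 1.2.4 can be listed with "ompi_info --param btl tcp":

  # $HOME/.openmpi/mca-params.conf -- illustrative values only
  btl = tcp,sm,self
  # restrict the TCP BTL to the three Gigabit NICs
  btl_tcp_if_include = eth0,eth1,eth2
  # hints used when striping a message across the NICs
  btl_tcp_bandwidth = 890
  btl_tcp_latency = 30

)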
Please advise.
What is in your $HOME/.openmpi/mca-params.conf? Maybe you are hitting
your chipset limit here. What is your HW configuration? Can you try to
run NPtcp on each interface simultaneously and see what bandwidth you
get?
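(One quick way to report the relevant part of the hardware configuration, assuming a Linux node with pciutils installed, is to check the negotiated PCI-E link width and speed of each NIC; the bus address below is a placeholder:

  lspci | grep -i ethernet                           # find the NICs' bus addresses
  lspci -vv -s 01:00.0 | grep -iE 'lnkcap|lnksta'    # run as root if the capability lines are hidden

A NIC whose LnkSta reports a narrower width or lower speed than its LnkCap is already being throttled at the PCI-E level.)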
--
Gleb.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
Dr. Aurelien Bouteiller, Sr. Research Associate
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996