Hi George,
The following test, run on a1, peaks at 8392 Mbps:
mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi
and on a2:
mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi
gives 8565 Mbps.
--(a)
On a1:
mpirun --prefix /opt/opnmpi124b --host a1,a1 -np 2 ./NPmpi
gives 8424 Mbps, and on a2:
mpirun --prefix /opt/opnmpi124b --host a2,a2 -np 2 ./NPmpi
gives 8372 Mbps.
So, since three Gigabit cards at roughly 900 Mbps each need only about
2.7 Gbps, and the shared memory runs, especially those from --(a), sustain
over 8 Gbps, there should be enough memory and processor bandwidth to drive
2.7 Gbps between a1 and a2?
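For reference, the corresponding test between a1 and a2 would run along
these lines (a sketch reusing the same prefix, not one of the runs above;
forcing the TCP BTL so only the NICs are measured):
mpirun --prefix /opt/opnmpi124b --host a1,a2 -mca btl tcp,self -np 2 ./NPmpi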
Thank you for your help. Any assistance would be greatly appreciated!
Regards,
Allan Menezes

You should run a shared memory test, to see what maximum memory bandwidth
you can get.
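A quick way to do that (a sketch reusing your prefix and host names, not a
command taken from this thread) is to restrict Open MPI to the shared
memory BTL:
mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl sm,self -np 2 ./NPmpi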
Thanks,
george.

On Dec 17, 2007, at 7:14 AM, Gleb Natapov wrote:
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
Hi,
How many PCI Express Gigabit Ethernet cards does Open MPI version 1.2.4
support with a corresponding linear increase in bandwidth, measured with
NetPIPE's NPmpi and Open MPI's mpirun?
With two PCI Express cards I get a bandwidth of 1.75 Gbps, at 892 Mbps
each, and for three PCI Express cards (one built into the motherboard) I
get 1.95 Gbps. They are all around 890 Mbps individually, measured with
NetPIPE's NPtcp, and with NPmpi and Open MPI. For two there seems to be a
linear increase in bandwidth, but not for three PCI Express Gigabit
Ethernet cards. I have tuned the cards using NetPIPE and the
$HOME/.openmpi/mca-params.conf file for latency and percentage bandwidth.
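The file holds one MCA parameter per line, along these lines (the values
and interface names here are illustrative, not my exact settings):
# $HOME/.openmpi/mca-params.conf
# stripe traffic across the three Gigabit interfaces (names assumed)
btl_tcp_if_include = eth0,eth1,eth2
# bandwidth (Mbps) and latency (us) hints used to weight the striping
btl_tcp_bandwidth = 1000
btl_tcp_latency = 30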
Please advise.
What is in your $HOME/.openmpi/mca-params.conf? Maybe you are hitting your
chipset limit here. What is your HW configuration? Can you try to run NPtcp
on each interface simultaneously and see what BW you get?
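Something like this is what I mean (a sketch, not tested here: the
addresses stand for a2's per-interface IPs, and each stream needs its own
NPtcp receiver on a2, which may require distinct ports):
# on a1, three transmitters at once, one per interface address of a2
./NPtcp -h 192.168.1.2 -o np-eth0.out &
./NPtcp -h 192.168.2.2 -o np-eth1.out &
./NPtcp -h 192.168.3.2 -o np-eth2.out &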
--
Gleb.