Hi Allan, I ran hpcbench on a cluster of two machines that have just a conventional PCI bus, and I didn't find any performance improvement. On the contrary, the performance degraded. I figured out it may be because a conventional PCI bus (33 MHz, 32 bits) provides a bandwidth of 33 x 32 = 1056 Mbps, hence just about enough for a single gigabit link, so a second NIC cannot add anything.
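For reference, here is that arithmetic spelled out as a tiny C program (these are the nominal peak figures from above; a real PCI bus delivers less once arbitration and protocol overhead are counted):

#include <stdio.h>

int main(void)
{
    /* Nominal peak of a conventional PCI bus: 33 MHz clock, 32-bit data path. */
    const double pci_mbps  = 33.0 * 32.0;   /* 1056 Mbit/s, shared by every device on the bus */
    const double gige_mbps = 1000.0;        /* line rate of one gigabit Ethernet port          */

    printf("PCI bus peak : %.0f Mbit/s (%.0f MB/s)\n", pci_mbps, pci_mbps / 8.0);
    printf("GigE links the bus could feed at line rate: %.2f\n", pci_mbps / gige_mbps);
    return 0;
}

So one gigabit NIC already uses essentially the whole bus; a second controller on the same bus just makes the two contend for it.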
Hi George & Allan
First of all I would like to thank George for the information. I think with this I can further improve the performance of my cluster. I was wondering if there is any manual available where all these parameters are listed, so that I can experiment with them.
Allan, I would
Thanks Brian, Thanks Michael
I wanted to benchmark the communication throughput and latency using multiple gigabit Ethernet controllers.
So here are the results which I want to share with you all.
I used:
OpenMPI version 1.0.2a10r9275
Hpcbench
Two Dell Precision 650 workstations.
The Dell Pre
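In case anyone wants to reproduce a similar measurement without hpcbench, below is a minimal MPI ping-pong sketch in C (the 1 MiB message size and 100 iterations are my own arbitrary choices, not hpcbench's settings):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE   (1 << 20)   /* 1 MiB messages for the throughput test */
#define ITERATIONS 100

int main(int argc, char *argv[])
{
    int rank, size;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(MSG_SIZE);
    memset(buf, 0, MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double secs  = t1 - t0;
        double bytes = 2.0 * (double)MSG_SIZE * ITERATIONS;  /* each iteration moves the message twice */
        printf("round-trip time : %.3f us\n", secs / ITERATIONS * 1e6);
        printf("throughput      : %.1f Mbit/s\n", bytes * 8.0 / secs / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run it across the two nodes with something like "mpirun -np 2 --hostfile hosts ./pingpong"; rank 0 prints the average round-trip time and the sustained throughput.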
Hi, I have been looking for information on how to use multiple Gigabit Ethernet interfaces for MPI communication.
So far what I have found out is that I have to use mca_btl_tcp.
But what I wish to know is what IP address to assign to each network interface. I also wish to know if there will be any cha
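Not a definitive answer, but as far as I understand the tcp BTL, the usual practice is to put each interface on its own private subnet (e.g. 192.168.1.x on one port and 192.168.2.x on the other) and, if needed, select the interfaces with the btl_tcp_if_include MCA parameter. A small C snippet like the one below (plain getifaddrs, nothing Open MPI specific) can be used to double-check what address and netmask each port actually got:

#include <stdio.h>
#include <sys/types.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Print every IPv4 interface with its address and netmask, so you can
 * verify that each gigabit port ended up on its own subnet. */
int main(void)
{
    struct ifaddrs *ifap, *ifa;
    char addr[INET_ADDRSTRLEN], mask[INET_ADDRSTRLEN];

    if (getifaddrs(&ifap) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_netmask == NULL ||
            ifa->ifa_addr->sa_family != AF_INET)
            continue;
        inet_ntop(AF_INET, &((struct sockaddr_in *)ifa->ifa_addr)->sin_addr,
                  addr, sizeof(addr));
        inet_ntop(AF_INET, &((struct sockaddr_in *)ifa->ifa_netmask)->sin_addr,
                  mask, sizeof(mask));
        printf("%-8s %-15s netmask %s\n", ifa->ifa_name, addr, mask);
    }
    freeifaddrs(ifap);
    return 0;
}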