On Dec 20, 2006, at 7:04 PM, Jeff Squyres wrote:
I've been asked by the owner of the cluster "How can you prove to me
that this openmpi job is using the Infiniband network?"
At first I thought a simple netstat -an on the compute nodes might
tell me; however, I don't see the Infiniband IPs in
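One direct way to answer the owner's question is to take Ethernet off the table entirely. A sketch under assumptions: mvapi is the InfiniBand BTL name in the Open MPI 1.1 series, and ./ring stands in for any small MPI test program (guarded so it is safe to paste on any node):

```shell
# Restrict Open MPI to the InfiniBand transport (mvapi BTL in the 1.1 series)
# plus self; if the job runs under this restriction, it cannot be using TCP.
# "./ring" is a placeholder for any small MPI test program.
run() { command -v mpirun >/dev/null 2>&1 && mpirun "$@" || echo "skipped: mpirun unavailable or run failed"; }
run --mca btl mvapi,self -np 4 ./ring   # InfiniBand only
run --mca btl tcp,self   -np 4 ./ring   # Ethernet only, for timing comparison
```

If the first command completes but the second is noticeably slower on bandwidth-bound tests, that is strong evidence the default runs are on IB.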
You can also usually watch the counters on the IB cards and
Ethernet cards. For programs that have a lot of communication
between nodes, it is quickly obvious which network you're using.
The IB card monitoring is driver-specific, but you should have
some tools for this. For Ethernet you can
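A minimal sketch of the counter approach, assuming an OFED-style sysfs layout; the device name mthca0, the port number, and the counter path are assumptions, and the Topspin/mvapi stack of this era may only expose counters through its own vendor tools:

```shell
# Sample an IB port counter before and after an MPI run.
# Assumption: counters live under /sys/class/infiniband (OFED-style layout);
# "mthca0" and port 1 are example names -- adjust for your hardware.
ctr=/sys/class/infiniband/mthca0/ports/1/counters/port_xmit_data
read_ctr() { cat "$ctr" 2>/dev/null || echo 0; }  # falls back to 0 without IB hardware
before=$(read_ctr)
# ... run the MPI job here, e.g.: mpirun -np 16 ./my_app ...
after=$(read_ctr)
# port_xmit_data is in units of 4 bytes, so multiply by 4 for octets
echo "IB data sent during the run: $(( (after - before) * 4 )) bytes"
```

If this delta stays near zero while the Ethernet interface counters climb, the job is not on InfiniBand.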
On Dec 20, 2006, at 1:54 PM, Andrus, Mr. Brian ((Contractor)) wrote:
On Dec 20, 2006, at 6:28 PM, Michael John Hanby wrote:
Howdy, I'm new to cluster administration, MPI and high speed networks.
I've compiled my OpenMPI using these settings:
./configure CC='icc' CXX='icpc' FC='ifort' F77='ifort'
--with-mvapi=/usr/local/topspin
--with-mvapi-libdir=/usr/local/topspin/lib64 --enable-static
--prefix=/share/apps/openmpi/1.
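Assuming the build above succeeded, ompi_info (installed alongside Open MPI) can confirm whether the mvapi BTL component was actually compiled in; a guarded sketch:

```shell
# Check whether the mvapi BTL got built into this Open MPI installation.
# Guarded so the snippet is harmless on nodes without Open MPI on PATH.
if command -v ompi_info >/dev/null 2>&1; then
    btl_check=$(ompi_info | grep -i mvapi || echo "mvapi BTL missing from this build")
else
    btl_check="ompi_info not on PATH"
fi
echo "$btl_check"
```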
Jeff,
Thanks for the info.
I downloaded the newer stable (1.1.2-1) and have tried it with the same
results.
Since I am trying to use the rpm source, everything comes out in one
output file.
I have compressed and attached it.
Brian Andrus
QSS Group, Inc.
Naval Research Laboratory
Monterey, California
Can you send the full output from configure and config.log? See this
page for details of what we need for compile failures:
http://www.open-mpi.org/community/help/
Also note that there is a slightly newer version than what you're
trying -- v1.1.2 (1.1.3 may actually be out shortly, too).
I am trying to build an OpenMPI rpm for RHEL4U4 using the following:
rpmbuild --rebuild --define "configure_options CC=pgcc CXX=pgCC F77=pgf77
FC=pgf90 FFLAGS=-fastsse FCFLAGS=-fastsse" ./openmpi-1.1.1-1.src.rpm
It builds the rpm but there are some warnings:
---
configure: WARNING:
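Once the rebuilt rpm installs, a quick sanity check is whether the PGI compilers were actually recorded: ompi_info lists the compilers configure found. A guarded sketch (grep pattern assumed):

```shell
# Verify which compilers the Open MPI build was configured with.
# Expect lines mentioning pgcc/pgf90 if the rpmbuild --define took effect.
if command -v ompi_info >/dev/null 2>&1; then
    cc_line=$(ompi_info | grep -i 'compiler' || echo "no compiler info reported")
else
    cc_line="ompi_info not on PATH"
fi
echo "$cc_line"
```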
Apologies if you received multiple copies of this message.
===
CALL FOR PAPERS
Workshop on Virtualization/Xen in High-Performance Cluster
and Grid Computing (XHPC'07)
as part of The 16th IEEE International Symposium on High
Performance
On 12/20/06, Harakiri wrote:
I will study the suggested paper; however,
I actually read a different paper which suggested
using fewer messages. I would imagine that for arrays
of, let's say, 100 million numbers, the network
messages become the critical factor.
IMHO,
it depends completely
Thanks for your input,
--- Andreas Schäfer wrote:
> that even though this results in _many_ messages,
> the algorithm's optimal runtime complexity will
> compensate for it.
>
> But benchmark your own ;-)