I am the developer and maintainer of the data-type engine in Open
MPI. And I'm stunned (!) It never occurred to me that someone would ever
use a data-type description that needs more than 32K entries on the
internal stack.
Let me explain a little bit. The stack is used to efficiently parse
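For illustration only (this sketch is not from the thread, and the block
count and layout are made up), here is the kind of user-defined datatype
whose flattened description can run to far more than 32K entries:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Hypothetical size: well beyond 32K description entries. */
    const int nblocks = 65536;
    int          *blocklens = malloc(nblocks * sizeof(int));
    MPI_Aint     *displs    = malloc(nblocks * sizeof(MPI_Aint));
    MPI_Datatype *types     = malloc(nblocks * sizeof(MPI_Datatype));

    /* Irregular block lengths and alternating basic types, so the
       description cannot be collapsed into one contiguous/vector entry. */
    for (int i = 0; i < nblocks; i++) {
        blocklens[i] = 1 + (i % 3);
        displs[i]    = (MPI_Aint)i * 32;
        types[i]     = (i % 2) ? MPI_INT : MPI_DOUBLE;
    }

    MPI_Datatype bigtype;
    MPI_Type_create_struct(nblocks, blocklens, displs, types, &bigtype);
    MPI_Type_commit(&bigtype);

    /* Any send, receive or pack of 'bigtype' makes the datatype engine
       walk all 64K entries of this description. */

    MPI_Type_free(&bigtype);
    free(types); free(displs); free(blocklens);
    MPI_Finalize();
    return 0;
}

Because the block lengths and basic types vary, a parser presumably has to
keep one entry per block while walking such a type, which is where a fixed
32K-entry stack could run out of room.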
Dear Open MPI Developers,
Our investigations of the segmentation fault (see previous postings "Signal:
Segmentation fault (11) Problem") lead us to suspect that Open MPI allows only
a limited number of elements in the description of user-defined
MPI_Datatypes.
Our application segmentation-faults when a
I think you probably want to contact your Mellanox support for help
with installing IB Gold.
This list is for support of Open MPI, not IB Gold.
On Apr 18, 2007, at 10:34 AM, Simple Kaul wrote:
Hi,
I am trying to install IBGD-1.8.2 on a linux server
(kernel-2.6.9-42.ELsmp)
and get the fo
On Apr 18, 2007, at 8:44 AM, stephen mulcahy wrote:
~/openmpi-1.2/bin/mpirun --mca btl_tcp_if_include eth0 --mca btl
tcp,self --bynode -np 2 --hostfile ~/openmpi.hosts.80
~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
Neither one resulted in significantly different benchmark results.
That's truly odd --
Hi,
I am trying to install IBGD-1.8.2 on a linux server (kernel-2.6.9-42.ELsmp)
and get the following error when running the ./install script:
Building ib RPMs. Please wait...
Running /tmp/ib-1.8.2/build_rpm.sh --prefix /opt/mellanox --build_root
/../..//IBGD --packages ib_verbs -- -kver 2.6
Hi,
Thanks. I'd actually come across that and tried it also, but just to
be sure, here's what I just tried:
[smulcahy@foo ~]$ ~/openmpi-1.2/bin/mpirun -v --display-map --mca btl
^openib,mvapi --bynode -np 2 --hostfile ~/openmpi.hosts.2only
~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
and h
Look here:
http://www.open-mpi.org/faq/?category=tuning#selecting-components
The general idea:
mpirun -np 2 --mca btl ^tcp   (to exclude Ethernet)
Replace ^tcp with ^openib (or ^mvapi) to exclude InfiniBand.
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Apr 18, 2007,
Hi,
I'm currently conducting some testing on a system with gigabit and
InfiniBand interconnects. I'm keen to baseline Open MPI over both the
gigabit and InfiniBand interconnects.
I've compiled it with the defaults and run the Intel MPI Benchmarks PingPong
to get an idea of latency and bandwidth.
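As a rough sketch of what a PingPong benchmark measures (this is not the IMB
source; the buffer size and iteration count below are arbitrary), rank 0
bounces a message off rank 1 and times the round trip:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nbytes = 1 << 20;   /* 1 MiB message, arbitrary choice */
    const int iters  = 100;
    char *buf = malloc(nbytes);

    /* Run with -np 2: rank 0 sends, rank 1 echoes the message back. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double rtt = (MPI_Wtime() - t0) / iters;   /* average round trip */

    if (rank == 0)
        printf("%d bytes: t=%.2f usec, %.2f MB/s\n",
               nbytes, 0.5 * rtt * 1e6, nbytes / (0.5 * rtt) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

IMB runs this pattern over a range of message sizes: the half round-trip time
at the smallest sizes is the latency figure, and the transfer rate at the
largest sizes approaches the link bandwidth, which is why PingPong is a
convenient way to compare the gigabit and InfiniBand paths.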
Hi George,
Some more investigation of the segmentation fault, done with valgrind, is
shown below.
There seem to be uninitialized parameters and finally a read at address 0x1,
which causes the segfault. I have checked whether one of my members appears
to be at that address when constructing th
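For reference, here is a minimal sketch (not the poster's code; the struct and
field names are hypothetical) of the usual MPI_Get_address-based construction
of a struct datatype. A displacement that is left uninitialized, or computed
against the wrong base address, is the kind of mistake that can later surface
as a read from a bogus address such as 0x1:

#include <mpi.h>

/* Hypothetical struct; the real application's layout is not known here. */
struct particle {
    double pos[3];
    double vel[3];
    int    id;
};

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    struct particle p;
    MPI_Aint     base, addr;
    int          blocklens[3] = { 3, 3, 1 };
    MPI_Aint     displs[3];
    MPI_Datatype types[3]     = { MPI_DOUBLE, MPI_DOUBLE, MPI_INT };

    /* Every displacement must be set, and all of them must be relative
       to the same base; a slot that is never filled in, or an absolute
       address mixed with relative ones, leaves the datatype description
       pointing the engine at garbage when the type is later used. */
    MPI_Get_address(&p,        &base);
    MPI_Get_address(&p.pos[0], &addr);  displs[0] = addr - base;
    MPI_Get_address(&p.vel[0], &addr);  displs[1] = addr - base;
    MPI_Get_address(&p.id,     &addr);  displs[2] = addr - base;

    MPI_Datatype ptype;
    MPI_Type_create_struct(3, blocklens, displs, types, &ptype);
    MPI_Type_commit(&ptype);

    /* ... communicate with ptype ... */

    MPI_Type_free(&ptype);
    MPI_Finalize();
    return 0;
}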