mpirun --prefix /opt/openmpi -mca oob_tcp_include eth0 -mca btl_tcp_if_include eth0 --hostfile ~/work/openmpi_hostfile -np 4 hostname
Could a section be added to the FAQ mentioning that the firewall
service should be shut down on the MPI interface and that the two
-mca switches should be used?
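For reference, a hostfile such as the ~/work/openmpi_hostfile used above is just a plain-text list of nodes; a minimal sketch (the host names and slot counts here are made up) would be:

  node01 slots=2
  node02 slots=2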
On Feb 7, 2007, at 3:26 PM, Michael wrote:
Building openmpi-1.3a1r13525 on OS X 10.4.8 (PowerPC), using my
standard compile line
./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=large --with-f90-max-array-dim=3 ; make all
and after installing I found that I couldn't compil
I have added the following line to my .bashrc:
export OMPIFLAGS="-mca oob_tcp_include eth0 -mca btl_tcp_if_include eth0 --hostfile ~/work/openmpi_hostfile"
and have verified that mpirun $OMPIFLAGS -np 4 hostname works.
Is there a better way of accomplishing this, or is this a matter of
there be
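One alternative worth sketching (parameter and interface names taken from the thread above) is to set the switches through Open MPI's OMPI_MCA_<name> environment variables rather than a hand-rolled wrapper variable, e.g. in .bashrc:

  export OMPI_MCA_oob_tcp_include=eth0
  export OMPI_MCA_btl_tcp_if_include=eth0

The hostfile would still be passed on the mpirun command line.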
On Feb 8, 2007, at 6:33 PM, Troy Telford wrote:
The error (upon job start) is something to the effect of (transcribed
from phone):
mca_mpool_openib_register: cannot allocate memory
...
Error creating low priority CQ for MTHCA0: Cannot allocate memory.
What has to happen for t
For things like these, I usually use the "dot file" mca parameter
file in my home directory:
http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
That way, I don't accidentally forget to set the parameters on a given
run ;).
Brian
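For example, the per-user file that FAQ entry describes lives at $HOME/.openmpi/mca-params.conf and takes one "name = value" pair per line; a sketch for the parameters discussed in this thread might be:

  # $HOME/.openmpi/mca-params.conf
  oob_tcp_include = eth0
  btl_tcp_if_include = eth0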
On Feb 8, 2007, at 6:15 PM, Mark Kosmowski wr
Hello Troy,
This issue is fairly common and has to do with the limit on how much
memory can be locked (registered) for InfiniBand. See this FAQ for more detail:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
I had the same issue and the FAQ resolves it.
Good luck,
Alex.
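In short, the fix that FAQ describes is raising the locked-memory (memlock) limit for the user running the MPI jobs on every node; a sketch of the kind of /etc/security/limits.conf entries involved (log in again afterwards, and check with "ulimit -l") is:

  * soft memlock unlimited
  * hard memlock unlimited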
On 2/8/07, Troy Telfor
I've got a system that is running Open MPI 1.1.2, SLES 10, with the OFED
1.0 drivers.
The code runs over Gigabit Ethernet/TCP without issues on Open MPI...
The code does some memory allocation, communication, etc. - the developer
wrote it to stress the network fabric, and it can be submitted if
I have a style question related to this issue that I think is resolved.
I have added the following line to my .bashrc:
export OMPIFLAGS="-mca oob_tcp_include eth0 -mca btl_tcp_if_include eth0 --hostfile ~/work/openmpi_hostfile"
and have verified that mpirun $OMPIFLAGS -np 4 hostname works.
Is
Thanks for your insight, George.
Strange, the latency is supposed to be there too. Anyway, the latency
is only used to determine which interconnect is faster, so that the
faster one can be used for small messages.
I searched the code base for MCA parameter registration and did indeed
discover that the latency setting is pos
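A quick way to see which latency and bandwidth parameters a given BTL actually registers (a sketch; the exact parameter names vary by component and version) is:

  ompi_info --param btl all | grep -i -e latency -e bandwidth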
I think I fixed the problem. I at least have mpirun ... hostname
working over the cluster.
The first thing I needed to do was to make the gigabit network an
internal zone in YaST ... firewall (which essentially turns off the
firewall on that interface).
Next I needed to add the -mca options a
Please find attached a tarball containing the stderr of mpirun ...
hostname across my cluster, as well as the output from ompi_info.
Apologies for not including these earlier.
Thank you for any and all assistance,
Mark Kosmowski
On Feb 8, 2007, at 4:01 AM, Alok G Singh wrote:
I also came across a presentation [2] from PVM/MPI 06, but I could not
find any code to go with it. The author seems to suggest the Boost
serialisation library (which does support stdlib containers). Is this
the way to go? I have never used Boost b
On Feb 8, 2007, at 1:12 PM, Alex Tumanov wrote:
George,
Looks like I have some values already set for openib and gm bandwidth:
# ompi_info --param all all |grep -i band
MCA btl: parameter "btl_gm_bandwidth" (current value: "250")
MCA btl: parameter "btl_mvap
George,
Looks like I have some values already set for openib and gm bandwidth:
# ompi_info --param all all |grep -i band
MCA btl: parameter "btl_gm_bandwidth" (current value: "250")
MCA btl: parameter "btl_mvapi_bandwidth" (current value: "800")
Message: 1
Date: Wed, 7 Feb 2007 17:37:41 -0500
From: "Alex Tumanov"
Subject: Re: [OMPI users] first time user - can run mpi job SMP but not over cluster
To: "Open MPI Users"
Message-ID:
<2453e2900702071437k20a13e97g5014253aa97cc...@mail.gmail.com>
Content-Type: text/plain
In order to get any performance improvement from striping the
messages over multiple interconnects, one has to specify the latency
and bandwidth for these interconnects, and to make sure that none of
them asks for exclusivity. I'm usually running over multiple TCP
interconnects and here
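As a concrete sketch of that advice for the multi-TCP case (the interface names, bandwidth value, and executable are assumptions; check ompi_info for the parameter names your version actually registers):

  mpirun -np 4 -mca btl tcp,self \
      -mca btl_tcp_if_include eth0,eth1 \
      -mca btl_tcp_bandwidth 1000 \
      ./my_app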
Hello Jeff. Thanks for pointing out NetPipe to me. I've played around
with it a little in the hope of seeing a clear effect of message
striping in Open MPI. Unfortunately, what I saw is that the result of
running NPmpi over several interconnects is identical to that of
running it over the single fastest one
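For comparison runs of this kind, a typical invocation (a sketch; the hostfile and interface names are assumptions) is two NPmpi processes on two different nodes, once with both interfaces enabled and once with only the fastest one:

  mpirun -np 2 --hostfile ~/work/openmpi_hostfile -mca btl_tcp_if_include eth0,eth1 ./NPmpi
  mpirun -np 2 --hostfile ~/work/openmpi_hostfile -mca btl_tcp_if_include eth1 ./NPmpi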
Hi Ali,
After conferring with my colleagues, it appears we don't have the cycles
right now to really support AIX. As you have noted, the problem is with the
I/O forwarding subsystem - a considerable issue.
We will revise the web site to indicate this situation. We will provide an
announcement of an
We had previously done some AIX testing with OMPI, but it really
hasn't received much testing in a long, long time. At this point, it
is probably safest to say that OMPI is unsupported on AIX.
Sorry. :-(
On Feb 7, 2007, at 3:18 PM, Nader Ahmadi wrote:
We are in the process of deciding if we
Before I begin, I apologise if this has been answered already. I did
search the archives but I didn't find a complete answer.
I would like to send and receive MPI messages consisting of the stdlib
(STL) containers -- map, list, etc.
Upon searching the archives, I came upon this [1] which seemed t