Jed Brown wrote:
Are you saying the output of mpicc/mpif90 -show has the same
optimization flags? MPICH2 usually puts its own optimization flags
into the wrappers.
Jed, thank you for your reply. Yes, mpif90 -show reports identical
flags (other than differing libraries).
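For anyone wanting to check this themselves, the wrappers can be queried directly; this is a diagnostic sketch, and the wrapper names/paths depend on your installation:

```shell
# Print the full compile/link line each wrapper expands to,
# including any optimization flags baked in at build time
mpicc -show
mpif90 -show
```

Comparing the output of the MPICH2 and Open MPI wrappers side by side makes any difference in optimization flags obvious.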
Ralph Castain wrote:
Did
Sangamesh,
The IB tunings that you added to your command line only delay the
problem; they don't resolve it.
node-0-2.local gets the asynchronous event "IBV_EVENT_PORT_ERROR"; as a
result the processes fail to deliver packets to some remote hosts, and
you see a bunch of IB errors.
You will not need the trick if you configure Open MPI with the
following flag:
--enable-mpirun-prefix-by-default
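A configure invocation with that flag might look like this (the install prefix is just an example):

```shell
# Build Open MPI so that mpirun automatically forwards its own
# prefix to remote nodes, avoiding PATH/LD_LIBRARY_PATH tricks
./configure --prefix=/usr/local/openmpi-1.3.3 \
            --enable-mpirun-prefix-by-default
make all install
```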
Pasha.
Hodgess, Erin wrote:
the LD_LIBRARY_PATH did the trick;
thanks so much!
Sincerely,
Erin
Erin M. Hodgess, PhD
Associate Professor
Department of Computer and Mathematica
As Ralph suggested, I *reversed the order of my PATH settings*:
This is what it shows:
$ echo $PATH
/usr/local/openmpi-1.3.3/bin/:/usr/bin:/bin:/usr/local/bin:/usr/X11R6/bin/:/usr/games:/usr/lib/qt4/bin:/usr/bin:/opt/kde3/bin
$ echo $LD_LIBRARY_PATH
/usr/local/openmpi-1.3.3/lib/
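The general pattern, assuming Open MPI lives under /usr/local/openmpi-1.3.3, is to prepend its directories so they take precedence over any system MPI found later in the search order:

```shell
# Prepend the Open MPI bin and lib directories so they win over
# any other MPI installation further down the path
export PATH="/usr/local/openmpi-1.3.3/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/openmpi-1.3.3/lib:$LD_LIBRARY_PATH"
```

After this, `which mpirun` should resolve to the Open MPI install.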
Moreover, I c
Hi Eugene,
The carto file is a file with a static graph topology of your node.
You can see an example in opal/mca/carto/file/carto_file.h.
(Yes, I know it should be in the help/man pages :) )
Basically it describes a map of your node and its internal interconnections.
Hopefully it will be discovered automatica
I hate to repost, but I'm still stuck with the problem that, on a
completely standard install with a standard gcc compiler, we're getting
random hangs with a trivial test program when using the sm btl, and we
still have no clues as to how to track down the problem.
Using a completely standard
Thank you, but I don't understand who is consuming this information for
what. E.g., the mpirun man page describes the carto file, but doesn't
give users any indication whether they should be worrying about this.
Lenny Verkhovsky wrote:
Hi Eugene,
carto file is a file with a static g
Continuing the conversation with myself:
Google pointed me to Trac ticket #1944, which spoke of deadlocks in
looped collective operations; there is no collective operation anywhere
in this sample code, but trying one of the suggested workarounds/clues:
that is, setting btl_sm_num_fifos to at
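For reference, the workaround discussed in that ticket is passed as an MCA parameter on the mpirun command line; the values and program name here are only illustrative:

```shell
# Give the sm btl one FIFO per sender instead of the shared default,
# which the ticket suggests can avoid the deadlock
mpirun --mca btl_sm_num_fifos 4 -np 4 ./a.out

# Or exclude the sm btl entirely to confirm it is the component at fault
mpirun --mca btl ^sm -np 4 ./a.out
```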
Hi
I am trying to run Open MPI 1.3.3 between a Linux box running Ubuntu
Server 9.04 and a Macintosh. I have configured Open MPI with the
following options:
./configure --prefix=/usr/local/ --enable-heterogeneous --disable-shared
--enable-static
When both the machines are connected to the netwo
The following is the error dump
fuji:src pallabdatta$ /usr/local/bin/mpirun --mca btl_tcp_port_min_v4
36900 -mca btl_tcp_port_range_v4 32 --mca btl_base_verbose 30 --mca btl
tcp,self --mca OMPI_mca_mpi_preconnect_all 1 -np 2 -hetero -H
localhost,10.11.14.205 /tmp/hello
[fuji.local:01316] mca: base
Hey all,
I'm getting a segmentation fault when I attempt to receive a single
character via MPI_Irecv. Code follows:
void recv_func() {
  if( !MASTER ) {
    char buffer[ 1 ];
    int flag;
    MPI_Req
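A minimal self-contained sketch of a one-character non-blocking receive, for comparison; the rank roles and tag are assumptions, and note the declaration must be `char buffer[1];` (with a space), not `charbuffer[1];`:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                            /* "master" sends one char */
        char c = 'x';
        MPI_Send(&c, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {                     /* worker receives it */
        char buffer[1];                         /* valid 1-byte receive buffer */
        MPI_Request req;
        MPI_Irecv(buffer, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* or poll with MPI_Test */
        printf("received '%c'\n", buffer[0]);
    }

    MPI_Finalize();
    return 0;
}
```

If the original code passed an uninitialized or mistyped pointer instead of a real buffer, MPI_Irecv writing into it would segfault exactly as described.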
Hi:
I assume that if you wait several minutes your program will actually
time out, yes? I guess I have two suggestions. First, can you run a
non-MPI job using the wireless? Something like hostname? Secondly, you
may want to specify the specific interfaces you want it to use on the
two machi
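Pinning Open MPI's TCP btl to particular interfaces is done with an MCA parameter; the interface names below are examples and will differ per machine (e.g. en1 on the Mac, eth0 or wlan0 on Linux):

```shell
# First confirm basic launching works over the link with a non-MPI job
mpirun -np 2 -H localhost,10.11.14.205 hostname

# Then restrict the TCP btl to the interfaces you actually want used
mpirun --mca btl tcp,self --mca btl_tcp_if_include en1,eth0 \
       -np 2 -H localhost,10.11.14.205 /tmp/hello
```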