Dear All,
(this follows a previous mail)
I don't understand the strange behavior of this small code: sometimes it ends,
sometimes it doesn't.
The output of MPI_Finalized is 1 (for each process if n>1), but the code
doesn't end. I am forced to use Ctrl-C.
I compiled it with the command line:
"mpicc --std=
It looks to me like you are getting version confusion -- your PATH and
LD_LIBRARY_PATH aren't pointing to the place where you installed 1.4.1, and you
are either getting someone else's mpiexec or getting 1.2.x instead. It could
also be that mpicc isn't the one from 1.4.1 either.
Check to ensure that
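A quick way to sanity-check which installation each tool resolves to is a small PATH probe (a minimal sketch; the tool names come from the advice above, and the loop only reports what is actually on PATH rather than assuming anything is installed):

```shell
# Minimal sketch: report where each Open MPI tool resolves on PATH, and
# print LD_LIBRARY_PATH so a library mismatch is visible at a glance.
missing=0
for tool in mpicc mpiexec ompi_info; do
  if command -v "$tool"; then
    :  # resolved path was printed by command -v
  else
    echo "$tool: not on PATH"
    missing=$((missing + 1))
  fi
done
echo "tools missing from PATH: $missing"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"
```

If the three tools resolve to different prefixes (or `ompi_info` reports a version other than 1.4.1), that is exactly the mix-up described above.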
I rechecked, but didn't see anything wrong.
Here is how I set my environment. Thanks.
$>mpicc --v
Using built-in specs.
COLLECT_GCC=//home/p10015/gcc/bin/x86_64-unknown-linux-gnu-gcc-4.5.0
COLLECT_LTO_WRAPPER=/hsfs/home4/p10015/gcc/bin/../libexec/gcc/x86_64-unknown-linux-gnu/4.5.0/lto-wrapper
Target:
Hi all,
I had the same problem as Jitsumoto, i.e. Open MPI 1.4.2 failed to restart,
and the patch that Fernando gave didn't work.
I also tried the 1.5 nightly snapshots, but they didn't seem to work well.
For certain reasons I don't want to use --enable-ft-thread in configure, but
the same error occurred ev
Just to make sure I understand -- you're running the hello world app you pasted
in an earlier email with just 1 MPI process on the local machine, and you're
seeing hangs. Is that right?
(there was a reference in a prior email to 2 different architectures -- that's
why I'm clarifying)
On May 23, 2010, at 11:57 AM, Dawid Laszuk wrote:
> It's a bit awkward for me to ask, because I'm not only a newbie in
> parallel programming but also in the Linux system, but I've been searching
> long enough to lose all hope.
No problem; we'll try to help.
> My problem is, when I try to run com
On May 19, 2010, at 2:19 PM, Michael E. Thomadakis wrote:
> I would like to build OMPI V1.4.2 and make it available to our users at the
> Supercomputing Center at TAMU. Our system is a 2-socket, 4-core Nehalem
> @2.8GHz, 24GiB DRAM / node, 324 nodes connected to 4xQDR Voltaire fabric,
> CentOS/
Indeed, that's right.
I am working on a bigger program, but executions hung most of the time. So I cut
and cut and cut to finally obtain this. And it still hangs at least two times
out of three, and I don't know why.
On Monday 24 May 2010 14:48:43, Jeff Squyres wrote:
> Just to make sure I understand
When I specify the hosts separately on the commandline, as follows, the
process completes as expected.
mpirun -np 8 -host remotehost,localhost myapp
Output appears for the localhost, and a text file is created on the remote host.
However, when I use a hostfile, the remote processes never complete. I can
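For comparison, the hostfile run that is failing would look something like this (a sketch only; the filename and slot counts are assumptions, not taken from the original mail):

```
$> cat myhosts
remotehost slots=4
localhost  slots=4
$> mpirun -np 8 -hostfile myhosts myapp
```

In principle this should be equivalent to listing the hosts with -host on the command line, which is what makes the hang puzzling.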
Yes, I'm sure I'm picking up the newly built version. I've run ompi_info to
verify my path is correct.
I have a little more information now... I rebuilt Open MPI 1.4.2 with the
'--enable-debug' option to configure, and when I run a simple MPI program on 2
processors with an MPI_Reduce() call
Thanks a lot :) I've gotten one step further, but there are other problems.
I think I've fixed the "undefined orte_xml_fm" one. I
uninstalled with "make uninstall", cleaned with "make clean", then
configured with "--enable-mpirun-prefix-by-default" (as you said)
and ran "make all", "m
On May 24, 2010, at 12:06 PM, Dawid Laszuk wrote:
> > What's the output from "ldd hello_c"? (this tells us which libraries it's
> > linking to at run-time -- from your configure output, it should list
> > /usr/local/lib/libmpi.so in there somewhere)
>
> kretyn@kretyn-laptop ~/Pobrane/openmpi-1
On May 24, 2010, at 10:45 AM, Glass, Micheal W wrote:
> Yes, I'm sure I'm picking up the newly built version. I've run ompi_info to
> verify my path is correct.
>
> I have a little more information now... I rebuilt Open MPI 1.4.2 with the
> '--enable-debug' option to configure, and when I
My MPI program consists of a number of processes that send 0 or more messages
(using MPI_Isend) to 0 or more other processes. The processes check
periodically if messages are available to be processed. It was running fine
until I increased the message size, and I got deadlock problems. Googling
Gijsbert Wiesenekker wrote:
My MPI program consists of a number of processes that send 0 or more messages
(using MPI_Isend) to 0 or more other processes. The processes check
periodically if messages are available to be processed. It was running fine
until I increased the message size, and I g
That's it! It works. When I export it, I don't even have to start it
as /usr/.../mpirun; plain "mpirun" does the job. Now I have to make
the PATH stay like that all the time... hmm...
Thanks a lot :)
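Making the PATH permanent usually means exporting it from a shell startup file such as ~/.bashrc (a sketch; the install prefix below is an assumption -- substitute whatever prefix your ompi_info actually lives under):

```shell
# Lines to append to ~/.bashrc (the /usr/local/openmpi prefix is assumed;
# use your real Open MPI install prefix instead).
export PATH=/usr/local/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/openmpi/lib:${LD_LIBRARY_PATH:-}
echo "PATH now starts with: ${PATH%%:*}"
```

New login shells will then find mpirun and the matching libmpi.so without any absolute paths.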
much appreciate it :)
2010/5/24 Jeff Squyres :
> On May 24, 2010, at 12:06 PM, Dawid Laszuk wrote
I have a user who prefers building RPMs from the SRPM. That's okay,
but for debugging via TotalView it creates a version with the Open MPI
.so files stripped, and we can't gain control of the processes when
launched via mpirun -tv. I've verified this with my own build of a
1.4.1 RPM, which I th
Our project is fork / exec'ing in some cases to provide a service for
some of the processes within our MPI job. Open MPI spews big warnings
to the terminal about this. It explains how to disable the message,
but I'd really like it to not pop up regardless.
The child process does not perform any
Well, there are three easy ways to do this:
1. Put OMPI_MCA_mpi_warn_on_fork=0 in your environment (you can even do that
within your code prior to calling MPI_Init).
2. Put mpi_warn_on_fork=0 in your default MCA param file.
3. Add -mca mpi_warn_on_fork 0 to your mpirun command line.
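Option 1 can be sketched as below (the variable name is the one given in the list above; the echo is only there to confirm the setting took effect):

```shell
# Disable Open MPI's fork() warning via the environment, before
# mpirun / MPI_Init ever runs.
export OMPI_MCA_mpi_warn_on_fork=0
echo "OMPI_MCA_mpi_warn_on_fork=$OMPI_MCA_mpi_warn_on_fork"
```

Any OMPI_MCA_-prefixed variable set this way is picked up as the MCA parameter of the same name, so the same pattern works for other parameters too.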
On May 24, 2010, at