Hi George
I did some extra digging and found that (for some reason) the
MPI_IN_PLACE parameter is not being recognized as such by mpi_reduce_f
(reduce_f.c:61). I added a couple of printfs:
printf(" sendbuf = %p \n", sendbuf );
printf(" MPI_FORTRAN_IN_PLACE = %p \n", &MPI_FORTRAN_IN
There is no collective or point-to-point operation that provides the
assurance you describe. MPI is built around what is called "local
completion semantics".
If you need an operation that confirms that the send and recv counts
match, you can write it yourself (a sketch follows below). The MPI standard has never tried
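A hedged sketch of what such a self-written check could look like: exchange the
counts first, verify they agree, then do the real transfer. The wrapper name
checked_sendrecv and the extra count exchange are purely illustrative; nothing
like this is defined by the MPI standard or by Open MPI.

#include <mpi.h>
#include <stdio.h>

/* Illustrative wrapper: verify that the peer's send count matches our
 * recv count before performing the actual exchange. */
int checked_sendrecv(void *sendbuf, int sendcount, MPI_Datatype dtype, int dest,
                     void *recvbuf, int recvcount, int source,
                     int tag, MPI_Comm comm)
{
    int peer_sendcount;

    /* Exchange the intended counts with the peer. */
    MPI_Sendrecv(&sendcount, 1, MPI_INT, dest, tag,
                 &peer_sendcount, 1, MPI_INT, source, tag,
                 comm, MPI_STATUS_IGNORE);

    /* Fail loudly if the counts do not match. */
    if (peer_sendcount != recvcount) {
        fprintf(stderr, "count mismatch: peer sends %d, we expect %d\n",
                peer_sendcount, recvcount);
        MPI_Abort(comm, 1);
    }

    /* Counts agree: do the real transfer. */
    return MPI_Sendrecv(sendbuf, sendcount, dtype, dest, tag + 1,
                        recvbuf, recvcount, dtype, source, tag + 1,
                        comm, MPI_STATUS_IGNORE);
}

This shape works for pairwise exchanges where both sides call the same wrapper;
if only one side sends, the analogous pattern is a plain send of the count
followed by the data.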
Hi George
I don't think this is a library mismatch. I just followed your
instructions and got:
$ otool -L a.out
a.out:
/opt/openmpi/1.3.3-g95-32/lib/libmpi_f77.0.dylib (compatibility
version 1.0.0, current version 1.0.0)
/opt/openmpi/1.3.3-g95-32/lib/libmpi.0.dylib (compatibility version
Ricardo,
I checked on Linux and on Mac OS X 10.5.7 with the Fortran compilers
from hpc.sourceforge.net, and I get the correct answer. As you only
report problems on Mac OS X, I wonder whether the real source of the
problem is a library mismatch. As you know, Open MPI is
bundled
Hi George
Thanks for the input. This might be an OS-specific problem: I'm
running Mac OS X 10.5.7, and the problem appears in Open MPI versions
1.3.2, 1.3.3 and 1.4a1r21734, using the Intel ifort compiler 11.0 and 11.1
(and also g95 + 1.3.2). I haven't tried older versions. Also, I'm
running
Dear All
I have built Open MPI with default options on an x86_64 RedHat 5 machine and
noticed that the libvt libraries are static and do not appear to have been
compiled with -fPIC.
Is this expected? I didn't notice anything in the docs about it, and I would like
to trace an app that uses shared libraries.
Thanks
Domin
Hi everyone,
I am trying to build my own minimalistic system for my nodes. To keep
things easy I didn't even recompile Open MPI for it; I just
copied everything over from my Ubuntu installation (I know, it's very dirty,
but I stick to KISS :) ). Before, things just worked perfectly with the
On Mon, 2009-07-27 at 23:56 -0700, Jacob Balthazor wrote:
> I ran the command, how would you interpret these results to create a
> path for my include?
>
> [beowulf1@localhost ~]$ whereis libmpi.so
> libmpi:
> [beowulf1@localhost ~]$
I'd interpret it as you've got things stashed in strange places.
I ran the command, how would you interpret these results to create a
path for my include?
[beowulf1@localhost ~]$ whereis libmpi.so
libmpi:
[beowulf1@localhost ~]$
-Jacob B.
On Jul 27, 2009, at 9:55 PM, Terry Frankcombe wrote:
I suspect that my not changing my .bash_profile is indeed
Hey, Jacob.
If you install OMPI with yum under Fedora 10, you'll get an old
1.2.4 version (that was one of my own mistakes). It is strongly
recommended to get the latest 1.3.3 version - many problems will simply go away.
Sincerely yours, Alexey.
On Mon, 2009-07-27 at 08:35 -0700, Jacob Balthazor wrote:
> I suspect that my not changing my .bash_profile is indeed the
> problem; unfortunately, I do not know where yum placed Open MPI's lib and bin
> directories. I tried looking in /usr/local/ but did not find an
> openmpi directory. I suspect this has something to do with my Fedora
> distribution as opposed
Have you tried just adding -x LD_LIBRARY_PATH to your mpirun command line?
This will pick up your local LD_LIBRARY_PATH value and propagate it to the
remote node for you.
On Jul 27, 2009, at 9:58 PM, Jacob Balthazor wrote:
Hey,
I suspect that my not changing my .bash_profile is indeed
Hi,
From what I could see, you can actually do without the input-parse.xml with a
bit of rough work :).
1. To pass the objects to the spawned processes, first wrap your objects
into a derived datatype (e.g., an MPI struct, as sketched below) which can easily
be transferred from the web service to the spawned children.
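As a rough sketch of step 1 (the struct layout and every name here are
hypothetical, nothing from the web service or from Open MPI), a parent could
describe a small C struct with MPI_Type_create_struct and then send instances
of it to the children spawned with MPI_Comm_spawn:

#include <mpi.h>
#include <stddef.h>

/* Hypothetical object to ship to the spawned children. */
typedef struct {
    int    id;
    double payload[4];
} job_t;

/* Build an MPI datatype matching job_t, so a single send can move the
 * whole object; offsetof() accounts for any padding between fields. */
static void build_job_type(MPI_Datatype *newtype)
{
    int          blocklens[2] = { 1, 4 };
    MPI_Aint     displs[2]    = { offsetof(job_t, id),
                                  offsetof(job_t, payload) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

    MPI_Type_create_struct(2, blocklens, displs, types, newtype);
    MPI_Type_commit(newtype);
}

The parent would then call MPI_Send(&job, 1, job_type, child_rank, tag,
intercomm) on the intercommunicator returned by MPI_Comm_spawn, and each child
posts the matching receive on the intercommunicator it gets back from
MPI_Comm_get_parent.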