Dear Brian,
The file
/openmpi-1.0rc3/contrib/dist/macosx-pkg/buildpackage.sh
has a symbol TMP_DIR which is undefined. Replacing TMP_DIR with
BUILD_TMP led to a dmg file which installed without a problem on my
Apple dual G5 tower:
mighell% ./buildpackage.sh ~/pkg/openmpi/src/openmpi-1.0rc
Dear Brian,
Previously my CC environment variable was set to cc:
mighell% printenv CC
cc
I then set the CC environment variable to gcc:
mighell% setenv CC gcc
mighell% printenv CC
gcc
and then tried to build the package
mighell% ./buildpackage.sh ~/pkg/openmpi/src/openmpi-1.0rc3.tar.gz ~/p
Dear Brian,
It must be something else, because cc *is* gcc:
mighell% which cc
/usr/bin/cc
mighell% ls -l /usr/bin/cc
lrwxr-xr-x 1 root wheel 7 May 25 11:45 /usr/bin/cc -> gcc-4.0
mighell% cc --version
powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 20041026 (Apple Computer, Inc. build 4061)
C
On Oct 12, 2005, at 5:15 PM, Ken Mighell wrote:
Making all in xgrid
depbase=`echo src/pls_xgrid_component.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`; \
if /bin/sh ../../../../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I../../../../include -I../../../../include -I/tmp/buildpackage-60
Dear OpenMPI,
I tried to build 1.0rc3 on my Apple dual G5 tower running OS X Tiger:
mighell% ./buildpackage.sh ~/pkg/openmpi/src/openmpi-1.0rc3.tar.gz ~/pkg/openmpi/
--> Configuration options:
Package Name: openmpi
Prefix: /Users/mighell/pkg/openmpi/
Boot: ssh
On Wed, Oct 12, 2005 at 07:06:54PM +0100, Ashley Pittman wrote:
> As it turns out I'm in a position to measure this fairly easily: our MPI
> sits on top of a library called libelan, which does all the tag matching
> at a very low level; all MPI does is convert the communicator into a bit
> pattern,
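For concreteness, the kind of mapping described above can be sketched in a few lines of C. The 32/32 field split and the function name below are my own assumptions for illustration, not libelan's actual layout:

    #include <stdint.h>

    /* Fold a communicator's context id and the MPI tag into a single
     * 64-bit word that a low-level matching library can match on
     * directly.  Field widths are illustrative only. */
    static uint64_t make_match_bits(uint32_t context_id, uint32_t tag)
    {
        return ((uint64_t)context_id << 32) | (uint64_t)tag;
    }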
William Gropp wrote:
>
> You might also look at http://www-unix.mcs.anl.gov/mpi/tools/genericmpi/
> . The software is currently being revised but should be available
> soon. For users willing to interpose libraries, this solves many (but
> not all) of these problems, particularly for C-only app
On Wed, Oct 12, 2005 at 12:05:13PM +0100, Ashley Pittman wrote:
> Thirdly, there is the performance issue: any MPI vendor worth his salt
> tries very hard to reduce the number of function calls and libraries
> between the application and the network, and adding another one is a step
> in the wrong direction.
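The cost in question is one extra call frame per interposed layer on the critical path. A common mitigation, sketched here with invented names, is to define the thin layer as a static inline in a header so an optimizing compiler can collapse the extra call:

    /* vendor_send stands in for whatever entry point the vendor MPI
     * exposes; with the wrapper inline in a header, -O2 typically
     * removes the intermediate call entirely. */
    extern int vendor_send(const void *buf, int nbytes, int dest, int tag);

    static inline int thin_send(const void *buf, int nbytes,
                                int dest, int tag)
    {
        return vendor_send(buf, nbytes, dest, tag);
    }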
Tim Prins wrote:
> I am in the process of developing MorphMPI and have designed my
> implementation a bit differently from what you propose (my apologies if I
> misunderstood what you have said). I am creating one main library, which
> users will compile and run against, and which should not need to
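As I read the design, the essential trick is that the user-facing library fixes the ABI (handle sizes, constants) and translates to the vendor MPI internally. A minimal sketch, with invented names and a fixed-size table standing in for a real handle map:

    #include <mpi.h>

    typedef int morph_comm;          /* stable, fixed-size public handle */

    #define MORPH_MAX_COMMS 64
    static MPI_Comm comm_table[MORPH_MAX_COMMS];  /* public -> vendor */

    /* Forward a byte-count send through the stable interface; the
     * vendor's MPI_Comm never appears in the user-visible ABI, so its
     * size and layout may differ between MPI implementations without
     * recompiling the application. */
    int Morph_Send(void *buf, int nbytes, int dest, int tag,
                   morph_comm comm)
    {
        return MPI_Send(buf, nbytes, MPI_BYTE, dest, tag,
                        comm_table[comm]);
    }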
Toon,
> We are planning to develop a MorphMPI library. As explained a bit
> higher
> up in this thread, the MorphMPI library will be used while *compiling*
> the app. The library that implements the MorphMPI calls will be linked
> with dynamically. The MorphMPI on its turn links with some specific
Robert G. Brown wrote:
> Ashley Pittman writes:
>
>> Personally, I think an MPI ABI would be a good thing; however, this is
>> not the way to do it.
>
> And this is exactly right. Furthermore, we all know the right way to do
> it. It is for a new governing body or consortium to be established (or
Ashley Pittman wrote:
> The second problem is that of linking: most MPI vendors already have
> MPI_Init in their own library, and having another library with its own
> wrapper MPI_Init in it is going to lead to a whole world of pain to do
> with dynamic linking and symbol resolution. This is not som
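One way to sidestep the duplicate-MPI_Init problem, assuming the vendor library can be opened explicitly at run time, is for the wrapper to export no MPI_* symbols at all and resolve the vendor's entry points through dlopen/dlsym. The library name and wrapper name below are placeholders:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Resolve the vendor's MPI_Init at run time rather than link time,
     * so the wrapper never defines a competing MPI_Init of its own. */
    int Morph_Init(int *argc, char ***argv)
    {
        void *handle = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        int (*vendor_init)(int *, char ***) =
            (int (*)(int *, char ***)) dlsym(handle, "MPI_Init");
        if (vendor_init == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            return 1;
        }
        return vendor_init(argc, argv);
    }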
William Gropp wrote:
> At 08:44 AM 10/11/2005, Toon Knapen wrote:
>
>> William Gropp wrote:
>> > in the Fortran source mapped to
>> >
>> > MPI_INIT
>> > mpi_init
>> > mpi_init_
>> > mpi_init__
>> > MPI_Init_
>> >
>> > Each of these has been chosen by some Fortran 77 compiler. Confusion
>> > over
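The practical consequence of that list is that a C-implemented MPI library ends up exporting one symbol per mangling scheme, all forwarding to one body. A sketch of the usual pattern (the helper name is mine, not any particular MPI's internals):

    #include <mpi.h>

    /* Shared body; ierr is the trailing error argument that Fortran 77
     * compilers pass by reference. */
    static void mpi_init_body(int *ierr)
    {
        *ierr = MPI_Init(NULL, NULL);
    }

    /* One exported symbol per Fortran 77 name-mangling convention. */
    void MPI_INIT(int *ierr)   { mpi_init_body(ierr); }  /* upper case */
    void mpi_init(int *ierr)   { mpi_init_body(ierr); }  /* lower case */
    void mpi_init_(int *ierr)  { mpi_init_body(ierr); }  /* one "_"    */
    void mpi_init__(int *ierr) { mpi_init_body(ierr); }  /* two "_"    */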
Toon Knapen wrote:
> Fortran LOGICAL
> could you elaborate?
Toon,
There is no universally agreed Fortran convention for how the .TRUE. and
.FALSE. boolean values are represented, nor for how a value is tested for
.TRUE. or .FALSE. Some Fortran implementations use 0 for .FALSE. and 1
for .TRUE.
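To make the problem concrete: any Fortran-facing routine that takes or returns a flag has to translate between the compiler's LOGICAL bit patterns and C truth values. A minimal sketch, assuming the true/false patterns are detected at configure time (the constants below are examples only):

    /* Example values only; some Fortran compilers use 1 for .TRUE.,
     * others use -1, so real code detects these at configure time. */
    #define FORTRAN_TRUE  1
    #define FORTRAN_FALSE 0

    /* Fortran LOGICAL -> C truth value. */
    static int logical_to_c(int flag)
    {
        return flag != FORTRAN_FALSE;
    }

    /* C truth value -> Fortran LOGICAL. */
    static int c_to_logical(int flag)
    {
        return flag ? FORTRAN_TRUE : FORTRAN_FALSE;
    }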
> The government is one of the few forces that could mandate a proper MPI
> ABI at this point in time;
They certainly aren't the only ones -- vendors of proprietary
applications that use MPI, as well as vendors of interconnect hardware,
get significant benefits from an ABI. Anyone who wants to distribute