On Oct 7, 2008, at 4:19 PM, Hahn Kim wrote:
you probably want to set the LD_LIBRARY_PATH (and PATH, likely, and
possibly others, such as that LICENSE key, etc.) regardless of
whether it's an interactive or non-interactive login.
Right, that's exactly what I want to do. I was hoping that mpirun
would run .profile, as the FAQ page states.
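A sketch of what such settings might look like in ~/.bashrc (bash reads it
for non-interactive remote shells; the paths and the license variable here
are placeholders):

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
export MY_LICENSE_FILE=/opt/licenses/license.dat   # hypothetical license variable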
Yann,
How were you trying to link your code with PETSc? Did you use the mpif90 or mpif77 wrappers, or were you using the cc or mpicc wrappers? I ran some basic tests of MPI_STATUS_IGNORE usage with mpif90 (and mpif77) and it works fine. However, I was able to generate a similar error to yours
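A minimal test along those lines might look like this (the file name and
process count are made up; assumes mpif90 and mpirun are on the PATH):

cat > status_test.f90 <<'EOF'
program status_test
  use mpi
  implicit none
  integer :: rank, ierr, buf
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  buf = rank
  ! rank 1 sends, rank 0 receives with MPI_STATUS_IGNORE, the Fortran
  ! object behind the mpi_fortran_status_ignore_ symbol in the warning
  if (rank == 1) call MPI_Send(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr)
  if (rank == 0) call MPI_Recv(buf, 1, MPI_INTEGER, 1, 0, &
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
  call MPI_Finalize(ierr)
end program
EOF
mpif90 status_test.f90 -o status_test
mpirun -np 2 ./status_test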
Thanks for the feedback.
Regarding 1., we're actually using 1.2.5. We started using Open MPI
last winter and just stuck with it. For now, using the -x flag with
mpirun works. If this really is a bug in 1.2.7, then I think we'll
stick with 1.2.5 for now, then upgrade later when it's fixed.
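For the record, that workaround looks something like this (the process
count and program name are placeholders):

mpirun -np 4 -x LD_LIBRARY_PATH -x PATH ./my_app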
Terry Dontje wrote:
Yann,
I'll take a look at this; it looks like there definitely is an issue
between our libmpi.so and libmpi_f90.so files.
I noticed that the linkage message is a warning; does the code actually
fail when running?
--td
Thanks for your fast answer.
No, the program is running.
Ralph and I just talked about this a bit:
1. In all released versions of OMPI, we *do* source the .profile file
on the target node if it exists (because vanilla Bourne shells do not
source anything on remote nodes -- Bash does, though, per the FAQ).
However, looking in 1.2.7, it looks like
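Incidentally, one quick way to see what a non-interactive remote shell
actually ends up with (the node name is a placeholder):

ssh node01 'echo PATH=$PATH; echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH'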
Yann,
I'll take a look at this; it looks like there definitely is an issue between our
libmpi.so and libmpi_f90.so files.
I noticed that the linkage message is a warning; does the code actually fail
when running?
--td
This is strange. We need to look into this a little more. However, you
may be OK, as the warning says it is taking the value from libmpi.so,
which I believe is the correct one. Does your program run OK?
Rolf
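One way to compare the symbol's size in the two libraries directly, using
the paths from the original report (assuming libmpi_f90.so sits in the
same directory; with GNU nm you may need -D, and on Solaris elfdump -s
works as well):

nm /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so | grep mpi_fortran_status_ignore
nm /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so | grep mpi_fortran_status_ignore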
Dear all,
I tried to build the latest v1.2.7 Open MPI version on Mac OS X 10.5.5
using the Intel C, C++ and Fortran compilers v10.1.017 (the latest
ones released by Intel). Before starting the build I properly
configured the CC, CXX, F77 and F
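For reference, a typical configure line for the Intel compilers (the
installation prefix is just an example):

./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.2.7
make all install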
Yann,
It looks like somehow the libmpi and libmpi_f90 have different values
for the variable mpi_fortran_status_ignore. It sounds like a
configure problem. You might check the MPI include files to see
where the different values are coming from.
Doug Reeder
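For example (the include directory is guessed from the library path in the
ld warning quoted below):

grep -rn STATUS_IGNORE /opt/SUNWhpc/HPC8.0/include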
Hello,
I'm using Open MPI 1.3r19400 (ClusterTools 8.0), with Sun Studio 12 and
Solaris 10u5.
I've got this error when linking a PETSc code:
ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
(file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file
/opt/SUNWhpc/H
I am unaware of anything in the code that would "source .profile" for
you. I believe the FAQ page is in error here.
Ralph
On Oct 6, 2008, at 7:47 PM, Hahn Kim wrote:
Great, that worked, thanks! However, it still concerns me that the
FAQ page says that mpirun will execute .profile, which doesn't seem to be the case.
FWIW, if this configuration is for all of your users, you might want
to specify these MCA params in the default MCA param file, or the
environment, etc., so that you don't have to specify them on
every mpirun command line.
See http://www.open-mpi.org/faq/?category=tuning#setting-mca-p
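For example, appending to the per-user param file (the values shown echo
the pkey settings used elsewhere in this thread):

mkdir -p $HOME/.openmpi
cat >> $HOME/.openmpi/mca-params.conf <<'EOF'
btl = openib,self
btl_openib_of_pkey_val = 0x8109
btl_openib_of_pkey_ix = 1
EOF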
Matt,
I guess that you have some problem with the partition configuration.
Can you share with us your partition configuration file (by default
opensm uses /etc/opensm/partitions.conf) and the GUIDs from your machines
(ibstat | grep GUID)?
Regards,
Pasha
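For reference, /etc/opensm/partitions.conf entries typically look along
these lines (the pkey matches the one used below; the port GUIDs are
made-up examples):

Default=0x7fff, ipoib : ALL=full;
part1=0x8109 : 0x0002c903000010f1=full, 0x0002c903000010f2=full;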
Matt Burgess wrote:
Hi,
I'm trying to get o
Sorry, I misunderstood the question.
Thanks to Pasha, the right command line will be
-mca btl openib,self -mca btl_openib_of_pkey_val 0x8109 -mca
btl_openib_of_pkey_ix 1
ex.
#mpirun -np 2 -H witch2,witch3 -mca btl openib,self -mca
btl_openib_of_pkey_val 0x8001 -mca btl_openib_of_pkey_ix 1 ./mpi_p