Does openmpi have any "mpif.h"?? If yes, where? In openmpi_dir/include??!!
Ahhh, now that makes sense. Never included, always used. Thanks!
On Mon, Sep 22, 2008 at 8:55 PM, Terry Frankcombe wrote:
> Remember what include does: it essentially dumps mpif.h into the
> source. So to be proper F90 you need:
>
> PROGRAM main
> USE local_module
> IMPLICIT NONE
> INCLUDE
Remember what include does: it essentially dumps mpif.h into the
source. So to be proper F90 you need:
PROGRAM main
USE local_module
IMPLICIT NONE
INCLUDE 'mpif.h'
...
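A complete, minimal program in that style (a sketch of my own, without the local_module from the fragment above) would look like:

PROGRAM main
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  INTEGER :: ierr, rank
  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  PRINT *, 'hello from rank', rank
  CALL MPI_FINALIZE(ierr)
END PROGRAM main

Compile it with the wrapper (mpif90) so the correct include path is picked up automatically.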
On Mon, 2008-09-22 at 20:17 -0600, Brian Harker wrote:
> Well, I'm stumped then...my top-level program is the only one that
> u
Well, I'm stumped then...my top-level program is the only one that
uses MPI interfaces. I USE other f90 module files, but none of them
are dependent on MPI functions. For example here's the first few
lines of code where things act up:
PROGRAM main
INCLUDE 'mpif.h' (this line used to be "USE
Hi Brian and list
On my code I have
include 'mpif.h'
with single quotes around the file name.
I use single quotes, but double quotes are also possible according to
the F90 standard.
If you start at column 7 and end at column 72,
you avoid any problems with free vs. fixed Fortran form (w
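For example, with the statement written starting at column 7 and ending well before column 72, the same line compiles in both fixed and free source form:

      include 'mpif.h'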
Yes I have matched all the arguments. I should mention that the code
compiles and runs flawlessly using MPICH2-1.0.7 so it's got to be an
issue with my specific build of openMPI. I want to get openMPI up and
running for performance comparisons.
On Mon, Sep 22, 2008 at 6:43 PM, Jeff Squyres wrote
What's the source code in question, then? Did you match all the
arguments?
On Sep 22, 2008, at 8:36 PM, Brian Harker wrote:
Nope, no user-defined types or arrays greater than 2 dimensions.
On Mon, Sep 22, 2008 at 6:24 PM, Jeff Squyres wrote:
On Sep 22, 2008, at 6:48 PM, Brian Harker wr
BTW, thanks for hanging in there with me on this guys. I appreciate
your time and input.
On Mon, Sep 22, 2008 at 6:36 PM, Brian Harker wrote:
> Nope, no user-defined types or arrays greater than 2 dimensions.
>
> On Mon, Sep 22, 2008 at 6:24 PM, Jeff Squyres wrote:
>> On Sep 22, 2008, at 6:48 P
Nope, no user-defined types or arrays greater than 2 dimensions.
On Mon, Sep 22, 2008 at 6:24 PM, Jeff Squyres wrote:
> On Sep 22, 2008, at 6:48 PM, Brian Harker wrote:
>
>> when I compile my production code, I get:
>>
>> fortcom: Error: driver.f90: line 211: There is no matching specific
>> subr
On Sep 22, 2008, at 6:48 PM, Brian Harker wrote:
when I compile my production code, I get:
fortcom: Error: driver.f90: line 211: There is no matching specific
subroutine for this generic subroutine call. [MPI_SEND]
Seems odd that it would spit up on MPI_SEND, but has no problem with
MPI_RECV
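For reference, a send/receive pair shaped like the sketch below (variable names are illustrative, not from Brian's driver.f90) is the kind that satisfies the generic interfaces checked under "use mpi": the buffer type agrees with the datatype constant, and count, dest, tag, the communicator and ierr are all default integers. If any argument differs in type, kind or rank from what the generated interfaces expect, ifort reports exactly this "no matching specific subroutine" error.

program send_check
  use mpi
  implicit none
  double precision :: buf(100)
  integer :: rank, ierr
  integer :: status(MPI_STATUS_SIZE)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  buf = 0.0d0
  if (rank == 0) then
     call MPI_SEND(buf, 100, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_RECV(buf, 100, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, status, ierr)
  end if
  call MPI_FINALIZE(ierr)
end program send_check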
Hi Gus-
Thanks for the idea. One question: how do you position INCLUDE
statements in a fortran program, because if I just straight substitute
' INCLUDE "mpif.h" ' for ' USE mpi ', I get a lot of crap telling me
my other USE statements are not positioned correctly within the scope
and nothing comp
On Sep 22, 2008, at 6:21 PM, Doug Reeder wrote:
I think that unless make all depends on make clean and make clean
depends on Makefile, you have to manually run make clean and/or
manually delete the module files.
We do have most everything in the code base that matters depend on
opal_confi
Hi Brian and list
I seldom used the "use mpi" syntax before.
I have a lot of code here written in Fortran 90,
but mpif.h is included instead of "use mpi".
The MPI function calls are the same in Fortran 77 and Fortran 90 syntax,
hence there is just one line of code to change, if one wants to go f
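Concretely, the one line in question is the one that brings in the MPI declarations; a rough sketch of the two variants (my illustration, not Gus's code):

! Fortran 77 style header, included after IMPLICIT NONE:
program main
  implicit none
  include 'mpif.h'

! Fortran 90 module (the one line that changes; note it moves above implicit none):
program main
  use mpi
  implicit none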
I humbly bow before my MPI masters! Thank you guys!
make clean && make all install seemed to fix it. The example code
compiles and runs fine...but...
when I compile my production code, I get:
fortcom: Error: driver.f90: line 211: There is no matching specific
subroutine for this generic subrou
Jeff,
I think that unless make all depends on make clean and make clean
depends on Makefile, you have to manually run make clean and/or
manually delete the module files.
Doug Reeder
On Sep 22, 2008, at 3:16 PM, Jeff Squyres wrote:
On Sep 22, 2008, at 6:08 PM, Brian Harker wrote:
Here's
On Sep 22, 2008, at 6:08 PM, Brian Harker wrote:
Here's the config.log file...now that I look through it more
carefully, I see some errors that I didn't see when watching
./configure scroll by...still don't know what to do though. :(
Not to worry; there are many tests in configure that are de
Brian,
Try doing a make clean before doing the build with your new make file
(from the new configure process). It looks like you are getting the
leftover module files from the old makefile/compilers.
Doug Reeder
On Sep 22, 2008, at 2:52 PM, Brian Harker wrote:
Ok, here's something funny/w
Here's the config.log file...now that I look through it more
carefully, I see some errors that I didn't see when watching
./configure scroll by...still don't know what to do though. :(
Thanks!
On Mon, Sep 22, 2008 at 3:54 PM, Jeff Squyres wrote:
> Good question. Can you send the full stdout/std
Good question. Can you send the full stdout/stderr output from
configure and config.log?
(please compress)
On Sep 22, 2008, at 5:52 PM, Brian Harker wrote:
Ok, here's something funny/weird/stupid:
Looking at the actual mpi.mod module file in the $OPENMPI_HOME/lib
directory, the very first
Ok, here's something funny/weird/stupid:
Looking at the actual mpi.mod module file in the $OPENMPI_HOME/lib
directory, the very first line is:
GFORTRAN module created from mpi.f90 on Fri Sep 19 14:01:27 2008
WTF!? I specified that I wanted to use the ifort/icc/icpc compiler
suite when I installe
Hi guys-
Still no dice. The only mpi.mod files I have are the ones generated
from my compile and build from source (and they are where they should
be), so there's definitely no confusion amongst the modules. And
specifying the full path to the correct mpi.mod module (like Gus
suggested with the
Exactly what version of Open MPI are you using? You mentioned "1.3"
-- did you download a nightly tarball at some point, or do you have an
SVN checkout? Since you have a development copy of Open MPI, it is
possible that your copy is simply broken (sorry; we *do* break the
development head
Hi, this is my openmpi-default-hostfile:
127.0.0.1 slots=2
If I press CTRL+C the application is not killed.
With mpirun -np 1 uptime the command always hangs.
The command hangs with any command, even commands that do not exist.
Thanks.
2008/9/22 Jeff Squyres
> On Sep 19, 2008, at 6:5
I have committed a patch (r19607) to clean up the BLCR configure
logic. It should be in tonight's tarball of the trunk. I've requested
that this fix be moved to v1.3, but that is going to take a day or so
to process.
This patch adds a new configure option '--with-blcr-libdir' which lets
y
On Sep 22, 2008, at 2:48 PM, Shafagh Jafer wrote:
I am not using any of the files that exist in /usr/local/include;
it doesn't even show up in my PATH, and my mpicc -show tells me that the
openmpi path is correct.
/usr/local/include is likely a default include path from the
compiler. So even
I am not using any of the files that exist in /usr/local/include; it doesn't
even show up in my PATH, and my mpicc -show tells me that the openmpi path is correct.
I don't know what to do; I have contacted the technician to hide or delete
/usr/local/include... Millions of thanks for your replies and pati
On Sep 22, 2008, at 2:12 PM, Shafagh Jafer wrote:
I am gonna kill myself :( I don't know what the problem is... I'm
gonna explain the details again, please bear with me and help me :(
OK, yes, I used mpic++. Actually I checked mpicxx -show and the paths
were correct. Let me describe my problem agai
I am gonna kill myself :( I don't know what the problem is... I'm gonna explain
the details again, please bear with me and help me :(
OK, yes, I used mpic++. Actually I checked mpicxx -show and the paths were
correct. Let me describe my problem again.
When I run my code (after making sure that openmpi
On Sep 22, 2008, at 1:24 PM, Shafagh Jafer wrote:
I am confused now...should I use "/openmpi/mpic++ -E" or "/openmpi/
mpic++" to compile my entire code??
mpic++. I was showing you the -E functionality so that you could
determine whether it really is picking up the wrong mpi.h or not.
Hi Brian and list
I read your original posting and Jeff's answers.
Here on CentOS from Rocks Cluster I have a "native" OpenMPI, with an mpi.mod,
compiled with gfortran.
Note that I don't even have gfortran installed!
This is besides the MPI versions (MPICH2 and OpenMPI)
I installed from scratch u
I am confused now...should I use "/openmpi/mpic++ -E" or "/openmpi/mpic++" to
compile my entire code??
--- On Mon, 9/22/08, Jeff Squyres wrote:
From: Jeff Squyres
Subject: Re: [OMPI users] conflict among different MPIs
To: "Open MPI Users"
Date: Monday, Se
I believe this is now fixed in the trunk. I was able to reproduce with
the current trunk and committed a fix a few minutes ago in r19601. So
the fix should be in tonight's tarball (or you can grab it from SVN).
I've made a request to have the patch applied to v1.3, but that may
take a day o
Hi Gus-
Thanks for the input. I have been using full path names to both the
wrapper compilers and mpiexec from the first day I had two MPI
implementations on my machine, depending on if I want to use MPICH or
openMPI, but still the problem remains. ARGG!
On Mon, Sep 22, 2008 at 9:40 AM, Gus
Hello Brian and list
My confusing experiences with multiple MPI implementations
were fixed the day I decided to use full path names to the MPI compiler
wrappers (mpicc, mpif77, etc) at compile time,
and to the MPI job launcher (mpirun, mpiexec, and so on) at run time,
and to do this in a consiste
I unfortunately don't know ifort's precedence rules for finding
modules -- I actually don't even remember if -I is the right flag to
find them. mpif90 should be adding whatever flags are necessary to
find mpi.mod, though -- but I'm wondering if MPICH's is being
found first, since it's
I built and installed both MPICH2 and openMPI from source, so no
distribution packages or anything. MPICH2 has the modules located in
/usr/local/include, which I assume would be found (since its in my
path), were it not for specifying -I$OPENMPI_HOME/lib at compile time,
right? I can't imagine th
Aurelien's advice is good -- check and see exactly what the debugger
is telling you. You might want to look at the corefile in the
debugger and see exactly where it failed -- it may or may not be an
MPI issue.
Also -- Aurelien didn't directly say it, but don't worry about the
OMPI_DECLSP
On Sep 21, 2008, at 3:46 PM, Shafagh Jafer wrote:
Yes I am using openmpi mpicc and mpic++ to compile my code,
Are you 100% sure that you're using Open MPI's mpicc / mpic++? (and
not MPICH's) This could be a cause for error.
and I only have openmpi's lib directory in my LD_LIBRARY_PATH.
Open MPI and MPICH are both implementations of the MPI standard. As
such, correct MPI applications should be completely source-portable
between Open MPI and any other MPI implementation (including MPICH).
On Sep 21, 2008, at 12:46 AM, Shafagh Jafer wrote:
Hello,
I want to know if I need t
On Sep 22, 2008, at 10:10 AM, Brian Harker wrote:
Thanks for the reply...crap, $HOME/openmpi/lib does contain all the
various libmpi* files as well as mpi.mod,
That should be correct.
but still get the same
error at compile-time. Yes, I made sure to specifically build openMPI
with ifort 1
Hi Jeff-
Thanks for the reply...crap, $HOME/openmpi/lib does contain all the
various libmpi* files as well as mpi.mod, but still get the same
error at compile-time. Yes, I made sure to specifically build openMPI
with ifort 10.1.012, and did run the --showme command right after
installation to m
On Sep 22, 2008, at 8:48 AM, Robert Kubrick wrote:
Recompile your own version of openmpi in a local directory, set your
PATH to your local openmpi install.
export PATH=/my/openmpi/install/include:/usr/local/include
mpicxx -show
mpicxx --showme should show you the exact command that Open MP
On Sep 19, 2008, at 6:51 PM, Brian Harker wrote:
I have configured openMPI to work with the Intel C (icc) and C++
(icpc) compilers, as well as the Intel fortran (ifort) compiler, and
built all the single choice buffer fortran 90 bindings:
./configure --prefix=$HOME/openmpi CC=icc CXX=icpc F77=i
Recompile your own version of openmpi in a local directory, set your
PATH to your local openmpi install.
export PATH=/my/openmpi/install/include:/usr/local/include
mpicxx -show
On Sep 21, 2008, at 11:05 PM, Shafagh Jafer wrote:
I have tried this, but it didn't help :-( Could anyone help pleas
On Sep 19, 2008, at 6:50 PM, Santolo Felaco wrote:
Hi, I'll try to be clearer:
osa@libertas:~$ echo $LD_LIBRARY_PATH
/usr/local/lib:/home/osa/blcr/lib
osa@libertas:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/
games:/home/osa/blcr/bin
I compile the file with mp
Hello Terry,
I do not have an active firewall. I have typed on both computers:
netstat -lnut
I am enclosing the results.
I have also run on both computers:
mpirun -np 2 --host 10.1.10.208,10.1.10.240 --mca mpi_preconnect_all
1 --prefix /usr/local -mca btl self,tcp -mca btl_tcp_if_include