Re: [OMPI users] mpirun error

2013-04-01 Thread Michael Kluskens
The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime and you are getting that instead of OpenMPI. You need to fix your path for all the shells you use. On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote: > /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:

[OMPI users] mpivars.sh - Intel Fortran 13.1 conflict with OpenMPI 1.6.3

2013-01-24 Thread Michael Kluskens
This is for reference and suggestions, as this took me several hours to track down and the previous discussion on "mpivars.sh" failed to cover this point (nothing in the FAQ): I successfully built and installed OpenMPI 1.6.3 using the following on Debian Linux: ./configure --prefix=/opt/openmpi

Re: [OMPI users] tickets 39 & 55

2006-11-06 Thread Michael Kluskens
On Nov 2, 2006, at 7:47 PM, Jeff Squyres wrote: On Nov 2, 2006, at 3:18 PM, Michael Kluskens wrote: So "large" was an attempt to provide *some* of the interfaces -- but [your] experience has shown that this can do more harm than good (i.e., make some legal MPI applications un

Re: [OMPI users] tickets 39 & 55

2006-11-02 Thread Michael Kluskens
On Nov 2, 2006, at 11:53 AM, Jeff Squyres wrote: Adding Craig Rasmussen from LANL into the CC list... On Oct 31, 2006, at 10:26 AM, Michael Kluskens wrote: OpenMPI tickets 39 & 55 deal with problems with the Fortran 90 large interface with regards to: #39: MPI_IN_PLACE in MPI_REDUCE

Re: [OMPI users] OMPI Collectives

2006-11-01 Thread Michael Kluskens
On Nov 1, 2006, at 10:27 AM, George Bosilca wrote: PS: BTW, which version of Open MPI are you using? The one that delivers the best performance for the collective communications (at least on high-performance networks) is the nightly release of the 1.2 branch. As far as I can see the only nightly

[OMPI users] tickets 39 & 55

2006-10-31 Thread Michael Kluskens
OpenMPI tickets 39 & 55 deal with problems with the Fortran 90 large interface with regards to: #39: MPI_IN_PLACE in MPI_REDUCE #55: MPI_GATHER with arrays of different dimensions Attached is a p

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-30 Thread Michael Kluskens
the trunk. On Oct 16, 2006, at 8:29 AM, Åke Sandgren wrote: On Mon, 2006-10-16 at 10:13 +0200, Åke Sandgren wrote: On Fri, 2006-10-06 at 00:04 -0400, Jeff Squyres wrote: On 10/5/06 2:42 PM, "Michael Kluskens" wrote: System: BLACS 1.1p3 on Debian Linux 3.1r3 on dual-opteron,

[OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-10-25 Thread Michael Kluskens
Yet another forgotten issue regarding the f90 large interfaces (note that MPI_IN_PLACE is currently an integer; for a time it was a double complex, but that has been fixed). The problem I have now is that my patches which worked with 1.2 don't work

Re: [OMPI users] Starting on remote nodes

2006-10-25 Thread Michael Kluskens
On Oct 25, 2006, at 11:43 AM, Katherine Holcomb wrote: ...We support multiple compilers (specifically PGI and Intel) and due to incompatibilities in different vendors' f90 .mod files, we have separate directories for OpenMPI with each compiler. Therefore we cannot set a global path to the O

[OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-10-24 Thread Michael Kluskens
This is a reminder about an issue I brought up back at the end of May 2006; the solution then was to disable with-mpi-f90-size=large until 1.2. Testing 1.3a1r12274, I see that no progress has been made on this even though I submitted the precise

Re: [OMPI users] BLACS Mac OS X

2006-10-12 Thread Michael Kluskens
On Oct 12, 2006, at 4:14 PM, Warner Yuen wrote: I've just built BLACS using the latest beta: openmpi-1.1.2rc4 as well as openmpi-1.1.1. 1.1.2rc4 should be fine; however, I don't think a new version named 1.1.1 was released and it should fail on some or all platforms. I am getting the fol

Re: [OMPI users] PBS problem with OpenMP- only one processor used

2006-10-12 Thread Michael Kluskens
On Oct 12, 2006, at 8:23 AM, amane001 wrote: Thanks for your reply. I actually meant OpenMPI On 12.10.2006 at 09:52 amane001 wrote: > the code below. Even if I set the OMP_NUM_THREADS = 2, the print > setenv OMP_NUM_THREADS 2 These are OpenMP, not OpenMPI, environment variables. /u
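The distinction here is that OMP_NUM_THREADS only controls OpenMP threads inside each process, while mpirun -np controls the number of Open MPI processes. A minimal hybrid Fortran sketch (illustrative only; assumes a compiler with OpenMP support and the Open MPI 'mpi' module) that makes the difference visible:

    program hybrid_check
      use mpi
      use omp_lib                      ! OpenMP runtime library routines
      implicit none
      integer :: ier, rank, nthreads
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)
      !$omp parallel
      !$omp master
      nthreads = omp_get_num_threads() ! governed by OMP_NUM_THREADS
      print *, 'MPI rank', rank, 'is running', nthreads, 'OpenMP threads'
      !$omp end master
      !$omp end parallel
      call MPI_FINALIZE(ier)
    end program hybrid_check

Launched as "mpirun -np 4 ./hybrid_check" with OMP_NUM_THREADS=2, this would report 4 ranks with 2 threads each.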

Re: [OMPI users] Trouble with shared libraries

2006-10-12 Thread Michael Kluskens
On Oct 11, 2006, at 10:38 AM, Lisandro Dalcin wrote: On 10/11/06, Jeff Squyres wrote: Open MPI v1.1.1 requires that you set your LD_LIBRARY_PATH to include the directory where its libraries were installed (typically, $prefix/ lib). Or, you can use mpirun's --prefix functionality to avoid

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-10 Thread Michael Kluskens
On Oct 6, 2006, at 12:04 AM, Jeff Squyres wrote: On 10/5/06 2:42 PM, "Michael Kluskens" wrote: System: BLACS 1.1p3 on Debian Linux 3.1r3 on dual-opteron, gcc 3.3.5, Intel ifort 9.0.32 all tests with 4 processors (comments below) Good. Can you expand on what you mean by

[OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-10 Thread Michael Kluskens
On Oct 5, 2006, at 4:41 PM, George Bosilca wrote: Once you run the performance tests please let me know the outcome. Ignoring the other issue I just posted here are timings for BLACS 1.1p3 Tester with OpenMPI & MPICH2 on two nodes of a dual-opteron system running Debian Linux 3.1r3, compi

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-05 Thread Michael Kluskens
On Oct 4, 2006, at 7:51 PM, George Bosilca wrote: This is the correct patch (same as previous minus the debugging statements). On Oct 4, 2006, at 7:42 PM, George Bosilca wrote: The problem was found and fixed. Until the patch get applied to the 1.1 and 1.2 branches please use the attached pa

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-04 Thread Michael Kluskens
On Oct 4, 2006, at 8:22 AM, Harald Forbert wrote: The TRANSCOMM setting that we are using here and that I think is the correct one is "-DUseMpi2" since OpenMPI implements the corresponding mpi2 calls. You need a recent version of BLACS for this setting to be available (1.1 with patch 3 should be

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-03 Thread Michael Kluskens
with this info for v1.1, and created ticket 464 for the trunk (v1.3) issue. https://svn.open-mpi.org/trac/ompi/ticket/356 https://svn.open-mpi.org/trac/ompi/ticket/464 On 10/3/06 10:53 AM, "Michael Kluskens" wrote: Summary: OpenMPI 1.1.1 and 1.3a1r11943 have different bugs with reg

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-03 Thread Michael Kluskens
rors until it crashes on the Complex AMX test (which is after the Integer Sum test). System configuration: Debian 3.1r3 on dual opteron, gcc 3.3.5, Intel ifort 9.1.032. On Oct 3, 2006, at 2:44 AM, Åke Sandgren wrote: On Mon, 2006-10-02 at 18:39 -0400, Michael Kluskens wrote: OpenMPI, BLACS,

Re: [OMPI users] BLACS & OpenMPI

2006-10-02 Thread Michael Kluskens
Having trouble getting BLACS to pass tests. OpenMPI, BLACS, and blacstester built just fine. Tester reports errors for integer and real cases #1 and #51 and more for the other types.. is an open ticket related to this. Any word on the stat

[OMPI users] BLACS & OpenMPI

2006-10-02 Thread Michael Kluskens
Building BLACS 1.1 with patch 3 and OpenMPI 1.1.1 (using gcc and ifort) Configuring the Bmake.inc file, if I set: MPILIB = -lmpi I have no trouble building the install program xsyserrors. However, the more standard approach is to set: MPILIB = $(MPILIBdir)/libmpi.a which generates the fol

Re: [OMPI users] LSF with OpenMPI

2006-08-30 Thread Michael Kluskens
t I'm surprised that they don't copy the environment over (others do). None of us have LSF, unfortunately, so we haven't done any work to try to make OMPI work on it. On 8/25/06 10:14 AM, "Michael Kluskens" wrote: Is there anyone running OpenMPI on a machine with LSF

[OMPI users] LSF with OpenMPI

2006-08-25 Thread Michael Kluskens
Is there anyone running OpenMPI on a machine with the LSF batch queueing system? Last time I attempted this I discovered that PATH and LD_LIBRARY_PATH were not making it to the client nodes. I could force PATH to work using an OpenMPI option but I could not even force LD_LIBRARY_PATH over to

Re: [OMPI users] Compiling MPI with pgf90

2006-07-31 Thread Michael Kluskens
On Jul 31, 2006, at 1:12 PM, James McManus wrote: I'm trying to compile MPI with pgf90. I use the following configure settings: ./configure --prefix=/usr/local/mpi F90=pgf90 F77=pgf77 Besides the other issue about the wrong environment variable, if you have further trouble, I'm using the following

Re: [OMPI users] Open-MPI running os SMP cluster

2006-07-26 Thread Michael Kluskens
On Jul 26, 2006, at 5:07 PM, Mauricio Felga Gobbi wrote: Newbie question: How is the message passing of Open-MPI implemented when I have say 4 nodes with 4 processors (SMP) each, nodes connected by a gigabit ethernet ?... in other words, how does it manage SMP nodes when I want to use all CPUs

Re: [OMPI users] Runtime Error

2006-07-26 Thread Michael Kluskens
s point, I restarted my machine. Not sure if it's necessary or not. 8. Go back to the v1.1 directory. Type 'make clean', then reconfigure, then recompile and reinstall 9. Things should work now. Thank you Michael, ~Ben ++ Benjamin Landsteiner lands...@stolaf.edu

[OMPI users] BTL devices

2006-07-14 Thread Michael Kluskens
On Jun 24, 2006, at 1:19 PM, George Bosilca wrote: As your cluster has several network devices that are supported by Open MPI, it is possible that the configure script detected the correct path to their libraries. Therefore, they might be included/compiled by default in Open MPI. The simplest

Re: [OMPI users] auto detect hosts

2006-07-14 Thread Michael Kluskens
On Jun 29, 2006, at 1:31 PM, Jeff Squyres (jsquyres) wrote: I'm running on a cluster of dual-opterons running Debian Linux. Just using "mpirun -np 4 hostname" somehow OpenMPI located the second dual-opteron in the stack of machines but no more than that, regardless of how many processes I aske

Re: [OMPI users] keyval parser error after v1.1 upgrade

2006-06-26 Thread Michael Kluskens
't see how anything from 1.0.2 could be left over in the 1.1 installation. We aren't installing 1.1 over 1.0.2. 1.1 is configured, built, and installed in a completely different area. -Patrick Michael Kluskens wrote: You may have to properly uninstall OpenMPI 1.0.2 before

Re: [OMPI users] keyval parser error after v1.1 upgrade

2006-06-26 Thread Michael Kluskens
You may have to properly uninstall OpenMPI 1.0.2 before installing OpenMPI 1.1. This was an issue in the past. I would recommend you go into your OpenMPI 1.1 directory and type "make uninstall", then if you have it go into your OpenMPI 1.0.2 directory and do the same. If you don't have a d

Re: [OMPI users] auto detect hosts

2006-06-19 Thread Michael Kluskens
Corrected: How does OpenMPI auto-detect available hosts? I'm running on a cluster of dual-opterons running Debian Linux. Just using "mpirun -np 4 hostname" somehow OpenMPI located the second dual-opteron in the stack of machines but no more than that, regardless of how many processes I asked fo

[OMPI users] auto detect hosts

2006-06-19 Thread Michael Kluskens
How does OpenMPI auto-detect available hosts? I'm running on a cluster of dual-opterons running Debian Linux. Just using "mpirun -np 4 hostname" somehow OpenMPI located the second dual-opteron in the stack of machines but no more than that, regardless of how many processes I asked for. The

[OMPI users] MPI_Wtime

2006-06-19 Thread Michael Kluskens
Is anyone using MPI_Wtime with any version of OpenMPI under Fortran 90? I got my program to compile with MPI_Wtime commands, but the difference between two different times in the process is always zero. When compiling against OpenMPI I have to specify mytime = MPI_Wtime. For other MPI's I spe
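For reference, a minimal sketch of the usual MPI_WTIME pattern (MPI_WTIME is a double-precision function in the Fortran binding, so it is referenced with parentheses; the loop is just illustrative work to time):

    program time_test
      use mpi
      implicit none
      integer :: ier, i
      double precision :: t0, t1, s
      call MPI_INIT(ier)
      t0 = MPI_WTIME()                 ! note the (): MPI_WTIME is a function
      s = 0.0d0
      do i = 1, 10000000               ! illustrative work to time
        s = s + sqrt(dble(i))
      end do
      t1 = MPI_WTIME()
      print *, 'elapsed seconds:', t1 - t0, ' (checksum', s, ')'
      call MPI_FINALIZE(ier)
    end program time_test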

Re: [OMPI users] F90 interfaces again

2006-06-12 Thread Michael Kluskens
On Jun 9, 2006, at 12:33 PM, Brian W. Barrett wrote: On Thu, 8 Jun 2006, Michael Kluskens wrote: call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier) 1 Error: Generic subroutine 'mpi_waitall' at

[OMPI users] F90 interfaces again

2006-06-08 Thread Michael Kluskens
call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier) 1 Error: Generic subroutine 'mpi_waitall' at (1) is not consistent with a specific subroutine interface Issue: the 3rd argument of MPI_WAITALL expects an integer arr
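A minimal sketch of the workaround implied here: declare an explicit statuses array of shape (MPI_STATUS_SIZE, n) so the call matches the specific F90 interface, instead of passing MPI_STATUSES_IGNORE (buffer and variable names are illustrative):

    program waitall_demo
      use mpi
      implicit none
      integer :: ier, rank, i
      integer :: sbuf(3), rbuf(3)
      integer :: sp_request(3)
      integer :: statuses(MPI_STATUS_SIZE, 3)   ! explicit array, not MPI_STATUSES_IGNORE
      integer :: status(MPI_STATUS_SIZE)
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)
      sbuf = (/ 1, 2, 3 /)
      do i = 1, 3                               ! three nonblocking sends to self
        call MPI_ISEND(sbuf(i), 1, MPI_INTEGER, rank, i, MPI_COMM_WORLD, sp_request(i), ier)
      end do
      do i = 1, 3                               ! matching receives so the sends can complete
        call MPI_RECV(rbuf(i), 1, MPI_INTEGER, rank, i, MPI_COMM_WORLD, status, ier)
      end do
      call MPI_WAITALL(3, sp_request, statuses, ier)
      print *, 'rank', rank, 'received', rbuf
      call MPI_FINALIZE(ier)
    end program waitall_demo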

Re: [OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-06-02 Thread Michael Kluskens
v1.2. https://svn.open-mpi.org/trac/ompi/ticket/55 -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, May 30, 2006 3:40 PM To: Open MPI Users Subject: [OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimens

Re: [OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-06-02 Thread Michael Kluskens
clear there is much room for enhancement in large. Michael -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, May 30, 2006 6:19 PM To: Open MPI Users Subject: [OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interf

Re: [OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-05-30 Thread Michael Kluskens
My mistake: MPI_IN_PLACE is a "double complex" so the scripts below need to be fixed to reflect that. I don't know if the latest tarball for tonight contains these or other fixes that I have been looking at today. Michael On May 30, 2006, at 6:18 PM, Michael Kluskens

[OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-05-30 Thread Michael Kluskens
Found a serious issue for the f90 interfaces for --with-mpi-f90-size=large. Consider call MPI_REDUCE(MPI_IN_PLACE,sumpfi,sumpfmi,MPI_INTEGER,MPI_SUM, 0,allmpi,ier) Error: Generic subroutine 'mpi_reduce' at (1) is not consistent with a specific subroutine interface sumpfi is an integer
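The standard in-place reduction pattern the call above is aiming for looks roughly like the sketch below (names are illustrative; in the Fortran binding MPI_IN_PLACE is an INTEGER constant, which is why the typed sendbuf dummy in the "large" interface rejects it):

    program reduce_in_place
      use mpi
      implicit none
      integer, parameter :: n = 4
      integer :: ier, rank, vals(n), dummy(n)
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)
      vals = rank + 1
      if (rank == 0) then
        ! root passes MPI_IN_PLACE as sendbuf and receives the result in vals
        call MPI_REDUCE(MPI_IN_PLACE, vals, n, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ier)
        print *, 'sum at root:', vals
      else
        ! non-root ranks pass their data normally; recvbuf is not significant here
        call MPI_REDUCE(vals, dummy, n, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ier)
      end if
      call MPI_FINALIZE(ier)
    end program reduce_in_place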

[OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-05-30 Thread Michael Kluskens
Looking at limitations of the following: --with-mpi-f90-size=SIZE specify the types of functions in the Fortran 90 MPI module, where size is one of: trivial (MPI-2 F90-specific functions only), small (trivial + a
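A sketch of one mixed-dimension combination of the kind these interfaces miss: a scalar send buffer gathered into a rank-1 receive buffer (assuming this is among the combinations the ticket covers):

    program gather_mixed
      use mpi
      implicit none
      integer :: ier, rank, nprocs, myval
      integer, allocatable :: all_vals(:)
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ier)
      allocate(all_vals(nprocs))
      myval = 10 * rank
      ! scalar sendbuf, array recvbuf: buffers of different dimensions
      call MPI_GATHER(myval, 1, MPI_INTEGER, all_vals, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ier)
      if (rank == 0) print *, 'gathered:', all_vals
      call MPI_FINALIZE(ier)
    end program gather_mixed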

Re: [OMPI users] Wont run with 1.0.2

2006-05-25 Thread Michael Kluskens
One possibility is that you didn't properly uninstall version 1.0.1 before installing version 1.0.2 & 1.0.3. There was a change with some of the libraries a while back that caused me a similar problem. An install of later versions of OpenMPI does not remove certain libraries from 1.0.1. Yo

Re: [OMPI users] spawn failed with errno=-7

2006-05-25 Thread Michael Kluskens
I think I moved to OpenMPI 1.1 and 1.2 alphas because of problems with spawn and OpenMPI 1.0.1 & 1.0.2. You may wish to test building 1.1 and seeing if that solves your problem. Michael On May 24, 2006, at 1:48 PM, Jens Klostermann wrote: I did the following run with openmpi1.0.2

Re: [OMPI users] Fortran support not installing

2006-05-24 Thread Michael Kluskens
On May 24, 2006, at 11:24 AM, Terry Reeves wrote: Hello, everyone. I have g95 fortran installed. I'm told it works. I'm doing this for some grad students, I am not myself a programmer or a unix expert but I know a bit more than the basics. This is a Mac OS X dual G5 processor xserve runn

[OMPI users] MPI_Intercomm_merge broken

2006-05-03 Thread Michael Kluskens
,rank,' of',size,' exiting' end On May 2, 2006, at 11:54 PM, Jeff Squyres (jsquyres) wrote: Ok -- let me know what you find. I just checked and the code *looks* right to me, but that doesn't mean that there isn't some deeper

Re: [OMPI users] openmpi-1.0.2 configure problem

2006-05-02 Thread Michael Kluskens
would prefer not to if possible. -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Monday, May 01, 2006 6:20 PM To: Open MPI Users Subject: [OMPI users] openmpi-1.0.2 configure problem checking if FORTRAN compiler su

Re: [OMPI users] fortran flags using Absoft compilers

2006-05-02 Thread Michael Kluskens
On May 1, 2006, at 7:16 PM, Jeffrey Fox wrote: I get openmpi-1.0.2 to compile on a (small) G5 cluster. The C and C++ compilers work fine so far, but the mpif77 and mpif90 scripts send the wrong flags to the f77 and f90 compilers. Side note: I got the Absoft compilers to work using "./conf

[OMPI users] openmpi-1.0.2 configure problem

2006-05-01 Thread Michael Kluskens
checking if FORTRAN compiler supports integer(selected_int_kind (2))... yes checking size of FORTRAN integer(selected_int_kind(2))... unknown configure: WARNING: *** Problem running configure test! configure: WARNING: *** See config.log for details. configure: error: *** Cannot continue. Source
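What configure is probing here is whether the Fortran compiler supports an INTEGER of kind selected_int_kind(2) and how large it is. A standalone probe along these lines (illustrative only, not the actual configure test) can confirm what the compiler itself reports:

    program int_kind_probe
      implicit none
      ! request a kind that can represent at least 2 decimal digits
      integer, parameter :: ik2 = selected_int_kind(2)
      integer(kind=ik2) :: x
      print *, 'kind =', ik2, ', bit_size =', bit_size(x)
    end program int_kind_probe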

Re: [OMPI users] MPI_Intercomm_Merge -- Fortran

2006-05-01 Thread Michael Kluskens
I've noticed that I can't just fix this myself; very bad things happened to the merged communicator, so this is not a trivial fix, I gather. Michael On Apr 30, 2006, at 12:16 PM, Michael Kluskens wrote: MPI_Intercomm_Merge( intercomm, high, newintracomm, ier ) None of the bo

[OMPI users] MPI_Intercomm_Merge -- Fortran

2006-04-30 Thread Michael Kluskens
MPI_Intercomm_Merge( intercomm, high, newintracomm, ier ) None of the books I have state the variable type of the second argument for MPI_Intercomm_Merge for Fortran. Class notes I have from David Cronk state it is a Logical. In C it is an "int" with values of true and false. Looking at O
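A runnable sketch of the call (run on at least 2 processes). It builds the intercommunicator with MPI_INTERCOMM_CREATE rather than spawn just to stay in one executable; the point of interest is that the second argument is declared LOGICAL in the standard Fortran binding:

    program merge_demo
      use mpi
      implicit none
      integer :: ier, wrank, color, subcomm, intercomm, merged, mrank
      logical :: high
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, wrank, ier)
      ! split MPI_COMM_WORLD into two halves; world ranks 0 and 1 lead them
      color = mod(wrank, 2)
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, wrank, subcomm, ier)
      call MPI_INTERCOMM_CREATE(subcomm, 0, MPI_COMM_WORLD, 1 - color, 99, intercomm, ier)
      high = (color == 1)              ! LOGICAL "high" argument
      call MPI_INTERCOMM_MERGE(intercomm, high, merged, ier)
      call MPI_COMM_RANK(merged, mrank, ier)
      print *, 'world rank', wrank, '-> merged rank', mrank
      call MPI_FINALIZE(ier)
    end program merge_demo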

Re: [OMPI users] missing mpi_allgather_f90.f90.sh inopenmpi-1.2a1r9704

2006-04-27 Thread Michael Kluskens
eling for a few days (making these fixes take a little while). -Original Message- I made another test and the problem does not occur with --with-mpi-f90-size=medium. Michael On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote: Open MPI 1.2a1r9704 Summary: configure with --with-mp

Re: [OMPI users] missing mpi_allgather_f90.f90.sh in openmpi-1.2a1r9704

2006-04-26 Thread Michael Kluskens
I made another test and the problem does not occur with --with-mpi-f90-size=medium. Michael On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote: Open MPI 1.2a1r9704 Summary: configure with --with-mpi-f90-size=large and then make. /bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No

Re: [OMPI users] Spawn and Disconnect

2006-04-26 Thread Michael Kluskens
6, at 2:57 PM, Michael Kluskens wrote: I'm running OpenMPI 1.1 (v9704) and when a spawned process exits the parent does not die (see previous discussions about 1.0.1/1.0.2); however, the next time the parent tries to spawn a process MPI_Comm_spawn does not return. My test output below

[OMPI users] missing mpi_allgather_f90.f90.sh in openmpi-1.2a1r9704

2006-04-26 Thread Michael Kluskens
Open MPI 1.2a1r9704 Summary: configure with --with-mpi-f90-size=large and then make. /bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such file or directory I doubt this one is system specific --- my details: Building OpenMPI 1.2a1r9704 with g95 (Apr 23 2006) on OS X 10.4.6 using ./c

Re: [OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
Minor suggestion, change the first sentence to read: - The Fortran 90 MPI bindings can now be built in one of four sizes using --with-mpi-f90-size=SIZE. Also, Open MPI 1.2 changes the --with-mpi-param-check default from always to runtime according to my comparison of the 1.1 README and 1.

[OMPI users] Spawn and Disconnect

2006-04-25 Thread Michael Kluskens
I'm running OpenMPI 1.1 (v9704) and when a spawned process exits the parent does not die (see previous discussions about 1.0.1/1.0.2); however, the next time the parent tries to spawn a process MPI_Comm_spawn does not return. My test output below: parent: 0 of 1 parent: How many proce

Re: [OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
..@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, April 25, 2006 9:56 AM To: Open MPI Users Subject: [OMPI users] f90 module files compile a lot faster Strange thing, with the latest g95 and the last OpenMPI 1.1 (a3r9704) [on OS X 10.4.6] there does not

Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-25 Thread Michael Kluskens
it to the branch) -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, April 25, 2006 9:45 AM To: Open MPI Users Subject: Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr This problem still exists in O

[OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
Strange thing: with the latest g95 and the last OpenMPI 1.1 (a3r9704) [on OS X 10.4.6] there does not seem to be the compilation penalty for using "USE MPI" instead of "include 'mpif.h'" that there used to be. My test programs compile almost instantly. However, I'm still seeing: [a.b.c.d:2022

Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-25 Thread Michael Kluskens
) fixes in shortly (even the .h.sh script is generated from a marked up version of mpi.h -- don't ask ;-) ). I also corrected type_get_attr and win_get_attr. Thanks! -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens

Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-20 Thread Michael Kluskens
l *flag, MPI_Fint *ierr)); On Apr 20, 2006, at 2:24 PM, Michael Kluskens wrote: Error in: openmpi-1.1a3r9663/ompi/mpi/f90/mpi-f90-interfaces.h subroutine MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierr) include 'mpif.h' integer, intent(in) :: comm integer, intent

[OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-20 Thread Michael Kluskens
Error in: openmpi-1.1a3r9663/ompi/mpi/f90/mpi-f90-interfaces.h subroutine MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierr) include 'mpif.h' integer, intent(in) :: comm integer, intent(in) :: comm_keyval integer(kind=MPI_ADDRESS_KIND), intent(out) :: attribute_val inte
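For comparison, a sketch of the declarations a caller would use under the standard Fortran binding (attribute value of kind MPI_ADDRESS_KIND, flag as LOGICAL), here querying MPI_TAG_UB as an example attribute:

    program attr_demo
      use mpi
      implicit none
      integer :: ier
      integer(kind=MPI_ADDRESS_KIND) :: tag_ub
      logical :: flag                  ! LOGICAL in the Fortran binding
      call MPI_INIT(ier)
      call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_TAG_UB, tag_ub, flag, ier)
      if (flag) print *, 'maximum tag value:', tag_ub
      call MPI_FINALIZE(ier)
    end program attr_demo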

[OMPI users] OMPI-F90-CHECK macro needs to be updated?

2006-04-20 Thread Michael Kluskens
Getting warnings like: WARNING: *** Fortran 77 alignment for INTEGER (1) does not match WARNING: *** Fortran 90 alignment for INTEGER (4) WARNING: *** OMPI-F90-CHECK macro needs to be updated! same for LOGICAL, REAL, COMPLEX, INTEGER*2, INTEGER*4, INTEGER*8, etc. I believe these are new within

Re: [OMPI users] Compiling C++ program

2006-04-18 Thread Michael Kluskens
On Apr 18, 2006, at 9:06 AM, Shekhar Tyagi wrote: Brian I checked with the administrator of cluster in our department and according to him the MPI is of 1.2.5 version with the compilers being of PGI type, hope this might help you in solving the mpiCC problem. In which case it is MPICH an

Re: [OMPI users] ORTE errors

2006-04-11 Thread Michael Kluskens
[host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/ oob_base_xcast.c at line 108 [host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/ rmgr_base_stage_gate.c at line 276 child 0 of 1: Receiving 17 from parent Maximum user memory allocated: 0 Michael Michael Kluskens wrote

[OMPI users] ORTE errors

2006-04-10 Thread Michael Kluskens
The ORTE errors again, these are new and different errors. Tested as of OpenMPI 1.1a1r9596. [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/ soh_base_get_proc_soh.c at line 80 [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/ oob_base_xcast.c at line 108 [host:10198]

Re: [OMPI users] job running question

2006-04-10 Thread Michael Kluskens
You need to confirm that /etc/bashrc is actually being read in that environment; bash is a little different about which files get read depending on whether you log in interactively or not. Also, I don't think ~/.bashrc is read on a noninteractive login. Michael On Apr 10, 2006, at 1:06 PM, Adam

Re: [OMPI users] Open MPI installed locally

2006-04-03 Thread Michael Kluskens
On Apr 3, 2006, at 3:02 PM, Brian Barrett wrote: On Apr 3, 2006, at 2:50 PM, Rolf Vandevaart wrote: From what I have read from the Open MPI documentation, it seems that the recommendation is to install Open MPI on an NFS server that is accessible to all the nodes in the cell. Are there any

[OMPI users] XMPI ?

2006-03-29 Thread Michael Kluskens
XMPI is a GUI debugger that works with LAM/MPI. Is there anything similar that works with OpenMPI? Michael

Re: [OMPI users] Absoft fortran detected as g77?

2006-03-28 Thread Michael Kluskens
On Mar 28, 2006, at 1:22 PM, Brian Barrett wrote: On Mar 27, 2006, at 8:26 AM, Michael Kluskens wrote: On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote: On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote: I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do

Re: [OMPI users] Best MPI implementation

2006-03-27 Thread Michael Kluskens
On Mar 27, 2006, at 4:11 PM, Jeff Squyres (jsquyres) wrote: For your code, most MPI implementations (Open MPI, LAM/MPI, etc.) support the same API. So if it compiles/links with one, it *should* compile/link with the others (assuming you coded it in an MPI- conformant way). The MPI installe

Re: [OMPI users] MPI_ROOT - required where/when?

2006-03-27 Thread Michael Kluskens
ollectives in MPI-2, I am not aware that you need MPI_ROOT in intra-communicator collectives as defined in MPI-1. Thanks Edgar Michael Kluskens wrote: The constant MPI_ROOT is not universally defined in all current shipping MPI implementations. Is there any MPI function/call that require

[OMPI users] MPI_ROOT - required where/when?

2006-03-27 Thread Michael Kluskens
The constant MPI_ROOT is not universally defined in all current shipping MPI implementations. Is there any MPI function/call that requires MPI_ROOT? From the complete reference it appears that MPI_ALLGATHER might be the one routine. This all relates to portability, code I write using Open
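MPI_ROOT appears in the MPI-2 intercommunicator collectives: the root process of the contributing group passes MPI_ROOT, the rest of that group pass MPI_PROC_NULL, and the other group passes the root's rank in the remote group. A sketch (run on at least 2 processes; the intercommunicator is built with a split purely for illustration):

    program root_demo
      use mpi
      implicit none
      integer :: ier, wrank, color, subcomm, intercomm, val, root_arg
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, wrank, ier)
      color = mod(wrank, 2)
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, wrank, subcomm, ier)
      call MPI_INTERCOMM_CREATE(subcomm, 0, MPI_COMM_WORLD, 1 - color, 7, intercomm, ier)
      val = 42
      if (color == 0) then             ! broadcasting group
        if (wrank == 0) then
          root_arg = MPI_ROOT          ! the one true root
        else
          root_arg = MPI_PROC_NULL     ! other members of the root group
        end if
      else                             ! receiving group passes the root's remote rank
        root_arg = 0
        val = 0
      end if
      call MPI_BCAST(val, 1, MPI_INTEGER, root_arg, intercomm, ier)
      print *, 'world rank', wrank, 'val =', val
      call MPI_FINALIZE(ier)
    end program root_demo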

Re: [OMPI users] Absoft fortran detected as g77?

2006-03-27 Thread Michael Kluskens
On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote: On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote: I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do some testing I was trying to build OpenMPI 1.1a1r9364 with it and got the following funny result

[OMPI users] Absoft fortran detected as g77?

2006-03-23 Thread Michael Kluskens
I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do some testing I was trying to build OpenMPI 1.1a1r9364 with it and got the following funny result: *** Fortran 77 compiler checking whether we are using the GNU Fortran 77 compiler... yes checking whether f95 acce

[OMPI users] Error message about libopal.so

2006-03-22 Thread Michael Kluskens
Trying to find the cause of one or more errors; it might involve libopal.so. Built openmpi-1.1a1r9351 on Debian Linux on Opteron with PGI 6.1-3 using "./configure --with-gnu-ld F77=pgf77 FFLAGS=-fastsse FC=pgf90 FCFLAGS=-fastsse" My program generates the following error which I do not understan

Re: [OMPI users] mpif90 broken in recent tarballs of 1.1a1

2006-03-21 Thread Michael Kluskens
On Mar 20, 2006, at 7:22 PM, Brian Barrett wrote: On Mar 20, 2006, at 6:10 PM, Michael Kluskens wrote: I have identified what I think is the issue described below. Even though the default prefix is /usr/local, r9336 only works for me if I use ./configure --prefix=/usr/local Thank you for

[OMPI users] Sample code demonstrating issues with multiple versions of OpenMPI

2006-03-20 Thread Michael Kluskens
The sample code at the end of this message demonstrates issues with multiple versions of OpenMPI. OpenMPI 1.0.2a10 compiles the code but crashes because of the interface issues previously discussed. This is both using " USE MPI " and " include 'mpif.h' " OpenMPI 1.1a1r9336 generates the

Re: [OMPI users] mpif90 broken in recent tarballs of 1.1a1

2006-03-20 Thread Michael Kluskens
I have identified what I think is the issue described below. Even though the default prefix is /usr/local, r9336 only works for me if I use ./configure --prefix=/usr/local Michael On Mar 20, 2006, at 11:49 AM, Michael Kluskens wrote: Building Open MPI 1.1a1r9xxx on a PowerMac G4 running

[OMPI users] mpif90 broken in recent tarballs of 1.1a1

2006-03-20 Thread Michael Kluskens
Building Open MPI 1.1a1r9xxx on a PowerMac G4 running OS X 10.4.5 using 1) Apple gnu compilers from Xcode 2.2.1 2) fink-installed g95 setenv F77 g95 ; setenv FC g95 ; ./configure ; make all ; sudo make install r9212 (built ~week ago) worked but I was having some issues and wished to try a n

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-14 Thread Michael Kluskens
I see responses to noncritical parts of my discussion but not the following; is it a known issue, a fixed issue, or a "we don't want to discuss it" issue? Michael On Mar 7, 2006, at 4:39 PM, Michael Kluskens wrote: The following errors/warnings also exist when running my spawn test on a

Re: [OMPI users] Using Multiple Gigabit Ethernet Interface

2006-03-13 Thread Michael Kluskens
On Mar 11, 2006, at 1:00 PM, Jayabrata Chakrabarty wrote: Hi I have been looking for information on how to use multiple Gigabit Ethernet Interfaces for MPI communication. So far what I have found out is I have to use mca_btl_tcp. But what I wish to know is what IP Address to assign to each

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-07 Thread Michael Kluskens
On Mar 7, 2006, at 3:23 PM, Michael Kluskens wrote: Per the mpi_comm_spawn issues with the 1.0.x releases I started using 1.1r9212; with my sample code I'm getting messages of [-:13327] mca: base: component_find: unable to open: dlopen(/usr/local/lib/openmpi/mca_pml_teg.so, 9): Symbo

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-07 Thread Michael Kluskens
1.1 snapshot >= r9198. On Mar 1, 2006, at 12:30 PM, Michael Kluskens wrote: On Mar 1, 2006, at 9:56 AM, George Bosilca wrote: Now I look into this problem more and you're right, it's a missing interface. Somehow, it didn't get compiled. From "openmpi-1.0.1/ompi/mpi/

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-07 Thread Michael Kluskens
rying to finish that series and advance towards 1.1. Would you be amenable to using a 1.1.x snapshot? My commit should show up in any 1.1 snapshot >= r9198. On Mar 1, 2006, at 12:30 PM, Michael Kluskens wrote: On Mar 1, 2006, at 9:56 AM, George Bosilca wrote: Now I look into this problem mor

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-01 Thread Michael Kluskens
On Mar 1, 2006, at 9:56 AM, George Bosilca wrote: Now I look into this problem more and you're right, it's a missing interface. Somehow, it didn't get compiled. From "openmpi-1.0.1/ompi/mpi/f90/mpi-f90-interfaces.h" the interface says: subroutine MPI_Comm_spawn(command, argv, maxprocs, info,

Re: [OMPI users] MPI_COMM_SPAWN versus OpenMPI 1.0.1

2006-03-01 Thread Michael Kluskens
MPI_INFO_NULL, 0, MPI_COMM_WORLD, slavecomm, & MPI_ERRCODES_IGNORE, ierr ) and everything should work just fine. Just as a test I did this, no effect. The error remains. Michael george. PS: Use vim and the force will be with you. You ha

[OMPI users] MPI_COMM_SPAWN versus OpenMPI 1.0.1

2006-02-28 Thread Michael Kluskens
Using OpenMPI 1.0.1 compiled with g95 on OS X (same problem on Debian Linux with g95, I have not tested other compilers yet) mpif90 spawn.f90 -o spawn In file spawn.f90:35 MPI_COMM_WORLD, slavecomm, MPI_ERRCODES_IGNORE, ierr ) 1 Err
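For context, the parent side of a spawn call of this shape looks roughly like the sketch below ('./slave' is a hypothetical, separately compiled MPI executable; the thread is about the generated F90 interface rejecting the call, not about its shape):

    program spawn_parent
      use mpi
      implicit none
      integer :: ierr, rank, slavecomm
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      ! spawn 2 copies of './slave'; slavecomm returns as the parent-child intercommunicator
      call MPI_COMM_SPAWN('./slave', MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0, &
                          MPI_COMM_WORLD, slavecomm, MPI_ERRCODES_IGNORE, ierr)
      if (rank == 0) print *, 'parent: spawn returned'
      call MPI_FINALIZE(ierr)
    end program spawn_parent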

[O-MPI users] f90 compiling: USE MPI vs. include 'mpif.h'

2006-01-30 Thread Michael Kluskens
Question regarding f90 compiling. Using: USE MPI instead of include 'mpif.h' makes the compilation take an extra two minutes using g95 under OS X 10.4.4 (simple test program: 115 seconds versus 0.2 seconds). Is this normal? Michael
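A minimal test program of the sort presumably being timed here; the only change between the two measurements is whether the declarations come from the module (USE MPI) or the header (include 'mpif.h'):

    program use_mpi_test
      use mpi                          ! the alternative is: include 'mpif.h'
      implicit none
      integer :: ier, rank
      call MPI_INIT(ier)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)
      print *, 'hello from rank', rank
      call MPI_FINALIZE(ier)
    end program use_mpi_test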

Re: [O-MPI users] latest g95: size of FORTRAN integer(selected_int_kind(2))... unknown

2006-01-26 Thread Michael Kluskens
perly. Can you verify that everything is installed properly, and that g95 is able to link to C libraries? On Jan 24, 2006, at 3:11 PM, Michael Kluskens wrote: Building Open MPI 1.0.1 on a PowerMac running OS X 10.4.4 using 1) Apple gnu compilers from Xcode 2.2.1 2) fink-installed g77 3) lates

[O-MPI users] latest g95: size of FORTRAN integer(selected_int_kind(2))... unknown

2006-01-24 Thread Michael Kluskens
Building Open MPI 1.0.1 on a PowerMac running OS X 10.4.4 using 1) Apple gnu compilers from Xcode 2.2.1 2) fink-installed g77 3) latest g95 "G95 (GCC 4.0.1 (g95!) Jan 23 2006)" (the binary from G95 Home) setenv F77 g77 setenv FC g95 ./configure In the G95 section of the configure I get checkin