Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Scott Beardsley
The real problem is that it looks like we have a bug in our F90 bindings. :-( We have the "periods" argument typed as an integer array, when it really should be a logical array. Doh! Ahhh ha! I checked the manpage vs the user's code but I didn't check the OpenMPI code. I can confirm that
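For illustration, a minimal sketch of the corrected typing (the grid shape and variable names are invented here; run with exactly four processes, e.g. mpirun -np 4). The point is that "periods" is a LOGICAL array in both MPI_CART_CREATE and MPI_CART_GET:

    program cartget
      use mpi
      implicit none
      integer :: ierr, comm_cart, dims(2), coords(2)
      logical :: periods(2), reorder
      call MPI_INIT(ierr)
      dims    = (/ 2, 2 /)
      periods = (/ .true., .false. /)   ! LOGICAL, not INTEGER
      reorder = .false.
      call MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, reorder, &
                           comm_cart, ierr)
      ! MPI_CART_GET hands back dims, periods, and this rank's coords;
      ! "periods" must again be typed LOGICAL
      call MPI_CART_GET(comm_cart, 2, dims, periods, coords, ierr)
      call MPI_FINALIZE(ierr)
    end program cartget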

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Jeff Squyres
"use mpi" basically gives you stronger type checking in Fortran 90 that you don't get with Fortran 77. So the error you're seeing is basically a compiler error telling you that you have the wrong types for MPI_CART_GET and that it doesn't match any of the functions provided by Open MPI.

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Edmund Sumbar
On Wed, Jul 30, 2008 at 01:15:54PM -0700, Scott Beardsley wrote:
> Brock Palen wrote:
> > On all MPI's I have always used there was only MPI
> >
> > use mpi;
>
> Please excuse my admittedly gross ignorance of all things Fortran but
> why does "include 'mpif.h'" work but "use mpi" does not?

When

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Joe Griffin
Scott, "include" brings in a file; "use" brings in a module ... kind of like an object file. Joe

> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Scott Beardsley
> Sent: Wednesday, July 30, 2008 1:16 PM
> To: Open MPI Users
> Subject:
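To make Joe's distinction concrete, a small non-MPI sketch (the file and module names are invented for illustration): the module is compiled once into a .mod plus an object file, and "use" pulls in its typed interface, whereas include 'file' merely pastes in source text.

    ! constants.f90 -- compile first (e.g. mpif90 -c constants.f90),
    ! producing constants.mod the same way the Open MPI build produces mpi.mod
    module constants
      implicit none
      integer, parameter :: answer = 42
    end module constants

    ! main.f90 -- "use" gives compile-time checking of everything the module exports
    program demo
      use constants
      implicit none
      print *, 'the answer is', answer
    end program demo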

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Brock Palen
I have seen strange things with Fortran compilers and the suffix of files. "use mpi" is a Fortran 90 thing, not 77, and many compilers want Fortran 90 code to end in .f90 or .F90. Try renaming cartfoo.f to cartfoo.f90 and try again. I have attached a helloworld.f90 that uses "use mpi" that wor
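The attachment itself isn't reproduced in the archive; a minimal helloworld.f90 along the lines Brock describes might look like the following (built with mpif90 helloworld.f90 -o helloworld and run under mpirun):

    program helloworld
      use mpi                  ! needs mpi.mod, hence the .f90 suffix matters
      implicit none
      integer :: ierr, rank, nprocs
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      print *, 'hello from rank', rank, 'of', nprocs
      call MPI_FINALIZE(ierr)
    end program helloworld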

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Jeff Squyres
This is correct; Open MPI only generates MPI.mod so that you can "use mpi" in your Fortran app. I'm not sure what MPI1.mod, MPI2.mod, and f90base are -- perhaps those are somehow specific artifacts of the other MPI implementation, and/or artifacts of the Fortran compiler...?

On Jul 30,

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Scott Beardsley
Brock Palen wrote:
> On all MPI's I have always used there was only MPI
>
> use mpi;

Please excuse my admittedly gross ignorance of all things Fortran but why does "include 'mpif.h'" work but "use mpi" does not? When I try the "use mpi" method I get errors like:

$ mpif90 -c cart.f
call mp

Re: [OMPI users] Missing F90 modules

2008-07-30 Thread Brock Palen
On all MPI's I have always used there was only MPI

use mpi;

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985

On Jul 30, 2008, at 1:45 PM, Scott Beardsley wrote:
I'm attempting to move to OpenMPI from another MPICH-derived implementation. I compi

[OMPI users] Missing F90 modules

2008-07-30 Thread Scott Beardsley
I'm attempting to move to OpenMPI from another MPICH-derived implementation. I compiled openmpi 1.2.6 using the following configure:

./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr/mpi/pathscale/openmpi

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Ralph Castain
Just to be clear: you do not require a daemon on every node. You just need one daemon - sitting somewhere - that can act as the data server for MPI_Publish_name/MPI_Lookup_name. You then tell each app where to find it. Normally, mpirun fills that function. But if you don't have it, you can kick off a

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Robert Kubrick
On Jul 30, 2008, at 11:12 AM, Mark Borgerding wrote:

I appreciate the suggestion about running a daemon on each of the remote nodes, but wouldn't I kind of be reinventing the wheel there? Process management is one of the things I'd like to be able to count on ORTE for. Would the following

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Jeff Squyres
On Jul 30, 2008, at 11:12 AM, Mark Borgerding wrote:

I appreciate the suggestion about running a daemon on each of the remote nodes, but wouldn't I kind of be reinventing the wheel there? Process management is one of the things I'd like to be able to count on ORTE for.

Keep in mind that t

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Mark Borgerding
I appreciate the suggestion about running a daemon on each of the remote nodes, but wouldn't I kind of be reinventing the wheel there? Process management is one of the things I'd like to be able to count on ORTE for. Would the following work to give the parent process an intercomm with each c

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Ralph Castain
Okay, I tested it and MPI_Publish_name and MPI_Lookup_name work on 1.2.6, so this may provide an avenue (albeit cumbersome) for you to get this to work. It may require a server, though, to make it work - your first MPI proc may be able to play that role if you pass its contact info to the
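The MPI-2 calls in question are MPI_PUBLISH_NAME and MPI_LOOKUP_NAME. A hedged sketch of the pattern Ralph outlines (the service name 'my-service' is invented for illustration, the two programs are launched separately, and, as noted above, a data server such as mpirun must be reachable for the lookup to succeed):

    ! server.f90 -- open a port and publish it under a service name
    program nameserver
      use mpi
      implicit none
      integer :: ierr, intercomm
      character(len=MPI_MAX_PORT_NAME) :: port
      call MPI_INIT(ierr)
      call MPI_OPEN_PORT(MPI_INFO_NULL, port, ierr)
      call MPI_PUBLISH_NAME('my-service', MPI_INFO_NULL, port, ierr)
      call MPI_COMM_ACCEPT(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, intercomm, ierr)
      ! ... talk over the intercommunicator ...
      call MPI_UNPUBLISH_NAME('my-service', MPI_INFO_NULL, port, ierr)
      call MPI_FINALIZE(ierr)
    end program nameserver

    ! client.f90 -- look the port up and connect
    program nameclient
      use mpi
      implicit none
      integer :: ierr, intercomm
      character(len=MPI_MAX_PORT_NAME) :: port
      call MPI_INIT(ierr)
      call MPI_LOOKUP_NAME('my-service', MPI_INFO_NULL, port, ierr)
      call MPI_COMM_CONNECT(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, intercomm, ierr)
      ! ... talk over the intercommunicator ...
      call MPI_FINALIZE(ierr)
    end program nameclient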

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Ralph Castain
The problem would be finding a way to tell all the MPI apps how to contact each other, as the Intercomm procedure needs that info to complete. I don't recall if the MPI_Publish_name/MPI_Lookup_name functions worked in 1.2 - I'm building the code now to see. If they do, then you could use them to get t

Re: [OMPI users] Communication between OpenMPI and ClusterTools

2008-07-30 Thread Alexander Shabarshin
OK, thanks! Is it possible to fix it somehow directly in the 1.2.x codebase?

- Original Message -
From: "Terry Dontje"
To:
Sent: Wednesday, July 30, 2008 7:15 AM
Subject: Re: [OMPI users] Communication between OpenMPI and ClusterTools

One last note to close this out. After some disc

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Robert Kubrick
Mark, if you can run a server process on the remote machine, you could send a request from your local MPI app to your server, then use an Intercomm to link the local process to the new remote process?

On Jul 30, 2008, at 9:55 AM, Mark Borgerding wrote:

I'm afraid I can't dictate to the cust
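A hedged sketch of Robert's suggestion without any name service: the server opens a port and the port string travels to the client out of band, here via a file (the name port.txt is invented for illustration, and a shared filesystem is assumed):

    ! portserver.f90 -- open a port, leave it where the client can find it
    program portserver
      use mpi
      implicit none
      integer :: ierr, intercomm
      character(len=MPI_MAX_PORT_NAME) :: port
      call MPI_INIT(ierr)
      call MPI_OPEN_PORT(MPI_INFO_NULL, port, ierr)
      open (10, file='port.txt', status='replace')
      write (10, '(a)') trim(port)
      close (10)
      call MPI_COMM_ACCEPT(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, intercomm, ierr)
      ! ... service requests over the intercommunicator ...
      call MPI_FINALIZE(ierr)
    end program portserver

    ! portclient.f90 -- read the port string back and connect
    program portclient
      use mpi
      implicit none
      integer :: ierr, intercomm
      character(len=MPI_MAX_PORT_NAME) :: port
      call MPI_INIT(ierr)
      open (10, file='port.txt', status='old')
      read (10, '(a)') port
      close (10)
      call MPI_COMM_CONNECT(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, intercomm, ierr)
      call MPI_FINALIZE(ierr)
    end program portclient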

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Mark Borgerding
I'm afraid I can't dictate to the customer that they must upgrade. The target platform is RHEL 5.2 (uses openmpi 1.2.6). I will try to find some sort of workaround. Any suggestions on how to "fake" the functionality of MPI_Comm_spawn are welcome. To reiterate my needs: I am writing a shared o

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Mark Borgerding
Just to clarify: the test code I wrote does *not* use MPI_Comm_spawn in the mpirun case. The problem may or may not exist under mpirun.

Ralph Castain wrote:
As your own tests have shown, it works fine if you just "mpirun -n 1 ./spawner". It is only singleton comm_spawn that appears to be hav

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Ralph Castain
Singleton comm_spawn works fine on the 1.3 release branch - if singleton comm_spawn is critical to your plans, I suggest moving to that version. You can get a pre-release version from the www.open-mpi.org web site.

On Jul 30, 2008, at 6:58 AM, Ralph Castain wrote:
As your own tests have

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Ralph Castain
As your own tests have shown, it works fine if you just "mpirun -n 1 ./spawner". It is only singleton comm_spawn that appears to be having a problem in the latest 1.2 release. So I don't think comm_spawn is "useless". ;-) I'm checking this morning to ensure that singletons properly spawn o

Re: [OMPI users] How to specify hosts for MPI_Comm_spawn

2008-07-30 Thread Mark Borgerding
I keep checking my email in hopes that someone will come up with something that Matt or I might've missed. I'm just having a hard time accepting that something so fundamental would be so broken. The MPI_Comm_spawn command is essentially useless without the ability to spawn processes on other n
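For reference, the standard way to aim MPI_Comm_spawn at a particular node is an MPI_Info object with the "host" key; a minimal sketch follows (the host name node02 and the ./child binary are invented for illustration, and on 1.2 this is subject to the singleton bug discussed above):

    program spawner
      use mpi
      implicit none
      integer :: ierr, info, intercomm, errcodes(2)
      call MPI_INIT(ierr)
      call MPI_INFO_CREATE(info, ierr)
      ! ask the runtime to place the children on the named node
      call MPI_INFO_SET(info, 'host', 'node02', ierr)
      call MPI_COMM_SPAWN('./child', MPI_ARGV_NULL, 2, info, 0, &
                          MPI_COMM_SELF, intercomm, errcodes, ierr)
      call MPI_INFO_FREE(info, ierr)
      call MPI_FINALIZE(ierr)
    end program spawner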

Re: [OMPI users] Segmentation fault: Address not mapped

2008-07-30 Thread James Philbin
Hi,

OK, to answer my own question, I recompiled OpenMPI appending '--with-memory-manager=none' to configure and now things seem to run fine. I'm not sure how this might affect performance, but at least it's working now. Maybe this can be put in the FAQ?

James

On Wed, Jul 30, 2008 at 2:02 AM, Jam

Re: [OMPI users] TCP Latency

2008-07-30 Thread Andy Georgi
Thanks again for all the answers. It seems that there was a bug in the driver in combination with SUSE Linux Enterprise Server 10. It was fixed with version 1.0.146. Now we have 12us with NPtcp and 22us with NPmpi. This is still not fast enough, but acceptable for the time being. I will check the alter

Re: [OMPI users] Communication between OpenMPI and ClusterTools

2008-07-30 Thread Terry Dontje
One last note to close this out. After some discussion on the developers list, it was pointed out that this problem was fixed with new code in the trunk and the 1.3 branch. So my statement below, that the trunk, 1.3, and CT8 EA2 support nodes on different subnets, can be made stronger: we reall