Re: [OMPI users] Open-MPI and TCP port range

2006-04-20 Thread Jeff Squyres (jsquyres)
Greetings. Apologies it's taken us so long to reply -- we're all at an Open MPI workshop this week and it's consuming just about all of our time. Right now, there is no way to restrict the port range that Open MPI will use. We simply ask the operating system for available ports and it gives us a
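
Should such a restriction ever be added, a minimal sketch of what a user-level port-range limit could look like at the socket layer -- the helper name and the range bounds are hypothetical, and this is not Open MPI's actual code:

    /* Hypothetical helper, for illustration only: return a TCP socket
       bound to the first free port in [lo, hi], or -1 on failure. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static int bind_in_range(uint16_t lo, uint16_t hi)
    {
        for (int port = lo; port <= hi; ++port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;                 /* no sockets available */

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons((uint16_t)port);

            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                 /* success: port is ours */
            close(fd);                     /* busy; try the next port */
        }
        return -1;                         /* whole range exhausted */
    }

    int main(void)
    {
        int fd = bind_in_range(50000, 50010);  /* hypothetical range */
        if (fd >= 0) close(fd);
        return fd >= 0 ? 0 : 1;
    }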

[OMPI users] Configuration error

2006-04-20 Thread sdamjad
Folks, I am trying to compile Open MPI v1.0.2 on Mac OS X 10.4.6. It gives me an ac_nonexistent.h error. I am enclosing my config.log here. (Attachment: config.out)

Re: [OMPI users] Configuration error

2006-04-20 Thread Brian Barrett
On Apr 20, 2006, at 8:23 AM, sdamjad wrote:
> Folks, I am trying to compile Open MPI v1.0.2 on Mac OS X 10.4.6. It gives me
> an ac_nonexistent.h error. I am enclosing my config.log here.
The config.out file you attached points to a problem with your Fortran compiler. However, without seeing the con

[OMPI users] OMPI-F90-CHECK macro needs to be updated?

2006-04-20 Thread Michael Kluskens
Getting warnings like:
  WARNING: *** Fortran 77 alignment for INTEGER (1) does not match
  WARNING: *** Fortran 90 alignment for INTEGER (4)
  WARNING: *** OMPI-F90-CHECK macro needs to be updated!
same for LOGICAL, REAL, COMPLEX, INTEGER*2, INTEGER*4, INTEGER*8, etc. I believe these are new within
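
Configure-time alignment checks of this kind are typically implemented with a struct-offset probe; a generic sketch in C of the technique (not the actual OMPI_F90_CHECK macro):

    /* Generic illustration of an alignment probe: the padding the
       compiler inserts before 'x' equals the alignment of int. */
    #include <stdio.h>
    #include <stddef.h>

    struct probe { char pad; int x; };

    int main(void)
    {
        printf("alignment of int: %lu\n",
               (unsigned long) offsetof(struct probe, x));
        return 0;
    }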

[OMPI users] OpenMPI and SLURM configuration ?

2006-04-20 Thread 杨科
Hi, all. Recently I've installed Open MPI on our cluster (4 nodes, each with its own installation), but I found it does not work well with the underlying resource management system, SLURM. That is, after I typed the following, mpirun seemed to hang there:
[semper@IA64_node2] srun -N 2 -A
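
A rough sketch of the intended workflow, assuming srun's old allocate mode (-A) spawns a shell inside the allocation (the commands are the ones from the post above):

    [semper@IA64_node2] srun -N 2 -A            # allocate 2 nodes; spawns a subshell
    [semper@IA64_node2] mpirun -np 2 /tmp/cpi   # should run inside the SLURM allocation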

Re: [OMPI users] OpenMPI and SLURM configuration ?

2006-04-20 Thread George Bosilca
On Apr 20, 2006, at 9:57 AM, 杨科 wrote:
> [semper@IA64_node2] srun -N 2 -A
> [semper@IA64_node2] mpirun -np 2 /tmp/cpi
Do you really have a shared /tmp on your cluster? We do not copy the file to the nodes; they have to be on a shared file system, or at least they have to exist in the same place on all the nodes.

Re: [OMPI users] OpenMPI and SLURM configuration ?

2006-04-20 Thread 杨科
> Do you really have a shared /tmp on your cluster? We do not copy the
> file to the nodes; they have to be on a shared file system, or at
> least they have to exist in the same place on all the nodes.
>
> george.
No, but I put a copy of cpi in the /tmp directory on each node, because I wonde
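
If no shared file system is available, staging the binary at an identical path on every node (as done above) is sufficient; a small sketch, with hypothetical host names:

    # hypothetical node names; put the binary at the same path everywhere
    for host in node1 node2 node3 node4; do
        scp /tmp/cpi $host:/tmp/cpi
    done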

Re: [OMPI users] Open-MPI and TCP port range

2006-04-20 Thread Bogdan Costescu
On Thu, 20 Apr 2006, Jeff Squyres (jsquyres) wrote:
> Right now, there is no way to restrict the port range that Open MPI
> will use. ... If this becomes a problem for you (i.e., the random
> MPI-chose-the-same-port-as-your-app events happen a lot), let us
> know and we can probably put in some co

[OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-20 Thread Michael Kluskens
Error in: openmpi-1.1a3r9663/ompi/mpi/f90/mpi-f90-interfaces.h

  subroutine MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierr)
    include 'mpif.h'
    integer, intent(in) :: comm
    integer, intent(in) :: comm_keyval
    integer(kind=MPI_ADDRESS_KIND), intent(out) :: attribute_val
    inte
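
For comparison, the C binding returns the attribute as an address-sized value, which is why attribute_val needs MPI_ADDRESS_KIND on the Fortran side; a minimal C usage sketch querying MPI_TAG_UB:

    /* Query the MPI_TAG_UB attribute via the C binding. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        void *val;   /* returned as a pointer-sized value in C, hence */
        int   flag;  /* MPI_ADDRESS_KIND in the Fortran interface     */

        MPI_Init(&argc, &argv);
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &val, &flag);
        if (flag)
            printf("MPI_TAG_UB = %d\n", *(int *)val);
        MPI_Finalize();
        return 0;
    }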

Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-20 Thread Michael Kluskens
The file 'ompi/mpi/f90/mpi-f90-interfaces.h' is automatically generated by ompi/mpi/f90/scripts/mpi-f90-interfaces.h.sh, correct? I couldn't get my temporary fix to stick, so I modified the latter. Should it be:

  subroutine ${procedure}(comm, comm_keyval, attribute_val

Re: [OMPI users] Open-MPI and TCP port range

2006-04-20 Thread Jeff Squyres (jsquyres)
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Bogdan Costescu
> Sent: Thursday, April 20, 2006 10:32 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Open-MPI and TCP port range
>
> On Thu, 20 Apr 2006, Jeff Squyres (jsquyres

Re: [OMPI users] OpenMPI and SLURM configuration ?

2006-04-20 Thread Jeff Squyres (jsquyres)
No, the location of $HOME should not matter. What happens if you "mpirun -np 2 uptime"? (i.e., use mpirun to launch a non-MPI application)
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of 杨科
> Sent: Thursday, April 20, 2006 10:1

Re: [OMPI users] Open-MPI and TCP port range

2006-04-20 Thread Ralph Castain
Just as a further point here - the biggest issue with making a routable public IP address is deciding what that address should be. This is not a simple problem as (a) we operate exclusively at the user level, and so (b) we can't define a single address that we can reliably know from a remote lo

Re: [OMPI users] OpenMPI and SLURM configuration ?

2006-04-20 Thread semper
> No, the location of $HOME should not matter.
>
> What happens if you "mpirun -np 2 uptime"? (i.e., use mpirun to launch
> a non-MPI application)
Thanks, it returns the right result! But still only 2 local processes. I tried again to add a hostfile option "--hostfile $HOME/openmpi/bin/host
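
For what it's worth, an Open MPI hostfile is a plain list of host names with optional slot counts; a sketch with hypothetical node names:

    # hypothetical contents of the hostfile
    node1 slots=2
    node2 slots=2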

Re: [OMPI users] (no subject)

2006-04-20 Thread Sang Chul Choi
I think that you need to install Open MPI on the other machines as well. You might want to set up NFS (network file system) on the master (what you are calling your local machine) so that your slave nodes can see your MPI executable.
> bash line 1: orted : command not found
This error might go away if y
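
The usual fix for "orted: command not found" is to make Open MPI's bin and lib directories visible to non-interactive shells on every node; the install prefix below is an assumption:

    # in ~/.bashrc (or equivalent) on every node; /opt/openmpi is hypothetical
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH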