Re: [OMPI users] missing mpi_allgather_f90.f90.sh in openmpi-1.2a1r9704

2006-04-27 Thread Michael Kluskens
I've done yet another test and found the identical problem exists with openmpi-1.1a3r9704.


Michael

On Apr 26, 2006, at 8:38 PM, Jeff Squyres (jsquyres) wrote:


Ok, I am investigating -- I think I know what the problem is, but the
guy who did the bulk of the F90 work in OMPI is out traveling for a few
days (making these fixes take a little while).


-Original Message-

I made another test and the problem does not occur with --with-mpi-f90-size=medium.

Michael

On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote:


Open MPI 1.2a1r9704
Summary: configure with --with-mpi-f90-size=large and then make.

/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such file or directory

I doubt this one is system specific.
---
my details:

Building OpenMPI 1.2a1r9704 with g95 (Apr 23 2006) on OS X 10.4.6 using

./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=large

Configures fine, but make gives the error listed above.  However, no
error occurs if I don't specify f90-size=large.

./scripts/mpi_allgather_f90.f90.sh /Users/mkluskens/Public/MPI/OpenMPI/openmpi-1.2a1r9704/ompi/mpi/f90 > mpi_allgather_f90.f90
/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such file or directory
make[4]: *** [mpi_allgather_f90.f90] Error 127
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

mpi_allgather_f90.f90.sh does not exist in my configured and built
Open MPI 1.1a3r9704, so I can't compare between the two.

I assume it should be generated into ompi/mpi/f90/scripts.
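For reference, the failing and working configure invocations from this thread, side by side (flags exactly as reported above; only the F90 size setting differs):

```shell
# Fails at make time: the generator script is never created under
# ompi/mpi/f90/scripts, so make dies with "No such file or directory"
./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=large

# Workaround reported above: medium-size F90 bindings build cleanly
./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=medium
```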




[OMPI users] mpirun problem

2006-04-27 Thread sdamjad
Brian,

I cannot find the mpirun or mpicc executables.  Hence I send the logs.

thanks



Re: [OMPI users] mpirun problem

2006-04-27 Thread Brian Barrett

On Apr 27, 2006, at 12:09 PM, sdamjad wrote:


I cannot find the mpirun or mpicc executables.  Hence I send the logs.


It's generally useful to include that information in your report - I
couldn't tell what problem you were having from your log files.


Anyway, I would guess that the problem you are having is a simple
PATH issue - you installed the library into something that isn't in
your default PATH.  It looks like you installed into your home
directory, from your configure line:


$ ./configure --enable-mpi-f77 --prefix=/Users/amjad

That means the executables are in /Users/amjad/bin and libraries in
/Users/amjad/lib.  If you add /Users/amjad/bin into your PATH
environment variable, everything should work for you.
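A minimal sketch of that fix, assuming the /Users/amjad prefix from the configure line above (substitute your own prefix):

```shell
# Prepend the Open MPI install's bin directory to the search path for
# this shell session; add this line to ~/.profile to make it permanent.
export PATH="/Users/amjad/bin:$PATH"

# If the runtime linker cannot find the libraries, the library directory
# may need a similar treatment (LD_LIBRARY_PATH on Linux; Mac OS X uses
# DYLD_LIBRARY_PATH instead):
# export LD_LIBRARY_PATH="/Users/amjad/lib:$LD_LIBRARY_PATH"

# Show the updated search path
echo "$PATH"
```

After this, `which mpirun` (or `command -v mpirun`) should print the full path of the installed executable.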


Brian


--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/




[OMPI users] crash inside mca_btl_tcp_proc_remove

2006-04-27 Thread Marcus G. Daniels

Hi all,

I built 1.0.2 on Fedora 5 for x86_64 on a cluster set up as described
below, and I witness the same behavior when I try to run a job.  Any
ideas on the cause?


Jeff Squyres wrote:
> One additional question: are you using TCP as your communications
> network, and if so, do either of the nodes that you are running on
> have more than one TCP NIC? We recently fixed a bug for situations
> where at least one node in on multiple TCP networks, not all of which
> were shared by the nodes where the peer MPI processes were running.
> If this situation describes your network setup (e.g., a cluster where
> the head node has a public and a private network, and where the
> cluster nodes only have a private network -- and your MPI process was
> running on the head node and a compute node), can you try upgrading
> to the latest 1.0.2 release candidate tarball:
>
> http://www.open-mpi.org/software/ompi/v1.0/
>
>
$ mpiexec -machinefile ../bhost -np 9 ./ng
Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)
Failing at addr:0x6
[0] func:/opt/openmpi/1.0.2a9/lib/libopal.so.0 [0x2c062d0c]
[1] func:/lib64/tls/libpthread.so.0 [0x3b8d60c320]
[2] func:/opt/openmpi/1.0.2a9/lib/openmpi/mca_btl_tcp.so(mca_btl_tcp_proc_remove+0xb5) [0x2e6e4c65]
[3] func:/opt/openmpi/1.0.2a9/lib/openmpi/mca_btl_tcp.so [0x2e6e2b09]
[4] func:/opt/openmpi/1.0.2a9/lib/openmpi/mca_btl_tcp.so(mca_btl_tcp_add_procs+0x157) [0x2e6dfdd7]
[5] func:/opt/openmpi/1.0.2a9/lib/openmpi/mca_bml_r2.so(mca_bml_r2_add_procs+0x231) [0x2e3cd1e1]
[6] func:/opt/openmpi/1.0.2a9/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_add_procs+0x94) [0x2e1b1f44]
[7] func:/opt/openmpi/1.0.2a9/lib/libmpi.so.0(ompi_mpi_init+0x3af) [0x2bdd2d7f]
[8] func:/opt/openmpi/1.0.2a9/lib/libmpi.so.0(MPI_Init+0x93) [0x2bdbeb33]
[9] func:/opt/openmpi/1.0.2a9/lib/libmpi.so.0(MPI_INIT+0x28) [0x2bdce948]
[10] func:./ng(MAIN__+0x38) [0x4022a8]
[11] func:./ng(main+0xe) [0x4126ce]
[12] func:/lib64/tls/libc.so.6(__libc_start_main+0xdb) [0x3b8cb1c4bb]
[13] func:./ng [0x4021da]
*** End of error message ***

Bye,
Czarek 
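[Editor's note] Besides upgrading to the 1.0.2 release candidate as Jeff suggests, a workaround sometimes used for the mixed-network situation he describes is to restrict the TCP BTL to the one interface all nodes share.  This is a sketch only: the interface name eth1 is an assumption (check yours with ifconfig), and the rest of the command line mirrors the failing run above:

```shell
# Hypothetical workaround: tell the TCP BTL to use only the private
# interface that both the head node and the compute nodes can reach
mpiexec --mca btl_tcp_if_include eth1 -machinefile ../bhost -np 9 ./ng
```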




[OMPI users] error running MPI

2006-04-27 Thread Jorge Parra

Hi,

I configured and built (make all install) Open MPI successfully (no
errors).  I am doing a cross-platform build: the host is a PPC 405 and
the build machine is an i686.  Once I successfully built it, I wanted to
run "ompi_info" to check the installation, so I copied the whole prefix
directory to the host platform and executed ompi_info.


I got the following error:

root@ml403:/opt/mpi-ppc-405-linux/exec/bin# ./ompi_info
./ompi_info: error while loading shared libraries: libstdc++.so.6: cannot open y


The host platform is a minimal Linux installation (just the kernel, the
filesystem, and a few commands), so I understand I should have something
else installed.  Is that the problem?  If so, what should I install on
the host platform to make it run?


Thank you,

Jorge


Re: [OMPI users] error running MPI

2006-04-27 Thread Brian Barrett

On Apr 27, 2006, at 5:26 PM, Jorge Parra wrote:


I configured and built (make all install) Open MPI successfully (no
errors).  I am doing a cross-platform build: the host is a PPC 405 and
the build machine is an i686.  Once I successfully built it, I wanted to
run "ompi_info" to check the installation, so I copied the whole prefix
directory to the host platform and executed ompi_info.

I got the following error:

root@ml403:/opt/mpi-ppc-405-linux/exec/bin# ./ompi_info
./ompi_info: error while loading shared libraries: libstdc++.so.6: cannot open y

The host platform is a minimal Linux installation (just the kernel, the
filesystem, and a few commands), so I understand I should have something
else installed.  Is that the problem?  If so, what should I install on
the host platform to make it run?


ompi_info is a C++ application, so it needs the C++ support libraries
installed -- libstdc++.so.6.  I'm not sure how your host platform
handles package management, but usually the library is provided in a
package named libstdc++ or something similar.
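One way to see exactly which libraries are still missing before installing anything (ldd is commonly present even on small systems; the ./ompi_info path is taken from the session above, and the package name in the comment is an assumption for your distribution):

```shell
# List the shared libraries ompi_info needs; any that the dynamic
# linker cannot resolve are printed with "not found"
ldd ./ompi_info | grep "not found"

# On the RPM-based build machine, the matching library typically comes
# from a package along the lines of:
#   yum install libstdc++
# then copy the installed .so (built for PPC!) onto the target system.
```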



Hope this helps,

Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/