Re: [OMPI users] Problems on large clusters

2011-06-22 Thread Thorsten Schuett
Sure. It's an SGI ICE cluster with dual-rail IB. The HCAs are Mellanox 
ConnectX IB DDR.

This is a 2040 cores job. I use 255 nodes with one MPI task on each node and 
use 8-way OpenMP.

I don't need -np and -machinefile, because mpiexec picks up this information 
from PBS.
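
For reference, a minimal sketch of what such a hybrid submission can look like
under PBS; the resource request, walltime, and binary name are placeholders,
not the actual job script:

#!/bin/sh
#PBS -l nodes=255:ppn=8          # hypothetical Torque-style request
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8         # 8-way OpenMP inside each MPI task

# one MPI rank per node; Open MPI reads the node list from PBS itself
mpiexec --npernode 1 --mca btl self,openib --mca mpi_leave_pinned 0 ./a.out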

Thorsten

On Tuesday, June 21, 2011, Gilbert Grosdidier wrote:
> Bonjour Thorsten,
> 
>   Could you please be a little bit more specific about the cluster
> itself ?
> 
>   G.
> 
> Le 21 juin 11 à 17:46, Thorsten Schuett a écrit :
> > Hi,
> > 
> > I am running openmpi 1.5.3 on a IB cluster and I have problems
> > starting jobs
> > on larger node counts. With small numbers of tasks, it usually
> > works. But now
> > the startup failed three times in a row using 255 nodes. I am using
> > 255 nodes
> > with one MPI task per node and the mpiexec looks as follows:
> > 
> > mpiexec --mca btl self,openib --mca mpi_leave_pinned 0 ./a.out
> > 
> > After ten minutes, I pulled a stracktrace on all nodes and killed
> > the job,
> > because there was no progress. In the following, you will find the
> > stack trace
> > generated with gdb thread apply all bt. The backtrace looks
> > basically the same
> > on all nodes. It seems to hang in mpi_init.
> > 
> > Any help is appreciated,
> > 
> > Thorsten
> > 
> > Thread 3 (Thread 46914544122176 (LWP 28979)):
> > #0  0x2b6ee912d9a2 in select () from /lib64/libc.so.6
> > #1  0x2b6eeabd928d in service_thread_start (context=<value optimized out>)
> > at btl_openib_fd.c:427
> > #2  0x2b6ee835e143 in start_thread () from /lib64/libpthread.so.0
> > #3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
> > #4  0x in ?? ()
> > 
> > Thread 2 (Thread 46916594338112 (LWP 28980)):
> > #0  0x2b6ee912b8b6 in poll () from /lib64/libc.so.6
> > #1  0x2b6eeabd7b8a in btl_openib_async_thread (async=<value optimized
> > out>) at btl_openib_async.c:419
> > #2  0x2b6ee835e143 in start_thread () from /lib64/libpthread.so.0
> > #3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
> > #4  0x in ?? ()
> > 
> > Thread 1 (Thread 47755361533088 (LWP 28978)):
> > #0  0x2b6ee9133fa8 in epoll_wait () from /lib64/libc.so.6
> > #1  0x2b6ee87745db in epoll_dispatch (base=0xb79050, arg=0xb558c0,
> > tv=<value optimized out>) at epoll.c:215
> > #2  0x2b6ee8773309 in opal_event_base_loop (base=0xb79050,
> > flags=<value optimized out>) at event.c:838
> > #3  0x2b6ee875ee92 in opal_progress () at runtime/
> > opal_progress.c:189
> > #4  0x39f1 in ?? ()
> > #5  0x2b6ee87979c9 in std::ios_base::Init::~Init () at
> > ../../.././libstdc++-v3/src/ios_init.cc:123
> > #6  0x7fffc32c8cc8 in ?? ()
> > #7  0x2b6ee9d20955 in orte_grpcomm_bad_get_proc_attr (proc=<value optimized out>, attribute_name=0x2b6ee88e5780 " \020322351n+",
> > val=0x2b6ee875ee92, size=0x7fffc32c8cd0) at grpcomm_bad_module.c:500
> > #8  0x2b6ee86dd511 in ompi_modex_recv_key_value (key=<value optimized
> > out>, source_proc=<value optimized out>, value=0xbb3a00, dtype=14
> > '\016') at
> > runtime/ompi_module_exchange.c:125
> > #9  0x2b6ee86d7ea1 in ompi_proc_set_arch () at proc/proc.c:154
> > #10 0x2b6ee86db1b0 in ompi_mpi_init (argc=15, argv=0x7fffc32c92f8,
> > requested=<value optimized out>, provided=0x7fffc32c917c) at
> > runtime/ompi_mpi_init.c:699
> > #11 0x7fffc32c8e88 in ?? ()
> > #12 0x2b6ee77f8348 in ?? ()
> > #13 0x7fffc32c8e60 in ?? ()
> > #14 0x7fffc32c8e20 in ?? ()
> > #15 0x09efa994 in ?? ()
> > #16 0x in ?? ()
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> --
> *-*
>Gilbert Grosdidier gilbert.grosdid...@in2p3.fr
>LAL / IN2P3 / CNRS Phone : +33 1 6446 8909
>Faculté des Sciences, Bat. 200 Fax   : +33 1 6446 8546
>B.P. 34, F-91898 Orsay Cedex (FRANCE)
> *-*




[OMPI users] [ompi-1.4.2] Infiniband issue on smoky @ ornl

2011-06-22 Thread Mathieu Gontier

Dear all,

First of all, my apologies for posting this message to both the bug and the
user mailing lists, but for the moment I do not know whether it is a bug!


I am running a structured CFD flow solver at ORNL, and I have access to a
small cluster (Smoky) that uses OpenMPI-1.4.2 with InfiniBand by default.
Recently we increased the size of our models, and since then we have run
into many InfiniBand-related problems. The most serious one is a hard crash
with the following error message:


[smoky45][[60998,1],32][/sw/sources/ompi/1.4.2/ompi/mca/btl/openib/connect/btl_openib_connect_oob.c:464:qp_create_one]
error creating qp errno says Cannot allocate memory


If we force the solver to use Ethernet (mpirun -mca btl ^openib), the
computation works correctly, although very slowly (a single iteration takes
ages). Do you have any idea what could be causing these problems?
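
As a rough sketch of what one might check first (process count and binary
name below are placeholders, and the causes named are only the usual
suspects for this error, not a diagnosis of Smoky):

# locked-memory limit on the compute nodes: "Cannot allocate memory" from
# qp_create_one is frequently a registered/locked-memory (ulimit -l) problem
mpirun -np 16 sh -c 'ulimit -l'      # ideally prints "unlimited" on every node

# stop-gap while debugging: name the TCP path explicitly instead of only
# excluding openib
mpirun -mca btl self,sm,tcp ./my_solver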


If it is due to a bug or a limitation in OpenMPI, do you think version
1.4.3, the upcoming 1.4.4, or any 1.5 version could solve the problem? I
read the release notes, but I did not spot any obvious fix for my problem.
The system administrator is ready to compile a new package for us, but I do
not want to ask him to install too many of them.


Thanks.
--
Mathieu Gontier
skype: mathieu_gontier


Re: [OMPI users] Problems on large clusters

2011-06-22 Thread Gilbert Grosdidier

Bonjour Thorsten,

 I'm not surprised about the cluster type, indeed,
but I do not remember running into the specific hang-up you mention.

 Anyway, I suspect SGI Altix is a little bit special for OpenMPI,
and I usually run with the following setup:
- each job needs its own specific tmp area, something like
"/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}" (a combined job-script
fragment is sketched further below)
- then use something like this:

setenv TMPDIR "/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
setenv OMPI_PREFIX_ENV "/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
setenv OMPI_MCA_mpi_leave_pinned_pipeline 1

- then, for running: many of these -mca options are probably useless with
your app, while others may turn out to be useful. Adapt them to your own
needs ...

mpiexec -mca coll_tuned_use_dynamic_rules 1 -hostfile $PBS_NODEFILE
  -mca rmaps seq -mca btl_openib_rdma_pipeline_send_length 65536
  -mca btl_openib_rdma_pipeline_frag_size 65536
  -mca btl_openib_min_rdma_pipeline_size 65536
  -mca btl_self_rdma_pipeline_send_length 262144
  -mca btl_self_rdma_pipeline_frag_size 262144
  -mca plm_rsh_num_concurrent 4096 -mca mpi_paffinity_alone 1
  -mca mpi_leave_pinned_pipeline 1 -mca btl_sm_max_send_size 128
  -mca coll_tuned_pre_allocate_memory_comm_size_limit 1048576
  -mca btl_openib_cq_size 128 -mca btl_ofud_rd_num 128
  -mca mpi_preconnect_mpi 0 -mca mpool_sm_min_size 131072
  -mca btl sm,openib,self -mca btl_openib_want_fork_support 0
  -mca opal_set_max_sys_limits 1 -mca osc_pt2pt_no_locks 1
  -mca osc_rdma_no_locks 1 YOUR_APP


 (Watch out: this must all go on one single line ...)

 This should be suitable for up to 8k cores.
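
 Put together as a job-script fragment, the setup sketched above could look
like this (Bourne-shell syntax instead of the csh setenv lines; the /scratch
path is only the example above, and the option list is trimmed):

export TMPDIR="/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
export OMPI_PREFIX_ENV="$TMPDIR"
export OMPI_MCA_mpi_leave_pinned_pipeline=1
mkdir -p "$TMPDIR"        # the per-job tmp area must exist before mpiexec runs

mpiexec -mca coll_tuned_use_dynamic_rules 1 -hostfile $PBS_NODEFILE \
        -mca btl sm,openib,self -mca mpi_leave_pinned_pipeline 1 YOUR_APP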


 HTH,   Best,G.



Le 22 juin 11 à 09:13, Thorsten Schuett a écrit :


Sure. It's an SGI ICE cluster with dual-rail IB. The HCAs are Mellanox
ConnectX IB DDR.

This is a 2040 cores job. I use 255 nodes with one MPI task on each node and
use 8-way OpenMP.

I don't need -np and -machinefile, because mpiexec picks up this information
from PBS.

Thorsten

On Tuesday, June 21, 2011, Gilbert Grosdidier wrote:

Bonjour Thorsten,

 Could you please be a little bit more specific about the cluster
itself ?

 G.

Le 21 juin 11 à 17:46, Thorsten Schuett a écrit :

Hi,

I am running openmpi 1.5.3 on a IB cluster and I have problems
starting jobs
on larger node counts. With small numbers of tasks, it usually
works. But now
the startup failed three times in a row using 255 nodes. I am using
255 nodes
with one MPI task per node and the mpiexec looks as follows:

mpiexec --mca btl self,openib --mca mpi_leave_pinned 0 ./a.out

After ten minutes, I pulled a stracktrace on all nodes and killed
the job,
because there was no progress. In the following, you will find the
stack trace
generated with gdb thread apply all bt. The backtrace looks
basically the same
on all nodes. It seems to hang in mpi_init.

Any help is appreciated,

Thorsten

Thread 3 (Thread 46914544122176 (LWP 28979)):
#0  0x2b6ee912d9a2 in select () from /lib64/libc.so.6
#1  0x2b6eeabd928d in service_thread_start (context=<value optimized out>)
at btl_openib_fd.c:427
#2  0x2b6ee835e143 in start_thread () from /lib64/libpthread.so.0
#3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
#4  0x in ?? ()

Thread 2 (Thread 46916594338112 (LWP 28980)):
#0  0x2b6ee912b8b6 in poll () from /lib64/libc.so.6
#1  0x2b6eeabd7b8a in btl_openib_async_thread (async=<value optimized out>) at btl_openib_async.c:419
#2  0x2b6ee835e143 in start_thread () from /lib64/libpthread.so.0
#3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
#4  0x in ?? ()

Thread 1 (Thread 47755361533088 (LWP 28978)):
#0  0x2b6ee9133fa8 in epoll_wait () from /lib64/libc.so.6
#1  0x2b6ee87745db in epoll_dispatch (base=0xb79050, arg=0xb558c0,
tv=<value optimized out>) at epoll.c:215
#2  0x2b6ee8773309 in opal_event_base_loop (base=0xb79050,
flags=<value optimized out>) at event.c:838
#3  0x2b6ee875ee92 in opal_progress () at runtime/
opal_progress.c:189
#4  0x39f1 in ?? ()
#5  0x2b6ee87979c9 in std::ios_base::Init::~Init () at
../../.././libstdc++-v3/src/ios_init.cc:123
#6  0x7fffc32c8cc8 in ?? ()
#7  0x2b6ee9d20955 in orte_grpcomm_bad_get_proc_attr (proc=<value
optimized out>, attribute_name=0x2b6ee88e5780 " \020322351n+",
val=0x2b6ee875ee92, size=0x7fffc32c8cd0) at grpcomm_bad_module.c:500
#8  0x2b6ee86dd511 in ompi_modex_recv_key_value (key=<value optimized out>, source_proc=<value optimized out>, value=0xbb3a00, dtype=14
'\016') at
runtime/ompi_module_exchange.c:125
#9  0x2b6ee86d7ea1 in ompi_proc_set_arch () at proc/proc.c:154
#10 0x2b6ee86db1b0 in ompi_mpi_init (argc=15, argv=0x7fffc32c92f8,
requested=<value optimized out>, provided=0x7fffc32c917c) at
runtime/ompi_mpi_init.c:699
#11 0x7fffc32c8e88 in ?? ()
#12 0x2b6ee77f8348 in ?? ()
#13 0x7fffc32c8e60 in ?? ()
#14 0x7fffc32c8e20 in ?? ()
#15 0x09efa994 in ?? ()
#16 0x in ?? ()
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
*

Re: [OMPI users] Problems on large clusters

2011-06-22 Thread Thorsten Schuett
Thanks for the tip. I can't tell yet whether it helped or not. However, with 
your settings I get the following warning:
WARNING: Open MPI will create a shared memory backing file in a
directory that appears to be mounted on a network filesystem.

I repeated the run with my settings and noticed that on at least one node my
app didn't come up. I can see an orted daemon on this node, but no other
process, and this was 30 minutes after the app started.

orted -mca ess tm -mca orte_ess_jobid 125894656 -mca orte_ess_vpid 63 -mca orte_ess_num_procs 255 --hnp-uri ...
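
A crude way to spot such nodes, assuming password-less ssh to the compute
nodes and that the binary is still called a.out:

for node in `sort -u $PBS_NODEFILE`; do
    echo -n "$node: "
    ssh $node pgrep -c a.out    # prints the number of a.out processes on that node
done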

Thorsten

On Wednesday, June 22, 2011, Gilbert Grosdidier wrote:
> Bonjour Thorsten,
> 
>   I'm not surprised about the cluster type, indeed,
> but I do not remember getting such specific hang up you mention.
> 
>   Anyway, I suspect SGI Altix is a little bit special for OpenMPI,
> and I usually run with the following setup:
> - there is need to create for each job a specific tmp area,
> like "/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
> - then use something like that:
> 
> setenv TMPDIR "/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
> setenv OMPI_PREFIX_ENV "/scratch/ggg/uuu/run/tmp/pbs.${PBS_JOBID}"
> setenv OMPI_MCA_mpi_leave_pinned_pipeline 1
> 
> - then, for running, many of these -mca options are probably useless
> with your app,
> while many of them may show to be useful. Your own way ...
> 
> mpiexec -mca coll_tuned_use_dynamic_rules 1 -hostfile $PBS_NODEFILE -
> mca rmaps seq -mca btl_openib_rdma_pipeline_send_length 65536 -mca
> btl_openib_rdma_pipeline_frag_size 65536 -mca
> btl_openib_min_rdma_pipeline_size 65536 -mca
> btl_self_rdma_pipeline_send_length 262144 -mca
> btl_self_rdma_pipeline_frag_size 262144 -mca plm_rsh_num_concurrent
> 4096 -mca mpi_paffinity_alone 1 -mca mpi_leave_pinned_pipeline 1 -mca
> btl_sm_max_send_size 128 -mca
> coll_tuned_pre_allocate_memory_comm_size_limit 1048576 -mca
> btl_openib_cq_size 128 -mca btl_ofud_rd_num 128 -mca
> mpi_preconnect_mpi 0 -mca mpool_sm_min_size 131072 -mca btl
> sm,openib,self -mca btl_openib_want_fork_support 0 -mca
> opal_set_max_sys_limits 1 -mca osc_pt2pt_no_locks 1 -mca
> osc_rdma_no_locks 1 YOUR_APP
> 
>   (Watch the step : only one line only ...)
> 
>   This should be suitable for up to 8k cores.
> 
> 
>   HTH,   Best,G.
> 
> Le 22 juin 11 à 09:13, Thorsten Schuett a écrit :
> > Sure. It's an SGI ICE cluster with dual-rail IB. The HCAs are Mellanox
> > ConnectX IB DDR.
> > 
> > This is a 2040 cores job. I use 255 nodes with one MPI task on each
> > node and
> > use 8-way OpenMP.
> > 
> > I don't need -np and -machinefile, because mpiexec picks up this
> > information
> > from PBS.
> > 
> > Thorsten
> > 
> > On Tuesday, June 21, 2011, Gilbert Grosdidier wrote:
> >> Bonjour Thorsten,
> >> 
> >>  Could you please be a little bit more specific about the cluster
> >> 
> >> itself ?
> >> 
> >>  G.
> >> 
> >> Le 21 juin 11 à 17:46, Thorsten Schuett a écrit :
> >>> Hi,
> >>> 
> >>> I am running openmpi 1.5.3 on a IB cluster and I have problems
> >>> starting jobs
> >>> on larger node counts. With small numbers of tasks, it usually
> >>> works. But now
> >>> the startup failed three times in a row using 255 nodes. I am using
> >>> 255 nodes
> >>> with one MPI task per node and the mpiexec looks as follows:
> >>> 
> >>> mpiexec --mca btl self,openib --mca mpi_leave_pinned 0 ./a.out
> >>> 
> >>> After ten minutes, I pulled a stracktrace on all nodes and killed
> >>> the job,
> >>> because there was no progress. In the following, you will find the
> >>> stack trace
> >>> generated with gdb thread apply all bt. The backtrace looks
> >>> basically the same
> >>> on all nodes. It seems to hang in mpi_init.
> >>> 
> >>> Any help is appreciated,
> >>> 
> >>> Thorsten
> >>> 
> >>> Thread 3 (Thread 46914544122176 (LWP 28979)):
> >>> #0  0x2b6ee912d9a2 in select () from /lib64/libc.so.6
> >>> #1  0x2b6eeabd928d in service_thread_start (context=<value optimized out>)
> >>> at btl_openib_fd.c:427
> >>> #2  0x2b6ee835e143 in start_thread () from /lib64/
> >>> libpthread.so.0
> >>> #3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
> >>> #4  0x in ?? ()
> >>> 
> >>> Thread 2 (Thread 46916594338112 (LWP 28980)):
> >>> #0  0x2b6ee912b8b6 in poll () from /lib64/libc.so.6
> >>> #1  0x2b6eeabd7b8a in btl_openib_async_thread (async=<value optimized
> >>> out>) at btl_openib_async.c:419
> >>> #2  0x2b6ee835e143 in start_thread () from /lib64/
> >>> libpthread.so.0
> >>> #3  0x2b6ee9133b8d in clone () from /lib64/libc.so.6
> >>> #4  0x in ?? ()
> >>> 
> >>> Thread 1 (Thread 47755361533088 (LWP 28978)):
> >>> #0  0x2b6ee9133fa8 in epoll_wait () from /lib64/libc.so.6
> >>> #1  0x2b6ee87745db in epoll_dispatch (base=0xb79050,
> >>> arg=0xb558c0,
> >>> tv=<value optimized out>) at epoll.c:215
> >>> #2  0x2b6ee8773309 in opal_event_base_loop (base=0xb79050,
> >>> flags=<value optimized out>) at event.c:838
> >>> #3  0x2b6ee875ee92 in opal_progress () at r

[OMPI users] Need Source Code MPI

2011-06-22 Thread makhsun
Hi,

I need help for my research. I need source code for password decryption
using MPI in C, and for permutation using MPI in C. Does anyone have it, or
know where I can get it?

Thank you for your help

Regard
Makhsun



[OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Alexandre Souza
Dear Group,
After compiling the openmpi source, the following message is displayed
when trying to compile
the hello program in fortran :
amscosta@amscosta-desktop:~/openmpi-1.4.3/examples$
/opt/openmpi-1.4.3/bin/mpif90 -g hello_f90.f90 -o hello_f90
--
Unfortunately, this installation of Open MPI was not compiled with
Fortran 90 support.  As such, the mpif90 compiler is non-functional.
--
Any clue how to solve it is very welcome.
Thanks,
Alex
P.S. I am using a ubuntu box with gfortran


Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
Alexandre,

Did you have a working Fortran compiler on the system at the time of the
OpenMPI compilation? In my experience, Fortran bindings are always compiled
by default. How did you configure it, and did you notice any messages
regarding Fortran support in the configure output?

- D.

2011/6/22 Alexandre Souza :
> Dear Group,
> After compiling the openmpi source, the following message is displayed
> when trying to compile
> the hello program in fortran :
> amscosta@amscosta-desktop:~/openmpi-1.4.3/examples$
> /opt/openmpi-1.4.3/bin/mpif90 -g hello_f90.f90 -o hello_f90
> --
> Unfortunately, this installation of Open MPI was not compiled with
> Fortran 90 support.  As such, the mpif90 compiler is non-functional.
> --
> Any clue how to solve it is very welcome.
> Thanks,
> Alex
> P.S. I am using a ubuntu box with gfortran
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Alexandre Souza
Hi Dimitri,
Thanks for the reply.
I already had openmpi installed for another application in:
/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
I installed a new version in /opt/openmpi-1.4.3.
I reproduce some output from the screen:
amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
 Package: Open MPI amscosta@amscosta-desktop Distribution
Open MPI: 1.4.1
   Open MPI SVN revision: r22421
   Open MPI release date: Jan 14, 2010
Open RTE: 1.4.1
   Open RTE SVN revision: r22421
   Open RTE release date: Jan 14, 2010
OPAL: 1.4.1
   OPAL SVN revision: r22421
   OPAL release date: Jan 14, 2010
Ident string: 1.4.1
  Prefix:
/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
 Configured architecture: i686-pc-linux-gnu
  Configure host: amscosta-desktop
   Configured by: amscosta
   Configured on: Wed May 18 11:10:14 BRT 2011
  Configure host: amscosta-desktop
Built by: amscosta
Built on: Wed May 18 11:16:21 BRT 2011
  Built host: amscosta-desktop
  C bindings: yes
C++ bindings: no
  Fortran77 bindings: no
  Fortran90 bindings: no
 Fortran90 bindings size: na
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
  Fortran90 compiler: none
  Fortran90 compiler abs: none
 C profiling: no
   C++ profiling: no
 Fortran77 profiling: no
 Fortran90 profiling: no
  C++ exceptions: no
  Thread support: posix (mpi: no, progress: no)
   Sparse Groups: no
  Internal debug support: no
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
 MPI I/O support: yes
   MPI_WTIME support: gettimeofday
Symbol visibility support: yes
 ..


On Wed, Jun 22, 2011 at 12:34 PM, Dmitry N. Mikushin
 wrote:
> Alexandre,
>
> Did you have a working Fortran compiler in system in time of OpenMPI
> compilation? To my experience Fortran bindings are always compiled by
> default. How did you configured it and have you noticed any messages
> reg. Fortran support in configure output?
>
> - D.
>
> 2011/6/22 Alexandre Souza :
>> Dear Group,
>> After compiling the openmpi source, the following message is displayed
>> when trying to compile
>> the hello program in fortran :
>> amscosta@amscosta-desktop:~/openmpi-1.4.3/examples$
>> /opt/openmpi-1.4.3/bin/mpif90 -g hello_f90.f90 -o hello_f90
>> --
>> Unfortunately, this installation of Open MPI was not compiled with
>> Fortran 90 support.  As such, the mpif90 compiler is non-functional.
>> --
>> Any clue how to solve it is very welcome.
>> Thanks,
>> Alex
>> P.S. I am using a ubuntu box with gfortran
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
Here's mine produced from default compilation:

 Package: Open MPI marcusmae@T61p Distribution
Open MPI: 1.4.4rc2
   Open MPI SVN revision: r24683
   Open MPI release date: May 05, 2011
Open RTE: 1.4.4rc2
   Open RTE SVN revision: r24683
   Open RTE release date: May 05, 2011
OPAL: 1.4.4rc2
   OPAL SVN revision: r24683
   OPAL release date: May 05, 2011
Ident string: 1.4.4rc2
  Prefix: /opt/openmpi_gcc-1.4.4
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: T61p
   Configured by: marcusmae
   Configured on: Tue May 24 18:39:21 MSD 2011
  Configure host: T61p
Built by: marcusmae
Built on: Tue May 24 18:46:52 MSD 2011
  Built host: T61p
  C bindings: yes
C++ bindings: yes
  Fortran77 bindings: yes (all)
  Fortran90 bindings: yes
 Fortran90 bindings size: small
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
  Fortran90 compiler: gfortran
  Fortran90 compiler abs: /usr/bin/gfortran

gfortran version is:

gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)

How do you run ./configure? Maybe try "./configure
FC=/usr/bin/gfortran"? It should really work out of the box, though.
Configure scripts usually build some simple test apps and run them to
check whether the compiler works properly, so your ./configure output
may help us understand more.
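
For example, a minimal build sketch along those lines (the prefix and source
directory are assumptions, adjust to your layout):

cd ~/openmpi-1.4.3
./configure --prefix=/opt/openmpi-1.4.3 CC=gcc CXX=g++ F77=gfortran FC=gfortran \
    2>&1 | tee configure.log
grep -i fortran configure.log    # the summary should show working F77/F90 compilers
make all install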

- D.

2011/6/22 Alexandre Souza :
> Hi Dimitri,
> Thanks for the reply.
> I have openmpi installed before for another application in :
> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
> I installed a new version in /opt/openmpi-1.4.3.
> I reproduce some output from the screen :
> amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
>                 Package: Open MPI amscosta@amscosta-desktop Distribution
>                Open MPI: 1.4.1
>   Open MPI SVN revision: r22421
>   Open MPI release date: Jan 14, 2010
>                Open RTE: 1.4.1
>   Open RTE SVN revision: r22421
>   Open RTE release date: Jan 14, 2010
>                    OPAL: 1.4.1
>       OPAL SVN revision: r22421
>       OPAL release date: Jan 14, 2010
>            Ident string: 1.4.1
>                  Prefix:
> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
>  Configured architecture: i686-pc-linux-gnu
>          Configure host: amscosta-desktop
>           Configured by: amscosta
>           Configured on: Wed May 18 11:10:14 BRT 2011
>          Configure host: amscosta-desktop
>                Built by: amscosta
>                Built on: Wed May 18 11:16:21 BRT 2011
>              Built host: amscosta-desktop
>              C bindings: yes
>            C++ bindings: no
>      Fortran77 bindings: no
>      Fortran90 bindings: no
>  Fortran90 bindings size: na
>              C compiler: gcc
>     C compiler absolute: /usr/bin/gcc
>            C++ compiler: g++
>   C++ compiler absolute: /usr/bin/g++
>      Fortran77 compiler: gfortran
>  Fortran77 compiler abs: /usr/bin/gfortran
>      Fortran90 compiler: none
>  Fortran90 compiler abs: none
>             C profiling: no
>           C++ profiling: no
>     Fortran77 profiling: no
>     Fortran90 profiling: no
>          C++ exceptions: no
>          Thread support: posix (mpi: no, progress: no)
>           Sparse Groups: no
>  Internal debug support: no
>     MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>         libltdl support: yes
>   Heterogeneous support: no
>  mpirun default --prefix: no
>         MPI I/O support: yes
>       MPI_WTIME support: gettimeofday
> Symbol visibility support: yes
>  ..
>
>
> On Wed, Jun 22, 2011 at 12:34 PM, Dmitry N. Mikushin
>  wrote:
>> Alexandre,
>>
>> Did you have a working Fortran compiler in system in time of OpenMPI
>> compilation? To my experience Fortran bindings are always compiled by
>> default. How did you configured it and have you noticed any messages
>> reg. Fortran support in configure output?
>>
>> - D.
>>
>> 2011/6/22 Alexandre Souza :
>>> Dear Group,
>>> After compiling the openmpi source, the following message is displayed
>>> when trying to compile
>>> the hello program in fortran :
>>> amscosta@amscosta-desktop:~/openmpi-1.4.3/examples$
>>> /opt/openmpi-1.4.3/bin/mpif90 -g hello_f90.f90 -o hello_f90
>>> --
>>> Unfortunately, this installation of Open MPI was not compiled with
>>> Fortran 90 support.  As such, the mpif90 compiler is non-functional.
>>> --
>>> Any clue how to solve it is very welcome.
>>> Thanks,
>>> Alex
>>> P.S. I am using a ubuntu box with 

Re: [OMPI users] Need Source Code MPI

2011-06-22 Thread Jeff Squyres
We make it a general policy on this list not to do student homework, sorry.  :-)

You can probably find what you're looking for with a few Google searches.


On Jun 22, 2011, at 9:04 AM, makh...@student.eepis-its.edu wrote:

> Hi,
> 
> I need help for my research. I need source code of decrypt password using
> mpi c and permutation using mpi c. Any one have or know where I can get?
> 
> Thank you for your help
> 
> Regard
> Makhsun
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Jeff Squyres
Dimitry is correct -- if OMPI's configure can find a working C++ and Fortran
compiler, it'll build C++ / Fortran support.  Yours was not built with that
support, indicating that:

a) you got a binary distribution from someone who didn't include C++ / Fortran 
support, or

b) when you built/installed Open MPI, it couldn't find a working C++ / Fortran 
compiler, so it skipped building support for them.
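
A quick way to check which case applies is to ask the installation itself,
for instance (assuming the /opt prefix used earlier in this thread):

/opt/openmpi-1.4.3/bin/ompi_info | grep -i bindings   # look for "Fortran90 bindings: yes"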



On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:

> Here's mine produced from default compilation:
> 
> Package: Open MPI marcusmae@T61p Distribution
>Open MPI: 1.4.4rc2
>   Open MPI SVN revision: r24683
>   Open MPI release date: May 05, 2011
>Open RTE: 1.4.4rc2
>   Open RTE SVN revision: r24683
>   Open RTE release date: May 05, 2011
>OPAL: 1.4.4rc2
>   OPAL SVN revision: r24683
>   OPAL release date: May 05, 2011
>Ident string: 1.4.4rc2
>  Prefix: /opt/openmpi_gcc-1.4.4
> Configured architecture: x86_64-unknown-linux-gnu
>  Configure host: T61p
>   Configured by: marcusmae
>   Configured on: Tue May 24 18:39:21 MSD 2011
>  Configure host: T61p
>Built by: marcusmae
>Built on: Tue May 24 18:46:52 MSD 2011
>  Built host: T61p
>  C bindings: yes
>C++ bindings: yes
>  Fortran77 bindings: yes (all)
>  Fortran90 bindings: yes
> Fortran90 bindings size: small
>  C compiler: gcc
> C compiler absolute: /usr/bin/gcc
>C++ compiler: g++
>   C++ compiler absolute: /usr/bin/g++
>  Fortran77 compiler: gfortran
>  Fortran77 compiler abs: /usr/bin/gfortran
>  Fortran90 compiler: gfortran
>  Fortran90 compiler abs: /usr/bin/gfortran
> 
> gfortran version is:
> 
> gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)
> 
> How do you run ./configure? Maybe try "./configure
> FC=/usr/bin/gfortran" ? It should really really work out of box
> though. Configure scripts usually cook some simple test apps and run
> them to check if compiler works properly. So, your ./configure output
> may help to understand more.
> 
> - D.
> 
> 2011/6/22 Alexandre Souza :
>> Hi Dimitri,
>> Thanks for the reply.
>> I have openmpi installed before for another application in :
>> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
>> I installed a new version in /opt/openmpi-1.4.3.
>> I reproduce some output from the screen :
>> amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
>> Package: Open MPI amscosta@amscosta-desktop Distribution
>>Open MPI: 1.4.1
>>   Open MPI SVN revision: r22421
>>   Open MPI release date: Jan 14, 2010
>>Open RTE: 1.4.1
>>   Open RTE SVN revision: r22421
>>   Open RTE release date: Jan 14, 2010
>>OPAL: 1.4.1
>>   OPAL SVN revision: r22421
>>   OPAL release date: Jan 14, 2010
>>Ident string: 1.4.1
>>  Prefix:
>> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
>>  Configured architecture: i686-pc-linux-gnu
>>  Configure host: amscosta-desktop
>>   Configured by: amscosta
>>   Configured on: Wed May 18 11:10:14 BRT 2011
>>  Configure host: amscosta-desktop
>>Built by: amscosta
>>Built on: Wed May 18 11:16:21 BRT 2011
>>  Built host: amscosta-desktop
>>  C bindings: yes
>>C++ bindings: no
>>  Fortran77 bindings: no
>>  Fortran90 bindings: no
>>  Fortran90 bindings size: na
>>  C compiler: gcc
>> C compiler absolute: /usr/bin/gcc
>>C++ compiler: g++
>>   C++ compiler absolute: /usr/bin/g++
>>  Fortran77 compiler: gfortran
>>  Fortran77 compiler abs: /usr/bin/gfortran
>>  Fortran90 compiler: none
>>  Fortran90 compiler abs: none
>> C profiling: no
>>   C++ profiling: no
>> Fortran77 profiling: no
>> Fortran90 profiling: no
>>  C++ exceptions: no
>>  Thread support: posix (mpi: no, progress: no)
>>   Sparse Groups: no
>>  Internal debug support: no
>> MPI parameter check: runtime
>> Memory profiling support: no
>> Memory debugging support: no
>> libltdl support: yes
>>   Heterogeneous support: no
>>  mpirun default --prefix: no
>> MPI I/O support: yes
>>   MPI_WTIME support: gettimeofday
>> Symbol visibility support: yes
>>  ..
>> 
>> 
>> On Wed, Jun 22, 2011 at 12:34 PM, Dmitry N. Mikushin
>>  wrote:
>>> Alexandre,
>>> 
>>> Did you have a working Fortran compiler in system in time of OpenMPI
>>> compilation? To my experience Fortran bindings are always compiled by
>>> default. How did you configured it and have you noticed any messages
>>> reg. Fortran support in configure output?
>>> 
>>> - D.
>>> 
>>> 2011/6/22 Alexandre Souza :
 Dear Group,
 After compiling the openmpi source, the following message is displayed
>>>

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Alexandre Souza
Thanks Dimitri and Jeff for the output,
I managed to build MPI and run the examples in f77 and f90 by following the
guidelines. However, the only problem is that I was logged in as root.
When I compile the examples with mpif90 or mpif77 as a common user, they
keep pointing to the old installation of MPI that does not use the
Fortran compiler.
(/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1)
How can I make them point to the newly installed version in
/opt/openmpi-1.4.3 when calling mpif90 or mpif77 as a common user?
Alex

On Wed, Jun 22, 2011 at 1:49 PM, Jeff Squyres  wrote:
> Dimitry is correct -- if OMPI's configure can find a working C++ and Fortran 
> compiler, it'll build C++ / Fortran support.  Yours was not, indicating that:
>
> a) you got a binary distribution from someone who didn't include C++ / 
> Fortran support, or
>
> b) when you built/installed Open MPI, it couldn't find a working C++ / 
> Fortran compiler, so it skipped building support for them.
>
>
>
> On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:
>
>> Here's mine produced from default compilation:
>>
>>                 Package: Open MPI marcusmae@T61p Distribution
>>                Open MPI: 1.4.4rc2
>>   Open MPI SVN revision: r24683
>>   Open MPI release date: May 05, 2011
>>                Open RTE: 1.4.4rc2
>>   Open RTE SVN revision: r24683
>>   Open RTE release date: May 05, 2011
>>                    OPAL: 1.4.4rc2
>>       OPAL SVN revision: r24683
>>       OPAL release date: May 05, 2011
>>            Ident string: 1.4.4rc2
>>                  Prefix: /opt/openmpi_gcc-1.4.4
>> Configured architecture: x86_64-unknown-linux-gnu
>>          Configure host: T61p
>>           Configured by: marcusmae
>>           Configured on: Tue May 24 18:39:21 MSD 2011
>>          Configure host: T61p
>>                Built by: marcusmae
>>                Built on: Tue May 24 18:46:52 MSD 2011
>>              Built host: T61p
>>              C bindings: yes
>>            C++ bindings: yes
>>      Fortran77 bindings: yes (all)
>>      Fortran90 bindings: yes
>> Fortran90 bindings size: small
>>              C compiler: gcc
>>     C compiler absolute: /usr/bin/gcc
>>            C++ compiler: g++
>>   C++ compiler absolute: /usr/bin/g++
>>      Fortran77 compiler: gfortran
>>  Fortran77 compiler abs: /usr/bin/gfortran
>>      Fortran90 compiler: gfortran
>>  Fortran90 compiler abs: /usr/bin/gfortran
>>
>> gfortran version is:
>>
>> gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)
>>
>> How do you run ./configure? Maybe try "./configure
>> FC=/usr/bin/gfortran" ? It should really really work out of box
>> though. Configure scripts usually cook some simple test apps and run
>> them to check if compiler works properly. So, your ./configure output
>> may help to understand more.
>>
>> - D.
>>
>> 2011/6/22 Alexandre Souza :
>>> Hi Dimitri,
>>> Thanks for the reply.
>>> I have openmpi installed before for another application in :
>>> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
>>> I installed a new version in /opt/openmpi-1.4.3.
>>> I reproduce some output from the screen :
>>> amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
>>>                 Package: Open MPI amscosta@amscosta-desktop Distribution
>>>                Open MPI: 1.4.1
>>>   Open MPI SVN revision: r22421
>>>   Open MPI release date: Jan 14, 2010
>>>                Open RTE: 1.4.1
>>>   Open RTE SVN revision: r22421
>>>   Open RTE release date: Jan 14, 2010
>>>                    OPAL: 1.4.1
>>>       OPAL SVN revision: r22421
>>>       OPAL release date: Jan 14, 2010
>>>            Ident string: 1.4.1
>>>                  Prefix:
>>> /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
>>>  Configured architecture: i686-pc-linux-gnu
>>>          Configure host: amscosta-desktop
>>>           Configured by: amscosta
>>>           Configured on: Wed May 18 11:10:14 BRT 2011
>>>          Configure host: amscosta-desktop
>>>                Built by: amscosta
>>>                Built on: Wed May 18 11:16:21 BRT 2011
>>>              Built host: amscosta-desktop
>>>              C bindings: yes
>>>            C++ bindings: no
>>>      Fortran77 bindings: no
>>>      Fortran90 bindings: no
>>>  Fortran90 bindings size: na
>>>              C compiler: gcc
>>>     C compiler absolute: /usr/bin/gcc
>>>            C++ compiler: g++
>>>   C++ compiler absolute: /usr/bin/g++
>>>      Fortran77 compiler: gfortran
>>>  Fortran77 compiler abs: /usr/bin/gfortran
>>>      Fortran90 compiler: none
>>>  Fortran90 compiler abs: none
>>>             C profiling: no
>>>           C++ profiling: no
>>>     Fortran77 profiling: no
>>>     Fortran90 profiling: no
>>>          C++ exceptions: no
>>>          Thread support: posix (mpi: no, progress: no)
>>>           Sparse Groups: no
>>>  Internal debug support: no
>>>     MPI parameter check: runtime
>>> Memory profiling support: no
>>> Memory debugging support: no
>>>         libltdl

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Gus Correa

Alexandre

One simple way is to set your
PATH and LD_LIBRARY_PATH in your .[t]cshrc/.bashrc file
to point to the OpenMPI version that you want to use.
Something like this:

[t]csh:
setenv PATH /opt/openmpi-1.4.3/bin:$PATH

bash:
export PATH=/opt/openmpi-1.4.3/bin:$PATH

and similar for LD_LIBRARY_PATH
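
For instance, in bash, assuming the libraries were installed under
/opt/openmpi-1.4.3/lib:

export PATH=/opt/openmpi-1.4.3/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.4.3/lib:$LD_LIBRARY_PATH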

If this is a cluster, /opt/openmpi-1.4.3 needs to be
either copied over to all nodes, or say, NFS-mounted on all nodes.
For a single machine this is not an issue.

I hope this helps,
Gus Correa

Alexandre Souza wrote:

Thanks Dimitri and Jeff for the output,
I managed build the mpi and run the examples in f77 and f90 doing the guideline.
However the only problem is I was logged as Root.
When I compile the examples with mpif90 or mpif77 as common user, it
keeps pointing to the old installation of mpi that does not use the
fortran compiler.
(/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1)
How can I make to point to the new installed version in
/opt/openmpi-1.4.3, when calling mpif90 or mpif77 as a common user ?
Alex

On Wed, Jun 22, 2011 at 1:49 PM, Jeff Squyres  wrote:

Dimitry is correct -- if OMPI's configure can find a working C++ and Fortran 
compiler, it'll build C++ / Fortran support.  Yours was not, indicating that:

a) you got a binary distribution from someone who didn't include C++ / Fortran 
support, or

b) when you built/installed Open MPI, it couldn't find a working C++ / Fortran 
compiler, so it skipped building support for them.



On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:


Here's mine produced from default compilation:

Package: Open MPI marcusmae@T61p Distribution
   Open MPI: 1.4.4rc2
  Open MPI SVN revision: r24683
  Open MPI release date: May 05, 2011
   Open RTE: 1.4.4rc2
  Open RTE SVN revision: r24683
  Open RTE release date: May 05, 2011
   OPAL: 1.4.4rc2
  OPAL SVN revision: r24683
  OPAL release date: May 05, 2011
   Ident string: 1.4.4rc2
 Prefix: /opt/openmpi_gcc-1.4.4
Configured architecture: x86_64-unknown-linux-gnu
 Configure host: T61p
  Configured by: marcusmae
  Configured on: Tue May 24 18:39:21 MSD 2011
 Configure host: T61p
   Built by: marcusmae
   Built on: Tue May 24 18:46:52 MSD 2011
 Built host: T61p
 C bindings: yes
   C++ bindings: yes
 Fortran77 bindings: yes (all)
 Fortran90 bindings: yes
Fortran90 bindings size: small
 C compiler: gcc
C compiler absolute: /usr/bin/gcc
   C++ compiler: g++
  C++ compiler absolute: /usr/bin/g++
 Fortran77 compiler: gfortran
 Fortran77 compiler abs: /usr/bin/gfortran
 Fortran90 compiler: gfortran
 Fortran90 compiler abs: /usr/bin/gfortran

gfortran version is:

gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)

How do you run ./configure? Maybe try "./configure
FC=/usr/bin/gfortran" ? It should really really work out of box
though. Configure scripts usually cook some simple test apps and run
them to check if compiler works properly. So, your ./configure output
may help to understand more.

- D.

2011/6/22 Alexandre Souza :

Hi Dimitri,
Thanks for the reply.
I have openmpi installed before for another application in :
/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
I installed a new version in /opt/openmpi-1.4.3.
I reproduce some output from the screen :
amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
Package: Open MPI amscosta@amscosta-desktop Distribution
   Open MPI: 1.4.1
  Open MPI SVN revision: r22421
  Open MPI release date: Jan 14, 2010
   Open RTE: 1.4.1
  Open RTE SVN revision: r22421
  Open RTE release date: Jan 14, 2010
   OPAL: 1.4.1
  OPAL SVN revision: r22421
  OPAL release date: Jan 14, 2010
   Ident string: 1.4.1
 Prefix:
/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
 Configured architecture: i686-pc-linux-gnu
 Configure host: amscosta-desktop
  Configured by: amscosta
  Configured on: Wed May 18 11:10:14 BRT 2011
 Configure host: amscosta-desktop
   Built by: amscosta
   Built on: Wed May 18 11:16:21 BRT 2011
 Built host: amscosta-desktop
 C bindings: yes
   C++ bindings: no
 Fortran77 bindings: no
 Fortran90 bindings: no
 Fortran90 bindings size: na
 C compiler: gcc
C compiler absolute: /usr/bin/gcc
   C++ compiler: g++
  C++ compiler absolute: /usr/bin/g++
 Fortran77 compiler: gfortran
 Fortran77 compiler abs: /usr/bin/gfortran
 Fortran90 compiler: none
 Fortran90 compiler abs: none
C profiling: no
  C++ profiling: no
Fortran77 profiling: no
Fortran90 profiling: no
 C++ exceptions: no
 Thread support: posix (mpi: no, progress: no)
 

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Jeff Squyres
Try this:

http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path


On Jun 22, 2011, at 1:18 PM, Alexandre Souza wrote:

> Thanks Dimitri and Jeff for the output,
> I managed build the mpi and run the examples in f77 and f90 doing the 
> guideline.
> However the only problem is I was logged as Root.
> When I compile the examples with mpif90 or mpif77 as common user, it
> keeps pointing to the old installation of mpi that does not use the
> fortran compiler.
> (/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1)
> How can I make to point to the new installed version in
> /opt/openmpi-1.4.3, when calling mpif90 or mpif77 as a common user ?
> Alex
> 
> On Wed, Jun 22, 2011 at 1:49 PM, Jeff Squyres  wrote:
>> Dimitry is correct -- if OMPI's configure can find a working C++ and Fortran 
>> compiler, it'll build C++ / Fortran support.  Yours was not, indicating that:
>> 
>> a) you got a binary distribution from someone who didn't include C++ / 
>> Fortran support, or
>> 
>> b) when you built/installed Open MPI, it couldn't find a working C++ / 
>> Fortran compiler, so it skipped building support for them.
>> 
>> 
>> 
>> On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:
>> 
>>> Here's mine produced from default compilation:
>>> 
>>> Package: Open MPI marcusmae@T61p Distribution
>>>Open MPI: 1.4.4rc2
>>>   Open MPI SVN revision: r24683
>>>   Open MPI release date: May 05, 2011
>>>Open RTE: 1.4.4rc2
>>>   Open RTE SVN revision: r24683
>>>   Open RTE release date: May 05, 2011
>>>OPAL: 1.4.4rc2
>>>   OPAL SVN revision: r24683
>>>   OPAL release date: May 05, 2011
>>>Ident string: 1.4.4rc2
>>>  Prefix: /opt/openmpi_gcc-1.4.4
>>> Configured architecture: x86_64-unknown-linux-gnu
>>>  Configure host: T61p
>>>   Configured by: marcusmae
>>>   Configured on: Tue May 24 18:39:21 MSD 2011
>>>  Configure host: T61p
>>>Built by: marcusmae
>>>Built on: Tue May 24 18:46:52 MSD 2011
>>>  Built host: T61p
>>>  C bindings: yes
>>>C++ bindings: yes
>>>  Fortran77 bindings: yes (all)
>>>  Fortran90 bindings: yes
>>> Fortran90 bindings size: small
>>>  C compiler: gcc
>>> C compiler absolute: /usr/bin/gcc
>>>C++ compiler: g++
>>>   C++ compiler absolute: /usr/bin/g++
>>>  Fortran77 compiler: gfortran
>>>  Fortran77 compiler abs: /usr/bin/gfortran
>>>  Fortran90 compiler: gfortran
>>>  Fortran90 compiler abs: /usr/bin/gfortran
>>> 
>>> gfortran version is:
>>> 
>>> gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)
>>> 
>>> How do you run ./configure? Maybe try "./configure
>>> FC=/usr/bin/gfortran" ? It should really really work out of box
>>> though. Configure scripts usually cook some simple test apps and run
>>> them to check if compiler works properly. So, your ./configure output
>>> may help to understand more.
>>> 
>>> - D.
>>> 
>>> 2011/6/22 Alexandre Souza :
 Hi Dimitri,
 Thanks for the reply.
 I have openmpi installed before for another application in :
 /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
 I installed a new version in /opt/openmpi-1.4.3.
 I reproduce some output from the screen :
 amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
 Package: Open MPI amscosta@amscosta-desktop Distribution
Open MPI: 1.4.1
   Open MPI SVN revision: r22421
   Open MPI release date: Jan 14, 2010
Open RTE: 1.4.1
   Open RTE SVN revision: r22421
   Open RTE release date: Jan 14, 2010
OPAL: 1.4.1
   OPAL SVN revision: r22421
   OPAL release date: Jan 14, 2010
Ident string: 1.4.1
  Prefix:
 /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
  Configured architecture: i686-pc-linux-gnu
  Configure host: amscosta-desktop
   Configured by: amscosta
   Configured on: Wed May 18 11:10:14 BRT 2011
  Configure host: amscosta-desktop
Built by: amscosta
Built on: Wed May 18 11:16:21 BRT 2011
  Built host: amscosta-desktop
  C bindings: yes
C++ bindings: no
  Fortran77 bindings: no
  Fortran90 bindings: no
  Fortran90 bindings size: na
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
  Fortran90 compiler: none
  Fortran90 compiler abs: none
 C profiling: no
   C++ profiling: no
 Fortran77 profiling: no
 Fortran90 profilin

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
Alexandre,

> How can I make to point to the new installed version in
> /opt/openmpi-1.4.3, when calling mpif90 or mpif77 as a common user ?

If you need to switch between multiple working MPI implementations
frequently (a common problem on public clusters or during local
testing/benchmarking), scripts like mpi-selector can be very handy.
First you register all available variants with mpi-selector --register,
and then you can switch the current one with mpi-selector --set name
(and restart the shell). Technically, it does the same thing already
mentioned - adding entries to $PATH and LD_LIBRARY_PATH. The script is
part of Red Hat distros (and is written by Jeff, I suppose), but you can
easily rebuild its source rpm for your system or convert it with alien
if you are on Ubuntu (works for me).
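
A rough sketch of that workflow (the exact --register arguments are from
memory and may differ on your installation; check mpi-selector --help, and
the registered name and directory below are just placeholders):

mpi-selector --register openmpi-1.4.3 --source-dir /opt/openmpi-1.4.3/bin
mpi-selector --list                  # show the registered MPI installations
mpi-selector --set openmpi-1.4.3     # make it the default, then start a new shell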

- D.

2011/6/22 Alexandre Souza :
> Thanks Dimitri and Jeff for the output,
> I managed build the mpi and run the examples in f77 and f90 doing the 
> guideline.
> However the only problem is I was logged as Root.
> When I compile the examples with mpif90 or mpif77 as common user, it
> keeps pointing to the old installation of mpi that does not use the
> fortran compiler.
> (/home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1)
> How can I make to point to the new installed version in
> /opt/openmpi-1.4.3, when calling mpif90 or mpif77 as a common user ?
> Alex
>
> On Wed, Jun 22, 2011 at 1:49 PM, Jeff Squyres  wrote:
>> Dimitry is correct -- if OMPI's configure can find a working C++ and Fortran 
>> compiler, it'll build C++ / Fortran support.  Yours was not, indicating that:
>>
>> a) you got a binary distribution from someone who didn't include C++ / 
>> Fortran support, or
>>
>> b) when you built/installed Open MPI, it couldn't find a working C++ / 
>> Fortran compiler, so it skipped building support for them.
>>
>>
>>
>> On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:
>>
>>> Here's mine produced from default compilation:
>>>
>>>                 Package: Open MPI marcusmae@T61p Distribution
>>>                Open MPI: 1.4.4rc2
>>>   Open MPI SVN revision: r24683
>>>   Open MPI release date: May 05, 2011
>>>                Open RTE: 1.4.4rc2
>>>   Open RTE SVN revision: r24683
>>>   Open RTE release date: May 05, 2011
>>>                    OPAL: 1.4.4rc2
>>>       OPAL SVN revision: r24683
>>>       OPAL release date: May 05, 2011
>>>            Ident string: 1.4.4rc2
>>>                  Prefix: /opt/openmpi_gcc-1.4.4
>>> Configured architecture: x86_64-unknown-linux-gnu
>>>          Configure host: T61p
>>>           Configured by: marcusmae
>>>           Configured on: Tue May 24 18:39:21 MSD 2011
>>>          Configure host: T61p
>>>                Built by: marcusmae
>>>                Built on: Tue May 24 18:46:52 MSD 2011
>>>              Built host: T61p
>>>              C bindings: yes
>>>            C++ bindings: yes
>>>      Fortran77 bindings: yes (all)
>>>      Fortran90 bindings: yes
>>> Fortran90 bindings size: small
>>>              C compiler: gcc
>>>     C compiler absolute: /usr/bin/gcc
>>>            C++ compiler: g++
>>>   C++ compiler absolute: /usr/bin/g++
>>>      Fortran77 compiler: gfortran
>>>  Fortran77 compiler abs: /usr/bin/gfortran
>>>      Fortran90 compiler: gfortran
>>>  Fortran90 compiler abs: /usr/bin/gfortran
>>>
>>> gfortran version is:
>>>
>>> gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC)
>>>
>>> How do you run ./configure? Maybe try "./configure
>>> FC=/usr/bin/gfortran" ? It should really really work out of box
>>> though. Configure scripts usually cook some simple test apps and run
>>> them to check if compiler works properly. So, your ./configure output
>>> may help to understand more.
>>>
>>> - D.
>>>
>>> 2011/6/22 Alexandre Souza :
 Hi Dimitri,
 Thanks for the reply.
 I have openmpi installed before for another application in :
 /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
 I installed a new version in /opt/openmpi-1.4.3.
 I reproduce some output from the screen :
 amscosta@amscosta-desktop:/opt/openmpi-1.4.3/bin$ ompi_info
                 Package: Open MPI amscosta@amscosta-desktop Distribution
                Open MPI: 1.4.1
   Open MPI SVN revision: r22421
   Open MPI release date: Jan 14, 2010
                Open RTE: 1.4.1
   Open RTE SVN revision: r22421
   Open RTE release date: Jan 14, 2010
                    OPAL: 1.4.1
       OPAL SVN revision: r22421
       OPAL release date: Jan 14, 2010
            Ident string: 1.4.1
                  Prefix:
 /home/amscosta/OpenFOAM/ThirdParty-1.7.x/platforms/linuxGcc/openmpi-1.4.1
  Configured architecture: i686-pc-linux-gnu
          Configure host: amscosta-desktop
           Configured by: amscosta
           Configured on: Wed May 18 11:10:14 BRT 2011
          Configure host: amscosta-desktop
                Built by: amscosta
                Built on: Wed May