[OMPI users] Regarding Fortran 90 subroutines with MPI

2009-12-31 Thread Arunkumar C R
Hi,

I have encountered some run-time errors in the general-purpose program given
below. Could you please assist me in correcting it?
The MPI code is written in Fortran 90. It is structured around a subroutine
because I want to reuse the pattern in a program for another scientific problem.


module data
use mpi
implicit none
integer::np, ierr, irank
end module

program prog
use data
implicit none

integer::trial, ntrials
ntrials=10

do trial=1, ntrials
   call mpi_comm_rank(mpi_comm_world, irank, ierr)
   call mpi_comm_size(mpi_comm_world,np, ierr)
   write(1, *) 'trial no=', trial
   write(1, *) 'irank  np'
   call process   !subroutine call
end do
print*,'Program completed!'
call mpi_finalize(ierr)
end

subroutine process   !subroutine
use data
implicit none

if(irank.eq.0) then
write(10, *) irank, np
end if
end subroutine process

Could you please run the program and let me know the error?

Thanking you
Sincerely,
Arun


-- 
**
Arunkumar C R
Research Assistant
Solid State & Structural Chemistry Unit
Indian Institute of Science
Bangalore -12, INDIA
arunkumar...@sscu.iisc.ernet.in
Mob: +91- 9900 549 059
**


Re: [OMPI users] Regarding Fortran 90 subroutines with MPI

2009-12-31 Thread Eugene Loh

It would help if you would include the error messages you encounter.

You need to call MPI_Init(ierr) before you can call (just about) any 
other MPI call.  E.g., add "call MPI_Init(ierr)" as the first executable 
statement of your "program prog".


The error I get with your program is

*** An error occurred in MPI_Comm_f2c
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

I suppose that message could be clearer.

If I add the MPI_Init call, things work fine.
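
For reference, here is a minimal corrected sketch of the posted program.
The one substantive change is the added MPI_Init; moving the rank/size
queries out of the loop is optional tidying, and the writes to unopened
units 1 and 10 will typically land in fort.1 and fort.10 with gfortran.

module data
use mpi
implicit none
integer :: np, ierr, irank
end module data

program prog
use data
implicit none
integer :: trial, ntrials

call mpi_init(ierr)               ! must come before (almost) any other MPI call

call mpi_comm_rank(mpi_comm_world, irank, ierr)
call mpi_comm_size(mpi_comm_world, np, ierr)

ntrials = 10
do trial = 1, ntrials
   write(1, *) 'trial no=', trial
   write(1, *) 'irank  np'
   call process                   ! subroutine call
end do

print *, 'Program completed!'
call mpi_finalize(ierr)
end program prog

subroutine process
use data
implicit none
! only rank 0 writes the rank/size pair
if (irank == 0) then
   write(10, *) irank, np
end if
end subroutine process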

Arunkumar C R wrote:

I have encountered some run-time errors in the general-purpose program 
given below. Could you please assist me in correcting it?
The MPI code is written in Fortran 90. It is structured around a subroutine 
because I want to reuse the pattern in a program for another scientific problem.


module data
use mpi
implicit none
integer::np, ierr, irank
end module

program prog
use data
implicit none

integer::trial, ntrials


insert "call mpi_init(ierr)" here


ntrials=10
   
do trial=1, ntrials

   call mpi_comm_rank(mpi_comm_world, irank, ierr)
   call mpi_comm_size(mpi_comm_world,np, ierr)
   write(1, *) 'trial no=', trial
   write(1, *) 'irank  np'
   call process   !subroutine call

end do
print*,'Program completed!'
call mpi_finalize(ierr)
end

subroutine process   !subroutine

use data
implicit none

if(irank.eq.0) then
write(10, *) irank, np
end if
end subroutine process

Could you please run the program and let me know the error?




Re: [OMPI users] Regarding Fortran 90 subroutines with MPI

2009-12-31 Thread ETHAN DENEAULT
Arun, 

Before you call any MPI subroutines, you have to call MPI_Init(ierr) first. 
Once you put that line in, it should work.

Cheers, 
Ethan

--
Dr. Ethan Deneault
Assistant Professor of Physics
SC 234
University of Tampa
Tampa, FL 33606




Re: [OMPI users] Regarding Fortran 90 subroutines with MPI

2009-12-31 Thread Eugene Loh

Eugene Loh wrote:


The error I get with your program is

*** An error occurred in MPI_Comm_f2c
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

I suppose that message could be clearer.


I filed a request for a "minor enhancement".  See 
https://svn.open-mpi.org/trac/ompi/ticket/2152


Re: [OMPI users] OpenMPI problem on Fedora Core 12

2009-12-31 Thread Gijsbert Wiesenekker
First of all, the reason I created a CPU-friendly version of MPI_Barrier is
that my program is asymmetric (some of the nodes can easily have to wait for
several hours) and I/O bound. My program uses MPI mainly to synchronize I/O
and to share some counters between the nodes, followed by a gather/scatter of
the files. MPI_Barrier (or any of the other MPI calls) caused the four cores
of my quad-core machine to run continuously at 100% because of the aggressive
polling, making the server almost unusable and also slowing my program down
because there was less CPU time available for I/O and file synchronization.
With this version of MPI_Barrier, CPU usage averages out at about 25%. I only
recently learned about the OMPI_MCA_mpi_yield_when_idle variable; I still have
to test whether it is an alternative to my workaround.
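
For illustration, here is a minimal sketch of this kind of "polite" barrier
(not Gijsbert's actual routine; the names polite_barrier and polite_wait are
invented here, and sleep() is a gfortran extension rather than standard
Fortran, so substitute your platform's equivalent). Each rank fans a token in
to rank 0 and waits for a release token back, polling MPI_Testall with a
sleep between polls instead of blocking inside an MPI call.

! Sketch of a CPU-friendly barrier: linear fan-in to rank 0, then fan-out,
! with a sleep between MPI_Testall polls instead of busy-waiting.
subroutine polite_barrier(comm)
use mpi
implicit none
integer, intent(in) :: comm
integer :: irank, np, i, ierr, nreq
integer :: tsend, trecv
integer, allocatable :: req(:), tok(:)

tsend = 0
call mpi_comm_rank(comm, irank, ierr)
call mpi_comm_size(comm, np, ierr)
if (np == 1) return

if (irank == 0) then
   nreq = np - 1
   allocate(req(nreq), tok(nreq))
   do i = 1, nreq                  ! collect one token from every other rank
      call mpi_irecv(tok(i), 1, mpi_integer, i, 1, comm, req(i), ierr)
   end do
   call polite_wait(req, nreq)
   do i = 1, nreq                  ! then release everyone
      call mpi_isend(tsend, 1, mpi_integer, i, 2, comm, req(i), ierr)
   end do
   call polite_wait(req, nreq)
   deallocate(req, tok)
else
   allocate(req(2))
   call mpi_isend(tsend, 1, mpi_integer, 0, 1, comm, req(1), ierr)
   call mpi_irecv(trecv, 1, mpi_integer, 0, 2, comm, req(2), ierr)
   call polite_wait(req, 2)
   deallocate(req)
end if

contains

   subroutine polite_wait(requests, n)
   integer, intent(inout) :: requests(:)
   integer, intent(in)    :: n
   integer :: jerr
   logical :: flag
   do
      call mpi_testall(n, requests, flag, mpi_statuses_ignore, jerr)
      if (flag) exit
      call sleep(1)                ! gfortran extension: give up the CPU between polls
   end do
   end subroutine polite_wait

end subroutine polite_barrier

As for the variable mentioned above: setting OMPI_MCA_mpi_yield_when_idle=1
in the environment (or passing --mca mpi_yield_when_idle 1 to mpirun) tells
Open MPI's progress engine to yield the processor while it polls, which may
be enough on its own to keep the machine usable.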
Meanwhile, I seem to have found the cause of the problem thanks to Ashley's
excellent padb tool. Following Eugene's recommendation, I added the
MPI_Wait call: the same problem. Next I created a separate program that just
calls my_barrier repeatedly at randomized 1-2 second intervals. Again the
same problem (with 4 nodes), sometimes after a couple of iterations,
sometimes after 500, 1000 or 2000 iterations. Next I followed Ashley's
suggestion to use padb. I ran padb --all --mpi-queue and padb --all
--message-queue both while the program was running fine and after the problem
occurred. When the problem occurred, padb said:

Warning, remote process state differs across ranks
state : ranks
R : [2-3]
S : [0-1]

and

$ padb --all --stack-trace --tree
Warning, remote process state differs across ranks
state : ranks
R : [2-3]
S : [0-1]
-
[0-1] (2 processes)
-
main() at ?:?
  barrier_util() at ?:?
my_sleep() at ?:?
  __nanosleep_nocancel() at ?:?
-
[2-3] (2 processes)
-
??() at ?:?
  ??() at ?:?
??() at ?:?
  ??() at ?:?
??() at ?:?
  ompi_mpi_signed_char() at ?:?
ompi_request_default_wait_all() at ?:?
  opal_progress() at ?:?
-
2 (1 processes)
-
mca_pml_ob1_progress() at ?:?

This output suggests that rather than OpenMPI being the problem, nanosleep is
the culprit, because the call to it seems to hang.

Thanks for all the help.

Gijsbert

On Mon, Dec 14, 2009 at 8:22 PM, Ashley Pittman wrote:

> On Sun, 2009-12-13 at 19:04 +0100, Gijsbert Wiesenekker wrote:
> > The following routine gives a problem after some (not reproducible)
> > time on Fedora Core 12. The routine is a CPU usage friendly version of
> > MPI_Barrier.
>
> There are currently some proposals for non-blocking collectives before the
> MPI Forum, and I believe there is a working implementation that can be used
> as a plug-in for OpenMPI. I would urge you to look at these rather than
> trying to implement your own.
>
> > My question is: is there a problem with this routine that I overlooked
> > that somehow did not show up until now
>
> Your code both does all-to-all communication and uses probe; both
> of these can easily be avoided when implementing a barrier.
>
> > Is there a way to see which messages have been sent/received/are
> > pending?
>
> Yes, there is a message queue interface allowing tools to peek inside
> the MPI library and see these queues.  As far as I know there are three
> tools which use this: TotalView, DDT, and my own tool, padb.
> TotalView and DDT are both full-featured graphical debuggers and
> commercial products; padb is an open-source text-based tool.
>
> Ashley,
>
> --
>
> Ashley Pittman, Bath, UK.
>
> Padb - A parallel job inspection tool for cluster computing
> http://padb.pittman.org.uk
>