Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-14 Thread Stefan Kuhne
Jeff Squyres schrieb:
> On Nov 13, 2009, at 1:06 AM, Stefan Kuhne wrote:
> 
Hello,

>> user@head:~$ ulimit -l
>> 64
>> 
> This should really be unlimited.  See:
> 
> http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

With such an error message I would have found it myself, but with my error
message there was no chance.

I'll try it on Monday.

Regards,
Stefan Kuhne





[OMPI users] fortran and MPI_Barrier, not working?

2009-11-14 Thread Ricardo Reis


 Hi

 I'm testing this on a Debian box with OpenMPI 1.3-2, compiled with the gcc
suite (all from packages). After compiling and running the code I'm baffled
by the output; it seems MPI_Barrier is not working. Maybe I'm making such a
basic error that I can't figure it out... See below for the code, the output
it gives (one run of many, since it's a bit erratic), and what I would
expect as output. Any help would be appreciated...


 Code was compiled with

 mpif90 -O0 -g -fbounds-check -Wall test_mpi.f90 -o test_mpi

 - > code - cut here --

program testmpi

  use iso_fortran_env

  implicit none

  include 'mpif.h'

  integer, parameter :: ni=16, nj=16, nk=16

  integer, parameter :: stdout=output_unit, stderr=error_unit, &
                        stdin=input_unit

  integer :: istep, idest, idx, &
             ierr, my_rank, world, nprocs

  ! > CODE STARTS --- *

  call MPI_Init(ierr)

  world = MPI_COMM_WORLD
  call MPI_comm_rank(world, my_rank, ierr)
  call MPI_comm_size(world, nprocs, ierr)

  call MPI_Barrier(world, ierr)

  do istep=1, nprocs

     idest = ieor(my_rank, istep)

     if(my_rank.eq.0) print '("*",/)'
     call flush(stdout)

     call MPI_Barrier(world, ierr)

     do idx=0, nprocs-1

        if(idx.eq.my_rank .and. idest.lt.nprocs) then
           print '("ISTEP",I2," IDX",I2," my_rank ",I5," idest ",I5)', &
                 istep, idx, my_rank, idest
           call flush(stdout)
        endif

        call MPI_Barrier(world, ierr)
     enddo

     call MPI_Barrier(world, ierr)

  enddo

  call MPI_Barrier(world, ierr)
  call MPI_Finalize(ierr)

end program testmpi

 - < code - cut here --

 - > output - cut here --

*

ISTEP 1 IDX 1 my_rank 1 idest 0
ISTEP 2 IDX 1 my_rank 1 idest 3
ISTEP 1 IDX 2 my_rank 2 idest 3
ISTEP 2 IDX 2 my_rank 2 idest 0
ISTEP 1 IDX 3 my_rank 3 idest 2
ISTEP 1 IDX 0 my_rank 0 idest 1
*

ISTEP 2 IDX 0 my_rank 0 idest 2
ISTEP 2 IDX 3 my_rank 3 idest 1
ISTEP 3 IDX 3 my_rank 3 idest 0
ISTEP 3 IDX 1 my_rank 1 idest 2
ISTEP 3 IDX 2 my_rank 2 idest 1
*

ISTEP 3 IDX 0 my_rank 0 idest 3
*

 - < output - cut here --



 - > expected output - cut here --

*

ISTEP 1 IDX 0 my_rank 0 idest 1
ISTEP 1 IDX 1 my_rank 1 idest 0
ISTEP 1 IDX 2 my_rank 2 idest 3
ISTEP 1 IDX 3 my_rank 3 idest 2

*

ISTEP 2 IDX 0 my_rank 0 idest 2
ISTEP 2 IDX 1 my_rank 1 idest 3
ISTEP 2 IDX 2 my_rank 2 idest 0
ISTEP 2 IDX 3 my_rank 3 idest 1

*

ISTEP 3 IDX 0 my_rank 0 idest 3
ISTEP 3 IDX 1 my_rank 1 idest 2
ISTEP 3 IDX 2 my_rank 2 idest 1
ISTEP 3 IDX 3 my_rank 3 idest 0

 - < expected output - cut here --

 Ricardo Reis

 'Non Serviam'

 PhD candidate @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 http://www.lasef.ist.utl.pt

 Cultural Instigator @ Rádio Zero
 http://www.radiozero.pt

 Keep them Flying! Ajude a/help Aero Fénix!

 http://www.aeronauta.com/aero.fenix

 http://www.flickr.com/photos/rreis/

[OMPI users] Array Declaration different approaches

2009-11-14 Thread amjad ali
Hi All.

I have a parallel PDE/CFD code in Fortran.
Let us consider it as consisting of two parts:

1) Startup part: this includes reading the input, splitting and distributing
the data, building the neighborhood-information arrays, the grid arrays, and
everything related. It contains most of the necessary array declarations.

2) Iterative part: here we advance the solution in time.


Approach One:

During the startup phase I declare most arrays as allocatable and then
allocate them with sizes that depend on the input and the domain
partitioning. In the iterative phase I use those arrays, but I do *not*
allocate or deallocate any new arrays there.


Approach Two:

What if I first run only the startup phase of my parallel code, with the
allocatable arrays as above, and record the sizes needed for a specific
problem size and partitioning? Then I use those values as constants in
another version of the code, in which the arrays are declared with those
fixed sizes.

So my question is: will there be any significant performance/efficiency
difference in the ITERATIVE part if approach two is used (arrays declared
with fixed sizes)?
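
Not Amjad's actual code, just a minimal sketch of what the two approaches
could look like for a hypothetical field array phi (names and sizes are made
up):

! Approach one: allocatable array, sized at run time from the input
! and the domain partitioning (done once in the startup part).
module fields_alloc
  implicit none
  real, allocatable :: phi(:,:,:)
contains
  subroutine setup_fields(ni, nj, nk)
    integer, intent(in) :: ni, nj, nk
    allocate(phi(ni, nj, nk))
  end subroutine setup_fields
end module fields_alloc

! Approach two: sizes hard-coded as constants for one specific case.
module fields_fixed
  implicit none
  integer, parameter :: ni = 16, nj = 16, nk = 16
  real :: phi(ni, nj, nk)
end module fields_fixed

Once an allocatable array has been allocated in the startup part, the compute
loops in the iterative part index it just like a fixed-size array, so the
difference is usually small as long as nothing is allocated or deallocated
inside the time loop; hard-coded bounds mainly give the compiler a little
more information to optimize with.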




ANOTHER QUESTION, ABOUT CALLING SUBROUTINES:
Assume two ways:
1) One way is to declare the arrays in some global module and "USE" that
module in whichever subroutines need the arrays.

2) The other is to pass a large number of arrays (maybe 10, 20, or 30) as
arguments when calling a subroutine.

Which way is faster and more efficient than the other?
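
Again only a sketch, with made-up names, to show the two calling styles side
by side:

! Style 1: arrays live in a module and are USE'd wherever needed.
module grid_data
  implicit none
  real, allocatable :: u(:), v(:), w(:)   ! allocated in the startup part
end module grid_data

subroutine update_module_style(n)
  use grid_data                  ! u, v, w come in through the module
  implicit none
  integer, intent(in) :: n
  integer :: i
  do i = 1, n
     w(i) = u(i) + v(i)
  end do
end subroutine update_module_style

! Style 2: the same arrays passed explicitly as arguments.
subroutine update_argument_style(n, u, v, w)
  implicit none
  integer, intent(in)  :: n
  real,    intent(in)  :: u(n), v(n)
  real,    intent(out) :: w(n)
  integer :: i
  do i = 1, n
     w(i) = u(i) + v(i)
  end do
end subroutine update_argument_style

Passing an array argument essentially passes an address (explicit-shape
dummies as above need no interface), so for contiguous arrays the per-call
overhead is normally tiny compared with the work done inside a CFD kernel;
the choice is usually more about readability and avoiding hidden global
state than about speed.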




Thank you for your kind attention.

With best regards,
Amjad Ali.


[OMPI users] get the process Id of mpirun

2009-11-14 Thread Kritiraj Sajadah
Dear All,
  I am trying to get the process ID of mpirun from within my MPI
application. When I use getpid() and getppid(), I get the PID of my
application and the PID of "orted --daemonize -mca..." respectively.
Is there a way to get the PID of mpirun? In this case, it looks like it is
the grandparent of the application.
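
For the single-node case, here is a minimal sketch of one way to attempt
this. It is Linux-only (it parses /proc), it relies on the GNU Fortran
getpid/getppid extensions, and it simply assumes that mpirun really is the
local grandparent, which, as noted later in the thread, need not be true:

program find_mpirun_pid
  implicit none
  integer :: ppid, gppid

  ! getpid()/getppid() are GNU Fortran extensions (gfortran).
  ppid  = getppid()          ! usually orted in the setup described above
  gppid = parent_of(ppid)    ! one level further up: hopefully mpirun

  print '("my pid        : ",I0)', getpid()
  print '("parent (orted): ",I0)', ppid
  print '("grandparent   : ",I0)', gppid

contains

  ! Return the parent PID of an arbitrary process by reading the
  ! "PPid:" line of /proc/<pid>/status (Linux-specific).
  integer function parent_of(pid)
    integer, intent(in) :: pid
    character(len=64)   :: fname
    character(len=256)  :: line
    integer :: ios

    parent_of = -1
    write(fname,'("/proc/",I0,"/status")') pid
    open(unit=21, file=fname, action='read', status='old', iostat=ios)
    if (ios /= 0) return
    do
       read(21,'(A)', iostat=ios) line
       if (ios /= 0) exit
       if (line(1:5) == 'PPid:') then
          read(line(6:),*) parent_of
          exit
       end if
    end do
    close(21)
  end function parent_of

end program find_mpirun_pid

On remote nodes this breaks down: orted there is typically started via ssh,
so mpirun is not an ancestor of the application at all.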

Thank you 

Regards,

Raj





Re: [OMPI users] fortran and MPI_Barrier, not working?

2009-11-14 Thread Gus Correa

Alberto,
I mean, Alvaro,
I mean, Fernando,
I mean, Ricardo Reis ...

Hello there, man!

I think MPI doesn't guarantee that the output will come out ordered
by process rank, as in your expected output list.
Even MPI_Barrier doesn't sync the output, I suppose.
It syncs only the communication among the processes,
and you actually have no communication in your code!
(Other than the barrier itself, of course.)

You have a different stdout buffer for each process,
and the processes probably compete for access
to the (single) output file
when they hit "call flush", I would guess.
The Linux scheduler may call the game here,
deciding who comes in first, second, third, etc.
But I'm not knowledgeable about these things,
I am just wildly guessing.

Note that both lists you sent have exactly the same lines,
though in a different order.
I think this tells you that there is nothing wrong
with MPI_Barrier or with your code.
A shuffled output order is to be expected, no more, no less.
And the order will probably vary from run to run, right?

Also, in your outer loop istep runs from 1 to 4,
and process rank zero prints an asterisk at each outer-loop iteration.
Hence, I think four asterisks, not three, should be expected, right?
Four asterisks are what I see in your first list (the shuffled one),
not in the ordered one.

Now, the question is how to produce the
ordered output you want.

One way would be to send everything to process 0
and let it order the messages, à la "hello_world",
but this would be kind of cheating.
Maybe there is a solution with MPI-IO,
to concatenate the output file the way you want first,
then flush it.
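
A minimal sketch of that "send everything to process 0" idea (not from the
thread; the message text is just an example): each rank formats its line
into a fixed-length string, and rank 0 gathers them and prints in rank
order.

program ordered_print
  implicit none
  include 'mpif.h'
  integer, parameter :: linelen = 80
  character(len=linelen) :: line
  character(len=linelen), allocatable :: lines(:)
  integer :: ierr, my_rank, nprocs, i

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Each rank builds its own line of output.
  write(line,'("hello from rank ",I5," of ",I5)') my_rank, nprocs

  ! Gather all lines on rank 0 (the buffer is only used there,
  ! but allocating it everywhere keeps the sketch simple).
  allocate(lines(nprocs))
  call MPI_Gather(line,  linelen, MPI_CHARACTER, &
                  lines, linelen, MPI_CHARACTER, &
                  0, MPI_COMM_WORLD, ierr)

  ! Rank 0 prints everything in rank order.
  if (my_rank == 0) then
     do i = 1, nprocs
        print '(A)', trim(lines(i))
     end do
  end if

  deallocate(lines)
  call MPI_Finalize(ierr)
end program ordered_print

The MPI-IO route would be similar in spirit: each rank writes its
fixed-length record at offset my_rank*linelen with something like
MPI_File_write_at_all, so the file comes out in rank order without
funneling everything through rank 0.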

**

Argh, merciless binary logic!

"Where may a weak human take shelter,
Where will his short life be safe,
So that the serene Heaven does not take arms and rage
against so small a creature of the earth?"

Tell me?

Regards,
Gus
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
Ricardo Reis wrote:

[... original message quoted in full above ...]

Re: [OMPI users] get the process Id of mpirun

2009-11-14 Thread Ralph Castain
Not that I know of - mpirun may not even be on the same node!

And we certainly don't pass that information to the remote processes.

On Nov 14, 2009, at 8:05 AM, Kritiraj Sajadah wrote:

> Dear All,
>  I am trying to get the process ID of mpirun from within my MPI
> application. When I use getpid() and getppid(), I get the PID of my
> application and the PID of "orted --daemonize -mca..." respectively.
> Is there a way to get the PID of mpirun? In this case, it looks like it
> is the grandparent of the application.
> 
> Thank you 
> 
> Regards,
> 
> Raj
> 
> 
> 