amjad ali wrote:
Hi All.

I have a parallel PDE/CFD code in Fortran.
Let us consider it as consisting of two parts:

1) Startup part: this includes reading the input, splitting and distributing the domain, and forming the neighborhood-information arrays, grid arrays, and everything related. It also includes most of the necessary array declarations.

2) Iterative part: here we advance the solution in time.


Approach One:
============
What I do is that during the startup phase I declare most of the arrays as allocatable and then allocate them with sizes that depend on the input reads and the domain partitioning. In the iterative phase I only use those arrays; I do "not" allocate or deallocate any new arrays there.
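
A minimal sketch of what I mean (nelem, niter, u and rhs are only placeholder names, not my real arrays):

! Approach One (sketch): allocate once during startup, reuse in the iterations.
program approach_one
   implicit none
   real, allocatable :: u(:), rhs(:)
   integer :: nelem, niter, it

   ! Startup phase: sizes known only after the input read and domain split.
   nelem = 100000        ! placeholder; would come from the input/partitioning
   niter = 1000
   allocate(u(nelem), rhs(nelem))
   u   = 0.0
   rhs = 1.0e-6

   ! Iterative phase: the arrays are only used, never (de)allocated.
   do it = 1, niter
      u = u + rhs        ! stands in for the real time-stepping update
   end do

   print *, u(1)
   deallocate(u, rhs)
end program approach_one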


Approach Two:
============
The idea is to first run only the startup phase of my parallel code, with the allocatable arrays as above, and record the sizes required for a specific problem size and partitioning. I would then use these values as constants in another version of the code, in which the arrays are declared with those fixed sizes.
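
Roughly, that second version would look like this (again with placeholder names, and the constant taken from the earlier startup-only run):

! Approach Two (sketch): sizes recorded from a startup-only run are
! hard-coded as compile-time constants.
program approach_two
   implicit none
   integer, parameter :: nelem = 100000   ! value recorded from the startup-only run
   integer, parameter :: niter = 1000
   real :: u(nelem), rhs(nelem)
   integer :: it

   u   = 0.0
   rhs = 1.0e-6

   do it = 1, niter
      u = u + rhs        ! stands in for the real time-stepping update
   end do

   print *, u(1)
end program approach_two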

So my question is: will there be any significant performance/efficiency difference in the "ITERATIVE part" if Approach Two is used (i.e., with the arrays declared at fixed, compile-time sizes)?


--------------------
ANOTHER QUESTION ABOUT CALLING SUBROUTINES:
Assume two ways:
1) One way is to declare the arrays in some global module and "USE"
that module in whichever subroutines need them.


2) The other way is to pass a large number of arrays (maybe 10, 20, or 30) as arguments
when calling a subroutine.

Which way is faster and more efficient than the other? (A rough sketch of both styles is below.)
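
Something like this, with placeholder names, is what I have in mind for the two ways:

! Way 1 (sketch): the arrays live in a module and are USEd where needed.
module fields
   implicit none
   real, allocatable :: u(:), rhs(:)
end module fields

subroutine update_via_module(n)
   use fields
   implicit none
   integer, intent(in) :: n
   u(1:n) = u(1:n) + rhs(1:n)
end subroutine update_via_module

! Way 2 (sketch): the same arrays are passed explicitly as arguments.
subroutine update_via_args(n, a, b)
   implicit none
   integer, intent(in) :: n
   real, intent(inout) :: a(n)
   real, intent(in)    :: b(n)
   a = a + b
end subroutine update_via_args

program compare_calls
   use fields
   implicit none
   integer, parameter :: n = 1000
   allocate(u(n), rhs(n))
   u   = 0.0
   rhs = 1.0
   call update_via_module(n)          ! way 1: no arrays in the argument list
   call update_via_args(n, u, rhs)    ! way 2: arrays passed through the call
   print *, u(1)
end program compare_calls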
  
Interesting question, but probably outside the scope of this mailing list. Try some timing experiments with your particular compiler, and/or ask on a Fortran list.
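
For example, something along these lines (standard system_clock around the iterative part only; the loop body is just a stand-in):

! Timing sketch: wrap only the iterative part with system_clock.
program time_loop
   implicit none
   integer, parameter :: n = 100000, niter = 1000
   real, allocatable :: u(:), rhs(:)
   integer :: t0, t1, rate, it

   allocate(u(n), rhs(n))
   u   = 0.0
   rhs = 1.0e-6

   call system_clock(count_rate=rate)
   call system_clock(t0)
   do it = 1, niter
      u = u + rhs
   end do
   call system_clock(t1)

   print *, 'iterative part took', real(t1 - t0) / real(rate), 'seconds'
end program time_loop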
