Hello,

On Mon, Mar 10 2025, Zhang lv via Gcc wrote:
> Hi there,
>
> I'm a Master of Computer Science student at the University of Melbourne.
> Previously, I worked on implementing a GCC optimization prefetching pass
> (which involved loop unrolling) for an Alpha-like computer architecture.
> I'm set to complete my research project in July and graduate soon.

Great, that sounds like you are definitely qualified.  Is this
prefetching project available anywhere?

>
> I'm very interested in applying for the 2025 GSoC project and discussing
> potential ideas with the community. However, I'm still unfamiliar with the
> best way to engage in these discussions and apply for the project, so this
> is my first attempt at reaching out.

I think emailing gcc@gcc.gnu.org and fort...@gcc.gnu.org is exactly the
right thing to do.

>
> My primary interest is in projects related to auto-parallelization,
> particularly the Fortran *DO CONCURRENT* project. Below, I outline my
> initial understanding of the project and would appreciate any feedback from
> the community to ensure I'm on the right track:
>

I am not a Fortran front-end developer but...

>    1. The *front-end parser* processes the Fortran *DO CONCURRENT* syntax
>    and converts it into a language-independent IR—*GIMPLE*.

This sounds correct.  The important bit, AFAIU, is that the IR would
utilize the gimple statements for parallel execution that have so far
been created only for OpenMP input.
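
(For reference, the construct being parsed looks like the following;
the locality specifiers are my own F2018-style illustration, not
anything from the project itself:

  do concurrent (i = 1:n) local(tmp) shared(a, b)
    tmp = b(i) * 2
    a(i) = tmp
  end do

The idea, as I understand it, is that the front end would emit the
same kind of gimple statements for this, e.g. GIMPLE_OMP_PARALLEL /
GIMPLE_OMP_FOR, that it already emits for OpenMP constructs.)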

>    2. The *middle-end* then applies tree optimizations, utilizing SSA
>    passes to optimize the code for auto-parallelization.

Not necessarily; some OpenMP bits are processed in gimple form that is
not yet in SSA.  I also don't think any auto-parallelization is
involved; the whole point of the task is to implement manual
parallelization, I believe?
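
(If it helps to see that pre-SSA form concretely, one can write the
OpenMP counterpart by hand, a minimal sketch:

  !$omp parallel do
  do i = 1, n
    a(i) = b(i) + c(i)
  end do
  !$omp end parallel do

and compile it with gfortran -fopenmp -fdump-tree-omplower; the
omplower dump shows the gimple that the OpenMP machinery produces
before the code goes into SSA, which is presumably close to what a
lowered DO CONCURRENT would look like too.)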

>    3. This project will involve both *front-end* work (parser and
>    parameterized command-line flags) and *middle-end* optimizations
>    (optimization passes).

Right, though the middle end bits might be mostly about proper
"lowering" rather than optimization.

>
> Loop unrolling is influenced by multiple factors, such as loop nesting and
> whether iteration counts are deterministic. A significant performance gain
> comes from reducing array memory access delays, which techniques like
> prefetching can help with. Since *DO CONCURRENT* assumes iteration
> independence, we have more flexibility to unroll loops and implement
> parallelization efficiently.

Frankly, I don't quite understand the above.  If anything, I'd expect
iteration independence makes unrolling much harder.  But I fail to see
how unrolling is in any way relevant.
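
For what it's worth, a minimal example (my own) of what the
independence guarantee means here:

  ! an ordinary DO loop may carry a dependence across iterations:
  do i = 2, n
    a(i) = a(i-1) + 1
  end do

  ! rewriting it as DO CONCURRENT would be invalid, because the
  ! programmer thereby asserts the iterations can run in any order:
  do concurrent (i = 2:n)
    a(i) = a(i-1) + 1   ! not allowed by the DO CONCURRENT rules
  end do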

Good luck!

Martin


>
> One of the most exciting advantages of this project is that it enables
> auto-parallelization for a broader range of code without requiring
> developers to be OpenMP experts. *DO CONCURRENT* in Fortran is much
> easier to use, and previous auto-parallelization techniques have been quite
> limited in their applicability, as only specific loop structures could
> benefit from them.
>
> I look forward to engaging with the community and gaining insights on how
> best to contribute to this project.
>
> Best regards,
>
> Chenlu Zhang
