Re: which opt. flags go where? - references
Hi Kenneth, Just found your post in the gcc archive ;) - I wanted to say that there have been many projects on the topic of optimal compiler flag selection. In fact, several free and commercial tools have been available for years: PathOpt from PathScale and ESTO from IBM, for example. We switched to PathOpt in 2003 and currently use it for our research since it is easily configurable, works with any language, does not require project modifications and has many extendable basic search strategies (random, one by one, all but one). We easily coupled it with the PAPI library to obtain hardware counters, retargeted it to work with GCC and the Intel compilers, and added different machine learning algorithms. If you are interested, you can find more information about these tools and search techniques at the global and fine-grain level, including publications and links to groups working on iterative optimization, on my webpage here: http://fursin.net/research_iter.html

Also, if you are interested, we will present our new paper "Rapidly Selecting Good Compiler Optimizations using Performance Counters" at CGO in a few days, where we use hardware counters and machine learning to select the best optimization flags for the PathScale compiler (Open64/ORC compiler). Unfortunately, I will not be able to come to CGO and present it due to some passport problems, but my colleagues will be there and you can chat with them about iterative optimization ...

By the way, we are now moving most of our tools and techniques to GCC within the MilePost project and the HiPEAC network, and would be happy for any comments and suggestions. Our development website is here: http://gcc-ici.sourceforge.net

Cheers, Grigori
===== Grigori Fursin, PhD Research Fellow, INRIA Futurs, France http://fursin.net

* From: Kenneth Hoste
* To: Diego Novillo
* Cc: gcc at gcc dot gnu dot org, Ben Elliston
* Date: Wed, 7 Feb 2007 15:35:15 +0100
* Subject: Re: which opt. flags go where? - references
* References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>

Hi, On 07 Feb 2007, at 15:22, Diego Novillo wrote: Kenneth Hoste wrote on 02/07/07 08:56:

[1] Almagor et al., Finding effective compilation sequences (LCES'04)
[2] Cooper et al., Optimizing for Reduced Code Space using Genetic Algorithms (LCTES'99)
[3] Almagor et al., Compilation Order Matters: Exploring the Structure of the Space of Compilation Sequences Using Randomized Search Algorithms (Tech. Report)
[4] Acovea: Using Natural Selection to Investigate Software Complexities (http://www.coyotegulch.com/products/acovea/)

You should also contact Ben Elliston (CC'd) and Grigori Fursin (sorry, no email). Ben worked on dynamic reordering of passes; his thesis will have more information about it. Grigori is working on an API for iterative and adaptive optimization, implemented in GCC. He presented at the last HiPEAC 2007 GCC workshop. Their presentation should be available at http://www.hipeac.net/node/746

I actually talked to Grigori about the -Ox flags, I was at the HiPEAC conference too ;-) I didn't include references to his work because my aim isn't the reordering of passes, but just selecting them. I understand that reordering is of great importance while optimizing, but I think this project is big enough as is.

Some other questions:

* I'm planning to do this work on an x86 platform (i.e. Pentium4), but richi told me that's probably not a good idea, because of the low number of registers available on x86. Comments?
When deriving ideal flag combinations for -Ox, we will probably want common sets for the more popular architectures, so I would definitely include x86.

OK. I think richi's comment on x86 was about the fact that evaluating the technique we are thinking about might produce results which are hard to 'port' to a different architecture. But then again, we won't be stating we have found _the_ best set of flags for a given goal... Thank you for your comment.

* Since we have done quite some analysis on the SPEC2k benchmarks, we'll also be using them for this work. Other suggestions are highly appreciated.

We have a collection of tests from several user communities that we use as performance benchmarks (DLV, TRAMP3D, MICO). There should be links to the testers somewhere in http://gcc.gnu.org/

OK, sounds interesting, I'll look into it. In which way are these benchmarks used? Just to test the general performance of GCC? Have they been compared to, say, SPEC CPU, or other 'research/industrial' benchmark suites (such as MiBench, MediaBench, EEMBC, ...)?

* Since there has been some previous work on this, I wonder why none of it has made it into GCC development. Were the methods proposed unfeasible fo
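As a rough illustration of the iterative flag selection discussed in this thread, here is a minimal Python sketch. The benchmark source ("bench.c"), the flag pool and the single timed run are placeholders; a real setup would repeat measurements and control for machine noise.

  import random
  import subprocess
  import time

  FLAG_POOL = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
               "-fomit-frame-pointer", "-fno-strict-aliasing"]

  def measure(flags):
      """Compile bench.c with the given flags on top of -O2 and time one run."""
      subprocess.check_call(["gcc", "-O2"] + flags + ["bench.c", "-o", "bench"])
      start = time.time()
      subprocess.check_call(["./bench"])
      return time.time() - start

  best_flags, best_time = [], measure([])   # plain -O2 baseline
  for _ in range(50):                       # 50 random points in the flag space
      trial = [f for f in FLAG_POOL if random.random() < 0.5]
      t = measure(trial)
      if t < best_time:
          best_flags, best_time = trial, t
  print("best flags:", best_flags, "time:", best_time)

The genetic and one-by-one strategies mentioned in the thread differ only in how the next flag combinations are generated from the results collected so far.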
GCC Interactive Compilation Interface development
Dear all, Just wanted to announce that we are working on the GCC Interactive Compilation Interface to enable automatic tuning of optimization heuristics. This interface is used in the HiPEAC, MilePost, SARC and GCCC projects. A website with the latest patches, software, forums, mailing lists and publications is available here: http://gcc-ici.sourceforge.net

The current prototype of GCC-ICI can reorder compiler optimization phases and fine-tune several transformations. We are currently working on the following issues:
* incrementally modifying Tree-SSA to support dynamic pass reordering
* splitting analysis and optimization code
* extracting program features
* adding support for fine-grain tuning of most of the GCC optimizations
* developing a component model for passes to enable dynamic linking of external optimization plugins
* developing a scripting language inside GCC to simplify and automate the transformation & optimization process

We presented our preliminary work at the SMART'07 workshop and the GCC HiPEAC tutorial, where I had a chat with Diego Novillo et al. about further GCC-ICI developments. We now plan to have an open discussion at the GCC-ICI forum about further developments and will be happy for any comments and suggestions.

Here is the formal motivation for our research and developments: Current innovations in science and industry demand ever-increasing computing resources while placing strict requirements on system performance, power consumption, code size, response, reliability, portability and design time. However, compilers often fail to deliver satisfactory levels of performance on modern processors, due to rapidly evolving hardware, the lack/cost of expert resources, fixed and black-box optimization heuristics, simplistic hardware models, the inability to fine-tune the application of transformations, and the highly dynamic behavior of the system. Recently, we started developing an Interactive Compilation Interface (ICI) to connect external optimization drivers to GCC. This interface is meant to facilitate the prototyping and evaluation of iterative optimization, fine-grain customization and design-space exploration strategies. The early design to enable non-intrusive feature extraction and meddling with heuristic decisions was presented at the SMART'07 workshop. Currently, we are working on a more advanced design, incrementally modifying Tree-SSA to support dynamic pass reordering, a structured split of analysis and optimization code, and a component model for passes to enable dynamic linking of external optimization plugins. We believe these modifications will simplify the tuning of new optimization heuristics and will eventually simplify the whole compiler design, where compiler heuristics will be learned automatically, continuously and transparently to the user using statistical and machine learning techniques.

Looking forward to your input, Grigori Fursin
========= Grigori Fursin, PhD Research Fellow, INRIA Futurs, France http://fursin.net/research_desc.html
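As a toy illustration of the machine-learning direction mentioned in the announcement above (learning compiler heuristics automatically from program features), the sketch below predicts a flag set for a new program by nearest-neighbour matching on feature vectors. The feature values, the training table and the flag sets are invented for the example; the actual feature extraction inside GCC-ICI is not shown.

  import math

  # (program feature vector, flags observed to work well) pairs from past runs
  TRAINING = [
      ([120.0, 0.8, 3.0], ["-O3", "-funroll-loops"]),
      ([15.0, 0.1, 1.0], ["-Os"]),
      ([300.0, 0.5, 7.0], ["-O2", "-ftree-vectorize"]),
  ]

  def distance(a, b):
      return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

  def predict_flags(features):
      """Return the flag set of the most similar previously seen program."""
      _, flags = min(TRAINING, key=lambda entry: distance(entry[0], features))
      return flags

  print(predict_flags([110.0, 0.7, 2.0]))   # -> ['-O3', '-funroll-loops']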
Continuous run-time adaptation and optimization of statically compiled programs
Hi all, Also wanted to announce that we are currently developing run-time adaptation techniques in GCC for statically compiled programs with varying context and behavior. Our technique relies on function/loop versioning and static low-overhead monitoring and adaptation routines. We extend our previous work on adaptation techniques for regular numerical codes with stable phases (presented at HiPEAC 2005) to codes with any (irregular) behavior by using time-slot run-time performance monitoring and statistical selection of appropriate versions. We use it for continuous program optimization, for speeding up iterative optimization and for auto-tuning of libraries. We also use this technique for program run-time adaptation on heterogeneous computing systems. Here is the development website: http://unidapt.sourceforge.net This adaptation technique is used in the HiPEAC, MilePost and SARC projects. Any comments and suggestions are welcome! Cheers, Grigori = Grigori Fursin, PhD Research Fellow, INRIA Futurs, France http://fursin.net/research_desc.html
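The UNIDAPT technique itself lives in statically compiled code, but the time-slot selection idea can be illustrated with a small Python sketch: time each available version of a routine over a few monitoring slots, then commit to the statistically fastest one. The two versions below are placeholders; in the real technique they would be clones generated with different optimizations.

  import statistics
  import time

  def version_baseline(data):
      return sum(x * x for x in data)

  def version_clone(data):          # stand-in for an "optimized" clone
      return sum(x * x for x in data)

  VERSIONS = [version_baseline, version_clone]

  def pick_version(data, slots=5):
      """Time every version over several slots and keep the statistically fastest."""
      timings = {v.__name__: [] for v in VERSIONS}
      for _ in range(slots):
          for v in VERSIONS:
              start = time.perf_counter()
              v(data)
              timings[v.__name__].append(time.perf_counter() - start)
      return min(VERSIONS, key=lambda v: statistics.median(timings[v.__name__]))

  best = pick_version(list(range(100000)))
  print("selected:", best.__name__)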
MiDataSets for MiBench to enable more realistic benchmarking and better tuning of the GCC optimization heuristic
Hi all, In case someone is interested, we are developing a set of inputs (MiDataSets) for the MiBench benchmark. Iterative optimization is now a popular technique to obtain performance or code size improvements over the default settings in a compiler. However, in most research projects, the best configuration is found for one arbitrary dataset and it is assumed that this configuration will work well with any other dataset that a program uses. We created 20 different datasets per program for the free MiBench benchmark to evaluate this assumption and analyze the behavior of various programs with multiple datasets. We hope that this will enable more realistic benchmarking and practical iterative optimization (iterative compilation), and can help to automatically improve the GCC optimization heuristic.

We just made a pre-release of the 1st version of MiDataSets and we made an effort to include only copyright-free inputs from the Internet. However, mistakes are possible - in such cases, please contact me to resolve the issue or remove the input. More information can be found at the MiDataSets development website: http://midatasets.sourceforge.net or in the paper:

Grigori Fursin, John Cavazos, Michael O'Boyle and Olivier Temam. MiDataSets: Creating The Conditions For A More Realistic Evaluation of Iterative Optimization. Proceedings of the International Conference on High Performance Embedded Architectures and Compilers (HiPEAC 2007), Ghent, Belgium, January 2007

Any suggestions and comments are welcome! Yours, Grigori Fursin = Grigori Fursin, PhD INRIA Futurs, France http://fursin.net/research
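As a sketch of the question MiDataSets is meant to answer (does the flag combination that wins on one dataset also win on the others?), the loop below finds the best candidate per dataset and then checks how many distinct winners there are. The program, dataset paths and candidate flag sets are placeholders.

  import subprocess
  import time

  DATASETS = ["data/input_%d.txt" % i for i in range(1, 21)]   # 20 datasets per program
  CANDIDATES = [["-O2"], ["-O3"], ["-O3", "-funroll-loops"], ["-Os"]]

  def run_time(flags, dataset):
      """Compile the benchmark with the given flags and time one run on a dataset."""
      subprocess.check_call(["gcc"] + flags + ["bench.c", "-o", "bench"])
      start = time.time()
      subprocess.check_call(["./bench", dataset])
      return time.time() - start

  best_per_dataset = {ds: min(CANDIDATES, key=lambda f: run_time(f, ds)) for ds in DATASETS}

  # If one configuration were universally best, this set would have a single element.
  print({tuple(f) for f in best_per_dataset.values()})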
RE: MiDataSets for MiBench to enable more realistic benchmarking and better tuning of the GCC optimization heuristic
Yes, that's right - without a good analysis part it is semi-useless. Actually, it can even make life harder since instead of one dataset you now have to try many ;) ... We did some preliminary analysis of compiler optimizations for programs with multiple datasets in our HiPEAC'07 paper using the PathScale compiler, and now we continue working on a better program characterization with multiple inputs and on how it should be properly used to improve the compiler heuristic. So, at the moment it is still more for research purposes. We had some requests to make these datasets public after the HiPEAC conference and the HiPEAC GCC tutorial so that some researchers and engineers can start looking at this issue. There was interest from some companies that develop embedded systems - they use GCC more and more, and they often use MediaBench and MiBench for benchmarking but find that using only two datasets may not be representative enough ... I now have a few projects where we use GCC and MiDataSets, so whenever we have more practical results, I will post them here! Hope it will be of some use, Grigori
===== Grigori Fursin, PhD INRIA Futurs, France http://fursin.net/research

On 3/19/07, Grigori Fursin <[EMAIL PROTECTED]> wrote:
> [MiDataSets announcement quoted in full; see the original post above]

I think this is nice but semi-useless unless you also look into why stuff is better. The analysis part is the hard part really, but it is the most useful part to figure out why GCC is failing to produce good code. An example of this: I was working on a patch which speeds up most code (and reduces code size there) but slows down some code (and increases code size too), and I found that scheduling and block-reordering decisions would change, which caused the code to become slower/larger. This analysis was necessary to figure out that my patch/pass was not directly causing the slower/larger code. The same kind of analysis is needed for any kind of heuristic tuning.

Thanks, Andrew Pinski
collaborative tuning of GCC optimization heuristic
Dear colleagues, If it's of interest, we have released a new version of our open-source framework to share compiler optimization knowledge across diverse workloads and hardware. We would like to thank all the volunteers who ran this framework and shared some results for GCC 4.9 .. 6.0 in the public repository here: http://cTuning.org/crowdtuning-results-gcc

Here is a brief note on how this framework for crowdtuning compiler optimization heuristics works (for more details, please see https://github.com/ctuning/ck/wiki/Crowdsource_Experiments): you just install a small Android app (https://play.google.com/store/apps/details?id=openscience.crowdsource.experiments) or the python-based Collective Knowledge framework (http://github.com/ctuning/ck). This program sends system properties to a public server. The server compiles a random shared workload using some flag combinations that have been found to work well on similar machines, as well as some new random ones. The client executes the compiled workload several times to account for variability etc., and sends the results back to the server. If a combination of compiler flags is found that improves performance over the combinations found so far, it gets reduced (by removing flags that do not affect the performance) and uploaded to a public repository. Importantly, if a combination significantly degrades performance for a particular workload, it gets recorded as well. This potentially points to a problem with optimization heuristics for a particular target, which may be worth investigating and improving.

At the moment, only global GCC compiler flags are exposed for collaborative optimization. Longer term, it can be useful to cover finer-grain transformation decisions (vectorization, unrolling, etc.) via the plugin interface.

Please note that this is a prototype framework and much more can be done! Please get in touch if you are interested in knowing more or contributing!

Take care, Grigori = Grigori Fursin, CTO, dividiti, UK
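The reduction step mentioned above (removing flags that do not affect the performance) can be sketched as follows; the measure() helper is assumed to compile and time the workload and is not shown here.

  def reduce_flags(flags, measure, tolerance=0.02):
      """Drop every flag whose removal keeps the run time within `tolerance`."""
      reference = measure(flags)
      reduced = list(flags)
      for flag in list(flags):
          trial = [f for f in reduced if f != flag]
          if measure(trial) <= reference * (1.0 + tolerance):
              reduced = trial           # this flag did not matter; drop it
      return reduced

  # Example: best = reduce_flags(["-O3", "-funroll-loops", "-fno-strict-aliasing"], measure)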
Re: collaborative tuning of GCC optimization heuristic
Hi David, Thanks a lot for a good question - I completely forgot to discuss that. The current workloads in CK are just there to test our collaborative optimization prototype. They are even a bit outdated (benchmarks and codelets from the MILEPOST project). However, our point is to make an open system where the community can add any workload via GitHub with some meta information in JSON format to be able to participate in collaborative tuning. This meta information exposes the data sets used, command lines, input/output files, etc. This helps add multiple data sets for a given benchmark or even reuse already shared ones. Finally, this meta information makes it relatively straightforward to apply predictive analytics to find correlations between workloads and optimizations.

Our hope is to eventually build a large and diverse pool of public workloads. In that case, users themselves can derive representative workloads for their requirements (performance, code size, energy, resource constraints, etc.) and their target hardware. Furthermore, since optimization spaces are huge and it is infeasible for one user or even one data center to explore them, our approach allows all shared workloads to continuously participate in crowdtuning, i.e. searching for good optimizations across diverse platforms while recording "unexpected behavior". Actually, adding more workloads to CK (while making this process more user-friendly) and tuning them could be a GSOC project - we can help with that ...

You can find more about our view here:
* http://arxiv.org/abs/1506.06256
* https://hal.inria.fr/hal-01054763

Hope it makes sense and take care, Grigori

On 05/03/2016 16:16, David Edelsohn wrote:
> On Sat, Mar 5, 2016 at 9:13 AM, Grigori Fursin wrote:
>> [announcement quoted in full; see the original post above]
>
> Thanks for creating and sharing this interesting framework.
>
> I think a central issue is the "random shared workload" because the optimal optimizations and optimization pipeline are application-dependent. The proposed changes to the heuristics may benefit the particular set of workloads that the framework tests, but why are those workloads and particular implementations of the workloads representative of applications of interest to end users of GCC? GCC is tuned for an arbitrary set of workloads, but why are the workloads from cTuning any better?
>
> Thanks, David
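To make the "meta information in JSON format" mentioned in the reply above more concrete, here is a purely hypothetical example of such a workload descriptor; the field names are invented for illustration and do not reproduce the actual CK schema.

  import json

  workload_meta = {
      "program": "susan",                                # illustrative MiBench-style workload
      "build_cmd": "gcc $CFLAGS susan.c -o susan -lm",
      "run_cmd": "./susan $DATASET out.pgm -s",
      "datasets": ["data/image1.pgm", "data/image2.pgm"],
      "outputs": ["out.pgm"],
      "tags": ["mibench", "image-processing"],
  }

  print(json.dumps(workload_meta, indent=2))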
Successful build and some autotuning of GCC 7.1.0 on Raspberry Pi 3
Dear GCC colleagues, I managed to build GCC 7.1.0 on a Raspberry Pi 3 (Raspbian GNU/Linux 8 (jessie), GCC 4.9.2). It required a combination of specific configuration flags:

--with-cpu=cortex-a53 --with-fpu=neon-fp-armv8 --with-float=hard --build=arm-linux-gnueabihf --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf --disable-bootstrap

By the way, if it's of any interest, I also automated such customized builds using the Collective Knowledge workflow framework:

$ sudo pip install ck
$ ck pull repo:ck-dev-compilers
$ ck install package:compiler-gcc-any-src-linux-no-deps --env.PARALLEL_BUILDS=1 --env.GCC_COMPILE_CFLAGS=-O0 --env.GCC_COMPILE_CXXFLAGS=-O0 --env.EXTRA_CFG_GCC=--disable-bootstrap --env.RPI3=YES

This also allowed us to autotune GCC 7.1.0 across shared programs/data sets and collect some "interesting" combinations of optimization flags here:
* http://tinyurl.com/k4jlwq5

If you build GCC via CK as above, you are welcome to participate in collaborative optimization and share stats for your platform/compiler as follows:

$ ck crowdsource program.optimization --iterations=80 --repetitions=3 --scenario=8289e0cf24346aa7ck

Cheers, Grigori
brief update on 2 GSOC'09 GCC projects
Dear all, I got a few minutes between vacations and wanted to give you a small update on the GSOC'09 developments for GCC by Liang Peng and Yuanjie Huang from ICT, China (in CC). One project was about extending GCC to enable fine-grain optimization selection (GRAPHITE optimizations, unrolling, vectorization, inlining, etc.) and reordering through ICI and plugins, and providing an XML representation of the compilation flow. The other project was about enabling generic function cloning, fine-grain optimization of clones and program instrumentation (i.e. adding call-backs during compilation) to enable run-time selection of optimizations (using standard machine learning techniques such as decision trees and predictive modelling). I think the students did a very good job and, though not everything was implemented, it was a very good start. The students documented their developments here:

http://ctuning.org/wiki/index.php/CTools:ICI:Projects:GSOC09:Function_cloning_and_program_instrumentation
http://ctuning.org/wiki/index.php/CTools:ICI:Projects:GSOC09:Fine_grain_tuning

We use the cTuning collaborative (performance tuning) website since many techniques originate from there and are still a bit experimental, but since the developments were successful and my colleagues are now extending MILEPOST GCC to improve program execution time, code size and compilation time automatically, from now on we plan to sync everything with GCC mainline and continue this work using the GCC mailing list and the website. We will continue using cTuning more for collaborative program optimization and tuning of the GCC optimization heuristic. When Liang and Yuanjie are back from vacations and I am back from my wedding trip in 3 weeks, we will continue discussions about that ... In the meantime, you can check the developments in the GCC ICI SVN branch called 'adapt' for GCC 4.4.0:

http://gcc-ici.svn.sourceforge.net/viewvc/gcc-ici/branches/gcc-4.4.0-ici-2.0-adapt/

(again, we hope to move all the developments from the external places to the GCC main site and to the mainline shortly). If you are interested in performance tuning and some iterative optimization results for GCC (and comparison with other compilers), including profitable cases for the SPEC and EEMBC benchmarks that improve program execution time, code size and compilation time, you can check the Collective Optimization Repository here: http://ctuning.org/cdatabase

Comments about these developments and suggestions to Yuanjie and Liang are welcome! By the way, I think an important part of GSOC'09 was that the students got much more familiar with GCC, and I hope that they and their research group at ICT will continue using and extending GCC from now on, and we will hopefully soon see some more interesting research results on automating fine-grain program optimization and compiler optimization heuristic construction using collective compilation and machine learning techniques ... I have to leave now and will get back in touch at the end of September ;) ... Take care, Grigori
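The "run-time selection of optimizations using decision trees" mentioned above can be illustrated, very loosely, with the following Python sketch: a decision tree trained offline picks which function clone to dispatch to based on run-time features. The feature names, training data and clone labels are invented, and the real GSOC'09 work operates inside GCC rather than in Python.

  from sklearn.tree import DecisionTreeClassifier

  # features: [input size, cache-miss rate]; label: which clone to dispatch to
  X = [[1000, 0.02], [1000000, 0.30], [5000, 0.05], [2000000, 0.40]]
  y = ["clone_unrolled", "clone_blocked", "clone_unrolled", "clone_blocked"]

  model = DecisionTreeClassifier(max_depth=2).fit(X, y)
  print(model.predict([[800000, 0.25]]))    # e.g. ['clone_blocked']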
CFP reminder: GROW'10 - 2nd Workshop on GCC Research Opportunities
Apologies if you receive multiple copies of this call.

CALL FOR PAPERS

2nd Workshop on GCC Research Opportunities (GROW'10)
http://ctuning.org/workshop-grow10
January 23, 2010, Pisa, Italy (co-located with HiPEAC 2010 Conference)

The GROW workshop focuses on current challenges in research and development of compiler analyses and optimizations based on the free GNU Compiler Collection (GCC). The goal of this workshop is to bring together people from industry and academia that are interested in conducting research based on GCC and enhancing this compiler suite for research needs. The workshop will promote and disseminate compiler research (recent, ongoing or planned) with GCC, as a robust industrial-strength vehicle that supports free and collaborative research. The program will include an invited talk and a discussion panel on future research and development directions of GCC.

Topics of interest

Any issue related to innovative program analysis, optimizations and run-time adaptation with GCC, including but not limited to:
* Classical compiler analyses, transformations and optimizations
* Power-aware analyses and optimizations
* Language/Compiler/HW cooperation
* Optimizing compilation tools for heterogeneous/reconfigurable/multicore systems
* Tools to improve compiler configurability and retargetability
* Profiling, program instrumentation and dynamic analysis
* Iterative and collective feedback-directed optimization
* Case studies and performance evaluations
* Techniques and tools to improve usability and quality of GCC
* Plugins to enhance research capabilities of GCC

Paper Submission Guidelines

Submitted papers should be original and not published or submitted for publication elsewhere; papers similar to published or submitted work must include an explicit explanation. Papers should use the LNCS format and should be 12 pages maximum. Please submit via the easychair system at the GROW'10 website. Papers will be refereed by the Program Committee and, if accepted and if the authors wish, will be made available on the workshop web site. Authors of the best papers from the workshop may be invited to revise their submission for the journal "Transactions on HiPEAC", if the work is in sufficiently mature form.

Important Dates

Deadline for submission: November 13, 2009
Decision notification: December 14, 2009
Workshop: January 23, 2010 (half-day)

**** Organizers ****

Grigori Fursin, INRIA, France
Dorit Nuzman, IBM, Israel

Program Committee

Arutyun I. Avetisyan, ISP RAS, Russia
Zbigniew Chamski, Infrasoft IT Solutions, Poland
Albert Cohen, INRIA, France
David Edelsohn, IBM, USA
Bjorn Franke, University of Edinburgh, UK
Grigori Fursin, INRIA, France
Benedict Gaster, AMD, USA
Jan Hubicka, SUSE
Paul H.J. Kelly, Imperial College London, UK
Ondrej Lhotak, University of Waterloo, Canada
Hans-Peter Nilsson, Axis Communications, Sweden
Diego Novillo, Google, Canada
Dorit Nuzman, IBM, Israel
Sebastian Pop, AMD, USA
Ian Lance Taylor, Google, USA
Chengyong Wu, ICT, China
Kenneth Zadeck, NaturalBridge, USA
Ayal Zaks, IBM, Israel

Keynote talk

Diego Novillo, Google, Canada
"Using GCC as a toolbox for research: GCC plugins and whole-program compilation"

Previous Workshops

GROW'09: http://www.doc.ic.ac.uk/~phjk/GROW09
GREPS'07: http://sysrun.haifa.il.ibm.com/hrl/greps2007
RE: plugin hooks
Hi Joern, I think, that's very reasonable. Just as we agreed, wait a few days until you are covered by the INRIA agreement to transfer copyright to FSF ... I should be able to tell you more within a few days ... Also, I added Yuanjie, Liang (they did GSOC developments) and Yuri in CC since they will help with testing and extending the ICI and GSOC developments... Cheers, Grigori > -Original Message- > From: Joern Rennecke [mailto:amyl...@spamcop.net] > Sent: Monday, November 02, 2009 1:50 PM > To: Grigori Fursin > Cc: 'Zbigniew Chamski'; 'Richard Guenther'; 'Basile STARYNKEVITCH'; 'Ian > Lance Taylor'; 'GCC > Mailing List'; 'Albert Cohen'; ctuning-discussi...@googlegroups.com; Yuri > Kashnikoff > Subject: RE: plugin hooks > > Quoting Grigori Fursin : > > Also, I hope that we will start collaborating with Joern Rennecke in > > a few weeks to update the ICI > > and GSOC'09 > > > > developments based on the recent feedback to see if we can move it > > to the mainline ... > > We still need a branch name for that. Since GCC is currently in phase 3, > I suppose we should plan on re-baselining at least once - re-baselining > from 4.6 experimental mainline after 4.5 is branched - so instead of > a gcc version number a branch date seems more suitable. > > I.e. if the branch were to be created today, I propose to use the name > branches/ici-20091102-branch for the branch.
RE: new plugin events
Hi Basile et al,

> My suggestion to ICI friends is: just propose quickly your needed plugin events, and make your ICI a GPLv3 plugin.
> When you can show that your ICI plugin to an *unmodified* gcc-4.5 brings some value, GCC people will perhaps start to listen and look inside.

Just to mention that I am a bit confused, because I actually don't expect to have problems moving ICI to the mainline unless we find some big bugs that can change GCC behavior (but I really don't think so). We had many online and offline discussions about moving ICI to mainline GCC in the last few years with GCC colleagues/maintainers. We just sadly got delayed at INRIA this summer due to different reasons, but Joern is now working with us for 2 months full time to clean and test ICI and submit patches as soon as they are ready.

It's true that we actually need a few hooks and Joern will communicate about that shortly, BUT these hooks are already used in real plugins for real performance tuning (in the same way as current hooks are used in Dehydra for real program analysis in several companies). Our performance results are gradually being added to the online performance database at http://cTuning.org/cdatabase for EEMBC, SPEC and other programs across multiple architectures, which real users and companies are using to optimize their real code... A few days ago I got feedback from the Loongson group that they considerably sped up EEMBC on their latest processor using GCC 4.4.x, and they should upload the results to the database shortly ... They have been actively working with us, using and extending ICI ...

That's why, now that we have shown real results, we would like to have a MINIMAL ICI in mainline GCC, but patches for other extensions, including the GSOC'09 projects, will be submitted to GCC only after testing ... We will keep in touch about that, Grigori
RE: new plugin events
That's very reasonable and it's our eventual goal too, so we will start discussions about that in detail whenever ICI is clean. By the way, just to mention that I am working with a student (Yuri) to provide/understand/describe/characterize performance dependencies and the interaction between passes using ICI, to be able to make the selection and ordering of passes more systematic and avoid having the same passes multiple times in the compilation sequence just because they may potentially improve the code. We use ICI with the GSOC'09 extensions and it is so far very simple to manipulate passes through plugins (we use XML outside to save all IP passes and passes per function). This is a long term project, but as soon as we have some promising results, we will tell you ;) ... Cheers, Grigori

> -Original Message-
> From: Richard Guenther [mailto:richard.guent...@gmail.com]
> Sent: Saturday, November 07, 2009 3:06 PM
> To: Grigori Fursin
> Cc: Basile STARYNKEVITCH; Steven Bosscher; Diego Novillo; Rafael Espindola; gcc; Joern Rennecke; Zbigniew Chamski
> Subject: Re: new plugin events
>
> On Sat, Nov 7, 2009 at 1:24 PM, Grigori Fursin wrote:
> > Hi Basile et al,
> > [earlier message quoted in full; trimmed]
>
> And I don't expect problems in adding hooks that ICI needs. I expect
> that ICI is a reason to improve GCCs pass manager - and I expect that
> we will improve GCCs pass manager not by simply adding hooks to it
> to "fix" it from inside plugins, but I expect that we'll get a more powerful
> pass manager _inside_ GCC. I also expect or at least hope that more
> parts of the compilation process get driven by the pass manager rather
> than by ad-hoc code gluing several phases of pass manager driven
> stages.
>
> Richard.
RE: new plugin events
Basile, I understand your constraints and concerns. I personally would also be happy to see ICI and the pass manager in GCC soon, BUT it was a delay on our side that prevented submission/checking of the patch, so I am just taking the pragmatic approach of preparing an ICI patch first (well, actually not me but Joern, who is now working full time on that), testing it, and then submitting it and discussing with everyone and you whether it's reasonable. ONLY THEN, depending on whether the changes are small and whether GCC 4.5.0 is not yet closed, will we negotiate to move it to the mainline. But at the moment, before submitting it, it's just a gamble whether it can go through or not, and I personally don't want to do that because we would annoy all the other GCC people who are working hard to make current GCC stable ... So, let's continue ICI discussions as soon as the ICI patch is ready ;) ... Cheers, Grigori

> -Original Message-
> From: Basile STARYNKEVITCH [mailto:bas...@starynkevitch.net]
> Sent: Saturday, November 07, 2009 3:45 PM
> To: Richard Guenther
> Cc: Grigori Fursin; Steven Bosscher; Diego Novillo; Rafael Espindola; gcc; Joern Rennecke; Zbigniew Chamski
> Subject: Re: new plugin events
>
> Richard Guenther wrote:
> > [Grigori's and Richard's earlier messages quoted in full; trimmed]
>
> We probably all agree on goals, but perhaps less on timeline.
>
> My feeling (but I admit I don't understand well what stage 3 means precisely for gcc 4.5.0, in particular w.r.t. plugins
> & pass management, and why exactly stage 2 was skipped in 4.5) was up to now:
>
> 1. Only very small patches can go into 4.5. So the ICI pass manager won't go into 4.5.0, and any improved pass manager won't
> go into 4.5.0, only in 4.6.0. This probably means the last quarter of 2010 or the first quarter of 2011, since 4.4.0 was
> released in April 2009, 4.3.0 was released in March 2008, 4.2.0 was released in May 2007 and 4.1.0 at the end of
> February 2006. I am guessing 4.5.0 would be released for Christmas 2009 at the earliest, so 4.6.0 would go out at the end of
> 2010 in the best case.
>
> 2. I was hoping that the few PLUGIN_* hooks absolutely needed by ICI could go into 4.5. My
> intuition is that it really means small unobtrusive patches which might be accepted before Christmas 2009.
>
> In other words, is there hope to delay the 4.5.0 release for a month to get the ICI-improved pass
> manager inside it? In case the answer is yes, will we keep the same interface, so that the interface for
> PLUGIN_PASS_MANAGER_SETUP won't change with the improved pass manag
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developements
Hi Joern,

> After checking in the patch to provide unique pass names for all passes,
> I created
>
> svn://gcc.gnu.org/svn/gcc/branches/ici-20091108-branch
>
> and merged in the patches from:
>
> http://gcc-ici.svn.sourceforge.net/svnroot/gcc-ici/branches/patch-gcc-4.4.0-ici-2.0
>
> Could you please check that this contains the functionality that we want to
> integrate in the first step.

Thanks a lot, Joern! I downloaded it and will be gradually checking it. In the meantime, Yuanjie, Liang and Yuri - could you please check this version ASAP and verify that the functionality provided during the GSOC'09 developments/pass reordering work is correct in this version!.. The reason is that since there will be some small changes, our plugins will have to change slightly as well (see the register_pass change)... By the way, we should keep track of such changes on the GCC Wiki for ICI ...

> FWIW I know that the code does not conform to the GNU coding standard yet.
>
> I've changed register_pass to register_pass_name to resolve the name clash.
> I'm not sure if it should be called register_pass_by_name or something else,
> opinions welcome.

I think register_pass_by_name will be better to show what it does now ;) ...

> Both the gcc 4.5 code and the ICI patches have the concept of events, but
> the implementations are so different that the functionality is pretty much
> orthogonal.
>
> 4.5 has a real C-style interface with an enum to identify the event and
> a single pointer for the data. I.e. low overhead, but rigid typing,
> and the different parts of the program presumably find their data by virtue
> of using the same header files.
> Multiple plugins can register a callback for any event, and all will get
> called. However, since the set of events is hard-coded by an enum
> declaration, you can't have two plugins communicating using events that
> the gcc binary doesn't know about.
>
> The ICI events feel much more like TCL variables and procedure calls.
> Events are identified by strings, and parameters are effectively global
> variables found in a hash table. This is very flexible and can allow
> a good deal of ABI stability, but costs a lot of processor cycles as
> before an event call the parameters are looked up to be entered in the hash
> table, and afterwards they are looked up to be removed, even if no callback
> is registered for the event.
> Also, when a plugin registers a callback for an ICI event, it overrides any
> previous callback registered by another (or even the same) plugin.

That's very true. Our current idea is that for prototyping it is often fine to slow down the compiler slightly, but as soon as development matures and there are some interesting results, the developers will try to persuade the GCC community to add the event permanently...

> I think we could have the ICI event flexibility/stability with lower
> overhead if the event sender requests an event identifier number (which
> can be allocated after the numbers of the gcc 4.5 static event enum values)
> for an event name at or before the first event with that name, and then
> sends this identifier number with one or more pointers, which might point
> to internal gcc data structures, and a pointer to a function to look up
> the address of a named parameter. The event sender site source code can
> then provide information to build the lookup functions at build time,
> e.g. using gperf.
>
> I.e.:
> /* Call an event with number ID, which is either a value of enum plugin_event,
> or a number allocated for a named event.
> If the event has named parameters,
> the first parameter after id should be as if declared
> void * (*lookup_func) (const char *, va_list) .
> LOOKUP_FUNC can be passed the name of a parameter as its first argument,
> and a va_list as its second argument, which will be the list of parameters
> after LOOKUP_FUNC, to find the named parameter. */
> void
> call_plugin_event (int id, ...)
> {
>   struct callback_info *callback;
>   va_list ap;
>
>   gcc_assert (id < event_id_max);
>   callback = plugin_callbacks[id];
>   va_start (ap, id);
>   for (callback = plugin_callbacks[id]; callback; callback = callback->next)
>     (*callback->func) ((void *) ap, callback->user_data);
>   va_end (ap);
> }

I discussed that with Zbigniew some months ago and I think it can be reasonable to add such functionality on top of the current ICI. ICI users will still prototype their ideas using events referenced by name; however, if it works fine, and before the potentially lengthy approval to get such an event added, they can speed up their plugins using this extended functionality... The good thing is that their plugins will still be compatible if we decide to keep the associated names together with the hardwired event numbers ... Cheers, Grigori
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developements
> After checking in the patch to provide unique pass names for all passes, > I created > > svn://gcc.gnu.org/svn/gcc/branches/ici-20091108-branch > > and merged in the patches from: > > http://gcc-ici.svn.sourceforge.net/svnroot/gcc-ici/branches/patch-gcc-4.4.0-ici-2.0 By the way, not to forget - we should compile/test GCC with ICI with the following libraries (orthogonal to ICI but we need them for our experiments): * gmp & mpfr (for fortran) * ppl & cloog (for GRAPHITE) i.e. I configure GCC with the following flags: configure --enable-languages=c,c++,fortran --with-mpfr=$BUILD_DIR --with-gmp=$BUILD_DIR --with-ppl=$BUILD_DIR --with-cloog=$BUILD_DIR I used it for the GCC 4.4.0 - maybe some GRAPHITE related flags changed ... The idea is to have the same setup that we used for our local developments ... Also, we have been using ICI with C and Fortran a lot, but never checked C++ - it will be important to check it too ... Cheers, Grigori
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developements
Hi Joern, > > I think we could have the ICI event flexibility/stability with lower > > overhead if the event sender requests an event identifier number (which > > can be allocated after the numbers of the gcc 4.5 static event enum values) > > for an event name at or before the first event with that name, and then > > sends this identifier number with one or more pointers, which might point > > to internal gcc data structures, and a pointer to a function to look up > > the address of a named parameter. The event sender site source code can > > then provide information to build the lookup functions at build time, > > e.g. using gperf. > > I thought a bit more about this, and decided that using gperf-generated hash > tables is probably overkill. > > It is useful to have provisions for the event generator and the event > callback being somewhat out of step, but do we really have to cater > for multiple sources of the same event providing their parameters in > a different way? > If there is only one way to find a parameter with a particular name for > a particular event (for a given combination of compiler binary and plugins), > that this can be defined with an accessor function, which would > generally be defined in the same module which raises the event. > Actually, if we passed around the dso which raises the event, we could > dynamically look up the accessor function to allow co-existence of different > accessor functions for the same event::parameter tuple, but I don't think > there is a need for that. > > Here is an example of how I think we can reduce the overhead while keeping > a good amount of flexibility; in loop-unroll.c we currently have: > /* Code for loop-unrolling ICI decision enabling. */ > register_event_parameter ("loop->num", &(loop->num)); > register_event_parameter ("loop->ninsns", &(loop->ninsns)); > register_event_parameter ("loop->av_ninsns", &(loop->av_ninsns)); > > register_event_parameter ("loop->lpt_decision.times", > &(loop->lpt_decision.times)); > register_event_parameter ("loop->lpt_decision.decision", > &(loop->lpt_decision.decision)); > register_event_parameter ("loop->lpt_decision.unroll_runtime", >loop->lpt_decision.decision == LPT_UNROLL_RUNTIME ? > (void *) 1 : (void *) 0); > register_event_parameter ("loop->lpt_decision.unroll_constant", >loop->lpt_decision.decision == LPT_UNROLL_CONSTANT ? > (void *) 1 : (void *) 0); > > call_plugin_event("unroll_feature_change"); > > unregister_event_parameter ("loop->num"); > unregister_event_parameter ("loop->ninsns"); > > unregister_event_parameter ("loop->av_ninsns"); > unregister_event_parameter ("loop->lpt_decision.times"); > unregister_event_parameter ("loop->lpt_decision.decision"); > > Instead we could have: > > invoke_plugin_va_callbacks (PLUGIN_UNROLL_FEATURE_CHANGE, loop); > and then accessor functions: > int > plugin_unroll_feature_change_param_loop_num (va_list va) > { >struct loop *loop = va_arg (va, struct loop *); >return loop->num; > } > > unsigned > plugin_unroll_feature_change_param_loop_ninsns (va_list va) > { >struct loop *loop = va_arg (va, struct loop *); >return loop->ninsns; > } > > unsigned > plugin_unroll_feature_change_param_loop_av_ninsns (va_list va) > { >struct loop *loop = va_arg (va, struct loop *); >return loop->av_ninsns; > } > ... > bool > plugin_unroll_feature_change_param_loop_lpt_decision_unroll_runtime > (va_list va) > { >struct loop *loop = va_arg (va, struct loop *); >return loop->lpt_decision.decision == LPT_UNROLL_RUNTIME; > } > ... 
I am a bit confused about your above example - are you suggesting to add this functionality on top of the current ICI or to substitute it? If you want to substitute it, I personally disagree. We spent a very long time with many colleagues and real ICI users discussing how to simplify the usage of events for people who are not programming professionals, so I really prefer to keep the current structure as it is ... However, if it is a way to speed up slow prototype plugins and an addition to ICI, it may be fine, but I need to think about it more. In both cases, I think it is not critical for now and should be the second step after the current ICI is synced with the mainline. However, suggestions from others are very welcome ;) !..

> There is still another practical issue: as I change the ICI infrastructure
> to fit better with the existing gcc 4.5 plugin infrastructure,
> the ICI plugins must be adapted to keep working.
> As we are trying to have something working in a short time frame, I think
> I should pick one plugin and modify it in lock-step with the infrastructure
> to demonstrate how it is supposed to work.
>
> Do you think the adapt.c plugin is suitable for that purpose?

Yes, adapt.c is the latest plugin that we use for our experiments ... Cheers, Grigo
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developements
Hi Joern, > > I am a bit confused about your above example - you suggest to add > > this functionality on top of current ICI or substitute it? > > The idea was to replace it. The current event implementation has two > issues: > - It is very different from the existing GCC 4.5 events which makes it >confusing to have it side-by-side. I think if we can make the ICI code >more of a natural extension of the current event code, it would be more >likely to be acceptable. > - The parameter registering / unregistering and the event call itself >introduce a number of hashing operations which are performed irrespective >of the presence of a plugin. This makes these events intrusive if placed >in places with a high execution frequency. Ok, thanks for the update. I will send you another private email asking for more tech. details to be sure that we are in sync before doing further changes in ICI ... > > If you want to substitute it, I personally disagree. We spent a very > > long time > > with many colleagues and real ICI users discussing how to simplify > > the usage of > > events for people who are not programming professionals, > > so I really prefer to keep the current structure as it is ... > > Were these discussions done face-to-face or on a news group/mailing list? > If the latter, could you point me where I can read up on the discussion > so that I better understand the issues involved. > Would these people who are not programming professionals both insert > event raising points as well as write event handling code? Well, it's a long story. I moved first prototype of ICI from Open64 to GCC in 2006 and then had discussions with Albert and Diego at one of the HiPEAC tutorials about having plugins in GCC. However, since it was a taboo for GCC at that time, I continued extending it in the MILEPOST project and within the HiPEAC network of excellence. We had multiple discussions during MILEPOST and HiPEAC face-to-face meetings and private mailing lists. Recent emails at GCC mailing lists and comparison of plugin systems at GCC Wiki are here: http://www.mail-archive.com/gcc@gcc.gnu.org/msg41368.html http://gcc.gnu.org/wiki/GCC_PluginComparison > If we would use C++, some things would get easier, i.e. we could have an > event class with subclasses for the separate event types, and then have > the parameter accessors be member functions. This would remove the need > to repeatedly type the event name when accessing the parameters. > However, it would require to build GCC with C++, so I'd say this > significantly reduces the chances of having ICI integrated into the > gcc mainline and having release GCC binaries shipped with the > functionality enabled. I didn't mean rewriting ICI in C++. I meant that we need to check that the functionality works (i.e. pass selection and reordering + fine-grain selection of optimization and function cloning for c++ programs) when using g++ ... > If plugin-side run time is not important, we can have register_plugin_event > as a wrapper for register_callback and use user_data to squirrel away the > event name and the actual callback function; then we can have a wrapper > callback function which sets a thread-local variable (can be non-exported > global inside the plugin as long as no threads are used) to the plugin name, > and make get_event_parameter use a dynamic function name lookup by stringing > together the event name with the parameter name. 
> This way, we could look the ICI interface on the plugin side pretty much > what it looks now, except that we should probably restrict the event and > parameter names to identifier characters, lest we need name mangling > to translate them to function names. > > I had a look at the adapt branch to gauge if the there is really a point for > having the varargs stuff, i.e. events with parameters that are not readily > available in a single struct. > The unroll_parameter_handler / graphite_parameter_handler events are somewhat > odd because they have a varying set of parameters. So they really have a list > of parameters. We could do this with somehing like: > invoke_plugin_va_callbacks (PLUGIN_UNROLL_PARAMETER_HANDLER, > "_loop.id", EP_INT, loop->num, > "_loop.can_duplicate_loop_p", > EP_UNSIGNED_CHAR, > ici_can_duplicate_loop_p); > And then have the plugin pick through the va_list. Or, if preferred, > have a helper function - possibly invoked implicitly by the ICI > infrastructure - go through the va_list and build the hash table of arguments > from it so that the current parameter query interface can be used. > In fact, we could use such a mechanism in general, so if we go back to > passing pointers to parameters instead of parameters, you'd have > backward > compatibility on the plugin side. > OTOH, does using pointers to parameters really m
CFP (deadline extension): GROW'10 (2nd Workshop on GCC Research Opportunities)
The submission deadline is extended until the 22nd of November, 2009. Apologies if you receive multiple copies of this call.

[The call for papers is otherwise identical to the GROW'10 CFP above; the only change to the important dates is: Deadline for submission: November 22, 2009.]
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developments
Hi all, Just a small update: after some discussions with Joern, we think that given our time constraints and the current state of GCC, instead of trying to push the full ICI into GCC we will start from the opposite direction: we take all our plugins (pass selection and reordering from MILEPOST; generic function cloning and fine-grain optimizations from GSOC'09) and see which low-level GCC functionality is missing to support them. We then provide a few hooks and small updates to GCC and rewrite our plugins on top of the low-level plugin system. Joern will continue communicating about the few extensions to the plugin system we need to make this happen. This is a pragmatic step that should require minimal changes in GCC and will already let us use the current plugin system for our work. However, I think there is still a benefit of ICI in decoupling GCC and plugins when using internal data structures: currently the referencing of GCC data structures is hardwired in plugins, so if these data structures ever change, we will need to rewrite all plugins. Using the referencing mechanism in ICI (data is accessed in plugins indirectly through parameter registering) lets us ensure plugin compatibility, at the cost of some performance. We can discuss that later, after the GCC 4.5 release and when we get more feedback from users about plugins ... By the way, because of that, I think that besides documenting all the data structures we should perhaps also start noting whether they are used in some plugins. This may help clean up the internals of the compiler and prevent careless changes to GCC data structures, keeping plugins compatible. Anyway, Joern will continue communicating about the progress and extensions to the plugin system ... Take care and have a good weekend, Grigori
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developments
Just one more issue to mention (particularly for those who have been writing ICI plugins). ICI has at times used environment variables inside GCC, together with its own invocation flags (-fici) and dynamic library loading. Naturally, Joern will remove the duplicate dynamic library handling and invocation flags from ICI and use the plugin functionality from GCC 4.5 instead. As for environment variables inside GCC - we needed them for transparent program analysis and optimization, but I remember there were several concerns about that (I think Richard mentioned several times that it complicates debugging when there are problems), so we will remove them too. We will use the current plugin flags to pass parameters to the plugins (we can pass a configuration file, I guess, to be parsed by our plugins if there is a lot of information to pass), and if we need to do transparent program analysis and optimization we will use a script and a wrapper around GCC that translates our environment variables into flags. I already did that recently for MILEPOST GCC, so it should be easy. For now we assume that it's fine to use environment variables (for example, to control verbose output or pass some parameters) within the plugins themselves ... However, eventually we should also use configuration files for plugins that can be easily shared with the community when there is a bug in a plugin ... Cheers, Grigori
Re: copyright assignment
Hi Paolo, Just saw your email. Would you mind sending the forms to Yuanjie (huangyuan...@ict.ac.cn) and Liang (pengli...@ict.ac.cn) so that they can add their GSoC'09 developments to the mainline with the help of Joern, please?! I sent an offline email to David and Sebastian, but if you already know where the forms are and how to use them, that would be great and would save us time! Thanks a lot in advance!!! Grigori >On 11/22/2009 10:48 AM, John Nowak wrote: > >>Hello. I would like to get the necessary forms for copyright assignment >>to GCC for future work on GNAT. I was told this is the way to kick off >>the process. > > >I sent them offlist. > >Paolo
RE: On strategies for function call instrumentation
Hi Derrick, As Yuri pointed out we needed some similar instrumentation for our function cloning work. It may not be exactly what you need but could be useful. By the way, it seems that you are interested to use GCC as a research platform. In this case, sorry for a small advertisement, but I would like to draw your attention to the GROW'10 workshop which brings together GCC people using this compiler for research purposes. The deadline just passed but you may still be interested to look at the past ones or participate in the future ones. It is co-located with the HiPEAC conference and the HiPEAC network of universities is using GCC as a common research platforms. We have been developing Interactive Compilation Interface to simplify the use of GCC for researchers and make research plugins more portable during compiler evolution. It is a collaborative effort and after GSoC'09 we are trying to move some of the functionality to the mainline. You are warmly welcome to participate in those activities or you can follow recent discussions at this mailing list: http://groups.google.com/group/ctuning-discussions Hope it will be of any use, Grigori >* From: Derrick Coetzee >* To: gcc at gcc dot gnu dot org >* Date: Mon, 23 Nov 2009 19:44:10 -0800 >* Subject: On strategies for function call instrumentation > > Hi, I'm Derrick Coetzee and I'm a grad student working with Daniel > Wilkerson et al on the Hard Object project at UC Berkeley (see > http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-97.html). To > minimize implementation effort, we'd like to use gcc as the compiler > for our platform. The main trick is, to implement our custom stack > management, we need to inject a custom instruction before each > function call begins pushing its arguments, and insert a different > instruction right after each function call. We considered a couple > different ways to do this: > > 1. We have a C/C++ source-to-source translation framework. We could > translate each function call "f(a,b,c)" to something like "({ _asm { >... }; typeof(f(a,b,c)) result = f(a,b,c); _asm { ... }; result; })" > 2. We could modify the code generation of gcc in a private fork. > > Our main concern is that we need to be sure the instructions will be > executed at the right time, right before it starts pushing arguments > for the function and right after it returns, even in complex contexts > like nested function calls (f(g(a,b),c)). We're not sure how much gcc > will reorder these type of sequences, or what optimizations we might > be neglecting to consider. We're also not sure if we might be > overlooking superior approaches to the problem. Any advice is > appreciated. > > -- > Derrick Coetzee > University of California, Berkeley
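As an aside for readers of Derrick's approach 1: the statement-expression wrapping he sketches can be written as a macro like the one below. This is only an illustration under assumptions - the "nop" markers are placeholders for whatever custom instructions the platform would really emit, and, as Derrick's question implies, inline asm with memory clobbers does not by itself stop GCC from scheduling argument setup around the markers.

/* Hypothetical illustration of approach 1: wrap each call in a GNU C statement
   expression bracketed by marker instructions.  The "nop" opcodes stand in for
   the real custom instructions, and the memory clobbers alone do not guarantee
   that argument pushes stay between the two markers. */
#define WRAPPED_CALL(f, ...)                                 \
  ({                                                         \
    __asm__ __volatile__ ("nop" ::: "memory");  /* enter-call marker */ \
    __typeof__ (f (__VA_ARGS__)) _res = f (__VA_ARGS__);     \
    __asm__ __volatile__ ("nop" ::: "memory");  /* after-return marker */ \
    _res;                                                    \
  })

static int add (int a, int b) { return a + b; }

int main (void)
{
  /* Nested calls such as f(g(a,b),c) are where the ordering concern shows up. */
  return WRAPPED_CALL (add, 2, 3) == 5 ? 0 : 1;
}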
RE: target hooks / plugins
Hi Joseph, cTuning-discussions is an open public mailing list - I have been moderating lots of spam there recently and mixed up some settings but it is fixed now ... Cheers, Grigori > -Original Message- > From: Joseph Myers [mailto:jos...@codesourcery.com] > Sent: Thursday, December 24, 2009 1:26 PM > To: Joern Rennecke > Cc: 'GCC Mailing List'; Grigori Fursin; 'Yuanjie Huang'; 'Liang Peng'; > 'Zbigniew Chamski'; > 'Yuri Kashnikoff'; 'Diego Novillo' > Subject: Re: target hooks / plugins > > It appears you CC:ed your message to a closed mailing list > ctuning-discussi...@googlegroups.com that bounces messages from > non-subscribers. Please avoid doing this; messages to public lists should > not be CC:ed to any list that will send bounces or other automatic replies > to non-subscribers. You are free to forward GCC list messages manually to > such a list, of course. > > -- > Joseph S. Myers > jos...@codesourcery.com
Call for participation: GROW'10 - 2nd Workshop on GCC Research Opportunities
Apologies if you receive multiple copies of this call. CALL FOR PARTICIPATION 2nd Workshop on GCC Research Opportunities (GROW'10) http://ctuning.org/workshop-grow10 January 23, 2010, Pisa, Italy (co-located with HiPEAC 2010 Conference) EARLY REGISTRATION DEADLINE: JAN. 6th, 2010 We invite you to participate in GROW 2010, the Workshop on GCC Research opportunities, to be held in Pisa, Italy in January 23, 2010, along with the conference on High-Performance Embedded Architectures and Compilers (HiPEAC). The Workshop Program includes: * Presentations of 8 selected papers * A Keynote talk by Diego Novillo, Google, Canada, on: "Using GCC as a toolbox for research: GCC plugins and whole-program compilation" * A panel on plugins and the future of GCC The Workshop Program is now available: http://cTuning.org/wiki/index.php/Dissemination:Workshops:GROW10:Program GROW workshop focuses on current challenges in research and development of compiler analyses and optimizations based on the free GNU Compiler Collection (GCC). The goal of this workshop is to bring together people from industry and academia that are interested in conducting research based on GCC and enhancing this compiler suite for research needs. The workshop will promote and disseminate compiler research (recent, ongoing or planned) with GCC, as a robust industrial-strength vehicle that supports free and collaborative research. The program will include an invited talk and a discussion panel on future research and development directions of GCC. Topics of interest Any issue related to innovative program analysis, optimizations and run-time adaptation with GCC including but not limited to: * Classical compiler analyses, transformations and optimizations * Power-aware analyses and optimizations * Language/Compiler/HW cooperation * Optimizing compilation tools for heterogeneous/reconfigurable/ multicore systems * Tools to improve compiler configurability and retargetability * Profiling, program instrumentation and dynamic analysis * Iterative and collective feedback-directed optimization * Case studies and performance evaluations * Techniques and tools to improve usability and quality of GCC * Plugins to enhance research capabilities of GCC Organizers Dorit Nuzman, IBM, Israel Grigori Fursin, INRIA, France Program Committee Arutyun I. Avetisyan, ISP RAS, Russia Zbigniew Chamski, Infrasoft IT Solutions, Poland Albert Cohen, INRIA, France David Edelsohn, IBM, USA Bjorn Franke, University of Edinburgh, UK Grigori Fursin, INRIA, France Benedict Gaster, AMD, USA Jan Hubicka, SUSE Paul H.J. Kelly, Imperial College of London, UK Ondrej Lhotak, University of Waterloo, Canada Hans-Peter Nilsson, Axis Communications, Sweden Diego Novillo, Google, Canada Dorit Nuzman, IBM, Israel Sebastian Pop, AMD, USA Ian Lance Taylor, Google, USA Chengyong Wu, ICT, China Kenneth Zadeck, NaturalBridge, USA Ayal Zaks, IBM, Israel Previous Workshops GROW'09: http://www.doc.ic.ac.uk/~phjk/GROW09 GREPS'07: http://sysrun.haifa.il.ibm.com/hrl/greps2007
new MILEPOST GCC pre-release
Dear all, If anyone is interested, we have pre-released a new MILEPOST GCC supporting GCC versions 4.4.0, 4.4.1, 4.4.2 and 4.4.3. You can download it here: http://sourceforge.net/projects/gcc-ici/files/MILEPOST-GCC/V2.1-pre-release The new documentation is available here: http://cTuning.org/wiki/index.php/CTools:MilepostGCC:Documentation:MILEPOST_V2.1 More info is available here: http://cTuning.org/milepost-gcc The update includes support for transparent optimization of programs and libraries, better multi-objective optimization (including balancing of execution time, code size and compilation time), bug fixes in averaging multiple optimization cases, C++ support, new static features in MILEPOST GCC, extended documentation, etc. Hope it is of some use ;) ... Cheers, Grigori
RE: dragonegg in FSF gcc?
Hello, Hope my question will not completely divert the topic of this discussion - just curious what do you mean by better code? Better execution time, code size, compilation time?.. If yes, then why not to compare different compilers by just compiling multiple programs with GCC, LLVM, Open64, ICC, etc, separately to compare those characteristics and then find missing optimizations or better combinations of optimizations to achieve the result? By the way, using iterative feedback-directed compilation (searching for best combinations of optimizations) can considerably improve default optimization heuristic of nearly any compiler (look at ACOVEA or cTuning.org results) so it may not be so straightforward to answer a question which compiler is better when just using default optimization heuristic ... Cheers, Grigori * Grigori Fursin, Exascale Research Center, France http://unidapt.org/people/gfursin * -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Duncan Sands Sent: Sunday, April 11, 2010 5:36 PM To: Eric Botcazou Cc: gcc@gcc.gnu.org; Steven Bosscher Subject: Re: dragonegg in FSF gcc? Hi Eric, >> As for "negating the efforts of those working on the middle ends and back >> ends", would you complain if someone came up with a new register allocator >> because it negates the efforts of those who work on the old one? If LLVM >> is technically superior, then that's a fact and a good thing, not >> subversion, and hopefully will encourage the gcc devs to either improve gcc >> or migrate to LLVM. > > Well, the last point is very likely precisely what Steven is talking about. > GCC doesn't have to shoot itself in the foot by encouraging its developers to > migrate to LLVM. I hope it was clear from my email that by "gcc" I was talking about the gcc optimizers and code generators and not the gcc frontends. If the dragonegg project shows that feeding the output of the gcc frontends into the LLVM optimizers and code generators results in better code, then gcc can always change to using the LLVM optimizers and code generators, resulting in a better compiler. I don't see how this is gcc the compiler shooting itself in the foot. Of course, some gcc devs have invested a lot in the gcc middle and back ends, and moving to LLVM might be personally costly for them. Thus they might be shooting themselves in the foot by helping the LLVM project, but this should not be confused with gcc the compiler shooting itself in the foot. All this is predicated on gcc-frontends+LLVM producing better code than the current gcc-frontends+gcc-middle/backends. As I mentioned, dragonegg makes it easier, even trivial, to test this. So those who think that LLVM is all hype should be cheering on the dragonegg project, because now they have a great way to prove that gcc does a better job! Ciao, Duncan.
RE: dragonegg in FSF gcc?
Hi Duncan, >how do you compile a program with LLVM? It's not a compiler, it's a set of >optimization and codegen libraries. You also need a front-end, which takes >the users code and turns it into the LLVM intermediate representation [IR]. >The >dragonegg plugin takes the output of the gcc-4.5 front-ends, turns it into LLVM >IR and runs the LLVM optimizers and code generators on it. In other words, it >is exactly what you need in order to compile programs with LLVM. There is also >llvm-gcc, which is a hacked version of gcc-4.2 that does much the same thing, >and for C and C++ there is now the clang front-end to LLVM. The big advantage >of dragonegg is that it isolates the effect of the LLVM optimizers and code >generators by removing the effect of having a different front-end. For >example, >if llvm-gcc produces slower code than gcc-4.5, this might be due to front-end >changes between gcc-4.2 and gcc-4.5 rather than because the gcc optimizers are >doing a better job. This confounding factor goes away with the dragonegg >plugin. Ok. I see what you mean. We simply used llvm-gcc so that's why the confusion ;) ... Cheers, Grigori
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Thanks, Chris! At the GROW'10 panel, we discussed how to make GCC more attractive to researchers and started listing features that are important to researchers and missing in GCC but present in other compilers. Maybe we should also create a "Publications" wiki page on the GCC website and start collecting references to papers where GCC has been used - however, the most important part will be to provide details on important or missing features, not just to list publications ... As for OpenCL and the lack of JIT support in GCC, we have been effectively working around this problem for many years using static multi-versioning and run-time version selection based on program and system behavior (even though there are obvious limitations), so I guess we can temporarily continue using similar techniques for OpenCL in GCC... By the way, I remember that when we had the first discussions about including a plugin framework in GCC some years ago, the first feedback was extremely negative. Nevertheless, GCC 4.5 will feature a plugin framework (which will also be very useful for research), so maybe GCC will support JIT compilation too one day ;) ... Cheers, Grigori -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Chris Lattner Sent: Sunday, April 11, 2010 8:15 PM To: Dorit Nuzman Cc: gcc@gcc.gnu.org Subject: Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) On Apr 11, 2010, at 5:54 AM, Dorit Nuzman wrote: > > * Get statistics on percentage of papers/projects that use compilers other > than GCC, and ask them why... Hi Dorit, Here is a semi-reasonable list of LLVM-based publications: http://llvm.org/pubs/ which might be useful. > (By the way, why was OpenCL implemented only on LLVM and not on GCC?) There are many reasons, but one of the biggest is that GCC doesn't (practically speaking) support JIT compilation. While it is possible to implement OpenCL on GCC, I suspect that the end result wouldn't be very compelling without some major architecture changes. -Chris
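The static multi-versioning with run-time selection that Grigori mentions can be pictured with a small sketch like the one below; the variants, threshold and selection criterion are made up for illustration and are not MILEPOST code.

/* Minimal sketch of static multi-versioning with run-time version selection:
   two statically compiled variants of a kernel plus a dispatcher that picks
   one from a run-time property of the input.  In a real setup the variants
   would be cloned and built with different optimization settings. */
#include <stddef.h>

static void kernel_v1 (float *a, size_t n)   /* e.g. tuned for small, cache-resident inputs */
{
  for (size_t i = 0; i < n; i++)
    a[i] = a[i] * 2.0f + 1.0f;
}

static void kernel_v2 (float *a, size_t n)   /* e.g. tuned for large, streaming inputs */
{
  for (size_t i = 0; i < n; i++)
    a[i] = a[i] * 2.0f + 1.0f;
}

void kernel (float *a, size_t n)
{
  if (n < 4096)              /* selection criterion evaluated at run time */
    kernel_v1 (a, n);
  else
    kernel_v2 (a, n);
}

int main (void)
{
  float buf[8] = { 0 };
  kernel (buf, 8);
  return buf[0] == 1.0f ? 0 : 1;
}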
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Sure, Chris, I agree ... Still, I hope that those incremental improvements will continue even if they may not be immediately useful or fully operational ... Cheers, Grigori -Original Message- From: Chris Lattner [mailto:clatt...@apple.com] Sent: Sunday, April 11, 2010 9:38 PM To: Grigori Fursin Cc: 'Dorit Nuzman'; gcc@gcc.gnu.org Subject: Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) On Apr 11, 2010, at 12:05 PM, Grigori Fursin wrote: > By the way, I remember that when we had first discussions to include plugin > framework to GCC some > years ago, > first feedback was extremely negative. Nevertheless, GCC 4.5 will feature > plugin framework (that > will > also be very useful for research), so maybe GCC will support JIT compilation > too one day ;) ... Sure, I'm not saying that GCC won't make amazing infrastructure improvements, just explaining why opencl implementors didn't want to block on waiting for it to happen. > As for OpenCL and lack of JIT support in GCC, we have been effectively > overcoming this problem > for many years using static multi-versioning and run-time version selection > based on program > and system behavior (even though there are obvious limitations), > so I guess we can temporally continue using similar techniques for OpenCL in > GCC... I don't think that this is sufficient to implement OpenCL-the-spec well. You can definitely add support for opencl-the-kernel-language, but that's only half of OpenCL: the runtime, GPU integration, and OpenGL integration aspects are just as important. You can definitely implement all this by forking out to an assembler etc, the implementation will just not be great. -Chris=
RE: dragonegg in FSF gcc?
Hi Diego, I agree with what you said. As a researcher I started using GCC instead of Open64 in 2005 after I saw some steps towards modularity when pass manager has been introduced since it was really simplifying my life when working on iterative/collective compilation. We have been also trying to propose further modularization/API-zation using plugins and interactive compilation interface to provide more abstractions to GCC but the acceptance was far too slow (6+ years). Up to now, LLVM is quite behind in terms of optimizations, but it's modularity simplifies adding new optimization, instrumentation and analysis passes among other things. I still use or plan to GCC for many reasons but I also use LLVM and I see some of my colleagues moving from GCC to LLVM mainly due to modularity and simplicity-of-use reasons. I still sometimes hear comments that GCC shouldn't be driven at all by the needs of researchers but lots of advanced optimizations actually came from academic research, so I think this can be a bit short-sighted. If GCC will not move towards modularization and clean APIs (by the way, I am not saying that it's easy), it doesn't mean that it will disappear, but it will change the role and will have to catch up. So, I think having 2 good open-source compilers and a healthy competition is not bad ;) ... We also heard many similar comments from our colleagues at GROW'09 and GROW'10 workshops... Cheers, Grigori -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Diego Novillo Sent: Tuesday, April 13, 2010 11:06 PM To: Steven Bosscher Cc: Jack Howarth; Paolo Bonzini; Dave Korn; Manuel López-Ibáñez; Duncan Sands; gcc@gcc.gnu.org Subject: Re: dragonegg in FSF gcc? On Tue, Apr 13, 2010 at 16:51, Steven Bosscher wrote: > You say you see benefits for both compilers. What benefits do you see > for GCC then, if I may ask? And what can GCC take from LLVM? (And I > mean the FSF GCC, long term.) This is an honest question, because I > personally really don't see any benefit for GCC. If comparisons between the two compilers are easy to make, then it's easy to determine what one compiler is doing better than the other and do the necessary port. In terms of internal structure, LLVM is more modular, which simplifies maintenance (e.g., the automatic bug finder, unit tests). The various components of the pipeline have better separation and stronger APIs. GCC has been slowly moving in that direction, but it still have ways to go. LLVM has already proven that organizing the compiler that way is advantageous (additionally, other research compilers were structured similarly: Sage++, SUIF), so emulating that structure sounds like a reasonable approach. Another example where GCC may want to operate with LLVM is in JIT compilation. Clearly, LLVM has made a significant investment in this area. If GCC were to generate LLVM IR, it could just use all the JIT machinery without having to replicate it. There may be other things GCC could take advantage of. OTOH, GCC has optimizer and codegen features that LLVM may want to incorporate. I don't have specific examples, since I am not very familiar with LLVM. Diego.
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Hi all, Dorit and I just got an anonymous ;) feedback about GCC vs LLVM following our email about GROW'10 panel questions so we are just forwarding it here (non-edited): The reasons I have seen for using llvm/clang are basically two-fold: gcc is too slow and too complicated. (This is true even for a company with significant in-house gcc expertise.) The C language parser for llvm (clang) is far faster than the gcc equivalent, easier to modify, and easier to build into a separate library. Speed is a major decision for choosing clang/llvm, as OpenCL needs to be complied on the fly. The llvm infrastructure is also far easier to manipulate and modify than gcc. This is particularly important as implementing OpenCL means building complier backends to generate Nvidia's PTX or AMD's IL, adding specific vector extensions, and supporting various language intrinsics. I don't know how much of an issue the licensing issues may be. These issues, plus significant corporate backing, appear to be really driving llvm in the OpenCL area. My impression is that for gcc to be competitive it will have to offer both comparable compilation speed and dramatically better code optimizations. Even then, I'm not sure if the difficulty of working with it will be considered a good tradeoff for most companies. Cheers, Grigori -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Dorit Nuzman Sent: Sunday, April 11, 2010 2:54 PM To: gcc@gcc.gnu.org Subject: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) Dear all, We would like to share notes from the lively panel discussion at GROW'10 (GCC Research Opportunities Workshop) that took place at the end of January in Pisa, Italy (alongside the HiPEAC conference). The main topic of the discussion was: How to make GCC more attractive to researchers, and how GCC can benefit from researchers? Here is the list of major points and wishes raised by participants: * Need to encourage cleanup/infrastructure work on GCC and provide stable/flexible/extensible APIs (the question is how to encourage such infrastructure work...?) * Encourage people to work on infrastructure and full implementation that actually works: Lobby for high quality conferences to reserve a quota of the accepted papers to papers that focus on *implementation* work (and start with HiPEAC!) * Follow up to the above: Encourage research that actually works: Lobby for conferences to favor research work that is tested on a realistic environment (e.g. pass all of SPECcpu...), and that is reproducible. Then GCC and the community could immediately benefit from the results and not wait for many years before someone decides to reproduce/redo research ideas in GCC. * Get statistics on percentage of papers/projects that use compilers other than GCC, and ask them why... (By the way, why was OpenCL implemented only on LLVM and not on GCC?) * Open up GCC to be used as a library by other tools, so that other tools could use its analysis facilities. For example, having alias analysis as an API rather than a pass that needs to be applied and then collect information. Allow developers/tools access those functions outside GCC (should be a high-level API). * Follow up to the above: Need to come up with a common API / standardization / common levels of abstractions. Decide on how to coordinate efforts and find commonalities. * Need a simple pass manager that can list all available passes and allow any scheduling (providing dependency information). 
* Make guides for building/extending GCC more visible/accessible. * Better advertise Google Summer of Code, and provide more mentoring. * Send feedback on which plugin events should be added to the next releases of GCC. * GCC tries to auto-parallelize the programs it compiles. Will GCC itself be made more multi-threaded and run faster in a multi-core environment...? We've collected these and other comments/suggestions before/at/after the workshop, on the GROW'10 wiki page: http://cTuning.org/wiki/index.php/Dissemination:Workshops:GROW10:Panel_Questions We hope these notes will help improve GCC and its appeal/usability for researchers (or at least raise more discussion!) Yours, Grigori Fursin and Dorit Nuzman (GROW'10 organizers)
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Dear Manuel, Thank you very much for your answers! This is what I believe researchers who are trying to select a compiler for their work would like to know/hear. I think, the main problem for students and researchers is that they see lots of stuff going on with GCC and on mailing lists but they may be shy/scared/not sure where to start if they want to contribute or even if they will be welcome to contribute. The reason is that some of their ideas/work may not be necessarily immediately useful to the community and they may be concerned that they can get lots of aggressive, negative feedback due to that. Of course, this can be also understandable, since many of you on this list do an extremely hard and time-consuming work to support/update GCC and some novel/fancy/long-term ideas may be really distractive from the current GCC agenda no matter how useful they are in the future. However, such feedback can immediately drive away young and motivated students who can otherwise become really active contributors (look at the GRAPHITE and students contributing to GCC now, for example). So, what I think could be useful, is to try to agree on what can be some general common suggestions/recommendations to students/researchers who may want to contribute but not sure how to approach GCC community. Maybe we can make a page on GCC Wiki with such recommendations or even maybe make a separate pre-processing mailing list for novel/crazy/future/unclassified ideas so that only those of you who are interested in that can follow/discuss them and from time to time approach this mailing list with already mature ideas to avoid bothering others who are distracted by such discussions on this mailing list? Maybe it will help to avoid long, repetitive discussions and occasional sad accusations on this technical mailing list?.. Actually, maybe this can be an interesting topic for discussion at the next GCC Summit and GROW'11?.. Also, I think that the modularity and stable API of the compiler would generally simplify this issue since it would allow much more non-intrusive modifications but I understand that it's not easy to do now without collaborative agreement and development, and that there is some gradual movement in that direction anyway... By the way, Manuel, do you mind, if I will forward you email to the HiPEAC mailing list? Thanks again!!! Grigori > * Need to encourage cleanup/infrastructure work on GCC and provide > stable/flexible/extensible APIs (the question is how to encourage such > infrastructure work...?) I think by: 1) Asking for it in precise terms in the gcc@ list. What exactly you want to achieve? How would you suggest to achieve it? If you ask for small changes, there is high chance that someone will do it for you for free. 2) Otherwise, providing patches! Honestly, if you find that something can be made more moduler/flexible/extensible, provide a patch and if you are right there is a high chance it will be committed. 3) For large changes, creating a project, a public gcc branch, attracting some developers and getting it done. Then commit it to GCC trunk. > * Open up GCC to be used as a library by other tools, so that other tools > could use its analysis facilities. For example, having alias analysis as an > API rather than a pass that needs to be applied and then collect > information. Allow developers/tools access those functions outside GCC > (should be a high-level API). For this to get done, people that are going to use it should first get together and define such API and its requirements. 
That would be the first step! Do this, by discussing it in the gcc@ list. Do not define it on your own and just drop it on us (and on other potential users). That is never going to work. The next step would be to implement it. The smaller the changes, the easier to get them merged. So provide small self-contained and "useful" patches, not a huge patch that implements everything in one go. That won't work either. > * Follow up to the above: Need to come up with a common API / > standardization / common levels of abstractions. Decide on how to > coordinate efforts and find commonalities. Good! Again, the keys are "API discussion in gcc@" and "small patches". > * Need a simple pass manager that can list all available passes > and allow any scheduling (providing dependency information). I think GCC will love to have this. So if someone contributes this in the "proper" way, it will be eagerly accepted. > * Make more visible/accessible guides for building/extending GCC. Again, we will love to have this but we need people to do it. Another point: I think it is more likely that GCC developers would answer in this list all questions necessary to build such guides than to sit down and actually write them. So the problem is not to get the knowledge but that someone puts the effort to write it down in an organised manner. If such guides were available, we will be happy to link/host them in GCC se
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Thanks again, Manuel! I really appreciate your detailed thoughts and I think they will be really very useful to the new-comers. By the way, I agree with them and I was just still trying to pass the message from the participants of the GROW workshop but I think that now things should be much more clear. I will try to create this page on GCC Wiki this weekend so that if new students/researchers try to approach GCC community without reading that, they can be gently pointed out to this page that can avoid offences and mis-interpretation. I hope that my colleagues will help to update this page later since I changed the job a few months ago and my work is for now orthogonal/not directly related to GCC so I can only do it in my spare time but I still hope it will be useful ... Cheers, Grigori -Original Message- From: Manuel López-Ibáñez [mailto:lopeziba...@gmail.com] Sent: Friday, April 16, 2010 6:51 PM To: Grigori Fursin Cc: Dorit Nuzman; gcc@gcc.gnu.org; erven.ro...@inria.fr; David Edelsohn Subject: Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) On 16 April 2010 13:21, Grigori Fursin wrote: > > I think, the main problem for students and researchers is that they > see lots of stuff going on with GCC and on mailing lists but they may > be shy/scared/not sure where to start if they want to contribute > or even if they will be welcome to contribute. The reason is that > some of their ideas/work may not be necessarily immediately useful to the > community > and they may be concerned that they can get lots of aggressive, negative > feedback That is why mentoring could be helpful. Technical discussions by email sometimes appear harsh and dry to newcomers. Moreover, negative opinions are more vocal than positive ones. So something that most people think is a good idea or they are indifferent may only get negative feedback from a few. > no matter how useful they are in the future. However, such feedback can > immediately > drive away young and motivated students who can otherwise become really active > contributors (look at the GRAPHITE and students contributing to GCC now, for > example). > > So, what I think could be useful, is to try to agree on what can be > some general common suggestions/recommendations to students/researchers A short list out of the top of my head for proposing ideas in gcc mailing lists: * If you do not have the time/resources/people to implement your own idea, do not expect GCC developers dropping what they are doing to help you. Volunteers have very very limited time and paid developers are paid to do something else. In fact, asking GCC developers to do anything for you, no matter how trivial it seems to you, will likely result in negative feedback. Probably it is no trivial at all. * if your idea may potentially slow down the compiler, increase memory consumption, increase complexity, remove features, or change defaults, it will receive negative feedback. Guaranteed. If you are sure that this is not the case or that the benefits outweigh the drawbacks, but GCC developers disagree, discussion is not going to solve it. The only way is to implement your idea (or a working prototype) and give substantial experimental evidence in many scenarios/targets that you are right. * If you have a great idea implemented and provide a substantial patch, expect negative feedback. There are many ongoing projects in GCC. A patch that comes out of the blue and breaks those projects will not be welcome by the people working on those projects. 
* Your email/patch may not receive much feedback. This may happen if you provide your idea in an old thread (people stop reading long threads after a while), your subject line was not interesting/descriptive enough (I do not read all emails from the list), the main audience of your email just missed/overlooked it by chance (bad luck, busy period, vacations), your email was too long (people stopped reading before reaching the interesting part), ... The only feasible choice is to try again sometime later with an improved message. * There is also the IRC channels (http://gcc.gnu.org/wiki), which are more interactive, but the same rules apply to them. Specially being ignored despite people talking to each other. That is because people are working, and sometimes they have deadlines, urgent stuff to do, they want to go home early... * Read the gcc and the gcc-patches lists for a while to get to know how things work and who is who. I am sure there are many more little rules-of-thumb I can come up with. > who may want to contribute but not sure how to approach GCC community. > Maybe we can make a page on GCC Wiki with such recommendations or even Anyone can edit the wiki, so be my guest. > maybe make a separate pre-processing mailing list for > novel/crazy/future/unclassified > ideas so that only those of you who are interested i
RE: Defining a common plugin machinery
Dear all, I noticed a long discussion about plugins for GCC. It seems that it's currently moving toward important legal issues, however, I wanted to backtrack and just mention that we at INRIA and in the MILEPOST project are clearly interested in having a proper plugin system in the mainline of GCC which will simplify our work on automatically tuning optimization heuristics (cost models, etc) or easily adding new transformations and modules for program analysis. We currently have a simple plugin system within Interactive Compilation Interface (http://gcc-ici.sourceforge.net) that can load external DLL plugin (transparently through the environment variables to avoid changing project Makefiles or through command line), substitutes the original Pass Manager to be able to call any passes (new analysis passes for example) in any legal order and has an event mechanism to rise events in any place in GCC and pass data to the plugin (such as information about cost model dependencies) or return parameters (such as decisions about transformations for example). Since it's relatively simple, we are currently able to port it to the new versions of GCC without problems, however, naturally, we would like to have this functionality within the GCC mainline with the defined API (that what I discussed with Taras from Mozilla during GCC Summit this year). I believe it may help making GCC a modular compiler and simplify future designs (and can be in line with the idea to use C++ for GCC development if Ian moves this project forward). By the way, Hugh Leather also developed an interesting plugin system for GCC that allows to substitute internal GCC functions with the external ones within plugins to enable hooks inside GCC (he mentioned that he plans to release it soon)... Furthermore, we will then be able to use our current MILEPOST tools and Collective Compilation Framework to automatically tune default GCC optimization heuristic for performance, code size or power consumption (instead of using -O1,2,3 levels) for a particular architecture before new versions of GCC are actually released (or for additional testing of a compiler using different combinations and orders of passes). And when the compiler is released, users can further tune their particular programs interactively or automatically through the external tools and GCC plugins. By the way, we are extending current ICI for GCC 4.4 to add cost-model tuning for major optimizations (GRAPHITE, vectorization, inlining, scheduling, register allocation, unrolling, etc) and provide function cloning with different optimizations, and naturally would like to make it compatible with the potential future common GCC plugin system, so I hope we will be able to agree on a common plugin design soon and move it forward ;) ... Regards, Grigori ===== Grigori Fursin, INRIA, France http://fursin.net/research > -Original Message- > From: Taras Glek [mailto:[EMAIL PROTECTED] > Sent: Tuesday, September 16, 2008 11:43 PM > To: Diego Novillo > Cc: Basile STARYNKEVITCH; gcc@gcc.gnu.org; Sean Callanan; Albert Cohen; > [EMAIL PROTECTED] > Subject: Re: Defining a common plugin machinery > > Basile STARYNKEVITCH wrote: > > Hello Diego and all, > > > > Diego Novillo wrote: > >> > >> After the FSF gives final approval on the plugin feature, we will need > >> to coordinate towards one common plugin interface for GCC. I don't > >> think we should be adding two different and incompatible plugin > >> harnesses. > > > > What exactly did happen on the FSF side after the last GCC summit? 
I > > heard nothing more detailed than the Steeering Committee Q&A BOFS and > > the early draft of some legal licence involving plugins. What happened > > on the Steering Commitee or legal side since august 2008? Is there any > > annoucement regarding FSF approving plugins? > > > >> I am CCing everyone who I know is working on plugin features. > >> Apologies if I missed anyone. > >> > >> I would like to start by understanding what the plugin API should > >> have. What features do we want to support? What should we export? > >> What should be the user interface to incorporate plugin code? At > >> what point in the compilation stage should it kick in? > >> > >> Once we define a common API and then we can take the implementation > >> from the existing branches. Perhaps create a common branch? I would > >> also like to understand what the status and features of the > >> different branches is. > > > > > > The MELT plugin machinery is quite specific in its details, and I > > don't believe it can be used -in its current form- for other plugins. > > It really expects th
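For readers unfamiliar with the pass-manager substitution Grigori describes above, the sketch below shows the shape of the idea: the driver asks a plugin callback for a per-function pass order and falls back to the default pipeline when the callback declines. The pass names, types and callback signature are invented for illustration; this is not the ICI implementation.

/* Schematic sketch (not the ICI implementation) of pass-manager substitution:
   a plugin callback may return a per-function pass order, otherwise the
   default pipeline runs.  All names here are invented for illustration. */
#include <stdio.h>
#include <string.h>

struct pass { const char *name; void (*execute) (const char *fn); };

static void run_ccp (const char *fn) { printf ("%s: ccp\n", fn); }
static void run_pre (const char *fn) { printf ("%s: pre\n", fn); }
static void run_dce (const char *fn) { printf ("%s: dce\n", fn); }

static const struct pass all_passes[] = {
  { "ccp", run_ccp }, { "pre", run_pre }, { "dce", run_dce },
};
static const size_t n_passes = sizeof all_passes / sizeof all_passes[0];

/* Plugin hook: return a NULL-terminated pass-name list for this function,
   or NULL to keep the default order. */
typedef const char *const *(*select_passes_fn) (const char *fn);
static select_passes_fn select_passes;

static void run_passes (const char *fn)
{
  const char *const *order = select_passes ? select_passes (fn) : NULL;
  if (!order)
    {
      for (size_t i = 0; i < n_passes; i++)
        all_passes[i].execute (fn);
      return;
    }
  for (; *order; order++)
    for (size_t i = 0; i < n_passes; i++)
      if (!strcmp (all_passes[i].name, *order))
        all_passes[i].execute (fn);
}

/* Example plugin: reorder and prune passes for one "hot" function only. */
static const char *const *my_order (const char *fn)
{
  static const char *const hot[] = { "dce", "ccp", NULL };
  return strcmp (fn, "hot_fn") == 0 ? hot : NULL;
}

int main (void)
{
  select_passes = my_order;
  run_passes ("hot_fn");    /* dce, ccp */
  run_passes ("cold_fn");   /* default: ccp, pre, dce */
  return 0;
}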
RE: Defining a common plugin machinery
Thanks, Taras! I slightly updated this page, i.e. we would like to be able to load plugins through environment variables to be able to optimize programs transparently as it is done in MILEPOST GCC (without Makefile modifications). By the way, we plan to extend the Interactive Compilation Interface by the end of this year to access most of the internal transformations, however it will be based on the event and call-back mechanisms, which is similar to your GCC API proposal so we shouldn't have lots of compatibility problems if we later agree on the same plugin system... Take care, Grigori > -Original Message- > From: Taras [mailto:[EMAIL PROTECTED] > Sent: Monday, October 06, 2008 11:57 PM > To: Basile STARYNKEVITCH > Cc: Brendon Costa; Hugh Leather; gcc@gcc.gnu.org; 'Sean Callanan'; Cupertino > Miranda; > [EMAIL PROTECTED]; [EMAIL PROTECTED]; 'Taras Glek'; 'Diego Novillo'; Mike > O'Boyle; Grigori > Fursin > Subject: Re: Defining a common plugin machinery > > Basile STARYNKEVITCH wrote: > > > > My hypothesis is that several plugin mechanisms for GCC already exist > > (on some branches or somewhere else). If a small plugin patch has a > > better chance to get accepted into the trunk, we should limit > > ourselves to such a small thing. If big plugin machinery could be > > accepted (I would prefer that) we should understand what would make > > them more acceptable. In both cases, plugins have probably some > > requirements defined by the future runtime license, which I don't know > > yet. > http://gcc.gnu.org/wiki/GCC_PluginAPI > > I put up an API proposal. It's a result of the plugin API discussion at > the GCC summit. > > > Taras
RE: Defining a common plugin machinery
Ok. I am fine with that. Actually, it requires minor modifications to the GCC anyway, so I can just keep them as patches for the ICI/MILEPOST GCC ;) ... Cheers, Grigori > -Original Message- > From: Taras Glek [mailto:[EMAIL PROTECTED] > Sent: Thursday, October 09, 2008 8:29 AM > To: Grigori Fursin > Cc: 'Basile STARYNKEVITCH'; 'Brendon Costa'; 'Hugh Leather'; gcc@gcc.gnu.org; > 'Sean Callanan'; > 'Cupertino Miranda'; [EMAIL PROTECTED]; [EMAIL PROTECTED]; 'Taras Glek'; > 'Diego Novillo'; 'Mike > O'Boyle' > Subject: Re: Defining a common plugin machinery > > Grigori Fursin wrote: > > Thanks, Taras! > > > > I slightly updated this page, i.e. we would like to be able to load plugins > > through environment variables to be able to optimize programs transparently > > as it is done in MILEPOST GCC (without Makefile modifications). By the way, > > we plan to extend the Interactive Compilation Interface by the end of this > > year > > to access most of the internal transformations, however it will be > > based on the event and call-back mechanisms, which is similar to your > > GCC API proposal so we shouldn't have lots of compatibility problems > > if we later agree on the same plugin system... > > > Personally I'm against the env var idea as it would make it harder to > figure out what's going on. I think someone mentioned that the same > effect could be achieved using spec files. > > Taras
RE: Defining a common plugin machinery
> > Personally I'm against the env var idea as it would make it harder to > > figure out what's going on. I think someone mentioned that the same > > effect could be achieved using spec files. > > > Ian mentioned the idea of creating small wrapper scripts with the names: > gcc/g++ etc which just call the real gcc/g++... adding the necessary > command line args. These can then just be put earlier in the search path. > > I currently use the env var method in my project, but I think the > wrapper script idea is a bit nicer than using env vars personally, so i > will likely change to that soon. That's right. It's a nicer solution. We just already have environment variables in our ICI implementation, but it can be useful if we will one day switch to the common plugin system without support for env variables ... Cheers, Grigori > Brendon.
RE: Defining a common plugin machinery
Well, I see the point and I am fine with that. And as I mentioned I can continue using some patches for my projects that currently use environment variables or later move to the GCC wrappers if the majority decides not to support this mode ;) Cheers, Grigori > -Original Message- > From: Dave Korn [mailto:[EMAIL PROTECTED] > Sent: Thursday, October 09, 2008 5:54 PM > To: 'Diego Novillo'; 'Hugh Leather' > Cc: 'Grigori Fursin'; 'Brendon Costa'; 'Taras Glek'; 'Basile STARYNKEVITCH'; > gcc@gcc.gnu.org; > 'Sean Callanan'; 'Cupertino Miranda'; [EMAIL PROTECTED]; [EMAIL PROTECTED]; > 'Taras Glek'; 'Mike > O'Boyle' > Subject: RE: Defining a common plugin machinery > > Diego Novillo wrote on 09 October 2008 14:27: > > > On Thu, Oct 9, 2008 at 05:26, Hugh Leather <[EMAIL PROTECTED]> wrote: > > > >> I think the env var solution is easier for people to use and > >> immediately understand. There would be nothing to stop those people who > >> don't like env vars from using the shell wrapper approach. Why not > >> allow both? > > > > Environment variables have hidden side-effects that are often hard to > > debug. It is easy to forget that you set them. > > And they presumably break ccache's determinism. > > > I also agree that we > > should not allow environment variables to control plugins. > > I think it's best if the compiler's generated output is solely determined by > the command line and the pre-processed source, so I'm with this side of the > argument. > > > cheers, > DaveK > -- > Can't think of a witty .sigline today
RE: Defining a common plugin machinery
Well, we need to return values or even structures using plugins for our MILEPOST project to tune cost models. A simple example is loop unrolling: in our current plugin implementation (MILEPOST GCC/ICI) we register a plugin function that will predict loop unrolling and pass some code features to it (code size, etc) whenever loop unrolling is performed. Then the plugin communicates with the predictive model server (implemented in MATLAB) and returns a loop unrolling factor (as an integer). However, we will need to return more complex structures in case of polyhedral optimizations in GCC 4.4 (GRAPHITE) or other optimizations/code generation... Cheers, Grigori > -Original Message- > From: Taras Glek [mailto:[EMAIL PROTECTED] > Sent: Thursday, October 09, 2008 6:11 PM > To: Hugh Leather > Cc: Grigori Fursin; 'Brendon Costa'; 'Basile STARYNKEVITCH'; gcc@gcc.gnu.org; > 'Sean Callanan'; > 'Cupertino Miranda'; [EMAIL PROTECTED]; [EMAIL PROTECTED]; 'Taras Glek'; > 'Diego Novillo'; 'Mike > O'Boyle' > Subject: Re: Defining a common plugin machinery > > Hugh Leather wrote: > > Aye up all, > > > >I think the env var solution is easier for people to use and > > immediately understand. There would be nothing to stop those people > > who don't like env vars from using the shell wrapper approach. Why > > not allow both? > I think the other replies addressed this question. I've updated the wiki > to reflect the consensus on env vars. > > > >Are you sure about this style of event/callback mechanism? It > > seems to scale poorly. Isn't it likely to be a bit inefficient, too? > > Through this approach, plugins can't cooperate; they can't easily > > define their own events and it feels too early to prevent that. > I'm trying to take the pragmatic approach. There are features we > absolutely need in GCC. The simplest possible solution is likely to be > accepted sooner. > The proposed API is not to become carved in stone. If it turns out 10 > years down the road we chose the wrong plugin API, I'm all for scrapping > it and replacing it with something more relevant. > >It looks like it's trying to replicate the abstraction of calling a > > function with something worse. What I mean is that I think what > > people would like to write is: > > GCC side: > >void fireMyEvent( 10, "hello" ); > > Plugin side: > >void handleMyEvent( int n, const char* msg ) { > > // use args > >} > > > >But now they have to write: > > GCC side: > >//Add to enum > >enum plugin_event { > > PLUGIN_MY_EVENT > >} > >// Create struct for args > >typedef struct my_event_args { > > int n; > > const char* msg; > >} my_event_args; > >// Call it: > >my_event_args args = { 10, "hello" }; > >plugin_callback( PLUGIN_MY_EVENT, &args }; > > Plugin side: > >void handleMyEvent( enum plugin_event, void* data, void* > > registration_data ) { > > if( plugin_event == PLUGIN_MY_EVENT ) { > > my_event_args* args = ( my_event_args* )data; > > // Use args > > } > >} > > > >Which seems a bit ugly to me. Although, it does have the advantage > > of being easy to implement on the GCC side. > This approach takes 5 more minutes of development time, that wont be a > noticeable difference in the long run. > > > >And, if they're replacing a heuristic and need a return value then > > even more lines of code are needed on both sides. How would this > > style work for replacing heuristics? > I don't know how heuristics work in GCC work. 
It is possible that these > handlers might need to return an integer value, but so far in my plugin > work I have not needed that so I didn't include return values in the > proposal. > > Taras
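To make the value-returning case Grigori mentions more concrete, here is a tiny sketch - hypothetical, not the MILEPOST/ICI interface - in which the compiler passes a few loop features to a registered callback and uses the integer it returns as the unroll factor, with a built-in fallback when no plugin is loaded.

/* Sketch (hypothetical, not the MILEPOST/ICI API) of a value-returning
   heuristic hook: the compiler hands loop features to a registered plugin
   callback and uses the integer it returns as the unroll factor. */
struct loop_features { int body_insns; int trip_count_estimate; };

typedef int (*unroll_hook_fn) (const struct loop_features *);

static unroll_hook_fn unroll_hook;            /* set by the plugin */

void register_unroll_hook (unroll_hook_fn fn) { unroll_hook = fn; }

/* Compiler side: fall back to the built-in heuristic when no plugin is loaded. */
int choose_unroll_factor (const struct loop_features *f)
{
  if (unroll_hook)
    return unroll_hook (f);                   /* e.g. plugin queries a model server */
  return f->body_insns < 16 ? 4 : 1;          /* placeholder default heuristic */
}

/* Plugin side: a trivial stand-in for a predictive model. */
static int my_unroll_predictor (const struct loop_features *f)
{
  return f->trip_count_estimate >= 8 ? 8 : 2;
}

int main (void)
{
  struct loop_features f = { 12, 100 };
  register_unroll_hook (my_unroll_predictor);
  return choose_unroll_factor (&f) == 8 ? 0 : 1;
}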
RE: Defining a common plugin machinery
That's right - I just now need to fight my laziness and at some point write a few lines of code for the wrapper ;) ... Cheers, Grigori > -Original Message- > From: Dave Korn [mailto:[EMAIL PROTECTED] > Sent: Thursday, October 09, 2008 6:09 PM > To: 'Grigori Fursin'; 'Diego Novillo'; 'Hugh Leather' > Cc: 'Brendon Costa'; 'Taras Glek'; 'Basile STARYNKEVITCH'; gcc@gcc.gnu.org; > 'Sean Callanan'; > 'Cupertino Miranda'; [EMAIL PROTECTED]; [EMAIL PROTECTED]; 'Taras Glek'; > 'Mike O'Boyle' > Subject: RE: Defining a common plugin machinery > > Grigori Fursin wrote on 09 October 2008 17:03: > > > Well, I see the point and I am fine with that. And as I mentioned I can > > continue using some patches for my projects that currently use > > environment variables or later move to the GCC wrappers if the majority > > decides not to support this mode ;) > > Wrappers could look up your preferred environment variables and turn them > into the appropriate command-line options! :) > > cheers, > DaveK > -- > Can't think of a witty .sigline today
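A wrapper of the kind Dave describes takes only a handful of lines; the sketch below (with a made-up variable name and an assumed /usr/bin/gcc path) reads one extra option from the environment and prepends it to the real compiler's command line. A real wrapper would split the variable on whitespace and locate the driver more robustly.

/* Hypothetical gcc wrapper: placed earlier in PATH than the real driver, it
   turns an environment variable into an extra command-line option.  The
   variable name and driver path are assumptions for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main (int argc, char **argv)
{
  const char *extra = getenv ("ICI_EXTRA_FLAGS");   /* e.g. "-fplugin=./ici.so" */
  char **args = calloc ((size_t) argc + 3, sizeof *args);
  int n = 0;

  if (!args)
    return 1;
  args[n++] = (char *) "/usr/bin/gcc";      /* assumed location of the real driver */
  if (extra && *extra)
    args[n++] = (char *) extra;             /* single extra option only, for brevity */
  for (int i = 1; i < argc; i++)
    args[n++] = argv[i];
  args[n] = NULL;

  execv (args[0], args);                    /* replaces this process on success */
  perror ("execv");
  return 1;
}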
RE: Defining a common plugin machinery
I currently don't have any preference for a specific way to deal with data marshaling since currently it's enough for the MILEPOST project just to return void* pointer, but just wanted to mention that last year Cupertino Miranda tried to introduce an intermediate data layer to ICI to separate program analysis from transformations and potentially simplify dealing with external optimization plugins. I think the project has been frozen or Cupertino can comment on that if he follows this thread ;), but I thought to give a link to the tech. report Cupertino presented last year, if anyone is interested: http://gcc-ici.sourceforge.net/papers/mfpp2007.pdf By the way, if I am correct, GCC MELT (developed by Basile) also attempts to address some of these issues with data marshaling to modularize GCC ... Cheers, Grigori > -Original Message- > From: Brendon Costa [mailto:[EMAIL PROTECTED] > Sent: Friday, October 10, 2008 2:33 AM > To: Dave Korn > Cc: 'Taras Glek'; 'Grigori Fursin'; 'Hugh Leather'; 'Basile STARYNKEVITCH'; > gcc@gcc.gnu.org; > 'Sean Callanan'; 'Cupertino Miranda'; [EMAIL PROTECTED]; [EMAIL PROTECTED]; > 'Taras Glek'; > 'Diego Novillo'; 'Mike O'Boyle' > Subject: Re: Defining a common plugin machinery > > > > Sounds like you're almost in need of a generic data marshalling interface > > here. > > > Why do we need the complication of data marshaling? > > I don't see why we need to define that all plugin hooks have the same > function interface as currently proposed. I.e. a single void*. This > makes a lot of work marshaling data both as parameters and from return > values. This is already done for us by the language (Though i may have > mis-understood the train of thought here). > > I will propose the start of a new idea. This needs to be fleshed out a > lot but it would be good to get some feedback. > > I will use the following terminology borrowed from QT: > signal: Is a uniquely identified "hook" to which zero or more slots are > added. (I.e. Caller) > slot: Is a function implementation say in a plugin. This is added to a > linked list for the specified signal. (I.e. Callee) > > The main concepts in this plugin hook definition are: > * Signals can define any type of function pointer so can return values > and accept any parameters without special data marshaling > * Each signal is uniquely identified as a member variable in a struct > called Hooks > * A signal is implemented as a linked list where each node has a > reference to a slot that has been connected to the signal > * A slot is a function pointer and a unique string identifier > > This differs a bit from the QT definition but i find it helpful to > describe the entities. > > Important things to note: > Multiple plugins are "chained" one after the other. I.e. It is the > responsibility of the plugin author to call any plugins that follow it > in the list. This gives the plugin authors a bit more control over how > their plugins inter-operate with other plugins, however it would be > STRONGLY recommended that they follow a standard procedure and just call > the next plugin after they have done their work. > > Basically, the idea is to provide the following structure and then most > of the work will involve manipulation of the linked lists. I.e. Querying > existing items in the LL, inserting new items before/after existing > items, removing items from the LL. > > This is not a proposed end product. It is just to propose an idea. 
There > are a few disadvantages with the way it is implemented right now: > * Too much boilerplate code for each signal definition > * The idea of chaining calls means the responsibility of calling the > next plugin ends up with the plugin developer which could be bad if a > plugin developer does not take due care, however it also provides them > with more flexibility (not sure if that is necessary). > > Now, i have NO experience with the current pass manager in GCC, but > would the passes be able to be managed using this same framework > assuming that each pass is given a unique identifier? > > Thanks, > Brendon. > > #include > #include > > /* GCC : Code */ > struct Hooks > { >/* Define the blah signal. */ >struct BlahFPWrap >{ > const char* name; > int (*fp)(struct BlahFPWrap* self, int i, char c, float f); > void* data; > > struct BlahFPWrap* next; > struct BlahFPWrap* prev; >}* blah; > >struct FooFPWrap >{ > const char* name; > void (*fp)(struct FooFPWrap* self); > void* data; &g
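Since Brendon's example code is cut off above, here is a compact, runnable sketch of the chaining idea he describes: each signal is a linked list of slots, and each slot decides whether to call the next one. The names are simplified stand-ins, not his actual proposal.

/* Compact sketch of the signal/slot chaining idea: a signal is a linked list
   of slots, and each slot is responsible for calling the next one in the
   chain.  Names are simplified stand-ins for the truncated proposal above. */
#include <stdio.h>

struct blah_slot
{
  const char *name;
  int (*fp) (struct blah_slot *self, int i, char c);
  struct blah_slot *next;
};

struct hooks { struct blah_slot *blah; };
static struct hooks hooks;

/* GCC side: fire the "blah" signal by calling the head of the chain. */
static int fire_blah (int i, char c)
{
  return hooks.blah ? hooks.blah->fp (hooks.blah, i, c) : 0;
}

/* Plugin side: do some work, then chain to the next slot (if any). */
static int plugin_blah (struct blah_slot *self, int i, char c)
{
  printf ("%s: i=%d c=%c\n", self->name, i, c);
  return self->next ? self->next->fp (self->next, i, c) : i;
}

int main (void)
{
  struct blah_slot slot = { "demo-plugin", plugin_blah, NULL };
  hooks.blah = &slot;                 /* "connect" the slot to the signal */
  return fire_blah (10, 'x') == 10 ? 0 : 1;
}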
RE: GCC Plug-in Framework ready to port
Hi all, Just to mention that we nearly finished the developments of the new GCC ICI on top of the latest branch (Zbigniew Chamski is working on that) and are finishing the documentation on Wiki. It was enhanced based on the feedback of the current users while still keeping the core code of the plug-ins framework simple. We should make it public in a week or two, will provide several examples from industry and would be very happy to discuss further directions ... Will keep in touch, Grigori > -Original Message- > From: Diego Novillo [mailto:dnovi...@google.com] > Sent: Saturday, January 31, 2009 2:28 PM > To: Richard Guenther > Cc: Sean Callanan; Taras Glek; Basile STARYNKEVITCH; Grigori Fursin; Le-Chun > Wu; Brendon > Costa; Emmanuel Fleury; gcc@gcc.gnu.org > Subject: Re: GCC Plug-in Framework ready to port > > On Sat, Jan 31, 2009 at 08:24, Richard Guenther > wrote: > > On Sat, Jan 31, 2009 at 2:11 PM, Diego Novillo wrote: > >> > >> 1- Agree on a common API and document it in > >> http://gcc.gnu.org/wiki/GCC_PluginAPI > > > > Note that even for this taks being able to compare what the existing > > frameworks > > implement (with code) would help a lot. After all, source code tells more > > than 1000 "design" words ;) > > Sure, but we will be retrofitting existing implementations into a > unified API, and the various groups have gathered quite a bit of > experience already, so I expect the API to be fairly pragmatic. > > I also don't think this will be a very complicated API. > > > Diego.
RE: [plugins] Branch for plugins development created
Hi Diego, et al, As Sebastian mentioned INRIA finally signed copyright transfer form for FSF last year so our patches can be easily integrated into GCC. During this week, I will check the current GCC plugins branch and will send you more info about the current version of Interactive Compilation Interface to see how we can help with the developments and what we can reuse for the GCC plugins (we are naturally interested to test and extend this branch to simplify further ICI developments). I am working with Zbigniew Chamski and Cupertino Miranda to extend ICI so I put them in CC. The current version of ICI is actually synchronized with the latest GCC trunk, has a very small code-base and touches minimal parts of the compiler - it have been used in various projects within last 5-6 months to select optimization passes or change their orders per function to optimize performance/code-size for programs and Linux libraries at ARC, IBM and STMicro or test the new local releases of a compiler and instrument program for further analysis. I need to look through all the feedback we got from our users to update the Plugins Wiki with the practical usage scenarios and different plugins we developed - I will try to do it within next few days and will check the proposed GCC plugins API to sync with ICI... Take care and have a good week, Grigori > -Original Message- > From: Diego Novillo [mailto:dnovi...@google.com] > Sent: Friday, February 06, 2009 9:49 PM > To: gcc@gcc.gnu.org > Cc: Sean Callanan; Taras Glek; Le-Chun Wu; Basile STARYNKEVITCH; Grigori > Fursin; Gerald > Pfeifer > Subject: [plugins] Branch for plugins development created > > I have created the plugins branch (rev. 143989). As I offered before, > I will help maintain the branch synchronized with mainline and with > patch reviews. The branch can be checked out with > > $ svn co svn://gcc.gnu.org/svn/gcc/branches/plugins > > As usual, I created a wiki page for the branch > (http://gcc.gnu.org/wiki/plugins) and patched svn.html to document the > branch. > > Before I can accept patches, however, I need to make sure that > everyone has copyright assignments on file. I know that Taras, Basile > and Le-Chun do. Sean and Grigori, do you folks have copyright papers > already? > > I understand that Le-Chun has an initial patch for the branch based on > the API we've been discussing. If everyone agrees, I will review that > for inclusion. > > Gerald, is this patch to svn.html OK? I see that there are some stale > entries in there. I'll clean them up with a follow-up patch. > > > Thanks. Diego. > > Index: htdocs/svn.html > === > RCS file: /cvs/gcc/wwwdocs/htdocs/svn.html,v > retrieving revision 1.109 > diff -d -u -p -r1.109 svn.html > --- htdocs/svn.html 9 Dec 2008 15:27:03 - 1.109 > +++ htdocs/svn.html 6 Feb 2009 20:40:35 - > @@ -391,6 +391,11 @@ the command svn log --stop-on-copy >DWARF-4 is currently under development, so changes on this branch >will remain experimental until Version 4 is officially finalized. > > + plugins > + This branch adds plugin functionality to GCC. See the + href="http://gcc.gnu.org/wiki/plugins";>plugins wiki for > + details. > + > > > Architecture-specific
[plugins] Comparison of plugin mechanisms
Dear all, Zbigniew and I prepared a page on the GCC Wiki comparing several current plugin mechanisms (some parts should be updated) with some suggestions on how to move forward: http://gcc.gnu.org/wiki/GCC_PluginComparison In case we mixed up or misunderstood something about other plugin efforts, please update this page ... Basically, we currently see 3 complementary categories of GCC plugins, depending on the nature of the extension: production, experimentation/research, and new pass integration. Each category naturally calls for slightly different API features. Considering that there are already communities behind "production" and "experimental" plugins, we think that it would be better to merge the two. We will try to prepare a small patch to support "experimental" plugins by the beginning of next week. In the meantime, we would like to know your thoughts on the matter and how we should proceed ... Cheers, Grigori & Zbigniew
RE: [plugins] Comparison of plugin mechanisms
Hi Basile et al, Thanks a lot for detailed explanations. In addition to Zbigniew's reply: I didn't specifically put MELT into new category (I actually just updated the page and added it to the other categories too), but I still keep the new pass integration plugin, since some of our colleagues would like to add new passes including new optimizations, but have difficulties to do that since there is no full documented API, they may not want to write it in C (as you mentioned) and they are concerned about support overheads if the internal API will be changing all the time. Providing standard API would be of great help for such cases, but it rises other issues such as opening GCC to proprietary plugins, etc which has been heavily discussed here for some time. So I personally think we should leave it for the future and just implement minimalistic Plugin API that will already make many current and prospective users happy and then we can gradually extend it ... By the way, Sean, what do you think about the proposed APIs? Will it suit your purposes or some changes are also required? I would like to add info about your plugin system but just don't have much knowledge about that ... Cheers, Grigori > -Original Message- > From: Basile STARYNKEVITCH [mailto:bas...@starynkevitch.net] > Sent: Friday, February 13, 2009 8:38 AM > To: Grigori Fursin > Cc: 'Diego Novillo'; gcc@gcc.gnu.org; 'Sean Callanan'; 'Taras Glek'; 'Le-Chun > Wu'; 'Gerald > Pfeifer'; 'Zbigniew Chamski'; 'Cupertino Miranda' > Subject: Re: [plugins] Comparison of plugin mechanisms > > Hello All, > > > Grigori Fursin wrote: > > Basically, we currently see 3 complementary categories of GCC plugins, > > depending > > on the nature of the extension: production, experimentation/research, and > > new pass > > integration. Each category naturally calls for slightly different API > > features. > > > > > I am not sure of the relevance of the "new pass integration plugins" > examplified by MELT. > [on the other hand, I do know Grigori and I believe he thought quite a > lot about plugins, which I didn't. I only implemented one particular > plugin machinery = MELT, knowing well that my approach is quite peculiar > both in its goals and its implementation. I never thought of MELT as a > universal plugin machinery). > > In my view, MELT fits quite precisely in the "production plugins" > definition, while indeed I expect it to be useful mostly for > "experimental/research" plugins. > > In my view also, the "new pass integration plugin" category should not > really exist, because it probably can fit inside one (or both) of the > above categories. > > MELT definitely claims to fit into the "production plugins" slot, > because MELT always was concerned by efficiency and most importantly > close integration of GCC internal structures. The major point of MELT > is its several idioms to fit into the *evolving* GCC internals API, > and I claim that the various MELT idioms (see my GROW paper) make > close integration into GCC internals possible, and perhaps even easy > (for each internal "feature" of GCC, it is really easy to code the > couple of MELT line to use it). > > Of course, MELT is mostly motivated by "experimental/research plugins", > in the sense that MELT will be mostly useful for experimental and > prototyping. I never thought that MELT would be useful for coding > definitive optimisation passes, but it should be useful to at least > prototype them. Actually, I really believe that for ordinary > optimisation, any plugin machinery is not used. 
In other words, I tend > to think that even in -O3 (or a future -O4) no plugin will be dlopen-ed > by default in GCC. [by the way, I believe that this last fact is > unfortunate; I would like plugins to be routinely used inside GCC, but I > do know that most of the GCC community disagrees.] > > Also, I don't understand why "production plugins" or > "experimental/research plugins" could not be coded in another language > than C. For sure, they could probably be coded in a suitable subdialect > of C++ at least. > > I do like Grigori's plugin API proposal (but again, I definitely do not > claim to be a plugin theorist, only a particular plugin implementor, > with MELT having specific needs & solutions). > > I have not yet understood exactly how Grigori's production plugin API > can be used to add e.g. one plugin pass inside the pass manager, e.g. how to > add one pass fooplugin_pass provided by a plugin just after the > pass_inline_parameters, or another p
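For concreteness, the question Basile raises here — adding a pass such as fooplugin_pass right after pass_inline_parameters — maps fairly directly onto the pass-positioning hooks of the plugin API that later landed in GCC 4.5. Below is a rough sketch under that API: fooplugin_pass and execute_fooplugin are made-up names, the pass body is a placeholder, and the exact header set and the reference pass name ("inline_param") would need checking against the tree rather than being taken as definitive.

/* Sketch: insert a plugin pass after pass_inline_parameters using the
   plugin API as it later appeared in GCC 4.5.  Names are illustrative.  */
#include "gcc-plugin.h"
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tree.h"
#include "tree-pass.h"

int plugin_is_GPL_compatible;

static unsigned int
execute_fooplugin (void)
{
  /* Observe or transform the current function here.  */
  return 0;
}

static struct gimple_opt_pass fooplugin_pass =
{
  {
    GIMPLE_PASS,
    "fooplugin",                /* name */
    NULL,                       /* gate */
    execute_fooplugin,          /* execute */
    NULL, NULL,                 /* sub, next */
    0,                          /* static_pass_number */
    TV_NONE,                    /* tv_id */
    PROP_cfg,                   /* properties_required */
    0, 0, 0, 0                  /* properties provided/destroyed, todo flags */
  }
};

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  struct register_pass_info pass_info;

  pass_info.pass = &fooplugin_pass.pass;
  /* Assumed dump name of pass_inline_parameters; verify in the sources.  */
  pass_info.reference_pass_name = "inline_param";
  pass_info.ref_pass_instance_number = 1;
  pass_info.pos_op = PASS_POS_INSERT_AFTER;

  register_callback (plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                     NULL, &pass_info);
  return 0;
}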
RE: [plugins] Comparison of plugin mechanisms
Hi Le-chun and Taras, You are right that instead of categorizing the plugin API into "production", "research", etc, we can just introduce several layers of abstraction that will be useful for different purposes such as quick prototyping or tightly coupled plugins, etc. We will update the Wiki to reflect that and avoid misunderstandings ... Zbigniew and I are preparing a patch that will include a higher abstraction layer that will be relying on the lower-level abstraction you documented (we just need to clean it now and we should be able to do it by the end of this week). We also provide high-level routines for pass manipulation since it's also critical for us and our current users - we can already add new code analysis/instrumentation passes or select/de-select/reorder existing passes easily - we will just need to synchronize on that implementation altogether to avoid duplicate work ... Will keep in touch, Grigori > -Original Message- > From: Le-Chun Wu [mailto:l...@google.com] > Sent: Wednesday, February 18, 2009 1:35 AM > To: Grigori Fursin > Cc: Basile STARYNKEVITCH; Diego Novillo; gcc@gcc.gnu.org; Sean Callanan; > Taras Glek; Gerald > Pfeifer; Zbigniew Chamski; Cupertino Miranda > Subject: Re: [plugins] Comparison of plugin mechanisms > > Hi Grigori and Zbigniew, > > I just took a look at the wiki page you guys put together. While I > have some reservations about categorizing the plugin APIs into > "production", "research", and "new pass", I totally agreed with you > that the plugin support should be implemented in layers, starting from > the core support that provides basic APIs and exposes all the internal > GCC data structures, to more complicated support that allows more > abstract APIs and maybe drastic changes in GCC compilation pipeline. > > I agree with Basile that the support for pass management should not be > in a different category. I think allowing plugin modules to hook new > passes into GCC is an important requirement for plugins to be really > useful, so we should provide basic pass management support (such as > adding a new pass or replacing an existing pass) even in the core (or > "production") APIs. And in the advanced/extended (or "research") APIs, > we can allow the whole pass manager to be replaced (probably like what > you did in ICI). In our plugin prototype, we have implemented basic > support for adding new passes and defined APIs that allow the plugin > writers to specify where/when to hook in the new passes. I will modify > the GCC Plugin API wiki with our proposal so that people can comment > on it. > > You mentioned that you will be sending out a patch this week. Will > your patch contains the implementation for both the "production" and > "research" APIs? If your patch contains only the research APIs, we can > prepare an official patch for the "production" APIs (based on the > patch that I send out a while back ago) and send it out for review > this week (if no one else would like to do it, that is). > > Thanks, > > Le-chun > > > On Fri, Feb 13, 2009 at 1:26 AM, Grigori Fursin > wrote: > > Hi Basile et al, > > > > Thanks a lot for detailed explanations. 
> > > > In addition to Zbigniew's reply: > > I didn't specifically put MELT into new category (I actually just > > updated the page and added it to the other categories too), > > but I still keep the new pass integration plugin, since > > some of our colleagues would like to add new passes including > > new optimizations, but have difficulties to do that since there > > is no full documented API, they may not want to write it in C > > (as you mentioned) and they are concerned about support overheads > > if the internal API will be changing all the time. Providing > > standard API would be of great help for such cases, but it rises > > other issues such as opening GCC to proprietary plugins, etc > > which has been heavily discussed here for some time. So I personally > > think we should leave it for the future and just implement minimalistic > > Plugin API that will already make many current and prospective users > > happy and then we can gradually extend it ... > > > > By the way, Sean, what do you think about the proposed APIs? > > Will it suit your purposes or some changes are also required? > > I would like to add info about your plugin system but just > > don't have much knowledge about that ... > > > > Cheers, > > Grigori > > > >> -Original Message- > >> From: Basile STARYNKEVITCH [mailto:bas...@starynkevitch.net] > >> Sent: Fri
RE: GCC at Google Summer of Code'2009
Hi Manuel, Sure and thanks for the info - I know that students have to submit proposals, but maybe I misunderstood the concept: I have been talking to a few mentors and students (not GCC related) who got their proposals accepted in the last year's Google Summer of Code and they basically told me that the mentors listed many different proposals so that students could have a choice and then they submitted proposals together. But maybe it was the wrong way to do :( ... So, my idea was to sync on the potential proposals with GCC community so that students could have a choice. So, I converted the table to the bullet list format ... Thanks again for your info and sorry about misunderstanding, Grigori > -Original Message- > From: Manuel López-Ibáñez [mailto:lopeziba...@gmail.com] > Sent: Thursday, February 26, 2009 6:17 PM > To: Sebastian Pop > Cc: Grigori Fursin; gcc@gcc.gnu.org; Basile STARYNKEVITCH; Diego Novillo; > Taras Glek; Zbigniew > Chamski; Sean Callanan; Cupertino Miranda; Joseph S. Myers; Le-Chun Wu; > Albert Cohen; Michael > O'Boyle; Paul H J Kelly; Olivier Temam; Chengyong Wu; Ayal Zaks; Bilha > Mendelson; Mircea > Namolaru; Erven Rohou; Cosmin Oancea; David Edelsohn; Kenneth Zadeck > Subject: Re: GCC at Google Summer of Code'2009 > > Hi Grigori, > > About the wiki page http://gcc.gnu.org/wiki/SummerOfCode > > Perhaps the table format for specific project ideas is clearer than > the bullet list format. However, we should not have two formats. If > the table is preferred, I suggest that other people that have added > ideas in a bullet list format, update the wiki page and add their > ideas to the table. > > However, you know that project proposals are submitted by students, > don't you? The way you wrote it in the wiki suggests (to me) that > mentors submit projects and then students are recruited. I have > modified it to avoid this confusion. > > Also, you do not give contact information in the table, so what is the > point of listing who is interested? > > Furthermore, as far as I know, there is only one student per project, > and students may not wish to see their name listed in a wiki page if > their project was ultimately not accepted. So I do not see the point > of the column student. Students should contact the project contact or > the gcc list and ask for mentors. > > Cheers, > > Manuel. > > 2009/2/26 Sebastian Pop : > > Hi Grigori, > > > > On Thu, Feb 26, 2009 at 04:57, Grigori Fursin > > wrote: > >> Hello All, > >> > >> I just saw an announcement that a new Google Summer of Code'2009 > >> (http://code.google.com/soc) will be accepting project proposals > >> in a week or so. My colleagues and I would like to submit a few proposals > >> so wanted to ask if someone is interested in that to synchronize > >> submissions. > >> > >> Basically, within last 2 years we had some interesting results on > >> collective > >> program optimizations and predictive modeling from the MILEPOST project > >> and we > >> would like to extend the following projects to move our technology to the > >> community > >> (also based on the feedback from our users and GROW workshop participants): > >> > >> 1) extend GCC plugin framework/ICI/MILEPOST framework to enable fine-grain > >> transformation parameter tuning. 
Currently we can perform search for good > >> combinations of passes, their orders and parameter tuning on function level > >> and we would like to be able to do it for each individual transformation > >> to tune optimization heuristic for GRAPHITE loop transformations, loop > >> vectorization, > >> inlining, unrolling, etc. > >> > > > > I would be pleased to co-advise the student/s working on projects > > related to Graphite and loop transforms. > > > > Sebastian > >
RE: GCC at Google Summer of Code'2009
Sure, Diego! By the way, we just finished preparing the small patch for the high-level plugin API (that includes pass manipulation and parameter tuning) synchronized with the current plugin branch (on top of Le-Chun's patch) and should be able to send it tonight ... Cheers, Grigori > -Original Message- > From: Diego Novillo [mailto:dnovi...@google.com] > Sent: Friday, February 27, 2009 4:42 PM > To: Grigori Fursin > Cc: gcc@gcc.gnu.org; Basile STARYNKEVITCH; Taras Glek; Zbigniew Chamski; Sean > Callanan; > Cupertino Miranda; Joseph S. Myers; Le-Chun Wu; Sebastian Pop; Albert Cohen; > Michael O'Boyle; > Paul H J Kelly; Olivier Temam; Chengyong Wu; Ayal Zaks; Bilha Mendelson; > Mircea Namolaru; > Erven Rohou; Cosmin Oancea; David Edelsohn; Kenneth Zadeck > Subject: Re: GCC at Google Summer of Code'2009 > > On Thu, Feb 26, 2009 at 05:57, Grigori Fursin wrote: > > > I am fine to mentor a few of them (particularly from 1-3) but would like to > > see if someone > > is interested to help with that ?.. I added these topics to the GCC GSOC > > page: > > http://gcc.gnu.org/wiki/SummerOfCode > > and would be happy if you modify it or tell me if you are interested ... > > I am interested in 1-3. I expect to be spending some more time on the > plugins branch in the near future. As Manuel pointed out, it's the > student's responsibility for coming up with a project. We do publish > those projects that we would be interested in seeing completed. But > it's ultimately up to the student. > > > Diego.
RE: GCC at Google Summer of Code'2009
Sure, I moved my project suggestions to "other projects" section and added contact info ... Cheers, Grigori > -Original Message- > From: Manuel López-Ibáñez [mailto:lopeziba...@gmail.com] > Sent: Thursday, February 26, 2009 8:41 PM > To: Grigori Fursin > Cc: Sebastian Pop; gcc@gcc.gnu.org > Subject: Re: GCC at Google Summer of Code'2009 > > 2009/2/26 Grigori Fursin : > > Hi Manuel, > > > I have been talking to a few mentors and students (not GCC related) > > who got their proposals accepted in the last year's Google Summer of Code > > and they basically told me that the mentors listed many different proposals > > so that students could have a choice and then they submitted proposals > > together. But maybe it was the wrong way to do :( ... > > You got it right but this is not what it looked like when you wrote a > table called "2009 Proposals" with a blank column "Students?" > separated from a section called "Project Ideas". > > Also, as I said, if you want students to contact you (or someone) > directly, then you should give contact information. I think having > contact information (obfuscated email, link to wiki user page, IRC > name at #gcc, whatever) could be very useful to track who proposed > what. That is why I did not delete it. > > > So, my idea was to sync on the potential proposals with GCC community > > so that students could have a choice. So, I converted the table to the > > bullet > > list format ... > > This is perfectly fine. The only problem is that there were already > proposals in that page. > > Table or bullet points, I do not care, but both things are a bit > confusing. Nonetheless, there could be a list/table of specific > projects and another list/table of "general" ideas. I think it would > be useful to separate the two, if you wish to do so. > > Cheers, > > Manuel.
RE: GCC at Google Summer of Code'2009
Thank you for the info, Liang! We can sync off-line about potential project submissions ... Cheers, Grigori > -Original Message- > From: lpeng [mailto:pengli...@ict.ac.cn] > Sent: Thursday, March 05, 2009 7:29 AM > To: Grigori Fursin > Cc: gcc; cwu; fangshuangde; huangyuanjie > Subject: Re: GCC at Google Summer of Code'2009 > > > -Original Message- > > From: Grigori Fursin [mailto:gfur...@gmail.com] On Behalf Of Grigori Fursin > > Sent: Thursday, February 26, 2009 6:57 PM > > To: gcc@gcc.gnu.org > > Cc: 'Basile STARYNKEVITCH'; 'Diego Novillo'; 'Taras Glek'; 'Zbigniew > > Chamski'; 'Sean Callanan'; 'Cupertino Miranda'; 'Joseph S. Myers'; 'Le-Chun > > Wu'; 'Sebastian Pop'; 'Albert Cohen'; 'Michael O'Boyle'; 'Paul H J Kelly'; > > 'Olivier Temam'; 'Chengyong Wu'; 'Ayal Zaks'; 'Bilha Mendelson'; 'Mircea > > Namolaru'; 'Erven Rohou'; 'Cosmin Oancea'; 'David Edelsohn'; 'Kenneth > > Zadeck' > > Subject: GCC at Google Summer of Code'2009 > > > > Hello All, > > > > I just saw an announcement that a new Google Summer of Code'2009 > > (http://code.google.com/soc) will be accepting project proposals > > in a week or so. My colleagues and I would like to submit a few proposals > > so wanted to ask if someone is interested in that to synchronize > > submissions. > > > > 4) Extend GCC generic function cloning to be able to create > > static binaries adaptable to different architectures at run-time. > > Cupertino Miranda has been extending this work recently and we may > > need to sync on that ... > > > > I am fine to mentor a few of them (particularly from 1-3) but would like to > > see if someone > > is interested to help with that ?.. I added these topics to the GCC GSOC > > page: > > http://gcc.gnu.org/wiki/SummerOfCode > > and would be happy if you modify it or tell me if you are interested ... > > > > Thanks, > > Grigori > Hello Grigori, > I am a master student from Institute of Computing Technology of China, > my name is Liang Peng. I heard the news of Google Summer of Code'2009 from > your email, > and I am very interested in one of your projects:"Extend GCC generic function > cloning to be > able to create static binaries adaptable to different architectures at > run-time(including > multiple ISA generation)". > To data, my primary work is porting and optimizing GCC for > embedded-oriented cpu- > loongson232 in professor Chengyong Wu's group, and I have also got a > reasonable understanding > of function cloning and ICI, looking forward to participate in the work. > > Thanks > liang peng
RE: GCC at Google Summer of Code'2009
Hi Yanjie, Glad that you would like to extend GCC ICI/MILEPOST. We should sync with Diego and Sebastian about that project since they are interested as well ... In the mean time, me and Zbigniew are preparing the final release of the ICI2 for GCC 4.4 with the collaborative Wiki to continue developments - it should be ready with the official GCC 4.4 release ... Will keep in touch, Grigori > -Original Message- > From: huangyuan...@gmail.com [mailto:huangyuan...@gmail.com] On Behalf Of > Yuanjie Huang > Sent: Thursday, March 05, 2009 10:55 AM > To: Grigori Fursin; gcc-maillist > Cc: Chengyong Wu; fangshuan...@163.com; Liang Peng > Subject: Re: GCC at Google Summer of Code'2009 > > Hi Grigori, > I'm a graduate student at the Institute Of Computing Technology > Chinese Academy Of Sciences, and I'm interested in the Summer of Code > projects you list in the gcc wiki, especially the one to extend the > ICI/MILEPOST framework to enable fine-grain tunning. As compiler is my > research area, I've read about the framework before, and now I'm > looking forward to launch a SoC project on it. > > Cheers, > Yuanjie > > > -Original Message- > > From: Grigori Fursin [mailto:gfur...@gmail.com] On Behalf Of Grigori Fursin > > Sent: Thursday, February 26, 2009 6:57 PM > > To: gcc@gcc.gnu.org > > Cc: 'Basile STARYNKEVITCH'; 'Diego Novillo'; 'Taras Glek'; 'Zbigniew > > Chamski'; 'Sean Callanan'; 'Cupertino Miranda'; 'Joseph S. Myers'; 'Le-Chun > > Wu'; 'Sebastian Pop'; 'Albert Cohen'; 'Michael O'Boyle'; 'Paul H J Kelly'; > > 'Olivier Temam'; 'Chengyong Wu'; 'Ayal Zaks'; 'Bilha Mendelson'; 'Mircea > > Namolaru'; 'Erven Rohou'; 'Cosmin Oancea'; 'David Edelsohn'; 'Kenneth > > Zadeck' > > Subject: GCC at Google Summer of Code'2009 > > > > Hello All, > > > > I just saw an announcement that a new Google Summer of Code'2009 > > (http://code.google.com/soc) will be accepting project proposals > > in a week or so. My colleagues and I would like to submit a few proposals > > so wanted to ask if someone is interested in that to synchronize > > submissions. > > > > 4) Extend GCC generic function cloning to be able to create > > static binaries adaptable to different architectures at run-time. > > Cupertino Miranda has been extending this work recently and we may > > need to sync on that ... > > > > I am fine to mentor a few of them (particularly from 1-3) but would like to > > see if someone > > is interested to help with that ?.. I added these topics to the GCC GSOC > > page: > > http://gcc.gnu.org/wiki/SummerOfCode > > and would be happy if you modify it or tell me if you are interested ... > > > > Thanks, > > Grigori > > -- > Yuanjie Huang
RE: GCC at Google Summer of Code'2009
Dear all, Just a brief note that after a few months of redevelopment I finally opened a new collaborative website to continue the Interactive Compilation Interface developments, in the hope of making GCC not only the default open-source compiler but also a default compiler for academic and industrial research: http://ctuning.org/ici I would like to thank Zbigniew Chamski for his huge effort in moving the old developments to the new ICI2, syncing with GCC 4.4 and syncing with the current GCC plugin branch. The idea is that many current ICI features are not necessarily needed by the general GCC community but are now extensively used in academic research. So, they are currently implemented on top of the GCC plugin branch and allow quick prototyping of new development and research ideas. If some ideas prove useful, ICI and the GCC plugin branch will make it possible to move them to the main branch much faster. So, I just hope that it will boost innovation for GCC to produce better, faster and smaller code. Also, if some of you are still interested in extending GCC with ICI for Google Summer of Code (if it's not too late) or even outside it, I listed some topics here based on ICI user feedback over the last 2 years: http://ctuning.org/wiki/index.php/CTools:ICI:Projects Finally, I may not be available for some time due to personal and professional reasons, but we already have a small ICI research community, so you are welcome to join our mailing list and discuss ICI issues there: http://ctuning.org/wiki/index.php/Community Take care and maybe see you at the GCC Summit (if my proposal is accepted ;) ...) Grigori = Grigori Fursin, INRIA http://fursin.net
RE: GCC at Google Summer of Code'2009
Hi Phil, Sorry I couldn't reply earlier to your email (have a few deadlines at the moment) so will reply here: I would be extremely interested to see the support for OpenCL and either GPU or CELL implemented in GCC. My personal interest here is to extend work on adaptive scheduling. A few years ago together with UPC colleagues I started a small project to do predictive code scheduling and data manipulation for heterogeneous processors. We used CUDA and CPU/GPU processors. The projects stalled but we managed to get some preliminary results - if you are interested you can find more info about run-time predictive scheduling at this paper and presentation: http://unidapt.org/index.php/Dissemination#JGVP2009 I will not be available much this summer due to personal constraints, but if you will manage to get support for this work, I would be interested to see if we can connect this framework with the Collective Optimization Framework at the end to automatically learn how to predict good scheduling strategy based on kernel and dataset parameters. This relates to your "Able to manage scheduling, compute and memory resources"... Take care and good luck, Grigori Paolo, Thanks for the feedback, I am not very experienced in compilers so it is hard to judge how long a task will take... By sharing I meant sharing of code between NVIDIA and GCC. It probably won't happen I guess. Here is my proposal for an OpenCL runtime with a target runtime as well. If you think it is too ambitious or not ambitious enough, I will change it. = Project Title: Make the OpenCL Platform Layer API and Runtime API for the Cell Processor and CPUs. Project Synopsis: The aim of this project is to create an implementation that supports the Platform Layer API and Runtime API of OpenCL that can target the Cell Processor and CPUs. The Platform Layer API is: -A hardware abstraction layer over diverse computational resources -An interface to query, select and initialize compute devices -An interface to create compute devices and work-queues The Runtime API is: -Able to execute compute kernels -Able to manage scheduling, compute and memory resources (I am confused as to the wording of this, does it mean: manage the scheduling of compute and memory resources?) (Source http://www.khronos.org/developers/library/overview/opencl_overview.pdf, page 13). This project will use the existing gcc and ppu-gcc/spu-gcc compilers for offline compilation of binary programs. Project Details: (Part 1) In this project I will make a C library and runtime that supports some of the functions listed here: http://www.khronos.org/registry/cl/api/1.0/cl.h Specifically I will add support for: clGetPlatformInfo - Get info about OpenCL clGetDeviceIDs - Get what devices are supported on system clGetDeviceInfo - Get info about a specific device clCreateContext - Create an OpenCL context clReleaseContext - Release an OpenCL context clCreateCommandQueue - Create a command-queue on a specific device clReleaseCommandQueue - Release a command-queue clCreateBuffer - Create a buffer object clEnqueueReadBuffer - Enqueue a read clEnqueueWriteBuffer - Enqueue a write clCreateProgramWithBinary - Create a program object from a pre-compiled binary. 
clReleaseProgram - Release a program object clCreateKernel - Create a kernel object clReleaseKernel - Release a kernel object clSetKernelArg - Set the kernel arguments clEnqueueNDRangeKernel - Enqueue a command to execute a kernel on a device clEnqueueTask - Enqueue a single work item clWaitForEvents - Wait for events to complete clReleaseEvent - Release an event This will allow for rudimentary launches of CPU and Cell kernels in a common interface. Any functions that are required for the above to work will also be added. The OpenCL compiler will not be implemented. (Part 2) Also, a runtime library for the target (CPU or Cell) must be created that includes the following intrinsics: Information Functions: (section 6.11.1 of http://www.khronos.org/registry/cl/specs/opencl-1.0.33.pdf) uint get_work_dim () size_t get_global_size (uint dimindx) size_t get_global_id (uint dimindx) size_t get_local_size (uint dimindx) size_t get_local_id (uint dimindx) size_t get_num_groups (uint dimindx) size_t get_group_id (uint dimindx) Synchronization Functions: (sections 6.11.9 - 6.11.10 of http://www.khronos.org/registry/cl/specs/opencl-1.0.33.pdf) void barrier (cl_mem_fence_flags flags) void mem_fence (cl_mem_fence_flags flags) void read_mem_fence (cl_mem_fence_flags flags) void write_mem_fence (cl_mem_fence_flags flags) Async Copies to/from Memory: (section 6.11.11 of http://www.khronos.org/registry/cl/specs/opencl-1.0.33.pdf) event_t async_work_group_copy (__local gentype *dst, const __global gentype *src,size_t num_elements, event_t event) event_t async_work_group_copy (__global gentype *dst,const __local gentype *src,size_t num_elements, event_t event) void
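As a rough illustration of what Part 1 of the proposal has to support, here is a minimal host-side call sequence in plain C against the standard OpenCL 1.0 headers. The binary file name ("kernel.bin") and kernel name ("square") are made up, error handling is omitted, and this is only a sketch of the API surface, not part of the proposal itself.

/* Minimal host-side sketch of the OpenCL 1.0 call sequence the proposed
   runtime must support.  "kernel.bin" and "square" are illustrative.  */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int
main (void)
{
  cl_int err;
  cl_platform_id platform;
  cl_device_id device;

  clGetPlatformIDs (1, &platform, NULL);
  clGetDeviceIDs (platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

  cl_context ctx = clCreateContext (NULL, 1, &device, NULL, NULL, &err);
  cl_command_queue queue = clCreateCommandQueue (ctx, device, 0, &err);

  /* Move data to the device.  */
  float host_buf[64] = { 0 };
  size_t bytes = sizeof host_buf;
  cl_mem dev_buf = clCreateBuffer (ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);
  clEnqueueWriteBuffer (queue, dev_buf, CL_TRUE, 0, bytes, host_buf,
                        0, NULL, NULL);

  /* Load a pre-compiled kernel binary (no OpenCL C compiler, as in Part 1).  */
  FILE *f = fopen ("kernel.bin", "rb");        /* illustrative path */
  fseek (f, 0, SEEK_END);
  size_t len = (size_t) ftell (f);
  fseek (f, 0, SEEK_SET);
  unsigned char *bin = malloc (len);
  if (fread (bin, 1, len, f) != len)
    return 1;
  fclose (f);

  const unsigned char *bins[] = { bin };
  cl_int bin_status;
  cl_program prog = clCreateProgramWithBinary (ctx, 1, &device, &len, bins,
                                               &bin_status, &err);
  clBuildProgram (prog, 1, &device, NULL, NULL, NULL);

  cl_kernel kernel = clCreateKernel (prog, "square", &err);  /* illustrative */
  clSetKernelArg (kernel, 0, sizeof (cl_mem), &dev_buf);

  size_t global = 64;
  clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
  clEnqueueReadBuffer (queue, dev_buf, CL_TRUE, 0, bytes, host_buf,
                       0, NULL, NULL);

  clReleaseKernel (kernel);
  clReleaseProgram (prog);
  clReleaseMemObject (dev_buf);
  clReleaseCommandQueue (queue);
  clReleaseContext (ctx);
  free (bin);
  return 0;
}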
Open optimization repository updated
Hi all, Just wanted to mention that we updated the Collective Optimization Database with various optimization cases (for Intel and AMD processors at the moment) for multiple benchmarks such as EEMBC, SPEC2006, etc., and started collecting data from several users to compare different compilers including GCC, LLVM, Open64, Intel, etc. You can access it at: http://ctuning.org/cdatabase I would be very happy to have your feedback about this repository, and to hear whether you would be interested in providing optimization data to it to improve GCC and help end-users optimize their programs. Yours, Grigori Fursin Grigori Fursin, INRIA, France http://fursin.net/research
Re: LLVM as a gcc plugin?
Hi guys, Just saw this discussion so wanted to mention that we at HiPEAC are now interested in using both GCC as a static compiler and LLVM as a run-time infrastructure for research, and several colleagues wanted to port the ICI framework (the recent release is based on the "official" gcc plugin branch) to LLVM. We want to have both the official gcc plugins and the ICI addition on top of them, since we already have a relatively large community around those tools and ICI plugins, and additional tools for automatic program optimization. I am unlikely to be involved in that now because I just don't have time, so I CCed this email to Andy Nisbet, who has been interested in providing a plugin system for LLVM, Zbigniew Chamski, who supports ICI for GCC, and also Albert Cohen and Ayal Zaks, who are also coordinating those activities within HiPEAC. The idea is to make GCC and LLVM more attractive to researchers (i.e. easy to use without knowing much about the internals) so that research ideas can go back into the compilers much faster, improving GCC and LLVM ... Cheers, Grigori > On Jun 3, 2009, at 11:30 PM, Uros Bizjak wrote: > >Hello! > >Some time ago, there was a discussion about integrating LLVM and GCC >[1]. However, with plugin infrastructure in place, could LLVM be >plugged into GCC as an additional optimization plugin? > > >[1] http://gcc.gnu.org/ml/gcc/2005-11/msg00888.html > > > Hi Uros, > > I'd love to see this, but I can't contribute to it directly. I think the > plugin interfaces would need small > extensions, but there are no specific technical issues preventing it from > happening. LLVM has certainly progressed a > lot since that (really old) email went out :) > > -Chris
[GSoC] [plugins] [ici] function cloning + fine-grain optimizations/program instrumentation
Hi Yuanjie, Liang, et al, This email is about further GSoC'09 developments for plugins, generic function cloning, fine-grain optimizations and program instrumentation this summer. Considering that the basic infrastructure is now available I would like to agree on further developments based on the feedback I got during last 3 weeks so that we could extend the projects quickly. Though this email primarily concerns Yuanjie and Liang, I am sending this email to all the colleagues involved in the project or who has been interested at some point as well as GCC and cTuning mailing lists just to make everyone aware of the developments. This is a long email so if you are not interested in these projects, please skip it ... 1) Originally we thought to use stable GCC 4.4.0 with plugin/ICI support for GSoC (already prepared), however considering that GCC 4.5 will have plugin support and extended function cloning capabilities, we should eventually move all the developments to the trunk. Zbigniew mentioned that he will synchronize ICI with the current trunk fully within 2 weeks, so we can start working on GCC 4.4.0 (with plugins and ICI) until then (plugins shouldn't change much but some gluing with new GCC will be required) and then sync with GCC 4.5 + synced ICI. 2) We need to prepare a plugin that uses XML library (libxml2 for example - http://xmlsoft.org) and records basic information about compilation flow. I suggest that we record the following info per function for now (we can use filename gcc_compilation_flow..xml for example): * GCC version * Plugin version which has been used to record info * File name * function name ** Within inter-procedural stage, we can call a function name #IP# and besides IP passes also provide some global info such as which optimization flags/parameters has been used * function start line (source) and end line and other currently available ICI features from http://ctuning.org/wiki/index.php/CTools:ICI:List_of_features * function specific optimizations or code generation flags (if applicable - I think Mike Meissner's patch that enables function-specific flags has been included in GCC 4.5) * passes ** available fine-grain optimization within passes Except fine-grain optimizations, all the information should be already available and there are 2 ICI plugin (test1, test2) that show how to get this info... We should record this info per function to avoid large files for large projects since often we may want to control only a few functions. This can help with memory and cpu utilization when using libxml ... We should be able to control which functions to process using either a command line argument with a list of functions or an environment variable (which we can later convert into command line argument). If it's empty, all the functions are processed. 3) When we want to perform function cloning or use fine-grain optimization/instrumentation, we can use the same XML files created during the record stage (or prepare them manually/automatically using external tools) and add additional fields. We will need to perform function cloning using a new IP pass (as described in the GCC Summit presentation by Honza Hubicka). We can provide info about which functions to clone in XML file for a IP cloning pass, i.e. 
something like: generic_cloning libadapt, other libraries if needed such as hardware counters monitoring (if needed) foo 2 _clone gcc_adapt (this function will be called before the clone and will select which clone to use based on either machine description or monitoring of hardware counters or dataset features to enable online dynamic optimization for statically compiled programs, etc) boo 3 _clone Basically, when we create clones, we need to make the following substitution for a code: foo{ /* before cloning */ ... } foo{ /* after cloning */ switch (gcc_adapt(function_number)) { case 1: foo_clone1(..); break; case 2: foo_clone2(..); break; default: /* original code */ ... } Basically, when the generic_cloning pass is invoked, it will be communicating with a plugin asking for all the necessary information to clone 1 function. The plugin will send an "End" instruction when all the functions are processed so that compilation could continue ... We need to decide how to number functions (so that the selection is fast) and how to aggregate this info is we compile projects with multiple functions. Also, we can have a mode when we skip the function number in case we adapt for different architecture and compile clones with different -msse2, -msse3 flags ... After cloning is done, all the cloned functions should appear in recorded XML file. We can then optimize those clones using different flags or different passes or changing fine-grain optimizations. We can use OProfile or gprof to monitor performance however we may also need instrumentation capabilities to add calls to external time/hardware counters monitoring rout
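A minimal C rendering of the substitution sketched above may make it clearer. The names gcc_adapt, foo_clone1/foo_clone2 and the function number follow the text; the body of gcc_adapt is only a stub for what libadapt would provide (machine description, hardware counters or dataset features), so treat this as an illustration rather than the actual pass output.

/* Sketch of the clone-dispatch substitution described above.  */
#include <stdio.h>

static int
gcc_adapt (int function_number)
{
  /* Placeholder: the real version would pick a clone based on the machine
     description, hardware counters or dataset features.  */
  (void) function_number;
  return 1;
}

/* Clones of foo, e.g. built with different optimizations or -msse flags.  */
static void foo_clone1 (int x) { printf ("clone 1: %d\n", x); }
static void foo_clone2 (int x) { printf ("clone 2: %d\n", x); }

/* foo after the cloning pass: the original body is kept as the default.  */
static void
foo (int x)
{
  switch (gcc_adapt (/* function_number */ 1))
    {
    case 1:
      foo_clone1 (x);
      break;
    case 2:
      foo_clone2 (x);
      break;
    default:
      /* original code of foo */
      printf ("original: %d\n", x);
      break;
    }
}

int
main (void)
{
  foo (42);
  return 0;
}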
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Hi all, I created the page on GCC Wiki with this info: http://gcc.gnu.org/wiki/GCC_Research Please, feel free to update or rewrite completely (if you feel that something is wrong, etc)... Hope it will be of any use ;) ... Cheers, Grigori -Original Message- From: Manuel López-Ibáñez [mailto:lopeziba...@gmail.com] Sent: Friday, April 16, 2010 6:51 PM To: Grigori Fursin Cc: Dorit Nuzman; gcc@gcc.gnu.org; erven.ro...@inria.fr; David Edelsohn Subject: Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) On 16 April 2010 13:21, Grigori Fursin wrote: > > I think, the main problem for students and researchers is that they > see lots of stuff going on with GCC and on mailing lists but they may > be shy/scared/not sure where to start if they want to contribute > or even if they will be welcome to contribute. The reason is that > some of their ideas/work may not be necessarily immediately useful to the > community > and they may be concerned that they can get lots of aggressive, negative > feedback That is why mentoring could be helpful. Technical discussions by email sometimes appear harsh and dry to newcomers. Moreover, negative opinions are more vocal than positive ones. So something that most people think is a good idea or they are indifferent may only get negative feedback from a few. > no matter how useful they are in the future. However, such feedback can > immediately > drive away young and motivated students who can otherwise become really active > contributors (look at the GRAPHITE and students contributing to GCC now, for > example). > > So, what I think could be useful, is to try to agree on what can be > some general common suggestions/recommendations to students/researchers A short list out of the top of my head for proposing ideas in gcc mailing lists: * If you do not have the time/resources/people to implement your own idea, do not expect GCC developers dropping what they are doing to help you. Volunteers have very very limited time and paid developers are paid to do something else. In fact, asking GCC developers to do anything for you, no matter how trivial it seems to you, will likely result in negative feedback. Probably it is no trivial at all. * if your idea may potentially slow down the compiler, increase memory consumption, increase complexity, remove features, or change defaults, it will receive negative feedback. Guaranteed. If you are sure that this is not the case or that the benefits outweigh the drawbacks, but GCC developers disagree, discussion is not going to solve it. The only way is to implement your idea (or a working prototype) and give substantial experimental evidence in many scenarios/targets that you are right. * If you have a great idea implemented and provide a substantial patch, expect negative feedback. There are many ongoing projects in GCC. A patch that comes out of the blue and breaks those projects will not be welcome by the people working on those projects. * Your email/patch may not receive much feedback. This may happen if you provide your idea in an old thread (people stop reading long threads after a while), your subject line was not interesting/descriptive enough (I do not read all emails from the list), the main audience of your email just missed/overlooked it by chance (bad luck, busy period, vacations), your email was too long (people stopped reading before reaching the interesting part), ... The only feasible choice is to try again sometime later with an improved message. 
* There is also the IRC channels (http://gcc.gnu.org/wiki), which are more interactive, but the same rules apply to them. Specially being ignored despite people talking to each other. That is because people are working, and sometimes they have deadlines, urgent stuff to do, they want to go home early... * Read the gcc and the gcc-patches lists for a while to get to know how things work and who is who. I am sure there are many more little rules-of-thumb I can come up with. > who may want to contribute but not sure how to approach GCC community. > Maybe we can make a page on GCC Wiki with such recommendations or even Anyone can edit the wiki, so be my guest. > maybe make a separate pre-processing mailing list for > novel/crazy/future/unclassified > ideas so that only those of you who are interested in that can follow/discuss > them > and from time to time approach this mailing list with already mature ideas > to avoid bothering others who are distracted by such discussions on this > mailing list? An example of how *not* to get things done is this "maybe we" attitude. It is likely to get no feedback, negative feedback, or positive feedback that sounds a bit negative (like my "be my guest" above for the wiki page): * It does not specify who is "we". It could be understood as asking the reader to do something that t
RE: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)
Looks good! Thanks! By the way, I sent it to the HiPEAC mailing lists too ... Cheers, Grigori -Original Message- From: Manuel López-Ibáñez [mailto:lopeziba...@gmail.com] Sent: Tuesday, April 27, 2010 6:02 PM To: Grigori Fursin Cc: Dorit Nuzman; gcc@gcc.gnu.org; erven.ro...@inria.fr; David Edelsohn Subject: Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop) On 27 April 2010 14:27, Grigori Fursin wrote: > Hi all, > > I created the page on GCC Wiki with this info: > http://gcc.gnu.org/wiki/GCC_Research > > Please, feel free to update or rewrite completely > (if you feel that something is wrong, etc)... > I think that a verbatim copy of the email seems like something set on stone, when in fact it is just my opinion and not as well written as it could have been. I converted it to a set of bullet points so people can add/remove/edit more freely. Cheers, Manuel.
RE: [RFC] Cleaning up the pass manager
Hi Diego, Thanks a lot for doing this! I was a bit sad not to be able to continue this work on pass selection and reordering, but I would really like to see the GCC pass manager improved in the future. I also forwarded your email to the cTuning mailing list in case some of the ICI/MILEPOST GCC/cTuning CC users would want to provide more feedback. By the way, one of the main reasons why I started developing ICI many years ago was to be able to query GCC to tell me all available passes and then just use an arbitrary selection and order of them for the whole program (IPO/LTO) or per function, similar to what I could easily do with SUIF in my past research on empirical optimizations and what can be easily done in LLVM now. However, implementing it was really not easy because: * We have a non-trivial (and not always fully documented) association between flags and passes, i.e. if I turn on the unroll flag, which turns on several passes, I can't later reproduce exactly the same behavior if I do not use any GCC flags but just try to turn on the associated passes through the pass manager. * I believe that the original idea of the pass manager introduced in GCC 4.x was to keep a simple linked list of passes that are executed in a given order ONLY through documented functions (API) and that can be turned on or off through the attribute in the list - this was a great idea and was one of the reasons why I finally moved to GCC from Open64 in 2004. However, I was a bit surprised to see in GCC 4.x some explicit if statements inside the pass manager that enabled some passes (for LTO) - in my opinion, this kills the main strength of the pass manager and also meant that we had trouble porting ICI to the new GCC 4.5. * Lack of a table with full dependency info for each pass that can tell you, at each stage of compilation, which passes can be selected next. I started working on that at the end of last year to get such info semi-empirically and also through the associated attributes (we presented preliminary results at GROW'10: http://ctuning.org/dissemination/grow10-08.pdf section 3.1), however again it was just before I moved to the new job so I couldn't finish it ... * The well-known problem that we have some global variables shared between passes, preventing arbitrary orders. By the way, just to be clear, this is just feedback based on the experience of my colleagues and myself, and I do not want to say that these are the most important things for GCC right now (though I think they are in the long term) or that someone should fix them, particularly since right now I am personally not working in this area, so if someone thinks that it's not important/useless/obvious, just skip it ;) ... I now see lots of effort going on to clean up GCC and to address some of the above issues, so I think it's really great and I am sad that I can't help much at this stage. However, before moving to a new job, I released all the tools from my past research at cTuning.org so maybe someone will find them useful to continue addressing the above issues ... Cheers, Grigori By the way, this brief feedback is based on what I needed for my research in the R&D we did at the beginning of this year, just before I moved to the new job. -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Diego Novillo Sent: Tuesday, June 15, 2010 4:03 AM To: gcc@gcc.gnu.org Subject: [RFC] Cleaning up the pass manager I have been thinking about doing some cleanups to the pass manager.
The goal would be to have the pass manager be the central driver of every action done by the compiler. In particular, the front ends should make use of it and the callgraph manager, instead of the twisted interactions we have now. Additionally, I would like to (at some point) incorporate some/most of the functionality provided by ICI (http://ctuning.org/wiki/index.php/CTools:ICI). I'm not advocating for integrating all of ICI, but leave enough hooks so such experimentations are easier to do. Initially, I'm going for some low hanging fruit: - Fields properties_required, properties_provided and properties_destroyed should Mean Something other than asserting whether they exist. - Whatever doesn't exist before a pass, needs to be computed. - Pass scheduling can be done by simply declaring a pass and presenting it to the pass manager. The property sets should be enough for the PM to know where to schedule a pass. - dump_file and dump_flags are no longer globals. Are there any particular pain points that people are currently experiencing that fit this? Thanks. Diego.
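To make the first bullet more concrete, here is a small, self-contained toy of the scheduling rule being proposed: a pass declares which IR properties it requires, provides and destroys, and the manager computes missing requirements instead of merely asserting on them. The struct and helpers below are simplified stand-ins invented for illustration, not GCC's real struct opt_pass or pass-manager code.

/* Toy model of property-driven pass scheduling; not GCC internals.  */
#include <stdio.h>

#define PROP_cfg  (1u << 0)
#define PROP_ssa  (1u << 1)

struct toy_pass
{
  const char *name;
  unsigned properties_required;
  unsigned properties_provided;
  unsigned properties_destroyed;
  void (*execute) (void);
};

static unsigned current_properties = PROP_cfg;

static void
run_pass (const struct toy_pass *pass)
{
  if ((current_properties & pass->properties_required)
      != pass->properties_required)
    {
      /* In the proposal, the manager would compute whatever is missing
         (e.g. go into SSA form) rather than refusing to run the pass.  */
      printf ("%s: requirements missing, computing them first\n", pass->name);
      current_properties |= pass->properties_required;
    }
  printf ("running %s\n", pass->name);
  pass->execute ();
  current_properties |= pass->properties_provided;
  current_properties &= ~pass->properties_destroyed;
}

static void execute_example (void) { /* transform the IR here */ }

static const struct toy_pass example_pass =
  { "example", PROP_cfg | PROP_ssa, 0, 0, execute_example };

int
main (void)
{
  run_pass (&example_pass);
  return 0;
}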
CFP (related to compilers): CGO 2011
(Apologies if you receive multiple copies of this announcement) CGO 2011 - CALL FOR PAPERS Ninth Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO 2011) April 2-6, 2011, Chamonix, France http://www.cgo.org The International Symposium on Code Generation and Optimization (CGO) brings together researchers and practitioners working on bridging the gap between software abstraction and hardware execution. The conference spans the spectrum from purely static to fully dynamic approaches, and from pure software-based methods to architectural features and support. Original contributions are solicited in areas including but not limited to the following: Code Generation and Optimization . Techniques for efficient execution of dynamically typed languages . Techniques for developing or targeting custom or special-purpose targets . Code generation for emerging programming models . Code transformations for energy efficiency . New or improved optimization algorithms, including profile-guided and feedback-directed optimization . Techniques for measuring and tuning optimization effectiveness . Intermediate representations enabling more powerful or efficient optimization Parallelism . Language features and runtime support for parallelism . Transformations for heterogeneous or specialized parallel targets, e.g. GPUs . Data distribution and synchronization . Virtualization support for multicore and/or heterogeneous computing . Thread extraction and thread level speculation Static and Dynamic Analysis . Profiling and instrumentation for power, memory, throughput or latency . Phase detection and analysis techniques . Efficient profiling and instrumentation techniques . Program characterization methods targeted at program optimization . Profile-guided optimization and re-optimization OS, Architecture, and Runtime Support . Architectural support for improved profiling, optimization and code generation . Integrated system design (HW/OS/VM/SW) for improved code generation, including custom or special-purpose processors . Memory management and garbage collection Security and Reliability . Code analysis and transformations to address security or reliability concerns Practical Experience . Real dynamic optimization and compilation systems for general purpose, embedded system and HPC platforms IMPORTANT DATES: Abstract deadline is September 15, 2010. Paper deadline is September 22, 2010. Please visit the conference website for paper format guidelines and submission instructions. Notification of acceptance will occur by November 10, 2010. 
General Chair Olivier Temam, INRIA Program Co-Chairs Carol Eidt, Microsoft Michael O'Boyle, University of Edinburgh Program Committee Vas Bala, IBM Francois Bodin, CAPS Enterprise and IRISA David Chase, Sun Anton Chernoff, AMD Jack Davidson, University of Virginia Lieven Eeckhout, Ghent University Grigori Fursin, EXATEC LAB, France Bjorn Franke, University of Edinburgh David Gregg, Trinity College, Dublin Thomas Gross, ETH Zurich Christophe Guillon, STMicroelectronics Rajiv Gupta, UC Riverside Anne Holler, VMWare Wei Hsu, University of Minnesota Robert Hundt, Google Paolo Ienne, EPFL, Lausanne Richard Johnson, NVIDIA Teresa Johnson, Hewlett Packard Andreas Krall, TU Vienna Tipp Moseley, Google Nacho Navarro, UPC Barcelona CJ Newburn, Intel Xipeng Shen, College of William and Mary Lee Smith, ARM Mary Lou Soffa, University of Virginia Uma Srinivasan, Intel Nathan Tallent, Rice University David Tarditi, Microsoft Christoph von Praun, Georg-Simon-Ohm Hochschule Nurnberg Richard Vuduc, Georgia Institute of Technology Ayal Zaks, IBM ** Dr. Grigori Fursin http://unidapt.org/people/gfursin **
RE: Plug-ins on Windows
>I view the current plug-in mechanism as a prototype. I think that we >should be working toward a much more robust mechanism, similar to >plug-ins for Eclipse, Firefox, MySQL, or other popular software stacks. >I certainly see no reason that plug-ins cannot work on any system that >has something roughly equivalent to dlopen (which Windows and OS X >certainly do). There's lots of prior art here. > >The key change we need to make is that instead of the current >"unstructured" approach where we essentially expose the entirety of the >GCC internals, we provide a stable, documented API that exposes a >portion of the internals. (It's fine with me if there's an expert mode >in which you can do anything you want, but it should be easy to stay >within the stable API if you want to do that.) I would start with an >API for "observing" (rather than modifying) the internal representation, > and add modification later. > >With observation alone, you could start to build interesting >static-checking tools, including, for example, domain-specific tools >that could check requirements of the Linux kernel. This would be a >powerful and exciting feature to have in GCC. Agree with this vision, but it will take some time I guess ;) ... Grigori
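As a sketch of the "observation only" starting point described above, a plugin along the following lines never modifies the IR and merely reports what the front end produced. It uses plugin entry points that do exist (plugin_init, register_callback, PLUGIN_PRE_GENERICIZE), but the exact header set and callback payload vary between releases, so treat the details as an approximation rather than a definitive recipe.

/* Sketch of a purely observational plugin: print each function name as
   it is genericized.  Header set and event payload are assumptions.  */
#include "gcc-plugin.h"
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tree.h"

int plugin_is_GPL_compatible;

static void
observe_function (void *gcc_data, void *user_data)
{
  /* For PLUGIN_PRE_GENERICIZE, gcc_data is the function declaration.  */
  tree fndecl = (tree) gcc_data;
  if (fndecl && DECL_NAME (fndecl))
    fprintf (stderr, "observed function: %s\n",
             IDENTIFIER_POINTER (DECL_NAME (fndecl)));
}

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  register_callback (plugin_info->base_name, PLUGIN_PRE_GENERICIZE,
                     observe_function, NULL);
  return 0;
}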
RE: Plug-ins on Windows
I don't disagree with your comments too, Manuel. I spent some years developing plugin framework for pass selection and reordering, and later we managed to get minimal hooks to mainline GCC based on our needs. Of course, I personally would like to see a coherent and stable API for most of the parts of GCC, but as you said it will not happen itself. Still I believe that through gradual API-zation of different parts of GCC, there will be an eventual convergence to the global and stable API ... Cheers, Grigori -Original Message- From: ctuning-discussi...@googlegroups.com [mailto:ctuning-discussi...@googlegroups.com] On Behalf Of Manuel Lopez-Ibanez Sent: Tuesday, July 06, 2010 6:42 PM To: Grigori Fursin Cc: ctuning-discussi...@googlegroups.com; Joern Rennecke; David Brown; gcc@gcc.gnu.org Subject: Re: Plug-ins on Windows On 6 July 2010 17:54, Grigori Fursin wrote: >>I view the current plug-in mechanism as a prototype. I think that we >>should be working toward a much more robust mechanism, similar to >>plug-ins for Eclipse, Firefox, MySQL, or other popular software stacks. >>I certainly see no reason that plug-ins cannot work on any system that >>has something roughly equivalent to dlopen (which Windows and OS X >>certainly do). There's lots of prior art here. >> >>The key change we need to make is that instead of the current >>"unstructured" approach where we essentially expose the entirety of the >>GCC internals, we provide a stable, documented API that exposes a >>portion of the internals. (It's fine with me if there's an expert mode >>in which you can do anything you want, but it should be easy to stay >>within the stable API if you want to do that.) I would start with an >>API for "observing" (rather than modifying) the internal representation, >> and add modification later. "We need to make"? Are there any plans (or even interest) on designing such an API in the future? My understanding is that we hope an API will slowly arise from the requirements of external tools. So plugins' authors should not wait for such an API but instead start proposing it, ideally, with patches. >>With observation alone, you could start to build interesting >>static-checking tools, including, for example, domain-specific tools >>that could check requirements of the Linux kernel. This would be a >>powerful and exciting feature to have in GCC. > > Agree with this vision, but it will take some time I guess ;) ... I don't think it requires time. It requires people asking for it and working on it. Current GCC developers do not need this and they will not work on this (as far as I know), so no one should be waiting for such tools to magically appear. However, now that the plugins framework is in-place, anyone is welcome to not only develop such tools but to propose changes that make developing such tools easier. The plugins framework is a statement of the willingness of the GCC community, despite the initial opposition from the FSF, to open the project to external contributions and usages. GCC internals have been (and are still being) renovated, and there is ongoing work to make GCC more modular. However, if there is no one proposing the particular changes required by external usages, those changes won't be made. This is also a warning against working around and second-guessing GCC. Please, don't. Instead contribute the changes (API, code, and documentation) required to make GCC more open and easier to use. I know that this is the hardest route, but it is the only one that makes us progress. Cheers, Manuel. 
RE: A Framework for GCC Plug-ins
Hi Justin, Thanks for the info - nice work! I forwarded your email to the cTuning mailing list because some colleagues who are/have been working on the Interactive Compilation Interface may be interested in this work too ... Cheers, Grigori -Original Message- From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Justin Seyster Sent: Friday, October 29, 2010 3:32 AM To: GCC Mailing List Subject: A Framework for GCC Plug-ins One of my research projects for the past few months has been a framework for writing GCC instrumentation plug-ins called InterAspect. I am releasing the project today, and since there is general interest in plug-ins on this list, I wanted to send a quick announcement with a pointer to the web site: http://www.fsl.cs.sunysb.edu/interaspect/ The idea of compiler plug-ins has a lot of potential, which is why I hope this framework can make compiler plug-ins in general more accessible to people who don't have an extensive background in GCC's internal workings. All the code for the project is available under the GPLv3. Also, I'll be presenting the project at the International Conference on Runtime Verification 2010, which is next week. Thanks a lot to everybody on the list who helped out with questions while I was working on this. --Justin
CASES'10 paper related to MILEPOST GCC
Dear all, In case someone is interested, the paper with details on the feature extraction plugin used in MILEPOST GCC/cTuning CC ("Practical aggregation of semantical program properties for machine learning based optimization") by M. Namolaru et al. from CASES'10 is now available on-line: http://fursin.net/wiki/index.php5?title=Research:Dissemination#MCFP2010 Yours, Grigori Fursin
CFP related to compilers: SMART 2011 (co-located with CGO 2011)
Apologies if you receive multiple copies of this call. CALL FOR PAPERS 5th Workshop on Statistical and Machine learning approaches to ARchitecture and compilaTion (SMART 2011) http://cTuning.org/workshop-smart2011 April 2nd or 3rd, 2011, Chamonix, France (co-located with CGO 2011 Conference) The rapid rate of architectural change and the large diversity of architecture features have made it increasingly difficult for compiler writers to keep pace with microprocessor evolution. This problem has been compounded by the introduction of multicores. Thus, compiler writers have an intractably complex problem to solve. A similar situation arises in processor design where new approaches are needed to help computer architects make the best use of new underlying technologies and to design systems well adapted to future application domains. Recent studies have shown the great potential of statistical machine learning and search strategies for compilation and machine design. The purpose of this workshop is to help consolidate and advance the state of the art in this emerging area of research. The workshop is a forum for the presentation of recent developments in compiler techniques and machine design methodologies based on space exploration and statistical machine learning approaches with the objective of improving performance, parallelism, scalability, and adaptability. Topics of interest include (but are not limited to): Machine Learning, Statistical Approaches, or Search applied to * Empirical Automatic Performance Tuning * Iterative Feedback-Directed Compilation * Self-tuning Programs, Libraries and Language Extensions * Dynamic Optimization/Split Compilation/Adaptive Execution * Speculative and Adaptive Parallelization * Low-power Optimizations * Adaptive Virtualization * Performance Modeling and Portability * Adaptive Processor and System Architecture * Architecture Simulation and Design Space Exploration * Collective Optimization * Self-tuning Computing Systems * Other Topics relevant to Intelligent and Adaptive Compilers/Architectures/OS Important Dates * Deadline for paper submission: February 7, 2011 * Decision notification: March 7, 2011 * Deadline for camera-ready papers: March 25, 2011 * Workshop: April 2 or 3, 2011 (half-day) Paper Submission Guidelines Submitted papers should be original and not published or submitted for publication elsewhere. Papers should use the LNCS format and should be 15 pages maximum. Manuscript preparation guidelines can be found at the LNCS website (http://www.springer.com/computer/lncs, go to "For Authors" and then "Information for LNCS Authors"). Papers must be submitted in PDF using the workshop submission website: http://www.easychair.org/conferences/?conf=smart2011 In addition to normal technical papers, please consider submitting a "position paper" (2 to 15 pages). For example, a position paper could include your thoughts on compiler evolution, future infrastructure technology needs, use of adaptive techniques for the Cloud, etc. An informal collection of the papers to be presented will be distributed at the workshop. All accepted papers will appear on the workshop website.
Program Chair: Francois Bodin, CAPS Entreprise, France Organizers: Grigori Fursin, Exascale Computing Research Center, France John Cavazos, University of Delaware, USA Program Committee: Denis Barthou, University of Versailles, France Marcelo Cintra, University of Edinburgh, UK Rudolf Eigenmann, Purdue University, USA Robert Hundt, Google Inc, USA Engin Ipek, Microsoft Research, USA Allen D. Malony, University of Oregon, USA Bilha Mendelson, IBM Haifa, Israel Michael O'Boyle, University of Edinburgh, UK Markus Puschel, ETH Zurich, Switzerland Lawrence Rauchwerger, Texas A&M University, USA Xipeng Shen, College of William & Mary, USA Christina Silvano, Politecnico di Milano, Italy Bronis R. de Supinski, LLNL, USA Chengyong Wu, ICT, China Qing Yi, University of Texas at San Antonio, USA Steering Committee: Francois Bodin, CAPS Entreprise, France John Cavazos, University of Delaware, USA Lieven Eeckhout, Ghent University, Belgium Grigori Fursin, Exascale Computing Research Center, France Michael O'Boyle, University of Edinburgh, UK David Padua, UIUC, USA Olivier Temam, INRIA, France Richard Vuduc, Georgia Tech, USA David Whalley, Florida State University, USA ********* Dr. Grigori Fursin http://unidapt.org/people/gfursin
New GCC ICI v0.9.5 (bug fixes + new examples)
Hi all, Just a small note that we released a new GCC-ICI (Interactive Compilation Interface) version 0.9.5. It allows function-level optimization and specialization by selecting or reordering only appropriate passes. It uses external plugins to monitor and improve the default compiler optimization heuristic. The new ICI is used in the MILEPOST project to automatically learn how to optimize programs using machine learning. It is merged with the Program Feature Extractor from IBM Haifa and will soon be available under the GCC MILEPOST branch. More information can be found here: http://gcc-ici.sourceforge.net http://www.milepost.eu Yours, Grigori Fursin = Grigori Fursin, PhD Research Scientist, INRIA, France http://fursin.net/research
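For readers who only have mainline GCC at hand, the per-function pass selection that ICI provides can be crudely imitated with the PLUGIN_OVERRIDE_GATE event that later entered GCC 4.5, partly as a result of the ICI work. The sketch below is not the ICI API: the pass name "loop2_unroll" and the function name "hot_kernel" are invented placeholders, and the exact headers and fields vary between GCC releases.

/* gate_filter.c - illustrative plugin that switches one pass off for all
   functions except a chosen one, using the mainline PLUGIN_OVERRIDE_GATE
   event (GCC 4.5-era).  This only sketches the idea behind ICI's
   per-function pass selection; it is not the ICI API itself. */
#include "gcc-plugin.h"
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tree.h"
#include "tree-pass.h"

int plugin_is_GPL_compatible;

/* gcc_data points to the gate decision GCC is about to use;
   current_pass and current_function_decl tell us where we are. */
static void
override_gate (void *gcc_data, void *user_data ATTRIBUTE_UNUSED)
{
  bool *gate_status = (bool *) gcc_data;
  const char *fn = NULL;

  if (current_function_decl && DECL_NAME (current_function_decl))
    fn = IDENTIFIER_POINTER (DECL_NAME (current_function_decl));

  /* Hypothetical policy: only ever disable the RTL loop unroller, and
     keep it enabled solely in 'hot_kernel'.  Both names are placeholders. */
  if (current_pass && current_pass->name
      && strcmp (current_pass->name, "loop2_unroll") == 0
      && (fn == NULL || strcmp (fn, "hot_kernel") != 0))
    *gate_status = false;
}

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version ATTRIBUTE_UNUSED)
{
  register_callback (plugin_info->base_name, PLUGIN_OVERRIDE_GATE,
                     override_gate, NULL);
  return 0;
}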
RE: New GCC ICI v0.9.5 (bug fixes + new examples)
Hi Taras, Thank you for your message! Our main work (GCC-ICI) is slightly orthogonal to plugins - our main goal at the moment is to improve or automatically tune the optimization heuristic for evolving systems. However, we naturally implemented it as a plugin system, and we would be happy to have a unified one instead of different versions doing similar things. In particular, our final goal is to show that we can make GCC (or other compilers) fully modular, and then we will clearly need a unified plugin interface. Unfortunately, we don't have enough people to work on the plugin system itself at the moment - we now spend most of the time on statistical search techniques to optimize programs - but I may have a few more people joining the project at the end of March, so we will look at your plugin system in more detail then. By the way, I am CCing this email to two PhD candidates: Hugh Leather (Edinburgh University) and Cupertino Miranda (INRIA), who were interested in developing a unified plugin system for GCC together with the ICI. I think Hugh even had another working prototype - if they are still interested, maybe they will comment on that ... So whenever we are ready to extend the current version of ICI for finer-grain transformations, we will get in touch to see if we can use the common plugin system !.. Take care, Grigori Grigori Fursin, PhD Research Scientist, INRIA, France http://fursin.net/research > -Original Message- > From: Taras Glek [mailto:[EMAIL PROTECTED] > Sent: Saturday, February 23, 2008 2:15 AM > To: Grigori Fursin > Cc: 'gcc' > Subject: Re: New GCC ICI v0.9.5 (bug fixes + new examples) > > Hi Grigori, > I work for Mozilla and recently we developed a few plugins for gcc and > improvised a plugin interface to gcc to support it. > So far we have been utilizing the C++ FE, but now I'm moving into > yanking data out of the middle end. I think we should collaborate on the > plugin interface and combine our efforts to push this upstream (or at > least into Linux distributions). > > The homepage for my plugin work is: > http://wiki.mozilla.org/Dehydra_GCC > > See my blog for more details: http://blog.mozilla.com/tglek > > I'm looking forward to hearing from you. > > Taras > > Grigori Fursin wrote: > > Hi all, > > > > Just a small note that we released a new GCC-ICI (Interactive > > Compilation Interface) version 0.9.5. It allows function-level > > optimization and specialization by selecting or reordering > > only appropriate passes. It uses external plugins to monitor > > and improve the default compiler optimization heuristic. > > > > The new ICI is used in the MILEPOST project to automatically > > learn how to optimize programs using machine learning. It is > > merged with the Program Feature Extractor from IBM Haifa > > and will soon be available under the GCC MILEPOST branch. > > > > More information can be found here: > > http://gcc-ici.sourceforge.net > > http://www.milepost.eu > > > > Yours, > > Grigori Fursin > > > > = > > Grigori Fursin, PhD > > Research Scientist, INRIA, France > > http://fursin.net/research > > > >
GCC-ICI V1.0 & GCC-ICI-PLUGINS V1.0 release (for the MILEPOST GCC)
Dear all, Just wanted to mention that we released the new version of GCC ICI and GCC ICI Plugins that are used in MILEPOST GCC (a machine-learning-based research compiler). I will be at the GCC Summit next week presenting this work and will be happy to discuss our current and future research & developments; however, you can already get some hints from our Summit paper: http://gcc-ici.sourceforge.net/papers/fmtp2008.pdf We hope to release MILEPOST GCC (including machine learning routines from IBM Haifa) in summer '08 as a GCC branch. Some more info is available here: http://unidapt.org/software.html#milepostgcc Yours, Grigori Fursin ==== Grigori Fursin, PhD Research Scientist, UNIDAPT Group, INRIA, France http://unidapt.org - tackling the complexity of future computing systems using machine learning
Help with identifying all cost models in GCC to make it adaptable?..
Dear All, We continue developing an adaptable GCC (MILEPOST GCC) and we plan to have more results this fall on selecting good optimization passes and their orders to improve program execution time, reduce code size and compilation time across different architectures automatically using statistical techniques and machine learning. As I mentioned during the last GCC Summit, we are now ready to look at optimizations at a finer-grain level, i.e. how to automatically tune all GCC optimization cost models within passes to decide whether to apply specific transformations and what their parameters should be. To do that, ideally we need to identify all the cost models within GCC with their dependencies, and add support to our Interactive Compilation Interface to be able to continuously monitor and bias their behavior to automatically (re)tune them on new architectures. I am afraid it will take me ages to find all the cost models within GCC myself, so I would be very grateful for any help in identifying those cost models (with the pass names and places in the source code), particularly those that are known to be hard to tune manually or that may have complex interactions with other transformations (we should be able to capture those interactions automatically)! Ideally, we would like to make GCC a fully modular compiler which will be tuned (mostly) automatically to any particular architecture using a given set of related optimizations. We hope that it may simplify evolution of the compiler and make it much easier to add new optimizations (even by end-users through plugins), since developers will have fewer problems figuring out how to plug them into the current hardwired optimization heuristic. So I hope that with your help we can make GCC the best optimizing compiler, and not only for x86 but for any architecture ;) ... By the way, in case someone is interested, you can find some of our ideas on self-tuning MILEPOST GCC here: http://gcc-ici.sourceforge.net/papers/fmtp2008.pdf Sorry for bothering you, and looking forward to hearing from you, Grigori P.S. I will be on vacation soon, so you can also contact Abid Malik from INRIA (in CC), who will help add support to GCC-ICI to monitor optimization cost models ... Grigori Fursin, PhD Research Scientist, INRIA Saclay, France http://unidapt.org - tackling the complexity of future computing systems using machine learning
RE: Help with identifying all cost models in GCC to make it adaptable?..
Thanks, David! That's a good start. I will need to find the places in GCC where these parameters are used, so that I can monitor the behavior of transformations based on them and change the compiler's decisions through ICI to learn how to tune those costs based on program execution time/code size... Also, I would prefer to concentrate on only the most critical/important costs right now (for example, within register allocation, scheduling, etc.) since this research is very time consuming ... Cheers, Grigori Grigori Fursin, PhD Research Scientist, INRIA, France http://fursin.net/research > -Original Message- > From: David Edelsohn [mailto:[EMAIL PROTECTED] > Sent: Thursday, July 31, 2008 9:44 PM > To: Grigori Fursin > Cc: 'gcc' > Subject: Re: Help with identifying all cost models in GCC to make it adaptable > ?.. > > Grigori, > > Many of the costs now are handled by GCC parameters. See > gcc/params.def accessed in the source code using PARAM_VALUE. > > Many other cost models use macros with "COST" in their name, such as > > TARGET_RTX_COSTS / rtx_cost > BRANCH_COST (and LOGICAL_OP_NON_SHORT_CIRCUIT) > MEMORY_MOVE_COST > REGISTER_MOVE_COST > MAX_CONDITIONAL_EXECUTE > > David
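To make the search for tunable costs a bit more concrete: the parameters David mentions from gcc/params.def are read inside passes through the PARAM_VALUE macro, so grepping for PARAM_VALUE (and for the *_COST target macros) is a reasonable first step toward enumerating candidate knobs. The fragment below only illustrates the shape of such code, loosely modelled on the RTL loop unroller; it is not a verbatim excerpt from GCC.

/* Illustrative pattern only, not a verbatim excerpt from gcc/loop-unroll.c:
   the kind of PARAM_VALUE query to grep for when cataloguing cost knobs. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "params.h"

/* Hypothetical helper picking an unroll factor for a loop body of
   'ninsns' instructions, bounded by two params.def knobs. */
static unsigned
pick_unroll_factor (unsigned ninsns)
{
  unsigned nunroll = PARAM_VALUE (PARAM_MAX_UNROLLED_INSNS) / ninsns;
  unsigned max_times = PARAM_VALUE (PARAM_MAX_UNROLL_TIMES);
  return nunroll > max_times ? max_times : nunroll;
}

On the command line the same knobs are exposed as, for example, --param max-unroll-times=4, which is exactly the kind of value an ICI-driven search could vary and re-measure.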
Re: machine learning for loop unrolling
Hi, In addition to Kenneth's reply, here are a few references you may want to look at: Edwin Bonilla, "Predicting Good Compiler Transformations Using Machine Learning", MS Thesis, School of Informatics, University of Edinburgh, UK, October 2004. http://www.inf.ed.ac.uk/publications/thesis/online/IM040129.pdf It's about using machine learning to predict loop unrolling. F. Agakov, E. Bonilla, J. Cavazos, B. Franke, G. Fursin, M.F.P. O'Boyle, J. Thomson, M. Toussaint and C.K.I. Williams. Using Machine Learning to Focus Iterative Optimization. Proceedings of the 4th Annual International Symposium on Code Generation and Optimization (CGO), New York, NY, USA, March 2006. http://fursin.net/papers/abcp2006.pdf You may also want to look at our project on the GCC Interactive Compilation Interface (GCC-ICI) to access internal GCC transformations and enable external optimizations, particularly using machine learning (we are now working on a new version which should be available in mid or late summer): http://gcc-ici.sourceforge.net http://www.hipeac.net/system/files?file=7_Fursin.pdf Hope it will be of some help, Grigori Fursin ========= Grigori Fursin, PhD Research Fellow, INRIA Futurs, France http://fursin.net/research Re: machine learning for loop unrolling From: Kenneth Hoste To: stefan dot ciobaca+gcc at gmail dot com Cc: GCC Date: Fri, 8 Jun 2007 21:04:05 +0200 Subject: Re: machine learning for loop unrolling References: <[EMAIL PROTECTED]> On 08 Jun 2007, at 16:31, Stefan Ciobaca wrote: Hello everyone, For my bachelor thesis I'm modifying gcc to use machine learning to predict the optimal unroll factor for different loops (inspired by this paper: http://www.lcs.mit.edu/publications/pubs/pdf/MIT-LCS-TR-938.pdf). Interesting. I'm using evolutionary algorithms for similar purposes in my current research... Of course, not all of these are relevant to gcc. I'm looking at ways to compute some of these features, hopefully the most relevant ones. If there is already some work done that I could use in order to compute some of these features, I'd be glad if you could tell me about it. Also, if anyone can think of some useful features, related to the target architecture or the loop structure, I'd be glad to hear about them. I'm afraid I can't help here, I'm not familiar at all with GCC's internals. Also, I'm interested in some benchmarks. Many of the research papers that describe compiler optimizations use the SPEC* benchmarks, but these are not free, so I'm looking for alternatives. Currently I'm looking into: - OpenBench - Botan - CSiBE - Polyhedron (thanks to richi of #gcc for the last 3) Do you know any other one that would be better? But I can help here. Polyhedron is Fortran-only, but well-suited for timing experiments (i.e. the benchmarks run long enough to have reasonable running times, but aren't too long either). CSiBE is more targeted at code size; I believe the runtimes are ridiculously small. I'm not familiar with the other two. Some other possibilities: * MiDataSets (also fairly small when run only once, but the suite allows you to adjust the outer loop iteration count to increase runtimes) [http://sourceforge.net/projects/midatasets] * MediaBench / MediaBench II: multimedia workloads, which typically iterate over frames for example [http://euler.slu.edu/~fritts/mediabench/] * BioMetricsWorkload [http://www.ideal.ece.ufl.edu/main.php?action=bmw] * BioPerf: gene sequence analysis, ...
[http://www.bioperf.org/] * some other benchmarks commonly used when testing GCC [http://www.suse.de/~gcctest] I've been using the above with GCC and most work pretty well (on x86). Here is how I'm thinking of conducting the experiment: - for each innermost loop: - compile with the loop unrolled 1x, 2x, 4x, 8x, 16x, 32x and measure the time the benchmark takes - write down the loop features and the best unroll factor - apply some machine learning technique to the above data to determine the correlations between loop features and best unroll factor Any idea which? There's a huge number of different techniques out there; choosing an appropriate one is critical to success. - integrate the result into gcc and measure the benchmarks again When using machine learning techniques to build some kind of model, a common technique is cross-validation. Say you have 20 benchmarks, no matter which ones. You use the larger part of those (for example 15) to build the model (i.e. determine the correlations between loop features and best unroll factor), and then test the performance of that model on the other ones. The important thing is not to use the benchmarks you test with when training the machine learning model. That way, you can (hopefully) show that what was learned generalizes to benchmarks that were not used for training.
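Since the cross-validation step is the part newcomers most often get wrong, here is a tiny self-contained C sketch of the leave-one-out protocol described above, using a 1-nearest-neighbour predictor over made-up loop feature vectors. The feature set, the data, and the choice of 1-NN are placeholders for illustration only; they are not the model discussed in this thread or in the MIT report.

/* loo_cv.c - leave-one-out cross-validation with a 1-nearest-neighbour
   predictor over toy loop features.  All numbers are invented; the point
   is only the protocol: never test on a sample you trained on.
   Build with: gcc -std=c99 loo_cv.c -o loo_cv -lm */
#include <math.h>
#include <stdio.h>

#define NFEAT 3   /* e.g. trip-count estimate, body size, memory ops */
#define NSAMP 6

struct sample { double f[NFEAT]; int best_unroll; };

static const struct sample data[NSAMP] = {
  {{128, 12,  4},  8}, {{ 16, 40, 10},  2}, {{256,  8,  2}, 16},
  {{ 32, 30,  8},  2}, {{512, 10,  3}, 16}, {{ 64, 20,  6},  4},
};

static double
dist (const struct sample *a, const struct sample *b)
{
  double d = 0.0;
  for (int i = 0; i < NFEAT; i++)
    d += (a->f[i] - b->f[i]) * (a->f[i] - b->f[i]);
  return sqrt (d);
}

/* Predict the unroll factor for the held-out sample using every other
   sample as training data; excluding it is the leave-one-out part. */
static int
predict (int held_out)
{
  int best = 0;
  double best_d = INFINITY;
  for (int i = 0; i < NSAMP; i++)
    {
      if (i == held_out)
        continue;
      double d = dist (&data[held_out], &data[i]);
      if (d < best_d) { best_d = d; best = i; }
    }
  return data[best].best_unroll;
}

int
main (void)
{
  int hits = 0;
  for (int i = 0; i < NSAMP; i++)   /* hold out sample i, test on it */
    hits += (predict (i) == data[i].best_unroll);
  printf ("leave-one-out accuracy: %d/%d\n", hits, NSAMP);
  return 0;
}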
run-time function adaptation for statically-compiled programs
Hi all, In case someone is interested, we just made a new patch available for GCC to enable run-time exploration of multiple optimization options and to enable run-time adaptation for various constraints on heterogeneous systems using function cloning. More information and a patch are available here: http://gcc.gnu.org/wiki/functionAdaptation Cheers, Grigori = Grigori Fursin, PhD Research Fellow, INRIA Futurs, France http://fursin.net/research
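For readers who want the flavour of the approach without reading the patch: the general idea behind run-time adaptation via function cloning can be imitated by hand in plain C with several versions of a function and a dispatcher chosen at run time. The sketch below is only the hand-written analogue; the names and the selection policy are invented for illustration and are not what the GCC patch generates.

/* clone_dispatch.c - hand-written analogue of function cloning plus a
   run-time dispatcher.  The real GCC patch generates the clones and the
   selection logic automatically; everything here is illustrative. */
#include <stdio.h>

/* Two "clones" of the same kernel; in the real setting each clone
   would be built with a different optimization variant. */
static void kernel_v1 (double *a, int n)
{
  for (int i = 0; i < n; i++)
    a[i] = a[i] * 2.0 + 1.0;
}

static void kernel_v2 (double *a, int n)
{
  for (int i = 0; i + 1 < n; i += 2)   /* manually unrolled variant */
    {
      a[i] = a[i] * 2.0 + 1.0;
      a[i + 1] = a[i + 1] * 2.0 + 1.0;
    }
  if (n & 1)
    a[n - 1] = a[n - 1] * 2.0 + 1.0;
}

/* The dispatcher: a function pointer the run-time system can retarget,
   e.g. after timing both versions on the current machine or input. */
static void (*kernel) (double *, int) = kernel_v1;

int main (void)
{
  double a[8] = { 0 };
  kernel (a, 8);        /* first call uses the default clone */
  kernel = kernel_v2;   /* adaptation decision made at run time */
  kernel (a, 8);
  printf ("a[0] = %g\n", a[0]);
  return 0;
}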
New GCC ICI (with plug-ins for dynamic pass reordering)
Hello all, In case some of you are interested, we produced a new version of the Interactive Compilation Interface (ICI) for GCC (the tarball for this release can be downloaded here: https://sourceforge.net/project/showfiles.php?group_id=180190&package_id=245842&release_id=539711) As a reminder, the main aim of the Interactive Compilation Interface (ICI) is to transform GCC into a research compiler with minimal changes. In the new version we re-designed the ICI completely, based on the valuable feedback from users after the SMART'07 workshop and the GCC HiPEAC'07 tutorial (many thanks to Cupertino Miranda, who implemented most of it). Communication with the new ICI is now performed through a dynamically linked GCC plug-in written in C (however, any high-level or scripting language can be used with a wrapper). The current implementation includes the ability to reorder or turn specific GCC passes on and off at the function level. We are currently working to implement all the remaining functions of the specified interface, add passes to extract program features, and split analysis and optimization code to enable access to compiler transformations at a fine-grain level. More info is available at the GCC-ICI project website: http://gcc-ici.sourceforge.net I am also finishing/testing a new framework for transparent iterative compilation for GCC (it works with compiler flags and pass reordering) which I plan to release in about 1.5 months... Yours, Grigori Fursin P.S. I am at the GREPS'07 workshop and the PACT'07 conference at the moment, so if some of you are there and have questions, I will be happy to have a chat ;) ... ======== Grigori Fursin, PhD Research Scientist, INRIA Futurs, France http://fursin.net/research
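As a rough picture of what "communication through a dynamically linked plug-in" means mechanically, the fragment below shows the generic POSIX dlopen/dlsym pattern a compiler can use to load such a plug-in named in an environment variable and hand control to an entry point in it. The environment variable name ICI_PLUGIN and the entry-point name ici_plugin_start are invented for illustration; they are not the actual ICI conventions.

/* Illustrative dlopen/dlsym loader, roughly how a compiler can hand
   control to an external plug-in.  ICI_PLUGIN and ici_plugin_start are
   hypothetical names, not the real ICI conventions.
   Build with: gcc loader.c -o loader -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  const char *path = getenv ("ICI_PLUGIN");   /* hypothetical variable */
  if (!path)
    return 0;                                 /* no plug-in requested */

  void *handle = dlopen (path, RTLD_NOW);
  if (!handle)
    {
      fprintf (stderr, "cannot load %s: %s\n", path, dlerror ());
      return 1;
    }

  /* Hypothetical entry point the plug-in is expected to export. */
  void (*start) (void) = (void (*) (void)) dlsym (handle, "ici_plugin_start");
  if (start)
    start ();                                 /* plug-in takes over */

  dlclose (handle);
  return 0;
}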
RE: Enabling gcc optimization pass
Hi, Actually, I just saw your post, so I wanted to say that it is possible with the new GCC-ICI framework. You can find more info here: http://gcc-ici.sourceforge.net There is no checking for pass dependencies yet; we are working on it now ... Hope it is still of some help ;), Grigori Grigori Fursin, PhD Research Scientist, INRIA Futurs, France http://fursin.net/research -Original Message- From: gcc-owner at gcc dot gnu dot org [mailto:gcc-owner at gcc dot gnu dot org] On Behalf Of Rohit Arul Raj Sent: Thursday, July 26, 2007 10:30 AM To: gcc Subject: Enabling gcc optimization pass Hi all, I have 3 functions - fun1, fun2, fun3 - in the same source file and I want to enable one or more of the GCC optimization passes for the code in fun2 only. 1. Is it possible to implement this using function attributes or #pragmas? 2. What will be its side-effects? Regards, Rohit
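For completeness: later mainline GCC releases (4.4 and newer) also grew a direct answer to Rohit's first question in the form of per-function optimization attributes and pragmas; the snippet below sketches that mechanism. It is not what GCC-ICI provides and was not available when this thread was written, and the function bodies are placeholders.

/* Per-function optimization control as later mainline GCC (4.4+)
   supports it; shown only for comparison with the ICI approach.
   Function bodies are placeholders. */

void fun1 (void) { /* compiled with the command-line -O level */ }

/* Enable extra optimization for fun2 only, e.g. -O3 plus loop unrolling. */
__attribute__ ((optimize ("O3", "unroll-loops")))
void fun2 (void) { /* gets -O3 and -funroll-loops */ }

void fun3 (void) { /* compiled with the command-line -O level */ }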