Robert Dewar <[EMAIL PROTECTED]> writes:

> Ian Lance Taylor wrote:
>
>> We spend a lot of time printing out the results of compilation as
>> assembly language, only to have to parse it all again in the
>> assembler.
>
> I never like arguments which have loaded words like "lot" without
> quantification.  Just how long *is* spent in this step, is it really
> significant?
[ It wasn't me that said the above; it was Richard Earnshaw. ]

I haven't measured it for a long time, and I never measured it properly, and I don't know how significant it has to be to care about.  I would expect that it is on the order of 1% of the time of a typical unoptimized compile-plus-assembly, but I can't prove it today.

A proper measurement would cover: the time the compiler spends formatting the output, the time it spends making system calls to write that output, the time the assembler spends making system calls to read the input, and the time the assembler spends preprocessing the assembler file, interpreting the strings, looking up instructions in hash tables, and parsing the operands.  Profiling will show that this parsing work is a significant chunk of the assembler's time, so the questions are: how much time does gcc spend formatting, and how much time do the system calls take?

I just tried a simple unoptimized compile.  -ftime-report said that final took 5% of the time (obviously final does more than formatting), the assembler took 4% of the total user time, and system time was 16% of wall-clock time.  Cutting those numbers in half makes 1% seem not implausible to me, maybe even low.

I'm considering an unoptimized compile because that is where the assembler makes the most difference--the compiler is faster and the assembler output probably tends to be longer--and an unoptimized compile is also when people care most about speed.  For an optimizing compile, the assembler is obviously going to be less of a factor.

Ian