--- David Miller <[EMAIL PROTECTED]> wrote:
> From: NightStrike <[EMAIL PROTECTED]>
> Date: Thu, 1 Nov 2007 22:34:33 -0400
> 
> > I think what is more important is the resulting binary -- does it
> > run faster?
> 
> The answer to this is situational dependant.
> 
> For example, for me, the speed of compilation at -O2 is very important
> because I'm constantly doing full tree build regressions.
> 
> There are large groups of us who pine for compilation to be as fast
> as the old MIPS compilers were, and they were fully optimizing
> and even had a more advanced register allocator than GCC has now.
> 
I find it hard to fathom why the OP would be concerned with compile and run times measured in minutes and seconds. I don't know how long your full tree build regressions take, but for me, a very small application takes half an hour to compile, and a large one can take all day. Even so, by hand tuning my code and pushing my development tools to their limits, I can have my application finish a task in minutes where my predecessors' versions took hours (something I commonly see, perhaps by chance, with the projects I find myself working on). The savings in my clients' users' time then outweighs the cost of my time by several orders of magnitude, so I don't mind waiting for a build to finish if the end product is provably correct.

There is much more to both compile time and run time performance than how fast your development tools are. I expect more recent tools to take longer than the tools I used even five years ago, simply because there is much more for them to do; and as they get better, I can use more demanding parts of the language (my preferred language is C++) that simply weren't practical a few years ago. As I do this, my tools must work harder still. It isn't only the tools, but what you do with them ...

If I may state the obvious, an outstanding programmer can easily make a mediocre development tool look good, while a mediocre programmer can make even the best tools look very bad. That said, of the open source applications I regularly download (all of good quality), the GCC suite takes longer to build than all the rest combined; since that still finishes in but a few hours on my machine, I won't worry about how fast gcc compiles code until it takes many days to compile itself. :-)

As you say, performance questions and answers depend on the situation. But I say the single most important question is, "Is the code correct?" That is, does it produce output that is provably correct? There is no point in having an insanely fast program if it always, or even just generally, produces garbage. As important as performance is, the correctness of the code is, to my mind, infinitely more important than either compile-time or run-time performance!

I would encourage the good folk who work on GCC to focus on making the code correct first, and only after that is proven to worry about making it faster. Really bad things can happen to real people if my programs give incorrect results (think about things like contaminant transport, dose/risk assessments, &c., and how someone I have never met may suffer if my application gives a consultant or civil servant unreliable results). When you consider the work I do, you will understand why I don't care whether my build times are measured in hours, days, or even weeks, as long as my clients' users can work more efficiently and obtain provably correct results from my programs. Computers are cheap these days, so if I find myself too often waiting for a build to complete, I'll just get another computer to work on while I wait for the one doing the build to finish.

I don't help develop GCC, but may I express to those who do that I appreciate their efforts.

Cheers,

Ted
