> 
> > Before you go too far, why not
> 
> Because people claim that pgcc isn't even stable.  I wanted to get a
> bunch of fundamental code compiled under pgcc & then run it to see if I
> could break it.
> 
> Eventually, what I plan to do is to allow several different
> architectures to be installed on a machine, and to be able to switch
> between them (for new processes anyway) with something like a "mount"
> command. That way, benchmarking them against each other becomes easy.
> 
> People have told me that there are other Linux distributions out there
> that already support i586/i686 and so on, but I'm not going to "jump
> ship" over a single feature.  Red Hat has lots of other advantages, like
> a reputation for being very nicely done. :-)
> 

"There are other distributions that use the marketing trick of
claiming to build for i586" would be a more adequate description.  The
situation at present is:

-gcc 2.95.2 is 99% ready but not 100%.  It happened only a couple of
 times in the dozens of builds I made with it, but I caught it
 miscompiling, or failing to compile, things that compiled fine with
 egcs.

-pgcc, not being the official gcc, is far less well tested than gcc
 and has far fewer people behind it.  There was a time when Linux was
 only a couple of dozen long-haired hackers (and a short-haired one
 called Linus) whose maximum ambition was to use it as a DNS server.
 In such a context using pgcc (if it had existed) would have been
 legitimate.  Today Linux is pushing towards use in mission-critical
 applications, and distribution developers can no longer play games
 with compilers that are not 110% proven.  As one check, look at the
 distributions used by people who run Linux for mission-critical
 applications: see which distributions are certified for
 mission-critical software a la Oracle or DB2, and then intersect that
 with the list of distributions "optimized for Pentiums" or compiled
 with pgcc.

-It is naive to think that your PIII or Athlon will automatically
 behave better when the software is compiled for the plain Pentium
 than for the 486 or 386.  Intuitively it seems so.  But consider
 this: if you want to copy a block of memory on a Pentium box, a loop
 of loads and stores is several times faster than doing it through the
 dedicated string-move instruction.  It is the opposite on the 386,
 and it happens that PentiumPros and above are faster if you do it the
 386 way, not the Pentium one (the first sketch after this list shows
 the two ways).  This is a small example of why you should take
 "optimized for Pentium" claims with a grain of salt.  On later
 processors it probably gives a gain over a whole program, but don't
 assume it.  Also, when I benchmarked gcc 2.95.2 on my K6, if my
 memory is good it was -mpentium that got the worst results, worse
 than -m386.  That is another grain of salt.

-When I benchmarked gcc 2.95.2 against egcs I noticed that gcc 2.95.2
 gave appreciably faster code: about 25% for FP-intensive tests,
 around 10% for integer tests.  But the internal variation (i.e. the
 same compiler with different architectural parameters) was several
 times smaller (the second sketch after this list shows the kind of
 harness I mean).  So the present discussion should be about whether
 it is already possible for Red Hat to switch to a better compiler,
 not about compiling for different processors, since that provides no
 real benefit.  According to Alan Cox, gcc 2.96 and later _will_
 provide real processor-dependent optimizations.  But not the present
 gcc or egcs.
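
To make the block-copy point concrete, here is a minimal sketch of the
two approaches.  It assumes 32-bit x86 and GCC-style inline assembly;
the function names are mine and this is an illustration, not code from
any distribution.

#include <stddef.h>

/* The dedicated x86 string-move instruction: "rep movsl" copies ecx
 * longwords from [esi] to [edi] (it assumes the direction flag is
 * clear, as the ABI guarantees).  This is the fast way on the 386 and
 * on PPro-class processors, but not on the original Pentium. */
static void copy_rep_movs(void *dst, const void *src, size_t nwords)
{
    __asm__ __volatile__ ("rep movsl"
                          : "+D" (dst), "+S" (src), "+c" (nwords)
                          :
                          : "memory");
}

/* A plain load/store loop.  On the original Pentium this kind of loop
 * (especially once unrolled) pairs in both pipes and can beat
 * "rep movsl" by a good margin. */
static void copy_loop(long *dst, const long *src, size_t nwords)
{
    size_t i;
    for (i = 0; i < nwords; i++)
        dst[i] = src[i];
}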

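For the benchmarking point, a toy harness of this sort would do.  It
is only a sketch, not the actual benchmark; the -m386/-mpentium
spellings are the ones used above.  Compile the same file once per
flag and compare the reported times.

/* bench.c - toy FP kernel.  Build it several times, e.g.:
 *   gcc -O2 -m386     bench.c -o bench-386
 *   gcc -O2 -mpentium bench.c -o bench-pentium
 * then run each binary and compare the times. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    double sum = 0.0, x = 1.0;
    long i;
    clock_t t0 = clock();

    /* A dependent FP multiply/add chain, just to give the FP unit
     * something to chew on. */
    for (i = 0; i < 50000000L; i++) {
        x = x * 0.9999999 + 0.5;
        sum += x;
    }

    printf("sum=%g  time=%.2fs\n", sum,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    return 0;
}
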
-- 
                        Jean Francois Martinez

Project Independence: Linux for the Masses
http://www.independence.seul.org
