> > i've been able to upgrade my systems here through a number of µarches
> > (intel xeon 5[0456]00, 3[04]00, atom; amd phenom) that weren't around
> > when i first installed my systems, and i'm still using most of the original
> > binaries.  the hardware isn't forcing an upgrade.
> 
> But that's possible because the "real" hardware is RISC, and there is a
> software level managing compatibility (microcode). That's why too,
> hardware can be "patched".

the "real hardware" depends on the cisc layer.  a significant amount of
x86 performance depends on the fact that x86 isa code is very dense and is
used across much slower links than exist within a core.  

it's not clear to me that real cpu guys would call the guts of modern
intel/amd/whatever risc at all.  the µops don't exist at the level of any
traditional isa.

iirc, almost all isa -> µop translations are handled
in hardware on intel.  i shouldn't be so lazy, and should look this
up again.

> My point is more that when the optimization path is complex, the
> probability is the combined one (the product); hence it is useless to
> expect a non-negligible gain for the total, especially when the "best
> case" for each single optimization is almost orthogonal to all the
> others.
> 
> Furthermore, I don't know about others, but I prefer correctness over
> speed. I mean, if a program is proved to be correct (and very few are),
> complex acrobatics from the compiler, namely in the "optimization" area,
> able to wreak havoc on all the code's assumptions, are something I don't buy.
> 
> I have an example with gcc4.4, compiling not my source, but D.E.K.'s
> TeX. 
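the "product" argument above can be put in concrete (entirely invented) numbers: if each independent optimization pass fires with some probability and gives a modest speedup when it does, the expected combined speedup is the product of the per-pass expected factors, and it stays far below the product of the best cases.  a minimal sketch, with all figures made up for illustration:

```python
# hypothetical illustration of the "combined probability" point:
# each pass i applies with probability p and yields speedup factor s
# when it does.  treating passes as independent, the expected
# combined speedup is the product of per-pass expected factors.
passes = [
    # (probability the pass applies, speedup factor when it does)
    (0.30, 1.20),
    (0.20, 1.15),
    (0.10, 1.50),
    (0.25, 1.10),
]

expected = 1.0
best_case = 1.0
for p, s in passes:
    expected *= 1.0 + p * (s - 1.0)   # expected factor for this pass
    best_case *= s                    # factor if every pass fired

print(f"best case: {best_case:.2f}x, expected: {expected:.2f}x")
```

here the best cases multiply out to about 2.28x, but the expected total is only about 1.18x, which is the quoted poster's point about orthogonal best cases.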

i think you're mixing apples and oranges.  gcc has nothing to do with
whatever is running inside a processor, microcode or not.

it is appalling that the gcc guys don't appear to do anything most people
would call validation or qa.  but that doesn't mean that everybody else
has such a poor quality record.

- erik
