In some sense you have to think in terms of three worlds:
1) what you call "compile-time static expressions" is one world, which
in gcc is almost always handled by the front ends.
2) the second world is what the optimizers can do. This is not
compile-time static-expression evaluation, because the front end has
already done that.
3) there is run time.
My view on this is that optimization is just doing early what is
normally done at run time. From that point of view, we are, if not
required, at least morally obligated to do things in the same way that
the hardware would have done them. This is why I am so against richi on
wanting to do infinite precision. By the time the middle end or the
back end sees the representation, all of the things that are allowed to
be done in infinite precision have already been done. What we are left
with is a (mostly) strongly typed language that pretty much says
exactly what must be done. Anything that we do in the middle end or
back ends in infinite precision will only surprise the programmer and
make them want to use LLVM.
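The surprise Kenny describes can be sketched concretely. Below is a
small illustration in Python (the function names are mine, purely for
illustration; this is not GCC code): folding a 32-bit unsigned multiply
the way the hardware runs it wraps modulo 2**32, while folding the same
operation in infinite precision produces a value the type cannot hold.

```python
MASK32 = (1 << 32) - 1  # modulus for 32-bit wrap-around arithmetic

def fold_as_hardware(a, b):
    # Fold a 32-bit unsigned multiply the way the target executes it:
    # the result wraps modulo 2**32.
    return (a * b) & MASK32

def fold_infinite_precision(a, b):
    # Fold in unbounded precision (Python ints never wrap).
    return a * b

a = 1 << 16  # 65536, a representable uint32_t value
print(fold_as_hardware(a, a))         # 0: what the machine computes
print(fold_infinite_precision(a, a))  # 4294967296: does not fit in 32 bits
```

If the optimizer folds in infinite precision where the program's types
say the operation wraps, the "optimized" answer differs from the
unoptimized run-time answer, which is exactly the surprise at issue.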
Kenny
On 04/08/2013 05:36 PM, Robert Dewar wrote:
> On 4/8/2013 5:12 PM, Lawrence Crowl wrote:
> (BTW, you *really* don't need to quote entire messages; I find it
> rather redundant for the entire thread to be in every message, since
> we all have thread-following mail readers!)
>> Correct me if I'm wrong, but the Ada standard doesn't require any
>> particular maximum evaluation precision, but only that you get an
>> exception if the values exceed the chosen maximum.
> Right, that's at run-time; at compile-time, for static expressions,
> infinite precision is required.
> But at run-time, all three of the modes we provide are
> standard conforming.
>> In essence, you have moved some of the optimization from the back
>> end to the front end. Correct?
> Sorry, I don't quite understand that. If you are saying that the
> back end could handle this widening for intermediate values, sure
> it could; this is the kind of thing that can be done at various
> different places.
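The "widening for intermediate values" Dewar mentions can be sketched
as follows (again a Python illustration with hypothetical helper names,
not GCC or GNAT code): evaluating a * b / c with the product kept in 32
bits wraps before the divide, while widening the intermediate product,
as a 64-bit temporary would, preserves the mathematically expected
result before narrowing at the end.

```python
MASK32 = (1 << 32) - 1

def muldiv_narrow(a, b, c):
    # Keep every intermediate in 32 bits: the product wraps first,
    # so the quotient is computed from a truncated value.
    return (((a * b) & MASK32) // c) & MASK32

def muldiv_widened(a, b, c):
    # Widen the intermediate product (think: a 64-bit temporary)
    # before dividing, then narrow the final result back to 32 bits.
    return ((a * b) // c) & MASK32

a = b = c = 100_000
print(muldiv_narrow(a, b, c))   # 14100: product wrapped before the divide
print(muldiv_widened(a, b, c))  # 100000: the expected a * b / c
```

Whether this widening happens in the front end, the middle end, or the
back end is the implementation choice being discussed; the sketch only
shows why it matters.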