Jeffrey A Law wrote:
> My suspicions appear to be correct.  This never triggers except for
> Ada code and it's relatively common in Ada code.  No surprise since
> I don't think any other front-end abuses TYPE_MAX_VALUE in the way
> the Ada front-end does.  This wouldn't be the first time we've had
> to hack up something in the generic optimizers to deal with the
> broken TYPE_MAX_VALUE.

What do you mean by "abuse"?  TYPE_MAX_VALUE means the maximal value
allowed by a given type.  For range types it is clearly the upper
bound of the range.  Of course, the upper bound should be representable,
so TYPE_MAX_VALUE <= (2**TYPE_PRECISION - 1) for unsigned types
and TYPE_MAX_VALUE <= (2**(TYPE_PRECISION - 1) - 1) for signed types.
However, if the language has non-trivial range types you can expect
strict inequality.  Note that if you did not allow strict inequality
above, TYPE_MAX_VALUE would be redundant.

FYI GNU Pascal uses such a representation for range types, so for
example:

type t = 0..5;

will give you TYPE_PRECISION equal to 32 (this is an old decision
which trades space for speed; otherwise 8 would be enough) and
TYPE_MAX_VALUE equal to 5.

GNU Pascal promotes the arguments of operators, so that arithmetic takes
place in "standard" types -- I believe Ada does the same.

BTW, setting TYPE_PRECISION to 3 for the type above used to cause
wrong code, so the representation above was forced by the backend.

If you think that such behaviour is "abuse", then why have a separate
TYPE_MAX_VALUE at all?  How should range types be represented so that
optimizers will know about the allowed ranges (and use them!)?  And
what about debug info?

-- 
                              Waldek Hebisch
[EMAIL PROTECTED] 
