[Sorry for the delay]

> > That's an old problem, which has already been discussed IIRC: should
> > TYPE_MAX_VALUE/TYPE_MIN_VALUE be constrained by TYPE_PRECISION and
> > TYPE_UNSIGNED?
>
> My feeling?  Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
> represent the set of values that an object of the type may hold.
> Any other definition effectively renders those values useless.
>
> ie, if an object can have the values 0..128 at runtime, then
> TYPE_MIN_VALUE/TYPE_MAX_VALUE must cover that entire range.
> 0..128.  If TYPE_MIN_VALUE/TYPE_MAX_VALUE only cover 0..127,
> then that's a bug.

I was actually referring to explicit constraints on TYPE_MAX_VALUE and 
TYPE_MIN_VALUE derived from TYPE_PRECISION and TYPE_UNSIGNED, for example 
that ceil(log2(TYPE_MAX_VALUE - TYPE_MIN_VALUE + 1)) must be less than or 
equal to TYPE_PRECISION, i.e. that the declared range must fit in the 
declared precision.

> I suspect we get this behavior from the Ada front-end as a
> side effect of the language and possibly the need to do
> runtime bounds checking on object values.   But that's no
> excuse for a front-end to lie about the bounds of an object.

I don't think the Ada front-end lies about the bounds of types, but it does 
make use of virtually the whole spectrum of TYPE_PRECISION, TYPE_MAX_VALUE 
and TYPE_MIN_VALUE settings, unlike the C family of front-ends.

This problem was already raised when Diego contributed the VRP pass, and 
Diego adjusted it to cope with Ada.  AFAIK Ada and VRP work fine on the 4.1 
branch.

-- 
Eric Botcazou
