On 11/10/20 3:26 PM, Andrew MacLeod wrote:
On 11/10/20 5:41 AM, Aldy Hernandez wrote:
[Andrew, this doesn't fix a PR per se.  It's just something I stumbled
upon in my other -fstrict-enums fixes.  It passes tests.  What do you
think?  Should we push it?]

Imagine an enum in an -fstrict-enums world:

    enum foo { zero, one, two, three };

Building the inverse of [0,3] currently yields VARYING (instead of
[4,+INF]).  This is because the setter sees 0 and 3 as the extremes of
the type and, per the current code, collapses the inverse to VARYING.
BTW, it really should be UNDEFINED, but this is legacy code I'm afraid
to touch:

Didn't we conclude at one point that neither VARYING nor UNDEFINED can be inverted?  There was too much ambiguity about what either meant, so callers need to check for undefined and varying before they can call invert().  That makes sure they get the results they expect.

Right, but this is in the setter.  We aren't inverting a varying, but creating the inverse of a range that happens to behave as a varying under the old rules.  Per the last set of fixes (or even the normalization code we had before), [0,3] is not varying; [0,MAX] is varying.  What I'm trying to avoid is ~[0,3] becoming VARYING.  It should be [4,MAX].
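
To make the intended semantics concrete, here is a standalone sketch (hypothetical names, not the GCC setter itself) of complementing a contiguous range against a pair of bounds.  With the strict-enum extremes (0 and 3) as the bounds, ~[0,3] has nothing left; with the underlying precision's bounds, it is [4,MAX]:

/* Standalone sketch, not GCC code: complement [lo,hi] within
   [type_min, type_max].  */
#include <cstdio>

struct rng { long lo, hi; };

/* Return true and set *OUT to the complement of R when it is one
   contiguous piece on the low or high side; return false when the
   input covers the whole domain (nothing left, i.e. UNDEFINED) or
   the complement would split in two.  */
static bool
invert1 (rng r, long type_min, long type_max, rng *out)
{
  if (r.lo == type_min && r.hi == type_max)
    return false;			/* ~VARYING has nothing left.  */
  if (r.lo == type_min)
    { *out = { r.hi + 1, type_max }; return true; }
  if (r.hi == type_max)
    { *out = { type_min, r.lo - 1 }; return true; }
  return false;				/* Two pieces; elided here.  */
}

int
main ()
{
  rng inv;
  /* Bounds taken from the strict-enum extremes: [0,3] covers
     everything, so the inverse degenerates.  */
  if (!invert1 ({0, 3}, 0, 3, &inv))
    printf ("strict-enum bounds: nothing left\n");
  /* Bounds taken from the underlying precision (an int-like
     [0, 2147483647] is assumed purely for illustration):
     ~[0,3] = [4,MAX].  */
  if (invert1 ({0, 3}, 0, 2147483647L, &inv))
    printf ("precision bounds: [%ld,%ld]\n", inv.lo, inv.hi);
  return 0;
}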

In fact, I see this in invert():

gcc_assert (!undefined_p () && !varying_p ());


But it comes after the legacy check... so maybe there is some attempt to preserve legacy behaviour.  Dunno.
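
For reference, the shape being described is roughly the following standalone paraphrase (placeholder names and flags, not the actual GCC source):

/* Paraphrase of the control flow described above; irange_sketch and
   its members are hypothetical stand-ins.  */
#include <cassert>

struct irange_sketch
{
  bool legacy_mode, undefined, varying;

  void legacy_invert () { /* old semantics preserved here */ }
  void multirange_invert () { /* multi-range inversion proper */ }

  void invert ()
  {
    /* The legacy path runs first...  */
    if (legacy_mode)
      {
	legacy_invert ();
	return;
      }
    /* ...so a legacy range that merely behaves like VARYING never
       reaches this assert.  */
    assert (!undefined && !varying);
    multirange_invert ();
  }
};

int
main ()
{
  irange_sketch r { /*legacy_mode=*/true, false, /*varying=*/true };
  r.invert ();	/* Takes the legacy path; the assert is skipped.  */
  return 0;
}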

But you are correct that this is legacy behavior.  I don't know whether it's good for legacy and multi-range behavior to differ in this regard, so I proposed this fix.

I'm a bit indifferent.  We could leave it as is.

Aldy
