On Tue, Sep 12, 2023 at 10:27:18AM +0000, Richard Biener wrote:
> On Mon, 11 Sep 2023, Jakub Jelinek wrote:
> > And, also I think it is undesirable when being asked for signed_type_for
> > of unsigned _BitInt(1) (which is valid) to get signed _BitInt(1) (which is
> > invalid, the standard only allows signed _BitInt(2) and larger), so the
> > patch returns 1-bit signed INTEGER_TYPE for those cases.
> 
> I think the last bit is a bit surprising - do the frontends use
> signed_or_unsigned_type_for and would they be confused if getting
> back an INTEGER_TYPE here?

I see a single use of signed_or_unsigned_type_for in
c-family/c-pretty-print.cc and none of signed_type_for in the C/C++ FEs
(unsigned_type_for is used in a couple of spots, but that isn't affected).
c_common_signed_type and c_common_signed_or_unsigned_type are used more
often, but I still think they are mostly used for warnings and similar, or
are called with some specific types like sizetype.  I don't think the FEs
use (or should use) those functions to decide e.g. on the types of
expressions; that is what common_type etc. are for.
And, for the very small precisions the distinction between BITINT_TYPE and
INTEGER_TYPE should be limited to loads/stores from memory (in case there
are different rules for what to do with any padding bits there) and to
function arguments/return values; I don't think any of that is really
affected by the signed_type_for/c_common_signed_type results.

And by ensuring we never create a 1-bit signed BITINT_TYPE, the backends
e.g. don't need to worry about it.

But I admit I don't feel strongly about that.

Joseph, what do you think about this?

        Jakub
