https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989

--- Comment #25 from joseph at codesourcery dot com <joseph at codesourcery dot com> ---
On Wed, 26 Oct 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> Seems LLVM currently only supports _BitInt up to 128, which is kind of
> useless for users; those sizes can be easily handled as bit-fields,
> with normal arithmetic performed on them.

Well, it would be useful for users of 32-bit targets who want 128-bit 
arithmetic, since we only support __int128 for 64-bit targets.
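
For instance (a minimal sketch, assuming a compiler with C23
_BitInt(128) support on a 32-bit target; the function name is just
for illustration):

/* 128-bit arithmetic without __int128; the multiplication is
   carried out in the full 128-bit bit-precise type.  */
unsigned _BitInt(128)
mul64x64 (unsigned long long a, unsigned long long b)
{
  return (unsigned _BitInt(128)) a * b;
}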

> As for implementation, I'd like to brainstorm about it a little bit.
> I'd say we want a new tree code for it, say BITINT_TYPE.

OK.  The signed and unsigned types of each precision do need to be 
distinguished from all the existing kinds of integer types (including the 
ones used for bit-fields: _BitInt types aren't subject to integer 
promotions, whereas bit-fields narrower than int are).

In general the types operate like integer types (in terms of allowed 
operations etc.) so INTEGRAL_TYPE_P would be true for them.  The main 
difference at front-end level is the lack of integer promotions, so that 
arithmetic can be carried out directly on narrower-than-int operands (but 
a bit-field declared with a _BitInt type gets promoted to that _BitInt 
type, e.g. unsigned _BitInt(7):2 acts as unsigned _BitInt(7) in 
arithmetic).
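
Something like this (a sketch of the C23 semantics described above;
the struct and function names are just for illustration):

struct s
{
  unsigned int u : 3;          /* Ordinary bit-field: promotes to int.  */
  unsigned _BitInt(7) b : 2;   /* _BitInt bit-field: promotes to
				  unsigned _BitInt(7).  */
};

void
f (struct s *p, unsigned _BitInt(7) x, unsigned _BitInt(7) y)
{
  int i = p->u + 1;            /* p->u is promoted to int first.  */
  /* x + y is computed directly in unsigned _BitInt(7), wrapping
     modulo 2**7; no promotion to int occurs.  */
  unsigned _BitInt(7) z = x + y;
  /* p->b acts as unsigned _BitInt(7) in arithmetic.  */
  unsigned _BitInt(7) w = p->b + z;
}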

Unlike the bit-field types, there's no such thing as a signed _BitInt(1); 
signed bit-precise integer types must have at least two bits.

> TYPE_PRECISION unfortunately is only 10 bits, which is not enough, so
> the full precision would need to be specified somewhere else.

That may complicate things because of code expecting TYPE_PRECISION to be 
meaningful for all integer types.  But that could be addressed without 
needing to review every use of TYPE_PRECISION: e.g. change 
TYPE_PRECISION to read the full precision from wherever it is stored 
for _BitInt types, and use e.g. TYPE_RAW_PRECISION for direct access 
to the tree field (so only lvalue uses of TYPE_PRECISION would then 
need updating; other accesses would automatically get the full 
precision).
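
A rough sketch of that accessor split, with hypothetical names
(BITINT_TYPE handling and bitint_type_precision are made up for
illustration; the real field layout would differ):

/* Direct (lvalue) access to the existing 10-bit tree field.  */
#define TYPE_RAW_PRECISION(NODE) \
  (TYPE_CHECK (NODE)->type_common.precision)

/* Rvalue access that consults wherever the full _BitInt precision
   is stored; bitint_type_precision is a hypothetical lookup.  */
#define TYPE_PRECISION(NODE) \
  (TREE_CODE (NODE) == BITINT_TYPE \
   ? bitint_type_precision (NODE) \
   : TYPE_RAW_PRECISION (NODE))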

> And have targetm specify the ABI details: the size of a limb (which
> would need to be exposed to libgcc with -fbuilding-libgcc, unless it
> is everywhere the same), whether the limbs are ordered least
> significant to most significant or vice versa, and whether the
> highest limb is sign/zero extended or unspecified beyond the
> precision.

I haven't yet seen an ABI specified for any big-endian architecture, 
but I'd tend to expect such architectures to use big-endian ordering 
for the _BitInt representation, to be consistent with existing integer 
types.
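
For concreteness, one possible little-endian limb layout (purely
illustrative; the limb size, ordering and padding rules are exactly
the ABI details that would need specifying):

#include <stdint.h>

#define BITINT_PREC 100
#define LIMB_BITS   32
#define NLIMBS ((BITINT_PREC + LIMB_BITS - 1) / LIMB_BITS)  /* 4 */

/* unsigned _BitInt(100) as four 32-bit limbs, least significant
   first; here the 28 bits of limb[3] above the precision are
   assumed zero-extended.  */
typedef struct
{
  uint32_t limb[NLIMBS];
} ubitint100;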

> What about the large ones?

I think we can at least slightly simplify things by assuming for now 
_BitInt multiplication / division / modulo are unlikely to be used much 
for arguments large enough that Karatsuba or asymptotically faster 
algorithms become relevant; that is, that naive quadratic-time algorithms 
are sufficient for those operations.
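
The routine in question would be the schoolbook algorithm, roughly
(a sketch over 32-bit limbs, least significant first; the names and
limb size are illustrative, not a proposed libgcc interface):

#include <stdint.h>

/* r[0..m+n-1] = a[0..m-1] * b[0..n-1]: O(m*n) limb products, i.e.
   the naive quadratic-time multiplication referred to above.
   r must have room for m + n limbs.  */
static void
limb_mul (uint32_t *r, const uint32_t *a, int m,
	  const uint32_t *b, int n)
{
  for (int i = 0; i < m + n; i++)
    r[i] = 0;
  for (int i = 0; i < m; i++)
    {
      uint32_t carry = 0;
      for (int j = 0; j < n; j++)
	{
	  /* Full 64-bit product plus previous partial sum and carry;
	     this cannot overflow uint64_t.  */
	  uint64_t t = (uint64_t) a[i] * b[j] + r[i + j] + carry;
	  r[i + j] = (uint32_t) t;
	  carry = (uint32_t) (t >> 32);
	}
      r[i + n] = carry;
    }
}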
