On Fri, Jan 03, 2025 at 12:03:06PM -0600, Robert Dubner wrote:
> > As has been noted, wide_int can be used for large integer arithmetic
> > within the compiler.  
> 
> My needs are modest; we use __int128 in only a few places in the host
> code.  If __int128 were supported by 32-bit GCC, we wouldn't be having
> this conversation.
> 
> But since __int128 isn't available, what I need is a drop-in replacement
> for __int128.  What I don't see, yet, is how to use wide_int as a such a
> replacement.
> 
> So, for example, in code that runs on the host I'll convert a string of
> numerals to __int128 by doing something like
> 
> __int128 value = 0;
> while(*p)
>       {
>       value = 10*value + (*p++ & 0x0F);
>       }
> 
> And when I need to go back to a string, the loop looks like
>       {
>       ch = value % 10 + '0';
>       value /= 10;
>       }
> 
> If I could be pointed to a place to see how that's done, well, it'll save
> me a lot of looking through the existing code.

wide_int is a compiler-internal type, so it can handle all cases where you
need to evaluate something 128-bit (or any other precision, currently up to
65534 bits) at compile time.
If you want to see e.g. parsing of strings into wide_int/widest_int, you
can look at the BITINT code in c-family/c-lex.cc (interpret_integer),
which handles parsing of large decimal, octal, binary and hexadecimal
numbers (e.g. to parse C23
123665200736552267030251260509823595017565674550605919957031528046448612553265933585158200530621522494798835713008069669675682517153375604983773077550946583958303386074349567uwb
and similar numbers).
Printing of wide_int into strings can be done in various ways,
e.g. using the wide-int-print.cc APIs, or pp_printf with %wd/%wx etc.
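To make that concrete, the two decimal loops quoted earlier translate
almost one-for-one onto widest_int.  The following is only a sketch
against GCC's internal API (it compiles only inside the GCC source tree;
check wide-int.h and wide-int-print.h for the exact signatures):

```cpp
/* Sketch only: GCC-internal API, not standalone-compilable.  */
#include "wide-int.h"
#include "wide-int-print.h"

/* String of digits -> widest_int, same shape as the __int128 loop.  */
widest_int value = 0;
while (*p)
  value = value * 10 + (*p++ & 0x0F);

/* widest_int -> decimal string; print_dec is declared in
   wide-int-print.h.  */
char buf[WIDE_INT_PRINT_BUFFER_SIZE];
print_dec (value, buf, SIGNED);
```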

But, if you need to do 128-bit or larger precision computations at runtime,
you really need some supported type with ABI support on the targets etc.
(currently e.g. int128_type_node on some targets, or BITINT_TYPE with
128-bit precision on even fewer targets, can do that), or e.g. lower it in
the FE into, say, a pair of 64-bit integers (long long), which are
supported everywhere; but then you need to take care of lowering all the
needed arithmetic in the FE, and also decide in the FE how such values are
passed: as two long longs, as a struct containing those, as an array
containing those, ...

> 
> > For floating-point arithmetic, there are the
> > interfaces in real.h to handle floating point within GCC's internal
> > representation (note that some of the operations might leave a result
> > with extra internal precision; see how fold-const.cc:const_binop calls
> > real_arithmetic followed by real_convert to ensure correct rounding to
> > the desired format, for example).
> 
> Again, I am not sure we're talking about the same thing.  We have host
> code that uses _Float128 values. That's because there are COBOL compiler
> directives that need floating point values that can be bigger than IEEE
> binary64.  And the COBOL programmer can create source code like
> 
>     77 float-val USAGE FLOAT-BINARY-128 VALUE 1234.56789 .

Again, the question is whether it needs to be supported everywhere, or
whether it is acceptable to error out on targets which don't have _Float128
(note, there are even targets which don't support _Float32 and _Float64,
e.g. pdp11/vax); and whether the support is only needed at compile time or
at runtime too.
real.{h,cc} can support IEEE-like precisions and exponents in a larger
range than binary128, I think often something like 160 bits of precision.
But if one needs runtime support, and especially libm functions, one needs
an ABI for that plus hw and/or library support.
_Float128 support isn't limited to lp64 targets (unlike __int128, which
right now requires lp64, with the exception of amdgcn I think), but it
still isn't supported on all that many targets (and on various ones,
including x86, it is software emulation rather than hw support).
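On the compile-time side, a VALUE clause like the one quoted above can be
folded via real.{h,cc} without any host _Float128.  Again only a sketch
against GCC's internal API (names from real.h; it assumes the target
actually provides float128_type_node, otherwise the FE has to error out as
discussed):

```cpp
/* Sketch only: GCC-internal API, not standalone-compilable.  */
#include "real.h"

REAL_VALUE_TYPE r;
/* Parse the literal into GCC's internal (wider) representation...  */
real_from_string (&r, "1234.56789");
/* ...then round it to the target's binary128 format, analogous to how
   const_binop follows real_arithmetic with real_convert.  */
real_convert (&r, TYPE_MODE (float128_type_node), &r);
```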

        Jakub
