https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118206

--- Comment #9 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Ah, I wonder if the problem isn't with ll_and_mask -> ll_mask (which is 15 and
probably picks the right top part) but with rl_and_mask.
That one is -8, xrl_bitpos is 4, and since lnprec is 16 and
rl_and_mask.get_precision () == 8, the
    rl_mask = wi::lshift (wide_int::from (rl_and_mask, lnprec, UNSIGNED),
                          xrl_bitpos);
computation picks the 0xf8 value and shifts it left by 4, so it yields 3968 aka
0xf80 as rl_mask.  That is incorrect; it really should have been just 0x80.
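To illustrate the widening and shift with plain integers (just a sketch of the
arithmetic, not the wide_int API itself; the types and names are made up):

  #include <cstdint>
  #include <cstdio>

  int main ()
  {
    uint8_t rl_and_mask = (uint8_t) -8;  /* -8 in 8-bit precision is 0xf8.  */
    int xrl_bitpos = 4;
    /* UNSIGNED extension to lnprec (16) keeps the sign bits of the narrow
       mask, so the shift drags them along.  */
    uint16_t rl_mask = (uint16_t) ((uint16_t) rl_and_mask << xrl_bitpos);
    printf ("0x%x\n", rl_mask);          /* Prints 0xf80, not 0x80.  */
    return 0;
  }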
So either we need to arrange never to use "negative" masks, ensuring that if
there is a shift count (like the rl_bitpos 4 in this case) we mask off any bits
above rl_and_mask.get_precision () - rl_bitpos (ditto for ll*), or we need to
mask them off later on.
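For the first option, a rough sketch with plain integers of what the masking
could look like (hypothetical, just to show the intent, not an actual patch):

  #include <cstdint>
  #include <cstdio>

  int main ()
  {
    uint16_t mask = 0xf8;        /* rl_and_mask, zero-extended to lnprec.  */
    unsigned prec = 8;           /* rl_and_mask.get_precision ().  */
    unsigned bitpos = 4;         /* rl_bitpos aka xrl_bitpos here.  */
    /* Drop the bits above prec - bitpos before shifting: 0xf8 & 0xf == 8.  */
    mask &= (uint16_t) (((uint16_t) 1 << (prec - bitpos)) - 1);
    printf ("0x%x\n", (unsigned) (mask << bitpos));  /* Prints 0x80.  */
    return 0;
  }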

BTW, another code formatting comment:
  xll_bitpos = ll_bitpos - lnbitpos, xrl_bitpos = rl_bitpos - lnbitpos;
Shouldn't we just write
  xll_bitpos = ll_bitpos - lnbitpos;
  xrl_bitpos = rl_bitpos - lnbitpos;
?
