https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114405
--- Comment #3 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <ja...@gcc.gnu.org>:

https://gcc.gnu.org/g:982250b230967776f0da708a1572b78a38561e08

commit r14-9609-g982250b230967776f0da708a1572b78a38561e08
Author: Jakub Jelinek <ja...@redhat.com>
Date:   Fri Mar 22 09:22:04 2024 +0100

    bitint: Some bitint store fixes [PR114405]

    The following patch fixes some bugs in the handling of stores to
    large/huge _BitInt bitfields.

    In the first 2 hunks we are processing the most significant limb of
    the actual type (not necessarily the most significant limb in the
    storage), and so we know it is either a partial or a full limb,
    i.e. [1, limb_prec] bits rather than the [0, limb_prec - 1] bits the
    code actually assumed.  Those 2 spots are therefore fixed by making
    sure that if tprec is a multiple of limb_prec we actually use
    limb_prec bits rather than 0.  Otherwise the code could e.g. happily
    create and use a 0-precision INTEGER_TYPE even when it actually
    should have processed 64 bits, or, for non-zero bo_bit, handle just,
    say, 1 bit rather than 64 bits plus 1 bit at the spot the last hunk
    touches.

    In the last hunk we are dealing with the extra bits in the last
    storage limb, and the code was e.g. happily creating a
    65-bit-precision INTEGER_TYPE even when it really should use 1-bit
    precision in that case.  It also used a wrong offset in that case.

    The large testcase covers all these cases.

    2024-03-22  Jakub Jelinek  <ja...@redhat.com>

            PR tree-optimization/114405
            * gimple-lower-bitint.cc (bitint_large_huge::lower_mergeable_stmt):
            Set rprec to limb_prec rather than 0 if tprec is divisible by
            limb_prec.  In the last bf_cur handling, set rprec to
            (tprec + bo_bit) % limb_prec rather than tprec % limb_prec and
            use just rprec instead of rprec + bo_bit.  For
            build_bit_field_ref offset, divide (tprec + bo_bit) by
            limb_prec rather than just tprec.
            * gcc.dg/torture/bitint-66.c: New test.
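To make the failure modes concrete, here is a minimal sketch of the kind of
stores involved.  This is a hypothetical illustration, not the actual
gcc.dg/torture/bitint-66.c testcase; the struct and all names are made up,
and it assumes a target where _BitInt limbs are 64 bits wide (e.g. x86-64
with GCC 14):

/* Hypothetical example (not the committed testcase).  */
struct S
{
  _BitInt(192) a : 128;  /* tprec (128) is a multiple of limb_prec (64):
                            the most significant limb of the type is a
                            full limb, so the lowering must process
                            limb_prec bits there, not 0.  */
  unsigned b : 1;
  _BitInt(192) c : 65;   /* likely starts at a non-limb-aligned bit
                            offset (bo_bit != 0), ABI permitting, so the
                            store must handle 64 bits plus 1 bit, not
                            just 1 bit.  */
};

struct S s;

void
store (_BitInt(192) x, _BitInt(192) y)
{
  s.a = x;
  s.c = y;
}

And a sketch, using the variable names from the ChangeLog (this is not
GCC's actual code), of the corrected precision computation for the most
significant limb of the type:

static unsigned
msb_limb_prec (unsigned tprec, unsigned limb_prec)
{
  unsigned rprec = tprec % limb_prec;
  if (rprec == 0)
    rprec = limb_prec;  /* the fix: a full limb yields limb_prec,
                           where the code previously left 0 */
  return rprec;
}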