https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

            Bug ID: 111015
           Summary: __int128 bitfields optimized incorrectly to 64-bit
                    operations
           Product: gcc
           Version: 13.2.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: rtl-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: pshevchuk at pshevchuk dot com
  Target Milestone: ---

godbolt: https://godbolt.org/z/r5d6ToY1z

Basically, the store of one half of a 70-bit bitfield gets completely optimized
away. I.e., for

/* Supporting definitions, per the description (70-bit key): */
typedef unsigned __int128 uint128;
#define KEY_BITS 70
#define KEY_BITS_MASK ((((uint128)1) << KEY_BITS) - 1)

struct Entry {
    unsigned left : 4;
    unsigned right : 4;
    uint128 key : KEY_BITS;
} data;

the code:

data.left = left;
data.right = right;
data.key = key & KEY_BITS_MASK;

produces the following (amd64):
        andl    $15, %ecx
        salq    $4, %rcx
        andl    $15, %edx
        orq     %rdx, %rcx
        movq    %rdi, %rax
        salq    $8, %rax
        orq     %rax, %rcx
        movq    %rcx, data(%rip)
        andw    $-16384, data+8(%rip)

Critically, at no point is there any attempt to actually initialize data+8.

The problem does not disappear if the bitfields get moved around; it is,
however, very sensitive to the sizes of the bitfields.

-O1 -fstore-merging appears to be close to the smallest set of compilation
options at which it fails.

If you replace -O1 with the list of -O1 optimizations from here:
https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html , it starts
working correctly, so we probably have a documentation issue as well.
