https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112098

            Bug ID: 112098
           Summary: suboptimal optimization of inverted bit extraction
           Product: gcc
           Version: 13.2.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: middle-end
          Assignee: unassigned at gcc dot gnu.org
          Reporter: bruno at clisp dot org
  Target Milestone: ---

gcc optimizes a bit extraction such as the following quite well:

---------------------- foo.c ----------------------
unsigned int foo (unsigned int x)
{
  return (x & 0x200 ? 0x10 : 0);
}
---------------------------------------------------

$ gcc -O2 -S foo.c && cat foo.s
...
        shrl    $5, %eax
        andl    $16, %eax
...
That is perfect: 2 arithmetic instructions.
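
For reference, that sequence corresponds to rewriting the conditional at the
source level as a plain shift-and-mask. A sketch (foo_shift is just a
hypothetical name for the equivalent form):

---------------------- sketch ----------------------
/* Bit 9 (0x200) moves down to bit 4 (0x10) via a right
   shift of 5, then a mask; no branch is needed.  */
unsigned int foo_shift (unsigned int x)
{
  return (x >> 5) & 0x10;
}
---------------------------------------------------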

However, for the inverted bit extraction
====================== foo.c ======================
unsigned int foo (unsigned int x)
{
  return (x & 0x200 ? 0 : 0x10);
}
===================================================

the resulting code has 4 arithmetic instructions:

$ gcc -O2 -S foo.c && cat foo.s
...
        shrl    $9, %eax
        xorl    $1, %eax
        andl    $1, %eax
        sall    $4, %eax
...

Very clearly, the last shift instruction could be saved by transforming this
code to

...
        shrl    $5, %eax
        xorl    $16, %eax
        andl    $16, %eax
...

clang 16 even replaces the "xorl $16, %eax" instruction with a "notl %eax". So,
the optimal instruction sequence is one of
...
        shrl    $5, %eax
        notl    %eax
        andl    $16, %eax
...
or
...
        notl    %eax
        shrl    $5, %eax
        andl    $16, %eax
...
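
At the source level, these suggested sequences correspond to the following
rewrites. A sketch (foo_inv_xor and foo_inv_not are hypothetical names):

---------------------- sketch ----------------------
/* xor variant: shift bit 9 down to bit 4, flip it, mask it
   (shrl $5; xorl $16; andl $16).  */
unsigned int foo_inv_xor (unsigned int x)
{
  return ((x >> 5) ^ 0x10) & 0x10;
}

/* not variant: complement first, then shift and mask
   (notl; shrl $5; andl $16).  */
unsigned int foo_inv_not (unsigned int x)
{
  return (~x >> 5) & 0x10;
}
---------------------------------------------------

Both return 0x10 when bit 9 of x is clear and 0 when it is set, matching the
original foo.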

$ gcc --version
gcc (GCC) 13.2.0
Copyright (C) 2023 Free Software Foundation, Inc.

This is for x86_64, but similar optimization opportunities exist for other CPUs
as well. For example, on arm:

...
        lsr     r0, r0, #9
        eor     r0, r0, #1
        and     r0, r0, #1
        lsl     r0, r0, #4
...
which can be optimized to
...
        lsr     r0, r0, #5
        eor     r0, r0, #16
        and     r0, r0, #16
...

Or for sparc64:

...
        and     %o0, 512, %o0
        cmp     %g0, %o0
        subx    %g0, -1, %o0
        sll     %o0, 4, %o0
        jmp     %o7+8
         srl    %o0, 0, %o0
...
which can be optimized to
...
        xnor    %g0, %o0, %o0
        srl     %o0, 5, %o0
        jmp     %o7+8
         and    %o0, 16, %o0
...
