http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56362

             Bug #: 56362
           Summary: bitfield refs over-optimized?
    Classification: Unclassified
           Product: gcc
           Version: 4.7.2
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: middle-end
        AssignedTo: unassig...@gcc.gnu.org
        ReportedBy: jay.kr...@cornell.edu

Our front end is a bit weird. We don't declare the fields of our structs;
we use bitfield refs to pick out the fields we know are there.

Something like this: where in C you would have the reasonable

  struct { int a, b, c, d; } e;
  e.b  =>  component_ref

we instead have

  struct /* size 16 bytes */ e;
  bitfield_ref(e, offset 4 bytes, size 4 bytes, type int)
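
For reference, a minimal sketch of how such an access could be built with
GCC's internal tree API (4.7-era names; my illustration here, not our front
end's actual code):

  /* 'e' is the VAR_DECL for the 16-byte struct above.  Read 4 bytes at
     byte offset 4 of 'e' as an int, i.e. BIT_FIELD_REF <e, 32, 32>.
     The operands are the object, the width in bits, and the bit
     position.  */
  tree ref = build3 (BIT_FIELD_REF, integer_type_node, e,
                     bitsize_int (32)  /* width in bits */,
                     bitsize_int (32)  /* position in bits */);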





I have changed it so that sometimes it instead generates a pointer offset
and dereference:

  *(int*)((char*)&e + 4)

e.g. when reading floating point fields, but that defeats optimizations
more.
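
At the C level the two shapes correspond roughly to the following
(hypothetical field layout, assuming a 4-byte int and no padding; purely
for illustration):

  struct E { int a; float f; int c, d; } e;

  /* bitfield-ref / component-ref style: the optimizer sees an
     ordinary field read.  */
  float read_direct (void) { return e.f; }

  /* pointer offset + dereference style: the cast and pointer
     arithmetic are harder for the optimizer to see through.  */
  float read_via_pointer (void) { return *(float *)((char *)&e + 4); }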





So, while this mostly works and generates better code than the pointer
offset + deref form, it does seem to very occasionally not work.
I have not fully debugged this, at least not in years.

4.7.2/gcc/fold-const.c has this code:

      /* A bit-field-ref that referenced the full argument can be stripped.  */
      if (INTEGRAL_TYPE_P (TREE_TYPE (arg0))
          && TYPE_PRECISION (TREE_TYPE (arg0)) == tree_low_cst (arg1, 1)
          && integer_zerop (op2))
        return fold_convert_loc (loc, type, arg0);

I believe this is a bit too aggressive, such as when a sign extension is
implied.

I added these two conditions to make it less aggressive:

      && INTEGRAL_TYPE_P (type)
      && TYPE_UNSIGNED (type) == TYPE_UNSIGNED (TREE_TYPE (arg0))
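
Putting it together, the guard would then read roughly as follows (my
reading of the 4.7.2 code with the two conditions folded in; a sketch,
not a formal patch):

      /* A bit-field-ref that referenced the full argument can be stripped.  */
      if (INTEGRAL_TYPE_P (TREE_TYPE (arg0))
          && INTEGRAL_TYPE_P (type)
          && TYPE_UNSIGNED (type) == TYPE_UNSIGNED (TREE_TYPE (arg0))
          && TYPE_PRECISION (TREE_TYPE (arg0)) == tree_low_cst (arg1, 1)
          && integer_zerop (op2))
        return fold_convert_loc (loc, type, arg0);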
