Hi Diego,

This is a backport of two upstream patches to our 4.6 branch. I submitted Julian's patch for backport a while ago, but Richard Earnshaw pointed out a problem with it; the second patch, from Joey, fixes that problem. This was tested on x86 and ARM.
-Doug

2011-11-22  Doug Kwan  <dougk...@google.com>

	Backport r171347 and r181549 from trunk.

	gcc/
	2011-03-23  Julian Brown  <jul...@codesourcery.com>

	* expr.c (expand_expr_real_1): Only use BLKmode for volatile
	accesses which are not naturally aligned.

	2011-11-20  Joey Ye  <joey...@arm.com>

	* expr.c (expand_expr_real_1): Correctly handle strict volatile
	bitfield loads smaller than mode size.

	gcc/testsuite/
	2011-11-20  Joey Ye  <joey...@arm.com>

	* gcc.dg/volatile-bitfields-1.c: New.

Index: gcc/testsuite/gcc.dg/volatile-bitfields-1.c
===================================================================
--- gcc/testsuite/gcc.dg/volatile-bitfields-1.c	(revision 0)
+++ gcc/testsuite/gcc.dg/volatile-bitfields-1.c	(revision 0)
@@ -0,0 +1,23 @@
+/* { dg-options "-fstrict-volatile-bitfields" } */
+/* { dg-do run } */
+
+extern int puts(const char *);
+extern void abort(void) __attribute__((noreturn));
+
+typedef struct {
+  volatile unsigned short a:8, b:8;
+} BitStruct;
+
+BitStruct bits = {1, 2};
+
+void check(int i, int j)
+{
+  if (i != 1 || j != 2) puts("FAIL"), abort();
+}
+
+int main ()
+{
+  check(bits.a, bits.b);
+
+  return 0;
+}
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 181550)
+++ gcc/expr.c	(working copy)
@@ -9200,8 +9200,16 @@
 	      && modifier != EXPAND_CONST_ADDRESS
 	      && modifier != EXPAND_INITIALIZER)
 	     /* If the field is volatile, we always want an aligned
-		access.  */
-	  || (volatilep && flag_strict_volatile_bitfields > 0)
+		access.  Do this in following two situations:
+		1. the access is not already naturally
+		aligned, otherwise "normal" (non-bitfield) volatile fields
+		become non-addressable.
+		2. the bitsize is narrower than the access size. Need
+		to extract bitfields from the access.  */
+	  || (volatilep && flag_strict_volatile_bitfields > 0
+	      && (bitpos % GET_MODE_ALIGNMENT (mode) != 0
+		  || (mode1 != BLKmode
+		      && bitsize < GET_MODE_SIZE (mode1) * BITS_PER_UNIT)))
 	  /* If the field isn't aligned enough to fetch as a memref,
 	     fetch it as a bit field.  */
 	  || (mode1 != BLKmode

--
This patch is available for review at http://codereview.appspot.com/5434084