On Sunday 25 March 2007 10:43:57 Duncan Sands wrote:
> > Is this already fixed?

Here is my proposed fix.  The problem was due to zero-sized
bitfields, such as
        struct Z { int :0; } z;
Historically, no field was created for such a bitfield, and
getLLVMFieldFor would return ~0U, i.e. no field found.  In my
patch "Better support for variable size struct fields" I added
an assertion that checks that getLLVMFieldFor always finds a
field, and of course it fires in this case.  How to solve this?
There are basically two choices: (1) accept that sometimes there
is no field, and return ~0U; (2) make sure that a field always
exists.  I chose (2) because it is more uniform, and means that
the rest of llvm-gcc doesn't need to know about special cases of
gcc fields that don't map to an LLVM field.  [You could object
that in C it is not possible to access such a field, so the rest
of the llvm-gcc code would not need to worry about seeing a ~0U
index; however, we are not dealing with C: we are dealing with GIMPLE
coming from many languages, and it is not clear to me that such a field
can never be referred to].

With this patch a gcc field is mapped to an LLVM field if and only if
it has a constant offset from the start of the struct.
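
For concreteness, here is a minimal sketch of that condition in terms of the
GCC tree accessors llvm-types.cpp already uses; the helper name is made up
for illustration, and the actual check appears inline in the last hunk of
the patch below:

    // Illustration only: a GCC field is given a corresponding LLVM field
    // exactly when it is a real FIELD_DECL whose offset from the start of
    // the record is a compile-time constant.
    static bool gccFieldGetsLLVMField(tree_node *Field) {
      return TREE_CODE(Field) == FIELD_DECL &&
             TREE_CODE(DECL_FIELD_OFFSET(Field)) == INTEGER_CST;
    }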

While writing it I discovered some other related fun ways to crash the
compiler, so the patch fixes them too (this explains the large number
of attached testcases).  There are basically three changes:
(1) use DECL_SIZE, not TYPE_SIZE, to determine whether a field has zero size.
Using TYPE_SIZE here was almost certainly a bug (DECL_SIZE is used
everywhere else, and is what the gcc documentation says you should use),
though it may have been a deliberate subtlety.  As a result of this change,
a zero-width bitfield is now considered to be a zero-sized field, which
wasn't the case before.
(2) if a zero-width field doesn't live inside an existing field, output a
zero-width struct for it.  This has to be done, since such a bitfield could
be the only field, and at least one field is needed.  Thus the above example
maps to { {} } (see the sketch after this list).  This part of the patch
also fixes a long-standing bug in which a bitfield following a zero-sized
struct in a packed struct would cause an assert failure.
(3) fix getLLVMFieldFor so it can handle the following possibilities:
(a) a zero-sized field follows padding (this means that CurFieldNo may need
to be advanced over the padding); (b) a zero-sized field lives inside a
non-zero-sized field.
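
To make change (2) concrete, here is a minimal sketch of building the type
that the above example, struct Z { int :0; } z;, is expected to map to,
namely { {} }.  It reuses the StructType::get call that the patch adds in
DecodeStructBitField; the variable names are purely illustrative:

    // Illustration only: build { {} }, i.e. a struct whose single field is
    // the zero-width struct created for the zero-width bitfield.
    std::vector<const Type*> NoElts;                // no elements, i.e. {}
    const Type *ZeroWidthTy = StructType::get(NoElts, /*isPacked*/false);

    std::vector<const Type*> Elts;
    Elts.push_back(ZeroWidthTy);                    // the only field of Z
    const Type *ZTy = StructType::get(Elts, /*isPacked*/false);   // { {} }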

Bootstraps and passes "make check".  Doesn't cause any build failures in the
full testsuite (the run-time failures I'm seeing seem to be due to the current
InstCombine issues).

Best wishes,

Duncan.
// RUN: %llvmgcc %s -S -o -
struct Z { int :0; int :0; } z;

// RUN: %llvmgcc %s -S -o -
struct Z { int :0; } z;

// RUN: %llvmgcc %s -S -o -
struct Z { int :1; int :0; int :1; } z;

// RUN: %llvmgcc %s -S -o -
struct Z { int :1; int :0; int :1; } __attribute__ ((packed)) z;

// RUN: %llvmgcc %s -S -o -
struct W {};
struct Y {
  struct W w;
  int i:1;
} __attribute__ ((packed)) y;
Index: gcc/llvm-types.cpp
===================================================================
--- gcc/llvm-types.cpp	(revision 297)
+++ gcc/llvm-types.cpp	(working copy)
@@ -958,33 +958,23 @@
   ///
   unsigned getLLVMFieldFor(uint64_t FieldOffsetInBits, unsigned &CurFieldNo,
                            bool isZeroSizeField) {
-    if (!isZeroSizeField) {
-      // Skip over LLVM fields that start and end before the GCC field starts.
-      while (CurFieldNo < ElementOffsetInBytes.size() &&
-             getFieldEndOffsetInBytes(CurFieldNo)*8 <= FieldOffsetInBits)
-        ++CurFieldNo;
-      if (CurFieldNo < ElementOffsetInBytes.size())
-        return CurFieldNo;
-      // Otherwise, we couldn't find the field!
-      // FIXME: this works around a latent bug!
-      //assert(0 && "Could not find field!");
-      return ~0U;
-    }
+    // Skip over LLVM fields that start and end before the GCC field starts.
+    while (CurFieldNo < ElementOffsetInBytes.size() &&
+           getFieldEndOffsetInBytes(CurFieldNo)*8 + isZeroSizeField
+           <= FieldOffsetInBits)
+      ++CurFieldNo;
 
     // Handle zero sized fields now.  If the next field is zero sized, return
     // it.  This is a nicety that causes us to assign C fields different LLVM
     // fields in cases like struct X {}; struct Y { struct X a, b, c };
-    if (CurFieldNo+1 < ElementOffsetInBytes.size() &&
+    if (isZeroSizeField && CurFieldNo+1 < ElementOffsetInBytes.size() &&
         ElementSizeInBytes[CurFieldNo+1] == 0) {
       return ++CurFieldNo;
     }
-    
-    // Otherwise, if this is a zero sized field, return it.
-    if (CurFieldNo < ElementOffsetInBytes.size() &&
-        ElementSizeInBytes[CurFieldNo] == 0) {
+
+    if (CurFieldNo < ElementOffsetInBytes.size())
       return CurFieldNo;
-    }
-    
+
     // Otherwise, we couldn't find the field!
     assert(0 && "Could not find field!");
     return ~0U;
@@ -1081,8 +1071,6 @@
 void TypeConverter::DecodeStructBitField(tree_node *Field, 
                                          StructTypeConversionInfo &Info) {
   unsigned FieldSizeInBits = TREE_INT_CST_LOW(DECL_SIZE(Field));
-  if (FieldSizeInBits == 0)   // Ignore 'int:0', which just affects layout.
-    return;
 
   // Get the starting offset in the record.
   unsigned StartOffsetInBits = getFieldOffsetInBits(Field);
@@ -1095,10 +1083,10 @@
     // it.
     unsigned LastFieldBitOffset = Info.ElementOffsetInBytes.back()*8;
     unsigned LastFieldBitSize   = Info.ElementSizeInBytes.back()*8;
-    assert(LastFieldBitOffset < StartOffsetInBits &&
+    assert(LastFieldBitOffset <= StartOffsetInBits &&
            "This bitfield isn't part of the last field!");
     if (EndBitOffset <= LastFieldBitOffset+LastFieldBitSize &&
-        LastFieldBitOffset+LastFieldBitSize >= StartOffsetInBits)
+        LastFieldBitOffset+LastFieldBitSize > StartOffsetInBits)
       return;  // Already contained in previous field!
   }
   
@@ -1130,7 +1118,10 @@
 
   // Figure out the LLVM type that we will use for the new field.
   const Type *NewFieldTy;
-  if (NumBitsToAdd <= 8)
+  if (NumBitsToAdd == 0) {
+    std::vector<const Type*> Elts;
+    NewFieldTy = StructType::get(Elts, false);
+  } else if (NumBitsToAdd <= 8)
     NewFieldTy = Type::Int8Ty;
   else if (NumBitsToAdd <= 16)
     NewFieldTy = Type::Int16Ty;
@@ -1213,8 +1204,7 @@
     if (TREE_CODE(Field) == FIELD_DECL &&
         TREE_CODE(DECL_FIELD_OFFSET(Field)) == INTEGER_CST) {
       unsigned FieldOffsetInBits = getFieldOffsetInBits(Field);
-      tree FieldType = TREE_TYPE(Field);
-      
+
       // If this is a bitfield, we may want to adjust the FieldOffsetInBits to
       // produce safe code.  In particular, bitfields will be loaded/stored as
       // their *declared* type, not the smallest integer type that contains
@@ -1229,8 +1219,8 @@
       
       // Figure out if this field is zero bits wide, e.g. {} or [0 x int].  Do
       // not include variable sized fields here.
-      bool isZeroSizeField = !TYPE_SIZE(FieldType) ||
-        integer_zerop(TYPE_SIZE(FieldType));
+      bool isZeroSizeField = !DECL_SIZE(Field) ||
+        integer_zerop(DECL_SIZE(Field));
 
       unsigned FieldNo = 
         Info.getLLVMFieldFor(FieldOffsetInBits, CurFieldNo, isZeroSizeField);