rnk added a comment.
In https://reviews.llvm.org/D24289#598169, @rsmith wrote:
> This is causing warnings to fire for headers shared between C and C++, where
> the "give the enum an unsigned underlying type" advice doesn't work, and
> where the code in question will never be built for the MS ABI. It seems
> really hard to justify this being on by default.
>
> I'm going to turn it off by default for now, but we should probably consider
> turning it back on by default when targeting the MS ABI (as a "your code is
> wrong" warning rather than a "your code is not portable" warning).
Seems reasonable. I asked to make it off by default, but got argued down to
putting it under -Wall.
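For a header shared between C and C++, the advice can't even be expressed: the fixed-underlying-type syntax is C++11-only, and C compilers before C23 reject it. A minimal sketch:

  enum E : unsigned { E0, E1 }; /* OK in C++11; syntax error in C89/C99/C11 */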
> Yeah, suggesting adding an underlying type to the enum to solve this problem
> seems like a bad idea, since that fundamentally changes the nature of the
> enum -- typically allowing it to store a lot more values, and making putting
> it in a bitfield a bad idea.
Any time you use a bitfield, it stores fewer values than the original integer type; I don't see how enums are special here. Even if a fixed underlying type lets me store values in the enum that aren't representable as a bitwise combination of enumerators, people usually don't do that, and adding an underlying type doesn't change the conventions of normal C++ code.
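For concreteness, the extra values in question, as a sketch (the 42 is arbitrary):

  enum E : unsigned { E0, E1, E2, E3 };
  E e = static_cast<E>(42); // well-defined: any unsigned value is a valid E
  // Without the fixed type, E's range would be just [0, 3], and
  // producing 42 here would be undefined behavior (since C++17).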
Do you have any better suggestions for people who want this code to do the right thing when built with MSVC?
  enum E /* : unsigned */ { E0, E1, E2, E3 };
  struct A { E b : 2; };
  int main() {
    A a;
    a.b = E3;
    return a.b; // comes out as -1 without the underlying type: MSVC
                // treats the 2-bit field as signed, so 0b11 sign-extends
  }
Widening the bitfield wastes a bit. Making the bitfield a plain integer and casting in and out of the enum type works, but it obscures the true type of the bitfield. So I still support this suggestion.
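Roughly, the two alternatives look like this (a sketch; A1, A2, and get are hypothetical names):

  // Alternative 1: widen the bit-field so the sign bit is never set.
  struct A1 { E b : 3; }; // wastes a bit
  // Alternative 2: store a plain unsigned and cast at the boundaries.
  struct A2 { unsigned b : 2; }; // obscures that b really holds an E
  E get(const A2 &a) { return static_cast<E>(a.b); }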
Repository:
rL LLVM
https://reviews.llvm.org/D24289