On 3/25/19 11:14 AM, Jeff Law wrote:
On 3/25/19 11:04 AM, Martin Sebor wrote:
On 3/25/19 10:07 AM, Jeff Law wrote:
On 3/24/19 6:21 PM, Martin Sebor wrote:
The error issued when the aligned attribute argument is too big
to be represented is incorrect: it says the maximum alignment
is 1U << 31 when it should actually be 1 << 28. This was a typo
introduced when the error message was enhanced earlier in GCC 9.
The test I added to verify the fix for the typo exposed another
bug introduced in the same commit as the incorrect value in
the error message: the assumption that the aligned attribute
argument fits in a signed HOST_WIDE_INT (SHWI).
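To illustrate, here is a reduced sketch (not the committed test)
of the two problems, assuming a typical host where
HOST_BITS_PER_INT == 32 and BITS_PER_UNIT == 8:

  /* Accepted: 1 << 28 is the true maximum.  */
  int a __attribute__ ((aligned (1 << 28)));
  /* Rejected; the diagnostic should name 1 << 28 as the maximum,
     not 1U << 31.  */
  int b __attribute__ ((aligned (1 << 29)));
  /* Must be diagnosed, not assumed to fit in a signed
     HOST_WIDE_INT on a 64-bit host.  */
  int c __attribute__ ((aligned (1LLU << 63)));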
The attached patch corrects both problems. It has been tested
on x86_64-linux. I will commit it as obvious sometime this week
unless there are any objections or suggestions for changes.
Martin
PS I have a couple of questions related to the affected code:
1) Does GCC support building with compilers where int is not 32
bits wide, or on targets where BITS_PER_UNIT is not 8? (I.e.,
where either is smaller or larger?)
We've certainly supported 16-bit ints in the past. H8/300 would be an
example; it defaults to 16-bit ints. But I don't think we've tested
that in a very long time -- my tester is only testing with -mint32.
Look for INT_TYPE_SIZE in config/*/*.h
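For instance, the H8/300 port has something along these lines
(quoting from memory, so treat it as illustrative rather than exact):

  /* gcc/config/h8300/h8300.h (approximate): int is 16 bits unless
     -mint32 is given.  */
  #define INT_TYPE_SIZE (TARGET_INT32 ? 32 : 16)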
We've never supported bootstrapping in that mode, just crosses.
Thanks, that's what I was after: whether GCC can build natively
with a compiler where CHAR_BIT * sizeof (int) != 32. Sounds like it
can't, i.e., HOST_BITS_PER_INT is always at least 32. Or do
you suppose it's always exactly 32?
At least 32 for HOST_BITS_PER_INT, though I think we've supported
HOST_BITS_PER_INT == 64 as well.
HOST_BITS_PER_INT is defined indirectly via CHAR_BIT * SIZEOF_INT, both
of which are determined during the configure phase.
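Specifically, the definition (in gcc/hwint.h) is just:

  #define HOST_BITS_PER_INT (CHAR_BIT * SIZEOF_INT)

with SIZEOF_INT detected by configure.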
AVR is probably the most interesting as it even has a flag to make
"int" be 8 bits. It's probably the best-tested target in this space.
BITS_PER_UNIT is more of a hardware characteristic. It's generally 8,
though I thought one of the TI chips defines it as 32. I suspect you
weren't really looking for BITS_PER_UNIT here.
I think using BITS_PER_UNIT here is actually another bug in
the function: it should be using CHAR_BIT instead, like so:

  if (log2align >= HOST_BITS_PER_INT - exact_log2 (CHAR_BIT))
    {
      error ("requested alignment %qE exceeds maximum %u",
             align, 1U << (HOST_BITS_PER_INT - exact_log2 (CHAR_BIT) - 1));
      return -1;
    }
Isn't CHAR_BIT a function of the host, not the target? In which case I
don't think it's right since supported alignments are primarily a target
property.
Yes, both CHAR_BIT and HOST_BITS_PER_INT are host constants, and
so is the maximum alignment GCC can represent -- it stores its
log2 in a 6-bit field. From tree-core.h:
  /* TYPE_ALIGN in log2; this has to be large enough to hold values
     of the maximum of BIGGEST_ALIGNMENT and MAX_OFILE_ALIGNMENT,
     the latter being usually the larger.  For ELF it is 8<<28,
     so we need to store the value 32 (not 31, as we need the zero
     as well), hence six bits.  */
  unsigned align : 6;
But I think BITS_PER_UNIT is used here because the alignment GCC
stores is the bit alignment on the target. Even so, six bits can
fit an alignment of up to 2^62, and there should be no reason to
cap it at 2^31 just because of the ELF limit. Stack variables
could (in theory at least) be aligned more strictly. But there
are so many assumptions throughout GCC code(*) that the actual byte
alignment value fits into an int that I'm not sure taking advantage
of that sixth bit would justify the effort involved in changing this.
[*] E.g., the definition of TYPE_ALIGN:
  #define TYPE_ALIGN(NODE) \
      (TYPE_CHECK (NODE)->type_common.align \
       ? ((unsigned)1) << ((NODE)->type_common.align - 1) : 0)
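Spelled out with concrete numbers (just a sketch of the biased
encoding above -- the field stores log2 of the alignment plus one,
with zero reserved for "not set"):

  unsigned stored = 32;                              /* ELF max: log2 (8<<28) + 1 */
  unsigned align = stored ? 1U << (stored - 1) : 0;  /* yields 1U << 31 */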
Thanks. I was concerned about the test I added breaking on
systems that don't define __INT64_TYPE__, but I see other tests
that assume that __INT64_TYPE__ exists, so if it does break, it
won't be the only one.
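If it does, guarding the 64-bit cases would be simple enough -- a sketch:

  #ifdef __INT64_TYPE__
  /* 64-bit alignment cases go here.  */
  #endif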
I wouldn't stress too much about it. The AVR guys might come back with
a patch to adjust the test. But again, I wouldn't worry about it until
they do.
Ack.
Thanks
Martin