[Bug c/95189] New: memcmp being wrongly stripped (regression)

2020-05-18 Thread gcc at pkh dot me
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95189

Bug ID: 95189
   Summary: memcmp being wrongly stripped (regression)
   Product: gcc
   Version: 10.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: gcc at pkh dot me
  Target Milestone: ---

Given the following C code:

% cat a.c
#include <string.h>
static const float z[1] = {0};
int f(float x) { return memcmp(&x, z, sizeof(x)); }

GCC 10 generates this on x86-64:

% gcc -Wall -O2 -c a.c && objdump -d -Mintel a.o

a.o: file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <f>:
   0:   f3 0f 11 44 24 fc   movss  DWORD PTR [rsp-0x4],xmm0
   6:   0f b6 44 24 fc  movzx  eax,BYTE PTR [rsp-0x4]
   b:   c3  ret

This doesn't happen if "= {0}" is removed from the z initialization (wtf?).
It also doesn't happen with -O1.

[Bug middle-end/95189] [10/11 Regression] memcmp being wrongly stripped like strcmp

2020-06-09 Thread gcc at pkh dot me
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95189

--- Comment #5 from gcc at pkh dot me ---
I'd like to point out that this regression badly impacts a production app.
We use this pattern to compare an input vector of floats against a vector of
zeros, but the comparison now always returns 0, which essentially breaks
everything. We do have a workaround (removing the "= {0}"), but I doubt this
is an isolated case.

Maybe the priority could be raised?

[Bug regression/80208] New: DJGPP max object file alignment regression

2017-03-27 Thread gcc at pkh dot me
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80208

Bug ID: 80208
   Summary: DJGPP max object file alignment regression
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: regression
  Assignee: unassigned at gcc dot gnu.org
  Reporter: gcc at pkh dot me
  Target Milestone: ---

Using DJGPP, requested alignment is not clamped anymore:

[/tmp]☭ echo 'int __attribute__ ((aligned (16))) x;' > a.c &&
i686-pc-msdosdjgpp-cc -c a.c
[/tmp]☭ echo 'int __attribute__ ((aligned (32))) x;' > a.c &&
i686-pc-msdosdjgpp-cc -c a.c
a.c:1:36: error: alignment of ‘x’ is greater than maximum object file alignment
16
 int __attribute__ ((aligned (32))) x;
^
[/tmp]☠ 

This is a regression since r205040
(https://github.com/gcc-mirror/gcc/commit/f8f7421ff48c9a90a63281bb09ff67d4f56755cf).
We hit that issue in the FFmpeg project where such alignment is requested in
random places (because we sometimes have AVX2 optimizations, which shouldn't
concern a FreeDOS configuration).

Here is a simple patch that fixes the issue:

--- gcc/varasm.c	2017-03-26 20:10:03.082212374 +0200
+++ gcc/varasm.c	2017-03-26 20:10:00.079320606 +0200
@@ -1005,8 +1005,8 @@
   if (align > MAX_OFILE_ALIGNMENT)
     {
-      error ("alignment of %q+D is greater than maximum object "
+      warning (0, "alignment of %q+D is greater than maximum object "
	     "file alignment %d", decl,
	     MAX_OFILE_ALIGNMENT/BITS_PER_UNIT);
       align = MAX_OFILE_ALIGNMENT;
     }

The clamping to MAX_OFILE_ALIGNMENT is still performed, so this is enough to
fix the problem.

[Bug c/107890] New: UB on integer overflow impacts code flow

2022-11-27 Thread gcc at pkh dot me via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107890

Bug ID: 107890
   Summary: UB on integer overflow impacts code flow
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: gcc at pkh dot me
  Target Milestone: ---

The following code is sensitive to a signed integer overflow. I was under the
impression that this kind of undefined behavior essentially meant that the
value of that integer could become unreliable. But apparently it is not
limited to the value of said integer: it can also dramatically alter the code
flow.

Here is the pathological code:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

uint8_t tab[0x1ff + 1];

uint8_t f(int32_t x)
{
    if (x < 0)
        return 0;
    int32_t i = x * 0x1ff / 0xffff;
    if (i >= 0 && i < sizeof(tab)) {
        printf("tab[%d] looks safe because %d is between [0;%d[\n", i, i,
               (int)sizeof(tab));
        return tab[i];
    }

    return 0;
}

int main(int ac, char **av)
{
    return f(atoi(av[1]));
}

Triggering an overflow actually enters the printf/dereference scope, violating
the protective condition and thus causing a crash:

% cc -Wall -O2 overflow.c -o overflow && ./overflow 50000000
tab[62183] looks safe because 62183 is between [0;512[
zsh: segmentation fault (core dumped)  ./overflow 50000000

I feel extremely uncomfortable about an integer overflow impacting something
other than the integer itself. Is this expected, or is it a bug?