https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115025
Bug ID: 115025
Summary: prime computation performance regression, x86, between gcc-14 and gcc-13 on skylake platform
Product: gcc
Version: 14.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: target
Assignee: unassigned at gcc dot gnu.org
Reporter: colin.king at intel dot com
Target Milestone: ---

Created attachment 58163
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=58163&action=edit
reproducer source code

I'm seeing a ~7% performance regression with gcc-14 compared to gcc-13 on Ubuntu 24.04 when computing prime numbers.

Versions:
gcc version 13.2.0 (Ubuntu 13.2.0-23ubuntu4)
gcc version 14.0.1 20240412 (experimental) [master r14-9935-g67e1433a94f] (Ubuntu 14-20240412-0ubuntu1)

cking@skylake:~$ CFLAGS="" gcc-13 -O2 reproducer-prime.c -lm
cking@skylake:~$ ./a.out
473.04 prime ops per sec
cking@skylake:~$ CFLAGS="" gcc-14 -O2 reproducer-prime.c -lm
cking@skylake:~$ ./a.out
439.86 prime ops per sec

Note that adding __attribute__((optimize("-O3"))) and/or __builtin_expect((x), 0) does not affect the performance regression.

The original issue appeared while regression testing the stress-ng cpu prime number stressor [1]; I extracted the reproducer from that code. Attached are the reproducer C source and the disassembled object code.

References:
[1] https://github.com/ColinIanKing/stress-ng/blob/master/stress-cpu.c