On Tue, 4 Oct 2022 at 18:43, Richard Sandiford <richard.sandif...@arm.com> wrote:
>
> Philipp Tomsich <philipp.toms...@vrull.eu> writes:
> > This brings the extensions detected by -mcpu=native on Ampere-1 systems
> > in sync with the defaults generated for -mcpu=ampere1.
> >
> > Note that some kernel versions may misreport the presence of PAUTH and
> > PREDRES (i.e., -mcpu=native will add 'nopauth' and 'nopredres').
> >
> > gcc/ChangeLog:
> >
> >         * config/aarch64/aarch64-cores.def (AARCH64_CORE): Update
> >         Ampere-1 core entry.
> >
> > Signed-off-by: Philipp Tomsich <philipp.toms...@vrull.eu>
> >
> > ---
> > Ok for backport?
> >
> >  gcc/config/aarch64/aarch64-cores.def | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/gcc/config/aarch64/aarch64-cores.def b/gcc/config/aarch64/aarch64-cores.def
> > index 60299160bb6..9090f80b4b7 100644
> > --- a/gcc/config/aarch64/aarch64-cores.def
> > +++ b/gcc/config/aarch64/aarch64-cores.def
> > @@ -69,7 +69,7 @@ AARCH64_CORE("thunderxt81", thunderxt81, thunderx, V8A, (CRC, CRYPTO), thu
> >  AARCH64_CORE("thunderxt83", thunderxt83, thunderx, V8A, (CRC, CRYPTO), thunderx, 0x43, 0x0a3, -1)
> >
> >  /* Ampere Computing ('\xC0') cores. */
> > -AARCH64_CORE("ampere1", ampere1, cortexa57, V8_6A, (), ampere1, 0xC0, 0xac3, -1)
> > +AARCH64_CORE("ampere1", ampere1, cortexa57, V8_6A, (F16, RCPC, RNG, AES, SHA3), ampere1, 0xC0, 0xac3, -1)
>
> The fact that you had to include RCPC here shows that there was a bug
> in the definition of Armv8.3-A.  I've just pushed a fix for that.
>
> Otherwise, this seems to line up with the LLVM definition, except
> that this definition enables RNG/AEK_RAND whereas the LLVM one doesn't
> seem to.  Which one's right (or is it me that's wrong)?
I just rechecked, and the latest documentation (which matches the /proc/cpuinfo output) confirms that FEAT_RNG is implemented.  LLVM needs to be updated to reflect that RNG is implemented.

>
> Thanks,
> Richard
>
> >  /* Do not swap around "emag" and "xgene1",
> >     this order is required to handle variant correctly. */
> >  AARCH64_CORE("emag", emag, xgene1, V8A, (CRC, CRYPTO), emag, 0x50, 0x000, 3)
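
As an aside, for anyone who wants to double-check this on their own hardware, here is a minimal sketch (assumption: Linux on AArch64 with a kernel new enough to define HWCAP2_RNG, i.e. 5.6 or later) that queries the hwcaps from C.  It is independent of the /proc/cpuinfo "Features" line that -mcpu=native parses, but should agree with it:

  /* Print whether the kernel reports FEAT_RNG (RNDR/RNDRRS) on this system.  */
  #include <stdio.h>
  #include <sys/auxv.h>   /* getauxval, AT_HWCAP2 */
  #include <asm/hwcap.h>  /* HWCAP2_RNG */

  int
  main (void)
  {
    unsigned long hwcap2 = getauxval (AT_HWCAP2);

    if (hwcap2 & HWCAP2_RNG)
      puts ("FEAT_RNG reported by the kernel");
    else
      puts ("FEAT_RNG not reported by the kernel");

    return 0;
  }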