[llvm-branch-commits] [compiler-rt] release/19.x: Forward declare OSSpinLockLock on MacOS since it's not shipped on the system. (#101392) (PR #101432)

2024-08-02 Thread Tim Northover via llvm-branch-commits

TNorthover wrote:

Looks safe enough to me. The added declaration matches the one in the header. I 
don't think it's actually gone yet in the latest Xcode 16 beta public SDK, but 
presumably it's happening soon.
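
For reference, a hedged sketch of the kind of forward declaration in question
(the exact compiler-rt spelling and location may differ; per the comment above
it should mirror the prototype in <libkern/OSAtomic.h>):

// Sketch only: mirrors the deprecated <libkern/OSAtomic.h> prototype so the
// sanitizer sources keep building even if the SDK stops shipping the header.
#include <cstdint>
typedef int32_t OSSpinLock;
extern "C" void OSSpinLockLock(volatile OSSpinLock *__lock);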

https://github.com/llvm/llvm-project/pull/101432


[llvm-branch-commits] [llvm-branch] r259247 - Merging r259228:

2016-01-29 Thread Tim Northover via llvm-branch-commits
Author: tnorthover
Date: Fri Jan 29 16:00:06 2016
New Revision: 259247

URL: http://llvm.org/viewvc/llvm-project?rev=259247&view=rev
Log:
Merging r259228:

r259228 | tnorthover | 2016-01-29 11:18:46 -0800 (Fri, 29 Jan 2016) | 13 lines

ARM: don't mangle DAG constant if it has more than one use

The basic optimisation was to convert (mul $LHS, $complex_constant) into
roughly "(shl (mul $LHS, $simple_constant), $simple_amt)" when it was expected
to be cheaper. The original logic checks that the mul only has one use (since
we're mangling $complex_constant), but when used in even more complex
addressing modes there may be an outer addition that can pick up the wrong
value too.

I *think* the ARM addressing-mode problem is actually unreachable at the
moment, but that depends on complex assessments of the profitability of
pre-increment addressing modes so I've put a real check in there instead of an
assertion.
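
For a concrete instance using the constant from the test below: 65564 == 16391 << 2,
so "mul %x, 65564" can be rewritten as "shl (mul %x, 16391), 2", and the shl then
folds into a shifted-operand addressing mode. A minimal sketch of that decomposition
(illustrative names, not the real canExtractShiftFromMul):

#include <cassert>
#include <cstdint>

// Split C into Simple << Amt; only worthwhile when a real shift is extracted.
static bool extractShiftFromConstant(uint32_t C, uint32_t &Simple, unsigned &Amt) {
  if (C == 0)
    return false;
  Amt = __builtin_ctz(C); // trailing zero bits become the shift amount
  Simple = C >> Amt;
  return Amt > 0;
}

int main() {
  uint32_t Simple;
  unsigned Amt;
  assert(extractShiftFromConstant(65564u, Simple, Amt));
  assert(Simple == 16391u && Amt == 2); // 65564 == 16391 << 2
}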


Modified:
llvm/branches/release_38/   (props changed)
llvm/branches/release_38/lib/Target/ARM/ARMISelDAGToDAG.cpp
llvm/branches/release_38/test/CodeGen/ARM/shifter_operand.ll

Propchange: llvm/branches/release_38/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan 29 16:00:06 2016
@@ -1,3 +1,3 @@
 /llvm/branches/Apple/Pertwee:110850,110961
 /llvm/branches/type-system-rewrite:133420-134817
-/llvm/trunk:155241,257645,257648,257730,257775,257791,257875,257886,257902,257905,257925,257929-257930,257940,257942,257977,257979,257997,258168,258184,258207,258221,258273,258325,258406,258416,258428,258436,258471,258690,258729,258891,258971,259236
+/llvm/trunk:155241,257645,257648,257730,257775,257791,257875,257886,257902,257905,257925,257929-257930,257940,257942,257977,257979,257997,258168,258184,258207,258221,258273,258325,258406,258416,258428,258436,258471,258690,258729,258891,258971,259228,259236

Modified: llvm/branches/release_38/lib/Target/ARM/ARMISelDAGToDAG.cpp
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_38/lib/Target/ARM/ARMISelDAGToDAG.cpp?rev=259247&r1=259246&r2=259247&view=diff
==============================================================================
--- llvm/branches/release_38/lib/Target/ARM/ARMISelDAGToDAG.cpp (original)
+++ llvm/branches/release_38/lib/Target/ARM/ARMISelDAGToDAG.cpp Fri Jan 29 16:00:06 2016
@@ -747,7 +747,7 @@ bool ARMDAGToDAGISel::SelectLdStSOReg(SD
 
   // If Offset is a multiply-by-constant and it's profitable to extract a shift
   // and use it in a shifted operand do so.
-  if (Offset.getOpcode() == ISD::MUL) {
+  if (Offset.getOpcode() == ISD::MUL && N.hasOneUse()) {
 unsigned PowerOfTwo = 0;
 SDValue NewMulConst;
 if (canExtractShiftFromMul(Offset, 31, PowerOfTwo, NewMulConst)) {
@@ -1422,7 +1422,7 @@ bool ARMDAGToDAGISel::SelectT2AddrModeSo
 
   // If OffReg is a multiply-by-constant and it's profitable to extract a shift
   // and use it in a shifted operand do so.
-  if (OffReg.getOpcode() == ISD::MUL) {
+  if (OffReg.getOpcode() == ISD::MUL && N.hasOneUse()) {
 unsigned PowerOfTwo = 0;
 SDValue NewMulConst;
 if (canExtractShiftFromMul(OffReg, 3, PowerOfTwo, NewMulConst)) {

Modified: llvm/branches/release_38/test/CodeGen/ARM/shifter_operand.ll
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_38/test/CodeGen/ARM/shifter_operand.ll?rev=259247&r1=259246&r2=259247&view=diff
==============================================================================
--- llvm/branches/release_38/test/CodeGen/ARM/shifter_operand.ll (original)
+++ llvm/branches/release_38/test/CodeGen/ARM/shifter_operand.ll Fri Jan 29 16:00:06 2016
@@ -239,3 +239,20 @@ define void @test_well_formed_dag(i32 %i
   store i32 %add, i32* %addr
   ret void
 }
+
+define { i32, i32 } @test_multi_use_add(i32 %base, i32 %offset) {
+; CHECK-LABEL: test_multi_use_add:
+; CHECK-THUMB: movs [[CONST:r[0-9]+]], #28
+; CHECK-THUMB: movt [[CONST]], #1
+
+  %prod = mul i32 %offset, 65564
+  %sum = add i32 %base, %prod
+
+  %ptr = inttoptr i32 %sum to i32*
+  %loaded = load i32, i32* %ptr
+
+  %ret.tmp = insertvalue { i32, i32 } undef, i32 %sum, 0
+  %ret = insertvalue { i32, i32 } %ret.tmp, i32 %loaded, 1
+
+  ret { i32, i32 } %ret
+}


___
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits


[llvm-branch-commits] [llvm-branch] r275918 - Merging r275866:

2016-07-18 Thread Tim Northover via llvm-branch-commits
Author: tnorthover
Date: Mon Jul 18 16:36:33 2016
New Revision: 275918

URL: http://llvm.org/viewvc/llvm-project?rev=275918&view=rev
Log:
Merging r275866:

r275866 | tnorthover | 2016-07-18 11:28:52 -0700 (Mon, 18 Jul 2016) | 6 lines

CodeGenPrep: use correct function to determine Global's alignment.

Elsewhere (particularly computeKnownBits) we assume that a global will be
aligned to the value returned by Value::getPointerAlignment. This is used to
boost the alignment on memcpy/memset, so any target-specific request can only
increase that value.
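
To make the distinction concrete, an illustrative walkthrough with made-up
numbers (not the actual LLVM API): a global with no explicit "align" attribute
reports 0 from getAlignment(), while getPointerAlignment(DL) folds in the
ABI-derived alignment that computeKnownBits already assumed.

#include <cassert>

int main() {
  unsigned DeclaredAlign = 0;   // GV->getAlignment(): no explicit attribute
  unsigned EffectiveAlign = 16; // GV->getPointerAlignment(DL): ABI-derived
  unsigned PrefAlign = 8;       // target's preferred memcpy/memset alignment

  // Old check: 0 < 8 would setAlignment(8), *lowering* the effective
  // alignment below the 16 that computeKnownBits already assumed.
  assert(DeclaredAlign < PrefAlign);
  // New check: 16 < 8 is false, so the global is left untouched.
  assert(!(EffectiveAlign < PrefAlign));
}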


Modified:
llvm/branches/release_39/lib/CodeGen/CodeGenPrepare.cpp
llvm/branches/release_39/test/CodeGen/ARM/memfunc.ll

Modified: llvm/branches/release_39/lib/CodeGen/CodeGenPrepare.cpp
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_39/lib/CodeGen/CodeGenPrepare.cpp?rev=275918&r1=275917&r2=275918&view=diff
==============================================================================
--- llvm/branches/release_39/lib/CodeGen/CodeGenPrepare.cpp (original)
+++ llvm/branches/release_39/lib/CodeGen/CodeGenPrepare.cpp Mon Jul 18 16:36:33 2016
@@ -1780,7 +1780,7 @@ bool CodeGenPrepare::optimizeCallInst(Ca
   // forbidden.
   GlobalVariable *GV;
   if ((GV = dyn_cast<GlobalVariable>(Val)) && GV->canIncreaseAlignment() &&
-  GV->getAlignment() < PrefAlign &&
+  GV->getPointerAlignment(*DL) < PrefAlign &&
   DL->getTypeAllocSize(GV->getValueType()) >=
   MinSize + Offset2)
 GV->setAlignment(PrefAlign);

Modified: llvm/branches/release_39/test/CodeGen/ARM/memfunc.ll
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_39/test/CodeGen/ARM/memfunc.ll?rev=275918&r1=275917&r2=275918&view=diff
==============================================================================
--- llvm/branches/release_39/test/CodeGen/ARM/memfunc.ll (original)
+++ llvm/branches/release_39/test/CodeGen/ARM/memfunc.ll Mon Jul 18 16:36:33 2016
@@ -386,6 +386,8 @@ entry:
 @arr5 = weak global [7 x i8] c"\01\02\03\04\05\06\07", align 1
 @arr6 = weak_odr global [7 x i8] c"\01\02\03\04\05\06\07", align 1
 @arr7 = external global [7 x i8], align 1
+@arr8 = internal global [128 x i8] undef
+@arr9 = weak_odr global [128 x i8] undef
 define void @f9(i8* %dest, i32 %n) {
 entry:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([7 x i8], [7 x i8]* @arr1, i32 0, i32 0), i32 %n, i32 1, i1 false)
@@ -395,6 +397,8 @@ entry:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([7 x i8], [7 x i8]* @arr5, i32 0, i32 0), i32 %n, i32 1, i1 false)
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([7 x i8], [7 x i8]* @arr6, i32 0, i32 0), i32 %n, i32 1, i1 false)
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([7 x i8], [7 x i8]* @arr7, i32 0, i32 0), i32 %n, i32 1, i1 false)
+  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([128 x i8], [128 x i8]* @arr8, i32 0, i32 0), i32 %n, i32 1, i1 false)
+  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* getelementptr inbounds ([128 x i8], [128 x i8]* @arr9, i32 0, i32 0), i32 %n, i32 1, i1 false)
 
   unreachable
 }
@@ -417,6 +421,11 @@ entry:
 ; CHECK: arr5:
 ; CHECK-NOT: .p2align
 ; CHECK: arr6:
+; CHECK: .p2align 4
+; CHECK: arr8:
+; CHECK: .p2align 4
+; CHECK: arr9:
+
 ; CHECK-NOT: arr7:
 
declare void @llvm.memmove.p0i8.p0i8.i32(i8* nocapture, i8* nocapture, i32, i32, i1) nounwind




[llvm-branch-commits] [llvm-branch] r293103 - Merging r293088:

2017-01-25 Thread Tim Northover via llvm-branch-commits
Author: tnorthover
Date: Wed Jan 25 16:10:07 2017
New Revision: 293103

URL: http://llvm.org/viewvc/llvm-project?rev=293103&view=rev
Log:
Merging r293088:

r293088 | tnorthover | 2017-01-25 12:58:26 -0800 (Wed, 25 Jan 2017) | 5 lines

SDag: fix how initial loads are formed when splitting vector ops.

Later code expects the vector loads produced to be directly
concatenable, which means we shouldn't pad anything except the last load
produced with UNDEF.
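
An illustrative sketch of the splitting policy (assumed widths, not the
legalizer's actual code): pieces are taken greedily from the largest legal
width down, so every piece except the last matches the running width and the
results stay directly concatenable. For the <28 x i8> load in the test below
this gives 28 -> 16 + 8 + 4, matching the vld1.8/vldr/ldr triple.

#include <cstdio>

int main() {
  int Remaining = 28;                    // bytes of <28 x i8> left to load
  const int Widths[] = {16, 8, 4, 2, 1}; // candidate widths, descending
  for (int W : Widths)
    while (Remaining >= W) {
      std::printf("load %d bytes\n", W);
      Remaining -= W;
    }
  return 0;
}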


Modified:
llvm/branches/release_40/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
llvm/branches/release_40/test/CodeGen/ARM/vector-load.ll

Modified: 
llvm/branches/release_40/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_40/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp?rev=293103&r1=293102&r2=293103&view=diff
==============================================================================
--- llvm/branches/release_40/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp (original)
+++ llvm/branches/release_40/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp Wed Jan 25 16:10:07 2017
@@ -3439,7 +3439,10 @@ SDValue DAGTypeLegalizer::GenWidenVector
   LD->getPointerInfo().getWithOffset(Offset),
   MinAlign(Align, Increment), MMOFlags, AAInfo);
   LdChain.push_back(L.getValue(1));
-  if (L->getValueType(0).isVector()) {
+  if (L->getValueType(0).isVector() && NewVTWidth >= LdWidth) {
+// Later code assumes the vector loads produced will be mergeable, so we
+// must pad the final entry up to the previous width. Scalars are
+// combined separately.
 SmallVector<SDValue, 16> Loads;
 Loads.push_back(L);
 unsigned size = L->getValueSizeInBits(0);

Modified: llvm/branches/release_40/test/CodeGen/ARM/vector-load.ll
URL: 
http://llvm.org/viewvc/llvm-project/llvm/branches/release_40/test/CodeGen/ARM/vector-load.ll?rev=293103&r1=293102&r2=293103&view=diff
==============================================================================
--- llvm/branches/release_40/test/CodeGen/ARM/vector-load.ll (original)
+++ llvm/branches/release_40/test/CodeGen/ARM/vector-load.ll Wed Jan 25 16:10:07 2017
@@ -251,3 +251,13 @@ define <4 x i32> @zextload_v8i8tov8i32_f
 %zlA = zext <4 x i8> %lA to <4 x i32>
ret <4 x i32> %zlA
 }
+
+; CHECK-LABEL: test_silly_load:
+; CHECK: ldr {{r[0-9]+}}, [r0, #24]
+; CHECK: vld1.8 {d{{[0-9]+}}, d{{[0-9]+}}}, [r0:128]!
+; CHECK: vldr d{{[0-9]+}}, [r0]
+
+define void @test_silly_load(<28 x i8>* %addr) {
+  load volatile <28 x i8>, <28 x i8>* %addr
+  ret void
+}




[llvm-branch-commits] [llvm] 6259fbd - AArch64: add apple-a14 as a CPU

2021-01-19 Thread Tim Northover via llvm-branch-commits

Author: Tim Northover
Date: 2021-01-19T14:04:53Z
New Revision: 6259fbd8b69531133d24b5367a6a2cd9b183ce48

URL: 
https://github.com/llvm/llvm-project/commit/6259fbd8b69531133d24b5367a6a2cd9b183ce48
DIFF: 
https://github.com/llvm/llvm-project/commit/6259fbd8b69531133d24b5367a6a2cd9b183ce48.diff

LOG: AArch64: add apple-a14 as a CPU

This CPU supports all v8.5a features except BTI, and so identifies as v8.5a to
Clang. A bit weird, but the best way for things like xnu to detect the new
features it cares about.

Added: 


Modified: 
llvm/include/llvm/Support/AArch64TargetParser.def
llvm/lib/Target/AArch64/AArch64.td
llvm/lib/Target/AArch64/AArch64Subtarget.cpp
llvm/lib/Target/AArch64/AArch64Subtarget.h
llvm/unittests/Support/TargetParserTest.cpp

Removed: 




diff  --git a/llvm/include/llvm/Support/AArch64TargetParser.def b/llvm/include/llvm/Support/AArch64TargetParser.def
index 5f36b0eecff9..332fb555e824 100644
--- a/llvm/include/llvm/Support/AArch64TargetParser.def
+++ b/llvm/include/llvm/Support/AArch64TargetParser.def
@@ -189,6 +189,8 @@ AARCH64_CPU_NAME("apple-a12", ARMV8_3A, FK_CRYPTO_NEON_FP_ARMV8, false,
  (AArch64::AEK_FP16))
 AARCH64_CPU_NAME("apple-a13", ARMV8_4A, FK_CRYPTO_NEON_FP_ARMV8, false,
  (AArch64::AEK_FP16 | AArch64::AEK_FP16FML))
+AARCH64_CPU_NAME("apple-a14", ARMV8_5A, FK_CRYPTO_NEON_FP_ARMV8, false,
+ (AArch64::AEK_FP16 | AArch64::AEK_FP16FML))
 AARCH64_CPU_NAME("apple-s4", ARMV8_3A, FK_CRYPTO_NEON_FP_ARMV8, false,
  (AArch64::AEK_FP16))
 AARCH64_CPU_NAME("apple-s5", ARMV8_3A, FK_CRYPTO_NEON_FP_ARMV8, false,

diff  --git a/llvm/lib/Target/AArch64/AArch64.td b/llvm/lib/Target/AArch64/AArch64.td
index 165939e6252b..15c7130b24f3 100644
--- a/llvm/lib/Target/AArch64/AArch64.td
+++ b/llvm/lib/Target/AArch64/AArch64.td
@@ -854,6 +854,38 @@ def ProcAppleA13 : SubtargetFeature<"apple-a13", "ARMProcFamily", "AppleA13",
  HasV8_4aOps
  ]>;
 
+def ProcAppleA14 : SubtargetFeature<"apple-a14", "ARMProcFamily", "AppleA14",
+ "Apple A14", [
+ FeatureAggressiveFMA,
+ FeatureAlternateSExtLoadCVTF32Pattern,
+ FeatureAltFPCmp,
+ FeatureArithmeticBccFusion,
+ FeatureArithmeticCbzFusion,
+ FeatureCrypto,
+ FeatureDisableLatencySchedHeuristic,
+ FeatureFPARMv8,
+ FeatureFRInt3264,
+ FeatureFuseAddress,
+ FeatureFuseAES,
+ FeatureFuseArithmeticLogic,
+ FeatureFuseCCSelect,
+ FeatureFuseCryptoEOR,
+ FeatureFuseLiterals,
+ FeatureNEON,
+ FeaturePerfMon,
+ FeatureSpecRestrict,
+ FeatureSSBS,
+ FeatureSB,
+ FeaturePredRes,
+ FeatureCacheDeepPersist,
+ FeatureZCRegMove,
+ FeatureZCZeroing,
+ FeatureFullFP16,
+ FeatureFP16FML,
+ FeatureSHA3,
+ HasV8_4aOps
+ ]>;
+
 def ProcExynosM3 : SubtargetFeature<"exynosm3", "ARMProcFamily", "ExynosM3",
 "Samsung Exynos-M3 processors",
 [FeatureCRC,
@@ -1147,6 +1179,7 @@ def : ProcessorModel<"apple-a10", CycloneModel, [ProcAppleA10]>;
 def : ProcessorModel<"apple-a11", CycloneModel, [ProcAppleA11]>;
 def : ProcessorModel<"apple-a12", CycloneModel, [ProcAppleA12]>;
 def : ProcessorModel<"apple-a13", CycloneModel, [ProcAppleA13]>;
+def : ProcessorModel<"apple-a14", CycloneModel, [ProcAppleA14]>;
 
 // watch CPUs.
 def : ProcessorModel<"apple-s4", CycloneModel, [ProcAppleA12]>;

diff  --git a/llvm/lib/Target/AArch64/AArch64Subtarget.cpp b/llvm/lib/Target/AArch64/AArch64Subtarget.cpp
index 2a4a5954e4b6..71b2bb196486 100644
--- a/llvm/lib/Target/AArch64/AArch64Subtarget.cpp
+++ b/llvm/lib/Target/AArch64/AArch64Subtarget.cpp
@@ -122,6 +122,7 @@ void AArch64Subtarget::initializeProperties() {
   case AppleA11:
   case AppleA12:
   case AppleA13:
+  case AppleA14:
 CacheLineSize = 64;
 PrefetchDistance =

[llvm-branch-commits] [clang] 152df3a - arm64: count Triple::aarch64_32 as an aarch64 target and enable leaf frame pointers

2020-12-03 Thread Tim Northover via llvm-branch-commits

Author: Tim Northover
Date: 2020-12-03T11:09:44Z
New Revision: 152df3add156b68aca7bfb06b62ea85fa127f3b1

URL: 
https://github.com/llvm/llvm-project/commit/152df3add156b68aca7bfb06b62ea85fa127f3b1
DIFF: 
https://github.com/llvm/llvm-project/commit/152df3add156b68aca7bfb06b62ea85fa127f3b1.diff

LOG: arm64: count Triple::aarch64_32 as an aarch64 target and enable leaf frame pointers

Added: 


Modified: 
clang/lib/Driver/ToolChain.cpp
clang/test/Driver/frame-pointer-elim.c
llvm/include/llvm/ADT/Triple.h
llvm/lib/BinaryFormat/MachO.cpp

Removed: 




diff  --git a/clang/lib/Driver/ToolChain.cpp b/clang/lib/Driver/ToolChain.cpp
index 0330afdcec48..85ab05cb7021 100644
--- a/clang/lib/Driver/ToolChain.cpp
+++ b/clang/lib/Driver/ToolChain.cpp
@@ -1075,9 +1075,9 @@ SanitizerMask ToolChain::getSupportedSanitizers() const {
   getTriple().isAArch64())
 Res |= SanitizerKind::CFIICall;
   if (getTriple().getArch() == llvm::Triple::x86_64 ||
-  getTriple().isAArch64() || getTriple().isRISCV())
+  getTriple().isAArch64(64) || getTriple().isRISCV())
 Res |= SanitizerKind::ShadowCallStack;
-  if (getTriple().isAArch64())
+  if (getTriple().isAArch64(64))
 Res |= SanitizerKind::MemTag;
   return Res;
 }

diff  --git a/clang/test/Driver/frame-pointer-elim.c b/clang/test/Driver/frame-pointer-elim.c
index fd74da7768eb..83dbf3816b68 100644
--- a/clang/test/Driver/frame-pointer-elim.c
+++ b/clang/test/Driver/frame-pointer-elim.c
@@ -97,6 +97,8 @@
 // RUN:   FileCheck --check-prefix=KEEP-NON-LEAF %s
 // RUN: %clang -### -target x86_64-scei-ps4 -S -O2 %s 2>&1 | \
 // RUN:   FileCheck --check-prefix=KEEP-NON-LEAF %s
+// RUN: %clang -### -target aarch64-apple-darwin -arch arm64_32 -S %s 2>&1 | \
+// RUN:   FileCheck --check-prefix=KEEP-NON-LEAF %s
 
 // RUN: %clang -### -target powerpc64 -S %s 2>&1 | \
 // RUN:   FileCheck --check-prefix=KEEP-ALL %s

diff  --git a/llvm/include/llvm/ADT/Triple.h b/llvm/include/llvm/ADT/Triple.h
index 6bfdfe691c2e..4e1a9499bf81 100644
--- a/llvm/include/llvm/ADT/Triple.h
+++ b/llvm/include/llvm/ADT/Triple.h
@@ -714,7 +714,17 @@ class Triple {
 
   /// Tests whether the target is AArch64 (little and big endian).
   bool isAArch64() const {
-return getArch() == Triple::aarch64 || getArch() == Triple::aarch64_be;
+return getArch() == Triple::aarch64 || getArch() == Triple::aarch64_be ||
+   getArch() == Triple::aarch64_32;
+  }
+
+  /// Tests whether the target is AArch64 and pointers are the size specified by
+  /// \p PointerWidth.
+  bool isAArch64(int PointerWidth) const {
+assert(PointerWidth == 64 || PointerWidth == 32);
+if (!isAArch64())
+  return false;
+return isArch64Bit() ? PointerWidth == 64 : PointerWidth == 32;
   }
 
   /// Tests whether the target is MIPS 32-bit (little and big endian).
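
A hedged usage sketch of the new overload (standalone, illustrative values):

#include "llvm/ADT/Triple.h"
#include <cassert>

int main() {
  llvm::Triple T("arm64_32-apple-watchos");
  assert(T.isAArch64());    // arm64_32 now counts as AArch64 overall,
  assert(T.isAArch64(32));  // with 32-bit pointers,
  assert(!T.isAArch64(64)); // so it is excluded from 64-bit-only checks.
}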

diff  --git a/llvm/lib/BinaryFormat/MachO.cpp b/llvm/lib/BinaryFormat/MachO.cpp
index 2b9eb8025521..0901022a6141 100644
--- a/llvm/lib/BinaryFormat/MachO.cpp
+++ b/llvm/lib/BinaryFormat/MachO.cpp
@@ -55,7 +55,7 @@ static MachO::CPUSubTypeARM getARMSubType(const Triple &T) {
 }
 
 static MachO::CPUSubTypeARM64 getARM64SubType(const Triple &T) {
-  assert(T.isAArch64() || T.getArch() == Triple::aarch64_32);
+  assert(T.isAArch64());
   if (T.isArch32Bit())
 return (MachO::CPUSubTypeARM64)MachO::CPU_SUBTYPE_ARM64_32_V8;
   if (T.getArchName() == "arm64e")
@@ -84,9 +84,7 @@ Expected<uint32_t> MachO::getCPUType(const Triple &T) {
   if (T.isARM() || T.isThumb())
 return MachO::CPU_TYPE_ARM;
   if (T.isAArch64())
-return MachO::CPU_TYPE_ARM64;
-  if (T.getArch() == Triple::aarch64_32)
-return MachO::CPU_TYPE_ARM64_32;
+return T.isArch32Bit() ? MachO::CPU_TYPE_ARM64_32 : MachO::CPU_TYPE_ARM64;
   if (T.getArch() == Triple::ppc)
 return MachO::CPU_TYPE_POWERPC;
   if (T.getArch() == Triple::ppc64)





[llvm-branch-commits] [llvm] 45de421 - AArch64: use correct operand for ubsantrap immediate.

2020-12-09 Thread Tim Northover via llvm-branch-commits

Author: Tim Northover
Date: 2020-12-09T10:17:16Z
New Revision: 45de42116e3f588bbead550ab8667388ba4f10ae

URL: 
https://github.com/llvm/llvm-project/commit/45de42116e3f588bbead550ab8667388ba4f10ae
DIFF: 
https://github.com/llvm/llvm-project/commit/45de42116e3f588bbead550ab8667388ba4f10ae.diff

LOG: AArch64: use correct operand for ubsantrap immediate.

I accidentally pushed the wrong patch originally.
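
A worked check of the encoding (illustrative): operand 0 of a
G_INTRINSIC_W_SIDE_EFFECTS instruction is the intrinsic ID itself, so the i8
trap kind passed to llvm.ubsantrap lives in operand 1. For kind 12 the brk
immediate is 12 | ('U' << 8) == 0x550C, which is what the new test expects.

#include <cassert>

int main() {
  assert((12 | ('U' << 8)) == 0x550C); // 'U' == 0x55
}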

Added: 
llvm/test/CodeGen/AArch64/GlobalISel/ubsantrap.ll

Modified: 
llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp

Removed: 




diff  --git a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
index 0834b0313453..4126017c6fbd 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
@@ -4879,7 +4879,7 @@ bool AArch64InstructionSelector::selectIntrinsicWithSideEffects(
 break;
   case Intrinsic::ubsantrap:
 MIRBuilder.buildInstr(AArch64::BRK, {}, {})
-.addImm(I.getOperand(0).getImm() | ('U' << 8));
+.addImm(I.getOperand(1).getImm() | ('U' << 8));
 break;
   }
 

diff  --git a/llvm/test/CodeGen/AArch64/GlobalISel/ubsantrap.ll b/llvm/test/CodeGen/AArch64/GlobalISel/ubsantrap.ll
new file mode 100644
index ..2b72381be35e
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/ubsantrap.ll
@@ -0,0 +1,11 @@
+; RUN: llc -mtriple=arm64-apple-ios %s -o - -global-isel -global-isel-abort=1 | FileCheck %s
+
+define void @test_ubsantrap() {
+; CHECK-LABEL: test_ubsantrap
+; CHECK: brk #0x550c
+; CHECK-GISEL: brk #0x550c
+  call void @llvm.ubsantrap(i8 12)
+  ret void
+}
+
+declare void @llvm.ubsantrap(i8)


