================
@@ -2993,6 +2993,22 @@ let Predicates = [HasSVE_or_SME] in {
   defm : unpred_loadstore_bitcast<nxv2i64>;
   defm : unpred_loadstore_bitcast<nxv2f64>;
 
+  // Allow using LDR/STR to avoid the predicate dependence.
+  let Predicates = [IsLE, AllowMisalignedMemAccesses] in
+    foreach Ty = [ nxv16i8, nxv8i16, nxv4i32, nxv2i64, nxv8f16, nxv4f32, nxv2f64, nxv8bf16 ] in {
+      let AddedComplexity = 2 in {
+        def : Pat<(Ty (load (am_sve_indexed_s9 GPR64sp:$base, simm9:$offset))),
+                  (LDR_ZXI GPR64sp:$base, simm9:$offset)>;
+        def : Pat<(store Ty:$val, (am_sve_indexed_s9 GPR64sp:$base, simm9:$offset)),
+                  (STR_ZXI ZPR:$val, GPR64sp:$base, simm9:$offset)>;
+      }
----------------
rj-jesus wrote:

Ah, I see! Thanks very much, that makes sense! What if I absorb the current 
patterns into the loop so that we still have the unpredicated loads/stores 
grouped together, and add `HasSVE_or_SME` (from the parent definition) to the 
predicates? Or do you have a better suggestion?
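
For concreteness, a sketch of what that could look like (assuming `Predicates` simply takes the combined list, and that the surrounding patterns fold cleanly into the same `foreach` — names are taken from the diff above, not verified against the full file):

```tablegen
// Sketch only: restore HasSVE_or_SME explicitly, since this block would no
// longer sit inside the parent `let Predicates = [HasSVE_or_SME]` scope.
let Predicates = [HasSVE_or_SME, IsLE, AllowMisalignedMemAccesses] in
  foreach Ty = [ nxv16i8, nxv8i16, nxv4i32, nxv2i64, nxv8f16, nxv4f32,
                 nxv2f64, nxv8bf16 ] in {
    let AddedComplexity = 2 in {
      def : Pat<(Ty (load (am_sve_indexed_s9 GPR64sp:$base, simm9:$offset))),
                (LDR_ZXI GPR64sp:$base, simm9:$offset)>;
      def : Pat<(store Ty:$val, (am_sve_indexed_s9 GPR64sp:$base, simm9:$offset)),
                (STR_ZXI ZPR:$val, GPR64sp:$base, simm9:$offset)>;
    }
  }
```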

https://github.com/llvm/llvm-project/pull/127837
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits