Uros Bizjak wrote on Fri, Jan 17, 2025 at 15:05:
> Is there a reason to have operand 0 with "nonimmediate_operand"
> predicate? If you have to generate a register temporary and then
> unconditionally copy it to the output, it is better to use
> "register_operand" predicate and leave middle end to do the copy f
Hi,
For the shld/shrd_ndd_2 insns, the splitter outputs a wrong pattern that
mixes the clobber and the set in one parallel. Separate the set to dest
out of the parallel to fix it.
Bootstrapped & regtested on x86-64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLog:
PR target/118510
* config/i386/i386.md (
From: Lingling Kong
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_int_cfmovcc): Expand
to cfcmov pattern.
* config/i386/i386-opts.h (enum apx_features): New.
* config/i386/i386-protos.h (ix86_expand_int_cfmovcc): Define.
* config/i386/i386.cc (
From: Lingling Kong
Hi,
Thanks to Richard's review; the v5 patch contains the changes below:
1. Separate the maskload/maskstore emit out from noce_emit_cmove, add
a new function emit_mask_load_store in optabs.cc.
2. Follow the operand order of the maskload and maskstore optabs and take
cond as pre
h have no impact on the
test in [2]. We will keep monitoring those cmove issues.
Uros Bizjak wrote on Tue, Jan 7, 2025 at 17:34:
>
> On Tue, Jan 7, 2025 at 8:37 AM Hongyu Wang wrote:
> >
> > Hi,
> >
> > For later processors, the pipeline went deeper so the penalty for
>
Hi,
For later processors, the pipeline got deeper, so the penalty for an
untaken branch can be larger than before. Add a new parameter,
br_mispredict_scale, to describe the penalty, and adapt the
noce_max_ifcvt_seq_cost hook to allow a longer sequence to be
converted to cmove.
This improves cpu2017 544
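As a rough illustration (the function below is made up, not taken from the patch), this is the kind of branch that noce if-conversion can turn into a cmove once the allowed sequence cost is larger:

  /* Hypothetical example: with a higher sequence-cost budget, the branch
     below can be if-converted to a conditional move instead of a jump.  */
  int
  clamp_to_limit (int v, int limit)
  {
    if (v > limit)
      v = limit;
    return v;
  }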
Ping^2
Hongyu Wang wrote on Thu, Nov 21, 2024 at 11:04:
>
> Gently ping, it would be appreciate if anyone can help review this.
> We hope this patch will not miss GCC15 for complete support on APX.
>
> Kong, Lingling wrote on Thu, Nov 14, 2024 at 09:50:
>
> >
> > Hi,
> >
> >
Gentle ping; it would be appreciated if anyone could help review this.
We hope this patch will not miss GCC 15 for complete support of APX.
Kong, Lingling wrote on Thu, Nov 14, 2024 at 09:50:
>
> Hi,
>
> Many thanks to Richard for the suggestion that conditional load is like a
> scalar instance of maskload_opta
Jakub Jelinek wrote on Fri, Nov 15, 2024 at 16:20:
>
> On Fri, Nov 15, 2024 at 04:04:55PM +0800, Hongyu Wang wrote:
> > Following the discussion in pr116738, the insn for UNSPEC_IEEE_MAXMIN
> > actually matches the behavior of if_then_else, so remove the UNSPEC and
> > rewr
Hi,
Following the discussion in pr116738, the insn for UNSPEC_IEEE_MAXMIN
actually matches the behavior of if_then_else, so remove the UNSPEC and
rewrite the related pattern with if_then_else.
Bootstrapped & regtested on x86-64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386-
Hi,
cstorebf4 uses comparison_operator for the BFmode compare, which is
incorrect when directly using ix86_expand_setcc, as that does not canonicalize
the input comparison to correct the compare code by swapping operands.
Since the original code without AVX10.2 calls emit_store_flag_force, which
actu
Uros Bizjak wrote on Thu, Nov 7, 2024 at 15:22:
>
> On Thu, Nov 7, 2024 at 6:58 AM Hongyu Wang wrote:
> >
> > Hi,
> >
> > We recently supports cbranchbf4 with AVX10_2 native bf16 comi
> > instructions, so do similar to cstorebf4.
> >
> > Bootstrapped &
Hi,
We recently added support for cbranchbf4 with the AVX10_2 native bf16 comi
instructions, so do the same for cstorebf4.
Bootstrapped & regtested on x86_64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386.md (cstorebf4): Use vcomsbf16 under
TARGET_AVX10_2_256 and -fno-trapping-m
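A minimal store-flag case of the kind this expander handles (names are made up, and it assumes __bf16 support in the compiler):

  /* Hypothetical cstorebf4 case: the result of a BFmode compare is stored
     in an integer; with AVX10.2 and -fno-trapping-math the compare can use
     the native bf16 comi instruction.  */
  int
  bf16_less (__bf16 a, __bf16 b)
  {
    return a < b;
  }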
From: Levy Hsu
This patch enables the use of the VCOMSBF16 instruction from AVX10.2 for
efficient BF16 comparisons.
Bootstrapped & regtested on x86-64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_branch): Handle BFmode
when TARGET_AVX10_2
Andi Kleen wrote on Mon, Aug 5, 2024 at 06:31:
>
> > BTW, I noticed that in LLVM there is FP8 support for ARM currently
> > undergoing. I will have a look on it to see if everything is mature.
>
> There's even FP8 work for ARM work under way for gcc, see
> https://gcc.gnu.org/pipermail/gcc-patches/2024-August/6
Richard Biener wrote on Fri, Jul 26, 2024 at 19:45:
>
> On Fri, Jul 26, 2024 at 10:50 AM Hongyu Wang wrote:
> >
> > Hi,
> >
> > When introducing munroll-only-small-loops, the option was marked as
> > Target Save and added to -O2 default which makes attribute(optimize)
Hi,
When munroll-only-small-loops was introduced, the option was marked as
Target Save and added to the -O2 defaults, which makes attribute(optimize)
reset the target options and causes an error when the command line has -O1
and a function attribute has -O2 plus other target options. Mark this option
as Optimization to fix this.
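A minimal sketch of the failing scenario (file compiled with -O1 on the command line while one function requests -O2 via the attribute; the function itself is made up):

  /* Hypothetical reproducer: the optimize attribute restores/overrides the
     option state, which previously dropped munroll-only-small-loops and
     triggered the error described above.  */
  __attribute__((optimize("O2")))
  int
  sum (const int *a, int n)
  {
    int s = 0;
    for (int i = 0; i < n; i++)
      s += a[i];
    return s;
  }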
> Could you just git revert 6d0b7b69d143025f271d0041cfa29cf26e6c343b?
We can still deal with BFmode permutation the same way as HFmode, so
the change in ix86_vectorize_vec_perm_const can be preserved.
Hongtao Liu wrote on Mon, Jul 15, 2024 at 09:40:
>
> On Sat, Jul 13, 2024 at 3:44 PM Hongyu Wa
Hi,
According to the AVX512BF16 instruction spec, the conversion from float
to BF16 is not a simple truncation. It has special handling for
denormal/nan, and even for a normal float it adds an extra rounding bias
according to the least significant bit of the bf16 number. This means we cannot use the
vcvtne2ps2bf1
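A rough sketch of the rounding described above for normal values (NaN and denormal handling omitted; this is illustrative only, not the exact semantics of the hardware instruction):

  #include <stdint.h>
  #include <string.h>

  /* Round-to-nearest-even float -> bf16 for normal inputs: the bias added
     before truncation depends on the least significant bit of the result.  */
  static uint16_t
  float_to_bf16_rne (float f)
  {
    uint32_t bits;
    memcpy (&bits, &f, sizeof (bits));
    uint32_t lsb = (bits >> 16) & 1;
    bits += 0x7fff + lsb;
    return (uint16_t) (bits >> 16);
  }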
Hi,
For APX ccmp, the current infrastructure will always generate a cstore for
the ccmp flag user, like
  cmpe    %rcx, %r8
  ccmpnel %rax, %rbx
  seta    %dil
  add     %rcx, %r9
  add     %r9, %rdx
  testb   %dil, %dil
  je      .L2
For such case, the legacy
apx spec, a mismatched
pushp/popp pair does confuse the fast-forwarding logic and turns off
the PPX optimization. We just need to make sure every pushp for a
certain reg has a corresponding popp for that reg.
Richard Biener wrote on Tue, Jul 2, 2024 at 16:18:
>
> On Tue, Jul 2, 2024 at 5:24 AM Hongyu Wan
Hi,
According to the APX spec, the pushp/popp pairs should be matched,
otherwise the PPX hint cannot take effect and causes a performance loss.
In ix86_expand_epilogue, there are several optimizations that may
cause the epilogue to use mov to restore the regs. Check if PPX is applied
and prevent the usage o
Hi,
This patch adjusts several new feature checks in ix86_option_override_internal
that directly use TARGET_* instead of TARGET_*_P (opts->ix86_isa_flags),
which caused command-line options to override the target_attribute ISA flags.
Bootstrapped && regtested on x86_64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLo
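A self-contained sketch of the pattern being fixed; every name below is made up to mirror the distinction the mail describes, not actual GCC internals:

  /* TARGET_FOO checks global state, so it ignores the option set currently
     being overridden; TARGET_FOO_P consults the flags that were explicitly
     passed in, which is what the target attribute path needs.  */
  #define FLAG_FOO 0x1u
  #define TARGET_FOO (global_isa_flags & FLAG_FOO)
  #define TARGET_FOO_P(flags) (((flags) & FLAG_FOO) != 0)

  static unsigned global_isa_flags;

  struct options { unsigned isa_flags; };

  static int
  feature_enabled (struct options *opts)
  {
    /* Wrong: return TARGET_FOO;  (the command line wins over the attribute)  */
    return TARGET_FOO_P (opts->isa_flags);
  }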
Thanks, this is the patch I'm going to check in.
Richard Sandiford wrote on Thu, Jun 13, 2024 at 17:04:
>
> Hongyu Wang writes:
> > Hi,
> >
> > In cfgexpand, there is an optimization for branch which tests
> > targetm.gen_ccmp_first == NULL. However for target like x86-64,
Sorry for breaking the original logic, and thanks very much for your
patch! It does make the logic clearer on top of opts and
opts_set.
I think the function name could be ix86_unroll_flag_adjust instead
of ix86_override_options_after_change_1, like the previous two functions
which declare th
> Perhaps the constraint can be slightly optimized to avoid repeating
> (,) pairs.
>
> ",m,"
> "C ,,"
Yes, I will check it in with this change. Thanks!
Uros Bizjak wrote on Thu, Jun 13, 2024 at 14:06:
>
> On Thu, Jun 13, 2024 at 3:44 AM Hongyu Wang wrote:
> &
Hi,
In cfgexpand, there is an optimization for branches which tests
targetm.gen_ccmp_first == NULL. However, for a target like x86-64, the
hook is implemented, but that does not indicate that ccmp is enabled.
Add a new target hook TARGET_HAVE_CCMP and replace the middle-end
check for the existence of ge
Thanks for the advice; the updated patch is attached.
Bootstrapped/regtested on x86-64-pc-linux-gnu. Ok for trunk?
Uros Bizjak wrote on Wed, Jun 12, 2024 at 18:12:
>
> On Wed, Jun 12, 2024 at 12:00 PM Uros Bizjak wrote:
> >
> > On Wed, Jun 12, 2024 at 5:12 AM Hongyu Wang wro
Hi,
For CTEST, we don't have a conditional AND, so there's no optimization
opportunity that needs a new ctest pattern. Emit ctest when the ccmp
comparison is against const 0 to save bytes.
Bootstrapped & regtested under x86-64-pc-linux-gnu.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386.md (@ccmp)
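A hypothetical source-level case for the byte saving described above, where the second compare in the chain is against zero (names made up):

  /* The c != 0 compare can be emitted as a ctest of the register against
     itself instead of a ccmp with a $0 immediate.  */
  int
  both_hold (int a, int b, int c)
  {
    return a < b && c != 0;
  }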
ns first. The costs are not
+ meaningful for failed expansions. */
+
+ if (ret2 && (!ret || cost2 < cost1))
{
*prep_seq = prep_seq_2;
*gen_seq = gen_seq_2;
--
2.31.1
Richard Sandiford wrote on Wed, Jun 5, 2024 at 17:21:
>
> Hongyu Wang writes:
> > CC'd R
The current target apxf check does not specify which sub-features the assembler
supports, so the check with older binutils will fail at the assemble stage
for new APX features like NF, CCMP or CFCMOV. Adjust the assembler check
for the latest APX sub-features.
Bootstrapped & regtested on x86-64-pc-linux-gnu with bin
Gently ping :)
Hi Richard, is it OK to adopt the ccmp change? Or do you know who can
help review this part?
Thanks.
Hongyu Wang wrote on Thu, May 23, 2024 at 16:27:
>
> Gently ping for this :)
> Hi Richard, Is it OK to adopt the ccmp change? Or did you know who can
> help to review this pa
Gently ping for this :)
Hi Richard, is it OK to adopt the ccmp change? Or do you know who can
help review this part?
Thanks.
Hongyu Wang wrote on Wed, May 15, 2024 at 16:25:
>
> CC'd Richard for ccmp part as previously it is added only for aarch64.
> The original logic will not interr
Richard Biener wrote on Thu, May 16, 2024 at 15:05:
>
> On Thu, May 16, 2024 at 8:25 AM Hongyu Wang wrote:
> >
> > Hi,
> >
> > In ix86_override_options_after_change, calls to ix86_default_align
> > and ix86_recompute_optlev_based_flags will cause mism
Hi,
In ix86_override_options_after_change, calls to ix86_default_align
and ix86_recompute_optlev_based_flags will cause mismatched target
opt_set when doing cl_optimization_restore. Move them back to
ix86_option_override_internal to solve the issue.
Bootstrapped & regtested on x86_64-pc-linux-gnu
t cmp supports but ccmp does not,
so ret/ret2 will both be valid when comparing cost.
Thanks in advance.
Hongyu Wang wrote on Wed, May 15, 2024 at 16:22:
>
> For general ccmp scenario, the tree sequence is like
>
> _1 = (a < b)
> _2 = (c < d)
> _3 = _1 & _2
>
> current ccmp expandin
The APX CCMP feature implements a conditional compare which executes the
compare when EFLAGS matches a certain condition.
CCMP introduces a default flags value (dfv); when the conditional compare does
not execute, it directly sets the flags according to dfv.
The instruction looks like
ccmpeq {dfv=sf,of,cf,zf}
For the general ccmp scenario, the tree sequence is like
_1 = (a < b)
_2 = (c < d)
_3 = _1 & _2
The current ccmp expansion will try to swap the compare order of _1 and _2,
compare cost/cost2 between doing compare _1 or _2 first, then return the
sequence with the lower cost.
For x86 ccmp, we don't support FP com
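A hypothetical source function that produces the GIMPLE shape quoted above, which the ccmp expander then turns into a compare plus conditional compare:

  /* Both conditions feed one AND, matching _1 = a < b; _2 = c < d;
     _3 = _1 & _2 from the description above.  */
  int
  both_less (int a, int b, int c, int d)
  {
    return a < b && c < d;
  }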
The ccmp insn itself doesn't support an fp compare, but x86 has the fp comi
insn that changes EFLAGS, which can be the scc input to ccmp. Allow
scalar fp compares in ix86_gen_ccmp_first except ORDERED/UNORDERED
compares, which cannot be represented in ccmp.
gcc/ChangeLog:
* config/i386/i386-expand.cc
html
Hongyu Wang (3):
[APX CCMP] Support APX CCMP
[APX CCMP] Adjust strategy for selecting ccmp candidates
[APX CCMP] Support ccmp for float compare
gcc/ccmp.cc                    | 12 +-
gcc/config/i386/i386-expand.cc | 164 +
gcc/config/i386/
The latest APX spec announced the removal of the SHA/KEYLOCKER evex promotion [1],
which means the SHA/KEYLOCKER insns do not support EGPR when APX is
enabled. Update the corresponding constraints to their EGPR-disabled
counterparts.
Bootstrapped and regtested on x86-64-pc-linux-gnu.
Ok for trunk?
[1].htt
Thanks for fixing this! I didn't notice that the pointer conversion could
cause this issue...
Would it be possible to use a local array like
char a[64] = (char *)p
__asm__ volatile ("ldtilecfg\t%X0" :: "m" (a));
If not, for the two patterns we can use "m" instead of "jm" as APX
supports EGPR extension for
I'm going to check this in if there are no objections.
Hongyu Wang wrote on Tue, Jan 9, 2024 at 15:14:
>
> Hi,
>
> This patch adds missing description for inline asm behavior and related
> compiler switch for APX.
>
> Ok for gcc-wwwdocs?
>
> ---
> htdocs/gcc-14/changes.html | 6 +++
Thanks, this is the patch I'm going to check in.
Hongtao Liu wrote on Wed, Jan 10, 2024 at 16:02:
>
> On Tue, Jan 9, 2024 at 3:09 PM Hongyu Wang wrote:
> >
> > Hi,
> >
> > For APX, the inline asm behavior was not mentioned in any document
> > before. Add description
Hi,
This patch adds the missing description of the inline asm behavior and the
related compiler switch for APX.
Ok for gcc-wwwdocs?
---
htdocs/gcc-14/changes.html | 6 ++
1 file changed, 6 insertions(+)
diff --git a/htdocs/gcc-14/changes.html b/htdocs/gcc-14/changes.html
index e3a68998..73a90d30 1006
Hi,
For APX, the inline asm behavior was not mentioned in any document
before. Add description for it.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386.opt: Adjust document.
* doc/invoke.texi: Add description for
-mapx-inline-asm-use-gpr32.
---
gcc/config/i386/i386.opt |
Hi,
The supported sub-features for APX were missing from the option documentation
and the target attribute section. Add the missing ones.
Ok for trunk?
gcc/ChangeLog:
* config/i386/i386.opt: Add supported sub-features.
* doc/extend.texi: Add description for target attribute.
---
gcc/config/i
Hi,
As Coudert pointed out, this test fails on darwin as it does not
support _Decimal64, so require dfp for it.
Pushed as an obvious fix.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr112943.c: Require dfp.
---
gcc/testsuite/gcc.target/i386/pr112943.c | 2 +-
1 file changed, 1 insertion(+)
Hi,
Currently move_max follows the tuning feature first, but ideally it
should sync with prefer-vector-width when that is explicitly set, to keep
vector moves and operations at the same vector size.
Bootstrapped/regtested on x86-64-pc-linux-gnu{-m32,}
OK for trunk?
gcc/ChangeLog:
PR target/11
> > +__int128 u128_2 = (9223372036854775808 << 4) * foo0_u8_0; /* {
> > dg-warning "integer constant is so large that it is unsigned" "so large" }
> > */
>
> Just you can use (9223372036854775807LL + (__int128) 1) instead of
> 9223372036854775808
> to avoid the warning.
> The testcase will I
Hi,
The ashl/lshr/ashr expanders call ix86_expand_binary_operator, and
they can be called from some post-reload splits, so TARGET_APX_NDD is
required for these calls to avoid forcing a load to memory at the
post-reload stage.
Bootstrapped/regtested on x86-64-pc-linux-gnu{-m32,}
Ok for master?
gcc/Cha
gcc/ChangeLog:
* config/i386/i386.md (*3_1): Extend with a new
alternative to support NDD for SI/DI rotate, and adjust output
template.
(*si3_1_zext): Likewise.
(*3_1): Likewise for QI/HI modes.
(rcrsi2): Likewise, and use nonimmediate_operand for op
TImode shifts are split by splitter functions, which assume
operands[0] and operands[1] to be the same. For the NDD alternative the
assumption may not hold, so add split functions for NDD to emit the NDD
form instructions, and omit the handling of the !64bit target split.
Although the N
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_fixup_binary_operands_no_copy):
Add use_ndd parameter and parse it.
* config/i386/i386-protos.h (ix86_fixup_binary_operands_no_copy):
Change define.
* config/i386/i386.md (sub3): Add new
Similar to LSHIFT, rshift does not need to omit $1 for the NDD form.
gcc/ChangeLog:
* config/i386/i386.md (ashr3_cvt): Extend with new
alternatives to support NDD, and adjust output templates.
(*ashr3_1): Likewise for SI/DI mode.
(*lshr3_1): Likewise.
(*si3_1_zex
For left shift, there is an optimization, TARGET_DOUBLE_WITH_ADD, where a shl
by 1 can be optimized to an add. As the NDD form of add requires the src operand to
be a register, since NDD cannot take two memory sources, we currently just keep
using the NDD form shift instead of add.
The optimization TARGET_SHIFT1 will try to remo
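An illustration of the TARGET_DOUBLE_WITH_ADD case mentioned above (function name made up):

  /* A shift left by one may be emitted as "add x, x" rather than
     "shl $1, x" on targets where the add form is cheaper.  */
  unsigned
  double_it (unsigned x)
  {
    return x << 1;
  }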
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386.md: (addsi_1_zext): Add new alternatives for
NDD and adjust output templates.
(*add_2): Likewise.
(*addsi_2_zext): Likewise.
(*add_3): Likewise.
(*addsi_3_zext): Likewise.
(*adddi_4): Li
For the shld/shrd insns, the old pattern uses match_dup 0 as its shift src and uses
+r*m as its constraint. To support NDD we added new define_insns to handle the NDD
form pattern with an extra input and the dest operand fixed in a register.
gcc/ChangeLog:
* config/i386/i386.md (x86_64_shld_ndd): New
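A hypothetical double-word shift that exercises shld on x86-64 (constant count chosen for simplicity):

  /* Shifting a 128-bit value left pulls bits from the low half into the
     high half, which the backend implements with shld plus a plain shift.  */
  unsigned __int128
  shift_left_3 (unsigned __int128 x)
  {
    return x << 3;
  }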
gcc/ChangeLog:
* config/i386/i386.md (*movcc_noc): Extend with new constraints
to support NDD.
(*movsicc_noc_zext): Likewise.
(*movsicc_noc_zext_1): Likewise.
(*movqicc_noc): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/i386/apx-ndd-cmov.c: New
From: Kong Lingling
For the NDD form AND insn, there are three splitter fixes after extending the legacy
patterns.
1. APX NDD does not support high QImode registers like ah, bh, ch, dh, so
some optimization splitters that generate a highpart zero_extract for QImode
need to be prohibited under NDD pat
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_unary_operator): Add use_ndd
parameter and adjust for NDD.
* config/i386/i386-protos.h: Add use_ndd parameter for
ix86_unary_operator_ok and ix86_expand_unary_operator.
* config/i
From: Kong Lingling
Similar to *add3_doubleword, operands[1] may not equal operands[0], so an
extra move and an earlyclobber are required.
gcc/ChangeLog:
* config/i386/i386.md (*sub3_doubleword): Add new alternative for
NDD, adopt '&' modifier to NDD dest and emit move when operands
From: Kong Lingling
Legacy adc patterns are commonly adopted for TImode add. When extending TImode
add to the NDD version, operands[0] and operands[1] can be different, so an extra move
should be emitted if those patterns have an optimization when adding const0_rtx.
For the TImode insn, there could be register
From: Kong Lingling
Similar to the AND insn, two splitters need to be adjusted to prevent
misoptimization for NDD OR/XOR.
Also adjust *one_cmplsi2_2_zext and its corresponding splitter, which will
generate the xor insn.
gcc/ChangeLog:
* config/i386/i386.md (3): Add new alternative for NDD
NDD uses the evex prefix, so when a segment prefix is also applied, the instruction
could exceed its 15-byte limit, especially when adding immediates. This could happen
when the "e" constraint accepts any UNSPEC_TPOFF/UNSPEC_NTPOFF constant and it
adds the offset to the segment register, which will be encoded usi
From: Kong Lingling
APX NDD provides an extra destination register operand for several GPR-related
legacy insns, so a new alternative can be adopted for operand 1
with the "r" constraint.
This first patch supports NDD for the add instruction, and keeps using lea
when all operands are registers, since lea
From: Kong Lingling
*one_cmplsi2_2_zext will be split to xor, so its NDD form is
added together with the xor NDD support.
gcc/ChangeLog:
* config/i386/i386.md (one_cmpl2): Add new constraints for NDD
and adjust output template.
(*one_cmpl2_1): Likewise.
) == ISA_APX_NDD instead of
checking alternative at asm output stage.
Bootstrapped & regtested on x86_64-pc-linux-gnu{-m32,} and sde.
Ok for master?
Hongyu Wang (7):
[APX NDD] Disable seg_prefixed memory usage for NDD add
[APX NDD] Support APX NDD for left shift insns
[APX NDD] Support
Uros Bizjak wrote on Tue, Dec 5, 2023 at 18:46:
>
> On Tue, Dec 5, 2023 at 3:29 AM Hongyu Wang wrote:
> >
> > Under APX NDD, previous TImode allocation will have issue that it was
> > originally allocated using continuous pair, like rax:rdi, rdi:rdx.
> >
> > This will cau
From: Kong Lingling
For the NDD form AND insn, there are three splitter fixes after extending the legacy
patterns.
1. APX NDD does not support high QImode registers like ah, bh, ch, dh, so
some optimization splitters that generate a highpart zero_extract for QImode
need to be prohibited under NDD pat
TImode shifts are split by splitter functions, which assume
operands[0] and operands[1] to be the same. For the NDD alternative the
assumption may not hold, so add split functions for NDD to emit the NDD
form instructions, and omit the handling of the !64bit target split.
Although the N
From: Kong Lingling
Similar to the AND insn, two splitters need to be adjusted to prevent
misoptimization for NDD OR/XOR.
Also adjust *one_cmplsi2_2_zext and its corresponding splitter, which will
generate the xor insn.
gcc/ChangeLog:
* config/i386/i386.md (3): Add new alternative for NDD
From: Kong Lingling
APX NDD provides an extra destination register operand for several GPR-related
legacy insns, so a new alternative can be adopted for operand 1
with the "r" constraint.
This first patch supports NDD for the add instruction, and keeps using lea
when all operands are registers, since lea
From: Kong Lingling
Similar to *add3_doubleword, operands[1] may not equal operands[0], so an
extra move is required.
gcc/ChangeLog:
* config/i386/i386.md (*sub3_doubleword): Add new alternative for
NDD, and emit move when operands[0] not equal to operands[1].
(*sub3_doub
From: Kong Lingling
Legacy adc patterns are commonly adopted for TImode add. When extending TImode
add to the NDD version, operands[0] and operands[1] can be different, so an extra move
should be emitted if those patterns have an optimization when adding const0_rtx.
NDD instructions will automatically zero
gcc/ChangeLog:
* config/i386/i386.md (*3_1): Extend with a new
alternative to support NDD for SI/DI rotate, and adjust output
template.
(*si3_1_zext): Likewise.
(*3_1): Likewise for QI/HI modes.
(rcrsi2): Likewise, and use nonimmediate_operand for op
For left shift, there is an optimization, TARGET_DOUBLE_WITH_ADD, where a shl
by 1 can be optimized to an add. As the NDD form of add requires the src operand to
be a register, since NDD cannot take two memory sources, we currently just keep
using the NDD form shift instead of add.
The optimization TARGET_SHIFT1 will try to remo
For the shld/shrd insns, the old pattern uses match_dup 0 as its shift src and uses
+r*m as its constraint. To support NDD we added new define_insns to handle the NDD
form pattern with an extra input and the dest operand fixed in a register.
gcc/ChangeLog:
* config/i386/i386.md (x86_64_shld_ndd): New
Similar to LSHIFT, rshift does not need to omit $1 for the NDD form.
gcc/ChangeLog:
* config/i386/i386.md (ashr3_cvt): Extend with new
alternatives to support NDD, and adjust output templates.
(*ashr3_1): Likewise for SI/DI mode.
(*lshr3_1): Likewise.
(*si3_1_zex
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_fixup_binary_operands_no_copy):
Add use_ndd parameter and parse it.
* config/i386/i386-protos.h (ix86_fixup_binary_operands_no_copy):
Change define.
* config/i386/i386.md (sub3): Add new
gcc/ChangeLog:
* config/i386/i386.md (*movcc_noc): Extend with new constraints
to support NDD.
(*movsicc_noc_zext): Likewise.
(*movsicc_noc_zext_1): Likewise.
(*movqicc_noc): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/i386/apx-ndd-cmov.c: New
NDD uses the evex prefix, so when a segment prefix is also applied, the instruction
could exceed its 15-byte limit, especially when adding immediates. This could happen
when the "e" constraint accepts any UNSPEC_TPOFF/UNSPEC_NTPOFF constant and it
adds the offset to the segment register, which will be encoded usi
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386.md: (addsi_1_zext): Add new alternatives for
NDD and adjust output templates.
(*add_2): Likewise.
(*addsi_2_zext): Likewise.
(*add_3): Likewise.
(*addsi_3_zext): Likewise.
(*adddi_4): Li
Under APX NDD, the previous TImode allocation has an issue: it was
originally allocated using a continuous pair, like rax:rdi, rdi:rdx.
This causes issues for all TImode NDD patterns. For NDD we will not
assume that arithmetic operations like add have a dependency between dest
and src1, then writ
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_unary_operator): Add use_ndd
parameter and adjust for NDD.
* config/i386/i386-protos.h: Add use_ndd parameter for
ix86_unary_operator_ok and ix86_expand_unary_operator.
* config/i
linux-gnu{-m32,} and sde.
OK for trunk?
Hongyu Wang (8):
[APX NDD] Restrict TImode register usage when NDD enabled
[APX NDD] Disable seg_prefixed memory usage for NDD add
[APX NDD] Support APX NDD for left shift insns
[APX NDD] Support APX NDD for right shift insns
[APX NDD] Support APX ND
From: Kong Lingling
*one_cmplsi2_2_zext will be split to xor, so its NDD form will be
added together with the xor NDD support.
gcc/ChangeLog:
* config/i386/i386.md (one_cmpl2): Add new constraints for NDD
and adjust output template.
(*one_cmpl2_1): Likewise.
Hi,
On Linux x86-64, -fomit-frame-pointer is enabled by default, so the
push2pop2 tests' CFI scans are based on it. On other targets with
-fno-omit-frame-pointer the CFI scan will be wrong, as the frame pointer
is pushed first. Add -fomit-frame-pointer to these tests that rely on
CFI scanning.
OK
Hi,
The push2/pop2 operand order does not match the binutils implementation
for AT&T syntax, which first pushes operands[2] and then operands[1].
Correct it by reversing the operand order for AT&T syntax.
Bootstrapped/regtested on x86-64-pc-linux-gnu{-m32,}
Ok for master?
gcc/ChangeLog:
* co
Thanks for the suggestion.
Updated the patch with just one new UNSPEC and removed the CFA handling.
Hongtao Liu wrote on Mon, Nov 20, 2023 at 14:46:
>
> On Fri, Nov 17, 2023 at 3:26 PM Hongyu Wang wrote:
> >
> > Intel APX PPX feature has been released in [1].
> >
> > PPX stands for Push-Pop Accelera
The Intel APX PPX feature has been released in [1].
PPX stands for Push-Pop Acceleration. PUSH/PUSH2 and their corresponding POPs
can be marked with a 1-bit hint to indicate that the POP reads the
value written by the PUSH from the stack. The processor tracks these marked
instructions internally and fast
Similar to LSHIFT, rshift should also emit $1 for NDD form with CX_REG as
operands[1].
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_can_use_ndd_p): Add LSHIFTRT
and RSHIFTRT.
* config/i386/i386.md (ashr3_cvt): Extend with new
alternatives to support NDD, and a
For the shld/shrd insns, the old pattern uses match_dup 0 as its shift src and uses
+r*m as its constraint. To support NDD we added new define_insns to handle the NDD
form pattern with an extra input and the dest operand fixed in a register.
gcc/ChangeLog:
* config/i386/i386.md (x86_64_shld_ndd): New
From: Kong Lingling
Similar to the AND insn, two splitters need to be adjusted to prevent
misoptimization for NDD OR/XOR.
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_can_use_ndd_p): Add IOR/XOR
support.
* config/i386/i386.md (3): Add NDD alternative and adjust
From: Kong Lingling
For the NDD form AND insn, there are three splitter fixes after extending the legacy
patterns.
1. APX NDD does not support high QImode registers like ah, bh, ch, dh, so
some optimization splitters that generate a highpart zero_extract for QImode
need to be prohibited under NDD pat
gcc/ChangeLog:
* config/i386/i386.md (*movcc_noc): Extend with new constraints
to support NDD.
(*movsicc_noc_zext): Likewise.
(*movsicc_noc_zext_1): Likewise.
(*movqicc_noc): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/i386/apx-ndd-cmov.c: New
From: Kong Lingling
Similar to *add3_doubleword, operands[1] may not equal operands[0], so an
extra move is required.
gcc/ChangeLog:
* config/i386/i386.md (*sub3_doubleword): Add ndd constraints, and
emit move when operands[0] not equal to operands[1].
(*sub3_doubleword_z
From: Kong Lingling
Legacy adc patterns are commonly adopted for TImode add. When extending TImode
add to the NDD version, operands[0] and operands[1] can be different, so an extra move
should be emitted if those patterns have an optimization when adding const0_rtx.
gcc/ChangeLog:
* config/i386/i3
For left shift, there is an optimization, TARGET_DOUBLE_WITH_ADD, where a shl
by 1 can be optimized to an add. As the NDD form of add requires the src operand to
be a register, since NDD cannot take two memory sources, we currently just keep
using the NDD form shift instead of add.
The optimization TARGET_SHIFT1 will try to remo
From: Kong Lingling
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_can_use_ndd_p): Add NOT
support.
* config/i386/i386.md (one_cmpl2): Add NDD constraints, adjust
output template.
(*one_cmpl2_1): Likewise.
(*one_cmplqi2_1): Likewise.
(*o