On 6/19/25 9:17 AM, Kito Cheng wrote:
I guess we should implement auto-generated documentation for -mcpu and
-mtune, like what we do for -march.
Yeah, probably. The more of that stuff that's auto-generated the better.
It's easily forgotten.
jeff
The selftests had a bunch of memory leaks that showed up in make
selftest-valgrind as a result of not using auto_vec or otherwise
explicitly calling release. Replacing vec with auto_vec makes the
problem go away. The auto_vec_vec helper is made constructible from a
vec so that objects returned from fu
Using auto_vec ensures that the buffer is always freed when the
function returns.
PR gcov-profile/120634
gcc/ChangeLog:
* prime-paths.cc (trie::paths): Use auto_vec.
---
gcc/prime-paths.cc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gcc/prime-paths.c
On 6/18/25 3:07 AM, Sosutha Sethuramapandian wrote:
longlong.h for RISCV should define count_leading_zeros,
count_trailing_zeros and COUNT_LEADING_ZEROS_0 when ZBB is enabled.
The following patch fixes the bug reported in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110181
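A sketch of what such longlong.h definitions could look like (the guard
macro and the use of W_TYPE_SIZE are assumptions, not the patch itself):

  #if defined (__riscv) && defined (__riscv_zbb)
  # define count_leading_zeros(count, x)  ((count) = __builtin_clzl (x))
  # define count_trailing_zeros(count, x) ((count) = __builtin_ctzl (x))
  # define COUNT_LEADING_ZEROS_0 W_TYPE_SIZE
  #endif

COUNT_LEADING_ZEROS_0 is the value count_leading_zeros is assumed to
produce for a zero argument; with ZBB the clz instruction returns XLEN,
i.e. W_TYPE_SIZE.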
My tester has been flagging these regressions since the default cost
model was committed, along with several others:
unix/-march=rv64gc_zba_zbb_zbs_zicond: gcc:
gcc.target/riscv/rvv/vsetvl/avl_single-37.c -O2 scan-assembler-times
\\.L[0-9]+\\:\\s+addi\\s+\\s*[a-x0-9]+,\\s*[a-x0-9]+,\\s*[0-
Hi all,
CLDEMOTE is not enabled on client CPUs according to the SDM; the SDM only
mentions it being enabled on Xeon and Atom servers, not clients. Remove it
from clients starting with Alder Lake (where it was introduced).
I will also backport this patch to GCC 12/13/14/15 with some tweaks to the
texi change.
Ok for trunk?
Thx,
Ha
Yes, I would like to do it.
Dongyan Chen
On 2025/6/19 23:17, Kito Cheng wrote:
I guess we should implement auto-generated documentation for -mcpu and
-mtune, like what we do for -march.
Dongyan, do you have interest to implement that? :)
On Thu, Jun 19, 2025 at 10:02 PM Jeff Law wrote:
We
Note: This patch is currently in discussion on llvm-project's side and
may have minor tweaks. Once that's done, the patch will be redone by
applying upstream changes.
Wern
On 13/6/25 12:40 pm, Wern Lim wrote:
Given a partially misaligned memory read for a large number of bytes
(e.g., we allocat
On 6/19/25 9:15 AM, Paul-Antoine Arras wrote:
*** Context ***
The Haifa scheduler is able to recognise some cases where a dependence D
between two instructions -- a producer P and a consumer C -- can be
broken. In particular, if C contains a memory reference M and P is an
increment, then M
Hi,
These patches fixes a memory leak in the prime paths, and some in the
selftests that show up in make selftest-valgrind. After applying these
patches on my x86-64-linux-gnu system and make selftest-valgrind:
-fself-test: 7665942 pass(es) in 8.943705 seconds
==802130==
==802130== HEAP SUMMARY:
On Fri, 13 Jun 2025, Marek Polacek wrote:
> doesn't need any changes, I think. Another is "modified existing functions
> to preserve the const-ness of the type placed into the function"; I don't
> know what this is talking about.
It's a duplicate of the entry "added qualifier preserving macros for
b
On 6/19/25 7:43 AM, Alexandre Oliva wrote:
If the frame size grows to nonzero, arm_frame_pointer_required may
flip to true under -fstack-clash-protection -fnon-call-exceptions, and
that may disable the fp2sp elimination part-way through lra.
If pseudos had got assigned to the frame pointer reg
Ping patch. This is the explanation of the changes in the set of 5 patches to
change the internal names of the power5, power6, etc. switches from the
instruction that adds the new feature to the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch rema
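For illustration, the kind of change involved (a hypothetical fragment, not
taken from the actual patches):

  /* Before the rename, code testing for the Power5 feature set looked like
       if (TARGET_POPCNTB)
         ...
     and after the rename it reads
       if (TARGET_POWER5)
         ...
     Only the internal macro name changes; the user-visible option is
     unaffected.  */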
Ping patch. This is patch 1 of 5 that changes the internal names of the
power5, power6, etc. switches from the instruction that adds the new feature to
the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch remains the same, just the name used intern
Ping patch. This is patch #3 in the set of 5 patches to change the internal
names of the power5, power6, etc. switches from the instruction that adds the
new feature to the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch remains the same, just the
Ping patch. This is patch #4 in the set of 5 patches to change the internal
names of the power5, power6, etc. switches from the instruction that adds the
new feature to the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch remains the same, just the
Ping patch. This is patch #2 in the set of 5 patches to change the internal
names of the power5, power6, etc. switches from the instruction that adds the
new feature to the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch remains the same, just the
Ping patch. This is patch #5 in the set of 5 patches to change the internal
names of the power5, power6, etc. switches from the instruction that adds the
new feature to the power processor level:
I.e. change:
TARGET_POPCNTB to TARGET_POWER5
The external switch remains the same, just the
On 2025-06-16 18:08, Qing Zhao wrote:
gcc/ChangeLog:
* tree-object-size.cc (access_with_size_object_size): Handle pointers
with counted_by.
This should probably just say "Update comment for .ACCESS_WITH_SIZE.".
(collect_object_sizes_for): Likewise.
gcc/testsuite/Chan
On 2025-06-19 12:07, Siddhesh Poyarekar wrote:
On 2025-06-16 18:08, Qing Zhao wrote:
gcc/ChangeLog:
* tree-object-size.cc (access_with_size_object_size): Handle pointers
with counted_by.
This should probably just say "Update comment for .ACCESS_WITH_SIZE.".
(collect_object_sizes
On Thu, Jun 19, 2025 at 9:37 AM Uros Bizjak wrote:
>
> On Wed, Jun 18, 2025 at 4:12 PM Cui, Lili wrote:
> >
> >
> >
> > > -Original Message-
> > > From: Uros Bizjak
> > > Sent: Wednesday, June 18, 2025 9:22 PM
> > > To: Cui, Lili
> > > Cc: gcc-patches@gcc.gnu.org; Liu, Hongtao ;
> > > h
This patch aligns the unwinder with the recent changes in Linux SME signal
behaviour.
Regression-tested on AArch64; no regressions have been found.
OK for trunk?
base commit: 20f59301851
---
Richard Sandiford (1):
aarch64: Adapt unwinder to linux's SME signal behaviour
gcc/doc/sourcebuild.t
LGTM
On Thu, Jun 19, 2025 at 6:27 PM wrote:
>
> From: Pan Li
>
> This patch would like to introduce the combine of vec_dup + vminu.vv
> into vminu.vx based on the cost value of GR2VR. The late-combine will
> take place if the cost of GR2VR is zero, or reject the combine if it is
> non-zero, like 1, 2, 15 in
On Thu, Jun 19, 2025 at 02:14:26PM +0100, Stafford Horne wrote:
> When working on PR120587 I found that the ce1 pass was not able to
> properly optimize branches on OpenRISC. This is because of the early
> splitting of "compare" and "branch" instructions during the expand pass.
>
> Convert the cb
Thanks, I found that it can also be solved by changing the default mtune
in the "configure" file of riscv-gnu-toolchain, and I will prepare a PR for
the riscv-gnu-toolchain repo.
Dongyan Chen
On 2025/6/19 15:55, Kito Cheng wrote:
Thanks, pushed with one minor change.
Robin has mentioned that maybe we
Changes from v1:
1. Split into smaller patches with more comments.
2. Do not change gimple-lower-bitint.cc. Optimizations
that rely on large/huge _BitInts being extended in memory
will be posted in another series. Also, the bitint_extend
state is not cached outside of gimple-lower-biti
abi_limb_mode and limb_mode were asserted to be the same when
the target has different endianness for limbs in _BitInts
and words in objects. Otherwise, this assertion also held when the
TYPE_PRECISION of the _BitInt type being laid out is less than or equal
to the precision of abi_limb_mode.
But in
For targets that have extended bitints, we need to ensure these
are extended before writing to the memory, including via atomic
exchange.
gcc/c-family/ChangeLog:
* c-common.cc (resolve_overloaded_atomic_exchange): Extend
_BitInts before atomic exchange if needed.
(resolve_
For targets that set the "extended" flag in TARGET_C_BITINT_TYPE_INFO,
we assume small _BitInts to be internally extended after arithmetic
operations. In this case, an extra extension during RTL expansion
can be avoided.
gcc/ChangeLog:
* expr.cc (expand_expr_real_1): Do not call
r
In the LoongArch psABI, large _BitInt(N) (N > 64) objects are only
extended to fill the highest 8-byte chunk that contains any used bit,
but the size of such a large _BitInt type is a multiple of its
16-byte alignment. So there may be an entire unused 8-byte
chunk that is not filled by extension, an
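A hypothetical worked example of that rule (not taken from the patch):

  /* _BitInt(129): 16-byte alignment, 32-byte size (a multiple of the
     alignment).  Used bits are 0..128, so extension only fills up through
     bit 191, the top of the highest 8-byte chunk containing a used bit.
     The final 8-byte chunk (bits 192..255) is never written by extension
     and its contents stay unspecified.  */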
On Thu, Jun 19, 2025 at 07:13:07PM +0800, Yang Yujie wrote:
> On Thu, Jun 19, 2025 at 12:32:57PM GMT, Jakub Jelinek wrote:
> > As mentioned in another mail, please follow what aarch64 is doing here (at
> > least unless you explain how it violates your psABI):
> > if (n <= 8)
> > info->limb_mo
Since MOVE_MAX defines the maximum number of bytes that an instruction
can move quickly between memory and registers, use it to get the widest
vector mode in vector loop when inlining memcpy and memset.
gcc/
PR target/120708
* config/i386/i386-expand.cc (ix86_expand_set_or_cpymem): Use
MOVE_MAX t
Bootstrap with COBOL included is currently broken for 32-bit-default Solaris
configurations. There are three issues:
gcc/cobol/lexio.cc: In static member function ‘static std::FILE*
cdftext::lex_open(const char*)’:
gcc/cobol/lexio.cc:1527:55: error: format ‘%d’ expects argument of type ‘int’,
b
On Thu, Jun 19, 2025 at 01:18:10PM GMT, Jakub Jelinek wrote:
> On Thu, Jun 19, 2025 at 07:13:07PM +0800, Yang Yujie wrote:
> > On Thu, Jun 19, 2025 at 12:32:57PM GMT, Jakub Jelinek wrote:
> > > As mentioned in another mail, please follow what aarch64 is doing here (at
> > > least unless you explain
If the frame size grows to nonzero, arm_frame_pointer_required may
flip to true under -fstack-clash-protection -fnon-call-exceptions, and
that may disable the fp2sp elimination part-way through lra.
If pseudos had got assigned to the frame pointer register before that,
they have to be spilled, a
Hi!
The following patch implements the constexpr structured bindings part of
the P2686R4 paper, so the [dcl.pre], [dcl.struct.bind], [dcl.constinit]
and first hunk in [dcl.constexpr] changes.
The paper doesn't have a feature test macro and the constexpr structured
binding part of it seems more-les
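A minimal example of what the paper makes valid (an assumed test shape, not
the patch's own testcase):

  constexpr int arr[2] = { 1, 2 };
  constexpr auto [a, b] = arr;       // constexpr structured binding (P2686R4)
  static_assert (a == 1 && b == 2);  // the bindings are usable in constant
                                     // expressions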
From: Richard Sandiford
SME uses a lazy save system to manage ZA. The idea is that,
if a function with ZA state wants to call a "normal" function,
it can leave its state in ZA and instead set up a lazy save buffer.
If, unexpectedly, that normal function contains a nested use of ZA,
that nested u
I guess we should implement auto-generated documentation for -mcpu and
-mtune, like what we do for -march.
Dongyan, do you have interest to implement that? :)
On Thu, Jun 19, 2025 at 10:02 PM Jeff Law wrote:
>
>
>
> On 6/19/25 1:55 AM, Kito Cheng wrote:
> > Thanks, pushed with one minor chan
On Thu, 19 Jun 2025 13:53:06 +0200
Jakub Jelinek wrote:
> On Thu, Jun 19, 2025 at 01:38:06PM +0200, Rainer Orth wrote:
> > --- a/gcc/cobol/genapi.cc
> > +++ b/gcc/cobol/genapi.cc
> > @@ -957,7 +957,7 @@ parser_compile_ecs( const std::vector > {
> > SHOW_PARSE_HEADER
> > char ach[64
*** Context ***
The Haifa scheduler is able to recognise some cases where a dependence D
between two instructions -- a producer P and a consumer C -- can be
broken. In particular, if C contains a memory reference M and P is an
increment, then M can be replaced with its incremented version M' w
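A hypothetical before/after to illustrate (register names and the offset
are made up):

  /* Before breaking the dependence:
       P:  r1 := r1 + 8        ; producer: increments the base register
       C:  r2 := MEM[r1]       ; consumer: must wait for P's result
     After the transformation the consumer reads through the original
     register with the increment folded into the address:
       C': r2 := MEM[r1 + 8]   ; M replaced by its incremented form M'
     so C' no longer has to wait for P and can be scheduled earlier.  */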
On Jun 19, 2025, Alexandre Oliva wrote:
> Or maybe the requirements for this testcase should be stated as
> arm_arch_v7? I'd have to add arm_arch_v7 to
> check_effective_target_arm_arch_FUNC_ok et al, if there aren't reasons
> why it's not there, but I'd be happy to do that, and use dg-add-optio
When working on PR120587 I found that the ce1 pass was not able to
properly optimize branches on OpenRISC. This is because of the early
splitting of "compare" and "branch" instructions during the expand pass.
Convert the cbranch* instructions from define_expand to
define_insn_and_split. This dal
After commit 2dcc6dbd8a0 ("emit-rtl: Use simplify_subreg_regno to
validate hardware subregs [PR119966]") the OpenRISC port is broken
again.
Add extend* instruction patterns for the SR_F pseudo registers to avoid
having to use the subreg conversions which no longer work.
gcc/ChangeLog:
P
This is a small series to fix If-Conversion on OpenRISC after the build
broke with the recent subreg changes.
Stafford Horne (2):
or1k: Implement *extendbisi* to fix ICE in convert_mode_scalar
[PR120587]
or1k: Improve If-Conversion by delaying cbranch splits
gcc/config/or1k/or1k.cc |
On 6/19/25 1:55 AM, Kito Cheng wrote:
Thanks, pushed with one minor change.
Robin has mentioned that maybe we could name it generic-in-order, but
I think this could be a follow-up patch if we want; I would like to
have -mtune=generic even though we added that, since clang/LLVM already
provided
The `far_branch` attribute only ever takes the values 0 or 1, so make it
a `no/yes` valued string attribute instead.
gcc/ChangeLog:
* config/aarch64/aarch64.md (far_branch): Replace 0/1 with
no/yes.
(aarch64_bcond): Handle rename.
(aarch64_cbz1): Likewise.
Add the `+cmpbr` option to enable the FEAT_CMPBR architectural
extension.
gcc/ChangeLog:
* config/aarch64/aarch64-option-extensions.def (cmpbr): New
option.
* config/aarch64/aarch64.h (TARGET_CMPBR): New macro.
* doc/invoke.texi (cmpbr): New option.
---
gcc/config
AArch64: CMPBR support
New changes in this series:
* Moved 55d981eb91a (adding `%j`/`%J` format specifiers) before
6cc06968320 (adding rules for generating CB instructions). Every
commit in the series should now produce a correct compiler.
* Reduced excessive diff context by not passing `--func
Extract the hardcoded values for the minimum PC-relative displacements
into named constants and document them.
gcc/ChangeLog:
* config/aarch64/aarch64.md (BRANCH_LEN_P_128MiB): New constant.
(BRANCH_LEN_N_128MiB): Likewise.
(BRANCH_LEN_P_1MiB): Likewise.
(BRANCH_LE
Add rules for lowering `cbranch4` to CBB/CBH/CB when
the CMPBR extension is enabled.
gcc/ChangeLog:
* config/aarch64/aarch64.md (BRANCH_LEN_P_1Kib): New constant.
(BRANCH_LEN_N_1Kib): Likewise.
(cbranch4): Emit CMPBR instructions if possible.
(cbranch4): New expand rul
Move the rules for CBZ/TBZ to be above the rules for
CBB/CBH/CB. We want them to have higher priority
because they can express larger displacements.
gcc/ChangeLog:
* config/aarch64/aarch64.md (aarch64_cbz1): Move
above rules for CBB/CBH/CB.
(*aarch64_tbz1): Likewise.
gcc/
Make the formatting of the RTL templates in the rules for branch
instructions more consistent with each other.
gcc/ChangeLog:
* config/aarch64/aarch64.md (cbranch4): Reformat.
(cbranchcc4): Likewise.
(condjump): Likewise.
(*compare_condjump): Likewise.
(aar
Commit the test file `cmpbr.c` before rules for generating the new
instructions are added, so that the changes in codegen are more obvious
in the next commit.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp: Add `cmpbr` to the list of extensions.
* gcc.target/aarch64/cmpbr.c: N
The rules for conditional branches were spread throughout `aarch64.md`.
Group them together so it is easier to understand how `cbranch4`
is lowered to RTL.
gcc/ChangeLog:
* config/aarch64/aarch64.md (condjump): Move.
(*compare_condjump): Likewise.
(aarch64_cb1): Likewise.
The CB family of instructions does not support using the CS or CC
condition codes; instead the synonyms HS and LO must be used. GCC has
traditionally used the CS and CC names. To work around this while
avoiding test churn, add new `j` and `J` format specifiers; they will be
used in the next commit
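A comment-only illustration of the assumed semantics (not taken from the
patch):

  /* Assumed behaviour: where the condition is CS, "%j<n>" prints the
     synonym "hs" instead of "cs" (and "%J<n>" prints the inverse, "lo"),
     so the CB patterns can keep using GCC's existing CS/CC condition
     representation unchanged.  */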
Give the `define_insn` rules used in lowering `cbranch4` to RTL
more descriptive and consistent names: from now on, each rule is named
after the AArch64 instruction that it generates. Also add comments to
document each rule.
gcc/ChangeLog:
* config/aarch64/aarch64.md (condjump): Rename to
On Wed, Jun 18, 2025 at 4:12 PM Cui, Lili wrote:
>
>
>
> > -Original Message-
> > From: Uros Bizjak
> > Sent: Wednesday, June 18, 2025 9:22 PM
> > To: Cui, Lili
> > Cc: gcc-patches@gcc.gnu.org; Liu, Hongtao ;
> > hongjiu...@intel.com
> > Subject: Re: [PATCH] x86: Fix shrink wrap separate
Thanks, pushed with one minor change.
Robin has mentioned that maybe we could name it generic-in-order, but
I think this could be a follow-up patch if we want; I would like to
have -mtune=generic even though we added that, since clang/LLVM already
provided -mtune=generic :)
> diff --git
> a/gcc/t
On 18/06/2025 at 23:22, Jerry D wrote:
On 6/18/25 2:02 PM, Mikael Morin wrote:
From: Mikael Morin
Regression-tested on x86_64-pc-linux-gnu.
OK for master?
Was there a PR for this? Or something you just ran into?
I'm not aware of any PR. I was trying to create a testcase exercising
t
On Thu, Jun 19, 2025 at 9:01 AM Hongtao Liu wrote:
>
> On Wed, Jun 18, 2025 at 6:38 PM H.J. Lu wrote:
> >
> > commit ef26c151c14a87177d46fd3d725e7f82e040e89f
> > Author: Roger Sayle
> > Date: Thu Dec 23 12:33:07 2021 +
> >
> > x86: PR target/103773: Fix wrong-code with -Oz from pop to
> In an internal application I noticed that the ipa-inliner is quite
> sensitive to AFDO counts and that seems to make the performance worse.
> Did you notice this? This was before some of your changes. I will try
> again.
The cases I looked into were a mixture of late inlining and ipa-cp cloning
be
On Thu, Jun 19, 2025 at 05:59:09PM +0800, Yang Yujie wrote:
> In LoongArch psABI, large _BitInt(N) (N > 64) objects are only
> extended to fill the highest 8-byte chunk that contains any used bit,
> but the size of such a large _BitInt type is a multiple of their
> 16-byte alignment. So there may
On Thu, Jun 19, 2025 at 05:59:10PM +0800, Yang Yujie wrote:
> --- a/gcc/expr.cc
> +++ b/gcc/expr.cc
> @@ -11268,6 +11268,10 @@ expand_expr_real_1 (tree exp, rtx target,
> machine_mode tmode,
>tree ssa_name = NULL_TREE;
>gimple *g;
>
> + type = TREE_TYPE (exp);
> + mode = TYPE_MODE (typ
From: Pan Li
There will be an ICE during the expand pass; the backtrace is similar to the one below.
during RTL pass: expand
red.c: In function 'main':
red.c:20:5: internal compiler error: in require, at machmode.h:323
20 | int main() {
| ^~~~
0x2e0b1d6 internal_error(char const*, ...)
../../../gcc/
Karl Meakin writes:
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index be5a97294dd..1d4ae73a963 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -944,16 +944,50 @@ static const char *
> svpattern_token (enum aarch64_svpatter
Looks good to me. Sorry for any inconvenience.
Cheers,
Roger
> -Original Message-
> From: Hongtao Liu
> Sent: 19 June 2025 08:01
> To: H.J. Lu
> Cc: GCC Patches ; Uros Bizjak ;
> Hongtao Liu ; Roger Sayle
>
> Subject: Re: [PATCH v4] x86: Enable *mov_(and|or) only for -Oz
>
> On Wed,
This patch adds support for C23's _BitInt for LoongArch.
From the LoongArch psABI[1]:
> _BitInt(N) objects are stored in little-endian order in memory
> and are signed by default.
>
> For N ≤ 64, a _BitInt(N) object has the same size and alignment
> of the smallest fundamental integral type tha
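Hypothetical examples of that rule (assuming the usual LP64 type sizes):

  /* _BitInt(7)  -> size/alignment of char  (1 byte)
     _BitInt(20) -> size/alignment of int   (4 bytes)
     _BitInt(50) -> size/alignment of long  (8 bytes)  */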
For targets that treat small _BitInts like the fundamental
integral types, we should allow their machine modes to be promoted
in the same way.
gcc/ChangeLog:
* explow.cc (promote_function_mode): Add a case for
small/medium _BitInts.
(promote_mode): Same.
---
gcc/explow.cc
libgcc/ChangeLog:
* config.host: Remove unused code. Include LoongArch-specific
tmake_files after the OS-specific ones.
---
libgcc/config.host | 31 ---
1 file changed, 12 insertions(+), 19 deletions(-)
diff --git a/libgcc/config.host b/libgcc/config.h
On Wed, Jun 18, 2025 at 09:41:25PM +0300, Dimitar Dimitrov wrote:
> On Wed, Jun 18, 2025 at 04:06:14PM +0100, Stafford Horne wrote:
> > On Sat, Jun 07, 2025 at 06:53:28PM +0300, Dimitar Dimitrov wrote:
> > > On Sat, Jun 07, 2025 at 11:38:46AM +0100, Stafford Horne wrote:
> > > > On Fri, Jun 06, 202
On Thu, Jun 19, 2025 at 05:59:06PM +0800, Yang Yujie wrote:
> For targets that have extended bitints, we need to ensure these
> are extended before writing to the memory, including via atomic
> exchange.
>
> gcc/c-family/ChangeLog:
>
> * c-common.cc (resolve_overloaded_atomic_exchange): Ext
From: Pan Li
Add asm dump checks and run tests for the vec_duplicate + vminu.vv
combine into vminu.vx, with GR2VR costs of 0, 2 and 15.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u16.c: Add asm check.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u32.c: Ditto.
From: Pan Li
This patch would like to combine vec_duplicate + vminu.vv into
vminu.vx, as in the example code below. The related pattern will depend
on the cost of vec_duplicate from GR2VR. Then the late-combine will
take action if the cost of GR2VR is zero, and reject the combination
if th
From: Pan Li
This patch would like to introduce the combine of vec_dup + vminu.vv
into vminu.vx based on the cost value of GR2VR. The late-combine will take
place if the cost of GR2VR is zero, or reject the combine if it is non-zero,
like 1, 2 and 15 in the tests. There will be two cases for the combine:
Case 0:
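A sketch of the kind of scalar loop involved (an assumed shape, not the
exact test in the patch):

  #include <stdint.h>

  void
  test_vminu_vx (uint32_t *out, const uint32_t *in, uint32_t s, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = in[i] < s ? in[i] : s;   /* unsigned min against a scalar */
  }

When this loop is vectorized, the scalar s would normally be broadcast with
vec_duplicate and fed to vminu.vv; with a GR2VR cost of zero, late-combine
can fold that into a single vminu.vx.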
On Thu, Jun 19, 2025 at 05:59:04PM +0800, Yang Yujie wrote:
> For targets that treat small _BitInts like the fundamental
> integral types, we should allow their machine modes to be promoted
> in the same way.
>
> gcc/ChangeLog:
>
> * explow.cc (promote_function_mode): Add a case for
>
On Thu, Jun 19, 2025 at 05:59:05PM +0800, Yang Yujie wrote:
> abi_limb_mode and limb_mode were asserted to be the same when
> the target has different endianness for limbs in _BitInts
> and words in objects. Otherwise, this assertion also held when the
> TYPE_PRECISION of _BitInt type being laid o
On 18/06/2025 at 16:51, Richard Biener wrote:
On Wed, Jun 18, 2025 at 11:23 AM Mikael Morin wrote:
From: Mikael Morin
Hello,
I'm proposing here an interpreter/simulator of the gimple IR.
It proved useful for me to debug complicated testcases, where
the misbehaviour is not obvious if you j
From: Pan Li
Add asm dump check tests for the vec_duplicate + vminu.vv combine into
vminu.vx, with GR2VR costs of 0, 1 and 2.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vx_vf/vx-4-u16.c: Add asm check
for vminu.vx combine.
* gcc.target/riscv/rvv/autovec/vx_vf/vx
On 14.06.25 at 12:45, Georg-Johann Lay wrote:
This patch introduces an ICE in lra-eliminations.cc:1200 for
an existing test case.
Please ignore this mail. I missed that
https://gcc.gnu.org/pipermail/gcc-patches/2025-June/685870.html
fixes the problem.
Johann
In $builddir/gcc:
$ make -k
On Thu, Jun 19, 2025 at 05:59:08PM +0800, Yang Yujie wrote:
> +/* Implement TARGET_C_BITINT_TYPE_INFO.
> + Return true if _BitInt(N) is supported and fill its details into *INFO.
> */
> +bool
> +loongarch_bitint_type_info (int n, struct bitint_info *info)
> +{
> + if (n <= 8)
> +info->limb
On 18/06/2025 at 23:50, Thomas Koenig wrote:
Hi Mikael,
Regression-tested on x86_64-pc-linux-gnu.
OK for master?
Just wondering... how does this relate to the recent fix of PR120483
by Andre? Is this also a regression? If so, maybe a backport would be
in order.
Best regards
Thoma
On Thu, Jun 19, 2025 at 12:32:57PM GMT, Jakub Jelinek wrote:
> As mentioned in another mail, please follow what aarch64 is doing here (at
> least unless you explain how it violates your psABI):
> if (n <= 8)
> info->limb_mode = QImode;
> else if (n <= 16)
> info->limb_mode = HImode;
>
On Thu, Jun 19, 2025 at 01:38:06PM +0200, Rainer Orth wrote:
> --- a/gcc/cobol/genapi.cc
> +++ b/gcc/cobol/genapi.cc
> @@ -957,7 +957,7 @@ parser_compile_ecs( const std::vector {
> SHOW_PARSE_HEADER
> char ach[64];
> -snprintf(ach, sizeof(ach), " Size is %ld; retval is %p",
> +
> On 19.06.2025 at 08:53, Jakub Jelinek wrote:
>
> On Thu, Jun 19, 2025 at 08:33:10AM +0200, Richard Biener wrote:
>> How does this interact with -mincoming-stack-boundary? Is this, and thus
>> when we need stack realignment, visible here? Do we know whether we need
>> to realign the stack
On Wed, Jun 18, 2025 at 6:38 PM H.J. Lu wrote:
>
> commit ef26c151c14a87177d46fd3d725e7f82e040e89f
> Author: Roger Sayle
> Date: Thu Dec 23 12:33:07 2021 +
>
> x86: PR target/103773: Fix wrong-code with -Oz from pop to memory.
>
> added "*mov_and" and extended "*mov_or" to transform
> "
Hi YunZe:
Generally I am open-minded about accepting vendor extensions; however, this
patch set really introduces too many patterns...
- NUM_INSN_CODES (defined in insn-codes.h) becomes 83625 from 48573. (+72%)
- Total lines of insn-emit-*.cc become 1749362 from 1055750. (+65%)
- Total lines of insn-recog
The attached patch does some cleanup of the memory allocation
description, which I mainly started because I wondered about some
details myself - especially about the pool_size feature.
It also includes the documentation about omp::allocator::* by Alex.
And, as I proposed back then (cf. below), it mov