Tamar Christina <tamar.christ...@arm.com> writes:
> Hi All,
>
> The patch series will adjust how zeros are created.  In principle the exact
> lane size a zero gets created on doesn't matter, but relying on a specific
> one makes the tests a bit fragile.
>
> This preparation patch will update the testsuite to accept multiple ways of
> creating vector zeros, matching both the current syntax and the one being
> transitioned to in the series.
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/testsuite/ChangeLog:
>
>	* gcc.target/aarch64/ldp_stp_18.c: Update zero regexps.
>       * gcc.target/aarch64/memset-corner-cases.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_bf16.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_f16.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_f32.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_f64.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_s16.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_s32.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_s64.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_s8.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_u16.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_u32.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_u64.c: Likewise.
>       * gcc.target/aarch64/sme/acle-asm/revd_u8.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acge_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acge_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acge_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acgt_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acgt_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acgt_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acle_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acle_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/acle_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/aclt_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/aclt_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/aclt_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/bic_s8.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/bic_u8.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/cmpuo_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/cmpuo_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/cmpuo_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_f16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_f32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_f64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_s16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_s32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_s64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_s8.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_u16.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_u32.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_u64.c: Likewise.
>       * gcc.target/aarch64/sve/acle/asm/dup_u8.c: Likewise.
>       * gcc.target/aarch64/sve/const_fold_div_1.c: Likewise.
>       * gcc.target/aarch64/sve/const_fold_mul_1.c: Likewise.
>       * gcc.target/aarch64/sve/dup_imm_1.c: Likewise.
>       * gcc.target/aarch64/sve/fdup_1.c: Likewise.
>       * gcc.target/aarch64/sve/fold_div_zero.c: Likewise.
>       * gcc.target/aarch64/sve/fold_mul_zero.c: Likewise.
>       * gcc.target/aarch64/sve/pcs/args_2.c: Likewise.
>       * gcc.target/aarch64/sve/pcs/args_3.c: Likewise.
>       * gcc.target/aarch64/sve/pcs/args_4.c: Likewise.
>       * gcc.target/aarch64/vect-fmovd-zero.c: Likewise.

Thanks for doing this and sorry for creating a lot of the problems
in the first place.

The patch looks good, but some of the new regexps look more robust/
future-proof than others.  E.g. some require z0.b specifically
(rather than [bhsd]), and some require mov d0, #0 rather than
movi d0, #0.
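
For instance, one of the directives added by the patch is

**	mov	(d0|z0.b), #0

which only accepts a plain mov of d0 or of z0.b specifically, whereas the
generalised replacement written by the script below,

**	movi?	[vdz]0(?:\.[0-9]*[bhsd])?, #?0

accepts mov or movi, a d/v/z register, and any element size.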

How about the result of passing the patch through the attached hokey
python script?  I tried testing it locally and it seemed to work.

The regexps used in the script do allow some syntactically invalid asm,
but I think that's ok.  The tests are assembly tests rather than
compiler tests, so we can leave the assembler to pick up nonsense.
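
For example, the generalised patterns would also accept movi	z0.b, #0,
which isn't a real instruction (movi doesn't take SVE z registers), but
the assembler would reject it if such output ever appeared.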

OK with that change if you agree, and if there are no objections
before Wednesday.

Thanks,
Richard

import sys

# Generalised replacement lines: allow "mov" or "movi", a d/v/z register,
# and an optional element-size suffix, so that both the current and the
# new zeroing syntax match.  The suffix group "(?:\.[0-9]*[bhsd])?" keeps
# the dot optional so that scalar forms such as "movi d0, #0" are
# accepted too.
GENERAL_Z0 = "+**\tmovi?\t[vdz]0(?:\\.[0-9]*[bhsd])?, #?0\n"
GENERAL_ZCAPTURE = "+**\tmovi?\t[vdz]([0-9]+)(?:\\.[0-9]*[bhsd])?, #?0\n"

# Map each exact line added by the patch to its generalised replacement.
# The keys are the literal "+" lines from the diff, with tabs written as \t.
MAP = {
    "+**\tmov\t(d0|z0.b), #0\n": GENERAL_Z0,
    "+**\tmov\t(d0|z0.[bhsd]), #0\n": GENERAL_Z0,
    "+**\tmov(?:i\\td0|\\tz0.b), #0\n": GENERAL_Z0,
    "+**\tmov(?:i\\td0|\\tz0.[bhsd]), #0\n": GENERAL_Z0,
    "+**\tmov(?:i\\td|\\tz)([0-9]+)(?:\\.[bhsd])?, #0\n": GENERAL_ZCAPTURE,
    "+**\tmovi\tv([0-9]+).\\d+[bhsd], 0\n": GENERAL_ZCAPTURE,
}

# Copy the patch through verbatim, substituting any line that appears in MAP.
for line in sys.stdin:
    sys.stdout.write(MAP.get(line, line))
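
To apply the script, save it under a name of your choice, for example
generalise-zeros.py (a hypothetical name), and run it as a filter over
the patch file:

	python3 generalise-zeros.py < original.patch > adjusted.patch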
