d.
> >
> > Signed-off-by: Philippe Mathieu-Daudé
> > ---
> > Cc: Max Filippov
> > ---
> > target/xtensa/cpu.h| 2 +-
> > target/xtensa/helper.c | 5 +++--
> > 2 files changed, 4 insertions(+), 3 deletions(-)
>
> Reviewed-by: Richard Henderson
Acked-by: Max Filippov
--
Thanks.
-- Max
On Mon, Feb 10, 2025 at 2:26 AM Philippe Mathieu-Daudé
wrote:
>
> Only modify XtensaConfig within xtensa_register_core(),
> when the class is registered, not when it is initialized.
>
> Signed-off-by: Philippe Mathieu-Daudé
> ---
> Cc: Max Filippov
> ---
>
Reviewed-by: Max Chou
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
Signed-off-by: Anton Blanchard
---
target/riscv/insn_trans/trans_rvv.c.inc | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target/riscv
.
Thanks,
Max
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
For 2*SEW = 2*SEW op SEW instructions, vs2 and vs1 cannot overlap,
because overlapping would mean a register is read with two different SEW
settings.
Signed-off-by: Anton Blanchard
---
target/riscv/insn_trans/trans_rvv.c.inc | 3 ++-
1 file changed
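A minimal standalone C sketch of the overlap rule above (the function names are illustrative, not the actual QEMU check helpers): the widened operand vs2 occupies twice as many registers as vs1, and the two register groups must be disjoint.

#include <stdbool.h>

/* A register group starting at 'astart' and spanning 'asize' registers
 * overlaps another group iff the half-open ranges intersect. */
static bool vreg_groups_overlap(int astart, int asize, int bstart, int bsize)
{
    return astart < bstart + bsize && bstart < astart + asize;
}

/* For a 2*SEW = 2*SEW op SEW instruction with integer LMUL 'lmul',
 * vs2 is read at 2*SEW (2*lmul registers) and vs1 at SEW (lmul registers);
 * any overlap would read a register with two different SEWs. */
static bool widen_vs1_vs2_ok(int vs2, int vs1, int lmul)
{
    return !vreg_groups_overlap(vs2, 2 * lmul, vs1, lmul);
}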
the “vector slide instructions” to replace the
specified vslide1down.vx instruction would be better.)
Patch 06 also has the same issue.
Thanks,
Max
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
Signed-off-by: Anton Blanchard
---
target/riscv/insn_trans/trans_rvv.c.inc | 1 +
1 file
.)
Additionally, patches 04/07/08/09/10 also have the same issue.
Thanks,
Max
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
Signed-off-by: Anton Blanchard
---
target/riscv/insn_trans/trans_rvv.c.inc | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b
Hi Anton,
You might need to extend this patch or provide a new patch to handle
the checking of source operands with different EEWs for the vrgatherei16.vv
instruction (when SEW is not 16).
Thanks,
Max
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
Signed-off-by: Anton Blanchard
---
target/riscv
Reviewed-by: Max Chou
On 2025/1/26 3:20 PM, Anton Blanchard wrote:
Add the relevant ISA paragraphs explaining why source (and destination)
registers cannot overlap the mask register.
Signed-off-by: Anton Blanchard
---
target/riscv/insn_trans/trans_rvv.c.inc | 29
ally no board-level switch for the CPU endianness.
Also, big- and little-endian instruction encodings are different on
otherwise identical xtensa CPUs.
--
Thanks.
-- Max
According to the Vector Reduction Operations section in the RISC-V "V"
Vector Extension spec,
"If vl=0, no operation is performed and the destination register is not
updated."
Therefore, vd should still be updated when vl is larger than 0.
Signed-off-by: Max Chou
---
target/riscv
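A simplified standalone sketch of the guard being described (illustrative names, SEW fixed at 32 bits, unmasked vredsum-style reduction; not the actual QEMU helper): the early return may only fire when vl == 0, and any vl > 0 must write the accumulated result back to vd[0].

#include <stdint.h>

static void vredsum_sketch(uint32_t *vd, const uint32_t *vs2,
                           const uint32_t *vs1, uint32_t vl)
{
    if (vl == 0) {
        return;             /* vl == 0: no operation, vd left untouched */
    }
    uint32_t acc = vs1[0];  /* reduction starts from vs1[0] */
    for (uint32_t i = 0; i < vl; i++) {
        acc += vs2[i];
    }
    vd[0] = acc;            /* vl > 0: the destination must be updated */
}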
In the prop_vlen_set function, there is an incorrect comparison between
vlen (bits) and vlenb (bytes).
This causes an unexpected error when the user applies the `vlen=1024` cpu
option to a vendor-predefined cpu type whose default vlen is
1024 (vlenb=128).
Signed-off-by: Max Chou
---
target/riscv/cpu.c
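The unit mismatch can be illustrated with a small standalone sketch (names are illustrative; in QEMU the property setter compares against the CPU's stored vlenb): the user-visible vlen property is in bits while vlenb is kept in bytes, so one side has to be converted before comparing.

#include <stdbool.h>
#include <stdint.h>

static bool vlen_property_changed(uint32_t new_vlen_bits, uint32_t cur_vlenb_bytes)
{
    /* Comparing new_vlen_bits directly against cur_vlenb_bytes mixes units
     * and would reject vlen=1024 on a CPU whose vlenb is already 128. */
    return (new_vlen_bits >> 3) != cur_vlenb_bytes;   /* convert bits -> bytes first */
}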
value
to 64 bits during the TCG translation phase to ensure that the helper
functions won't lose the higher 32 bits.
Signed-off-by: Max Chou
---
target/riscv/helper.h | 16
target/riscv/insn_trans/trans_rvv.c.inc | 50 -
target/
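The hazard being fixed can be shown with a standalone C sketch (the function names are made up; the real change widens the TCG helper arguments during translation): passing a 64-bit quantity through a 32-bit parameter silently drops the upper half.

#include <stdint.h>
#include <stdio.h>

static uint64_t helper_narrow(uint32_t v)   /* 32-bit argument: upper bits lost */
{
    return v;
}

static uint64_t helper_wide(uint64_t v)     /* widened argument: value preserved */
{
    return v;
}

int main(void)
{
    uint64_t val = 0x123456789abcdef0ull;
    printf("narrow: 0x%llx\n", (unsigned long long)helper_narrow((uint32_t)val));
    printf("wide:   0x%llx\n", (unsigned long long)helper_wide(val));
    return 0;
}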
https://github.com/rnax/rvv_ldst_test
max
Reviewed-by: Max Chou
max
On 2024/12/18 10:23 PM, Craig Blackmore wrote:
Replace `continus` with `continuous`.
Signed-off-by: Craig Blackmore
---
target/riscv/vector_helper.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/target/riscv/vector_helper.c b
On 2024/12/11 8:51 PM, Craig Blackmore wrote:
Calling `vext_continuous_ldst_tlb` for load/stores smaller than 12 bytes
significantly improves performance.
Co-authored-by: Helene CHELIN
Co-authored-by: Paolo Savini
Co-authored-by: Craig Blackmore
Signed-off-by: Helene CHELIN
Signed-off-by: P
> +if (swap_needed) {
> +bswap32s(&argptr);
> +}
>
> cpu_memory_rw_debug(cs,
> regs[3] + i * sizeof(uint32_t),
--
Thanks.
-- Max
sa_isa_is_big_endian(xtensa_isa isa);
This file doesn't include stdbool.h, and the other boolean functions in it
(e.g. xtensa_opcode_is_branch()) return int. I'd suggest sticking with
that. With that change:
Acked-by: Max Filippov
> #ifdef __cplusplus
> }
> diff --git a/target/xtensa
group multiple elements, the vstart
value remains the index of the first element, which is not the actual
element index that raised the exception.
Max
On 2024/12/4 8:29 PM, Craig Blackmore wrote:
This patch improves the performance of the emulation of the RVV unit-stride
loads and stores in the
SR will get an unexpected value,
because this flow does not update the vstart CSR value.
max
et/xtensa/cpu.h| 6 ++
> target/xtensa/cpu.c| 2 +-
> target/xtensa/fpu_helper.c | 33 +++--
> 3 files changed, 26 insertions(+), 15 deletions(-)
Reviewed-by: Max Filippov
--
Thanks.
-- Max
n flag because the propagation rules
> will handle everything.)
>
> Signed-off-by: Peter Maydell
> ---
> target/xtensa/fpu_helper.c | 2 ++
> fpu/softfloat-specialize.c.inc | 12 +---
> 2 files changed, 3 insertions(+), 11 deletions(-)
Reviewed-by: Max Filippov
--
Thanks.
-- Max
ping.
On 2024/9/19 1:14 AM, Max Chou wrote:
Hi,
This version fixes several issues in v5:
- The cross-page bound checking issue
- The mismatched vl comparison in the early exit check of vext_ldst_us
- The endian issue when the host is big endian
Thanks for Richard Henderson's suggestions that
agnostic, so remove the vstart early exit checking.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 129 +++
1 file changed, 70 insertions(+), 59 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index c2fcf8b3a00
Because the real vl (evl) of vext_ldst_us may be different (e.g. for
vlm.v/vsm.v/etc.), the VSTART_CHECK_EARLY_EXIT check should be replaced
by checking evl in vext_ldst_us.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 5 -
1 file changed, 4 insertions(+), 1
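A simplified standalone sketch of the change (illustrative helpers, not the QEMU functions): vlm.v/vsm.v behave as unit-stride byte accesses with an effective vl of ceil(vl/8), so the early exit has to compare vstart against that evl rather than against the architectural vl.

#include <stdbool.h>
#include <stdint.h>

static uint32_t evl_for_mask_ldst(uint32_t vl)
{
    return (vl + 7) / 8;        /* vlm.v/vsm.v transfer ceil(vl/8) bytes */
}

static bool ldst_us_early_exit(uint32_t vstart, uint32_t evl)
{
    return vstart >= evl;       /* nothing left to do for this instruction */
}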
The vector unmasked unit-stride and whole register load/store
instructions load/store continuous memory. If the host and guest
architectures have the same endianness, we can group the element
loads/stores to transfer more data at a time.
Signed-off-by: Max Chou
---
target/riscv
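A standalone sketch of the idea (not the QEMU implementation, which works on guest pages already resolved to host pointers): when host and guest endianness match, a run of contiguous elements can be moved as one block; otherwise the copy falls back to per-element byte swapping.

#include <stdint.h>
#include <string.h>

static void copy_elements(void *dst, const void *src,
                          uint32_t nelem, uint32_t esz, int same_endian)
{
    if (same_endian) {
        memcpy(dst, src, (size_t)nelem * esz);   /* one grouped access */
        return;
    }
    for (uint32_t i = 0; i < nelem; i++) {       /* per-element byte swap */
        const uint8_t *s = (const uint8_t *)src + (size_t)i * esz;
        uint8_t *d = (uint8_t *)dst + (size_t)i * esz;
        for (uint32_t b = 0; b < esz; b++) {
            d[b] = s[esz - 1 - b];
        }
    }
}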
the
element load/store through the original softmmu flow and the direct
access host memory flow.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 363 +--
1 file changed, 224 insertions(+), 139 deletions(-)
diff --git a/target/riscv/vector_helper.c b/ta
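The split interface can be sketched with two callback types and a dispatcher (types and names here are illustrative stand-ins, not the QEMU prototypes): the _host variant is used when the page has already been resolved to a host pointer, and the _tlb variant goes through the softmmu path.

#include <stddef.h>
#include <stdint.h>

typedef struct CPUStateSketch CPUStateSketch;   /* stand-in for the CPU state */

typedef void ldst_elem_fn_tlb(CPUStateSketch *env, uint64_t guest_addr,
                              uint32_t idx, void *vd, uintptr_t retaddr);
typedef void ldst_elem_fn_host(void *vd, uint32_t idx, void *host_addr);

static void ldst_one_element(CPUStateSketch *env, uint64_t addr, uint32_t idx,
                             void *vd, void *host, uintptr_t ra,
                             ldst_elem_fn_tlb *fn_tlb,
                             ldst_elem_fn_host *fn_host)
{
    if (host != NULL) {
        fn_host(vd, idx, host);           /* direct host-memory fast path */
    } else {
        fn_tlb(env, addr, idx, vd, ra);   /* softmmu slow path */
    }
}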
The unmasked unit-stride fault-only-first load instructions are similar
to the unmasked unit-stride load/store instructions and are likewise
suitable for optimization with a fast path that accesses host RAM
directly.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 98
In the vector unit-stride load/store helper functions, the vext_ldst_us
& vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead and improves
the helper function performance.
Signed-off-by: Max Chou
Reviewed-by: Ric
v2: https://lore.kernel.org/all/20240531174504.281461-1-max.c...@sifive.com/
- v3: https://lore.kernel.org/all/20240613141906.1276105-1-max.c...@sifive.com/
- v4: https://lore.kernel.org/all/20240613175122.1299212-1-max.c...@sifive.com/
- v5: https://lore.kernel.org/all/20240717133936.713642-1-max.c...@sifive.
The vm field of the vector load/store whole register instructions'
encoding is 1.
The helper functions for the vector load/store whole register
instructions may need the vdata.vm field to perform some optimizations.
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 3 +++
1
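A standalone sketch of recording the vm bit for the helper (the bit position is illustrative; QEMU packs this into its VDATA descriptor during translation): since whole-register loads/stores are encoded with vm = 1, the translator can simply store that value for the helper to test.

#include <stdint.h>

#define VDATA_VM_SHIFT 0u    /* illustrative bit position within the desc word */

static uint32_t vdata_set_vm(uint32_t vdata, unsigned vm)
{
    vdata &= ~(1u << VDATA_VM_SHIFT);
    return vdata | ((vm & 1u) << VDATA_VM_SHIFT);
}

static int vdata_get_vm(uint32_t vdata)
{
    return (vdata >> VDATA_VM_SHIFT) & 1u;   /* helper-side query */
}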
; Spotted because Coverity (correctly) thought the issue was still
> outstanding.
> ---
> target/xtensa/exc_helper.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Acked-by: Max Filippov
--
Thanks.
-- Max
On 2024/7/25 2:04 PM, Richard Henderson wrote:
On 7/17/24 23:39, Max Chou wrote:
+static inline QEMU_ALWAYS_INLINE void
+vext_continus_ldst_host(CPURISCVState *env, vext_ldst_elem_fn_host
*ldst_host,
+ void *vd, uint32_t evl, uint32_t reg_start,
void *host
On 2024/7/25 1:51 PM, Richard Henderson wrote:
On 7/17/24 23:39, Max Chou wrote:
@@ -199,7 +212,7 @@ static void
vext_ldst_stride(void *vd, void *v0, target_ulong base,
target_ulong stride, CPURISCVState *env,
uint32_t desc, uint32_t vm
On 2024/7/25 2:05 PM, Richard Henderson wrote:
On 7/17/24 23:39, Max Chou wrote:
In the vector unit-stride load/store helper functions, the vext_ldst_us
& vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead to improve
per.c
> @@ -991,7 +991,7 @@ uint32_t HELPER(rptlb1)(CPUXtensaState *env, uint32_t s)
> uint32_t HELPER(pptlb)(CPUXtensaState *env, uint32_t v)
> {
> unsigned nhits;
> -unsigned segment = XTENSA_MPU_PROBE_B;
> +unsigned segment;
The change suggests that coverity is ok
Reviewed-by: Max Chou
On 2024/7/19 9:07 AM, Richard Henderson wrote:
The current pairing of tlb_vaddr_to_host with extra is either
inefficient (user-only, with page_check_range) or incorrect
(system, with probe_pages).
For proper non-fault behaviour, use probe_access_flags with
its nonfault
The vm field of the vector load/store whole register instructions'
encoding is 1.
The helper functions for the vector load/store whole register
instructions may need the vdata.vm field to perform some optimizations.
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 3 +++
1
In the vector unit-stride load/store helper functions, the vext_ldst_us
& vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead and improves
the helper function performance.
Signed-off-by: Max Chou
Reviewed-by: Ric
The vector unmasked unit-stride and whole register load/store
instructions load/store continuous memory. If the host and guest
architectures have the same endianness, we can group the element
loads/stores to transfer more data at a time.
Signed-off-by: Max Chou
---
target/riscv
el.org/all/20240531174504.281461-1-max.c...@sifive.com/
- v3: https://lore.kernel.org/all/20240613141906.1276105-1-max.c...@sifive.com/
- v4: https://lore.kernel.org/all/20240613175122.1299212-1-max.c...@sifive.com/
Max Chou (5):
target/riscv: Set vdata.vm field for vector load/store whol
agnostic, so remove the vstart early exit checking.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 123 +--
1 file changed, 61 insertions(+), 62 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 23396a1b750
the
element load/store through the original softmmu flow and the direct
access host memory flow.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 361 +--
1 file changed, 220 insertions(+), 141 deletions(-)
diff --git a/target/riscv/vector_helper.c b/ta
On 2024/7/10 11:28 AM, Richard Henderson wrote:
The current pairing of tlb_vaddr_to_host with extra is either
inefficient (user-only, with page_check_range) or incorrect
(system, with probe_pages).
For proper non-fault behaviour, use probe_access_flags with
its nonfault parameter set to true.
S
On 2024/6/20 12:29 PM, Richard Henderson wrote:
On 6/13/24 10:51, Max Chou wrote:
This commit references the sve_ldN_r/sve_stN_r helper functions in ARM
target to optimize the vector unmasked unit-stride load/store
instructions by the following items:
* Get the loose bound of active elements
On 2024/6/20 12:38 PM, Richard Henderson wrote:
On 6/13/24 10:51, Max Chou wrote:
The vector unmasked unit-stride and whole register load/store
instructions load/store continuous memory. If the host and guest
architectures have the same endianness, we can group the
element load
The vector unmasked unit-stride and whole register load/store
instructions load/store continuous memory. If the host and guest
architectures have the same endianness, we can group the element
loads/stores to transfer more data at a time.
Signed-off-by: Max Chou
---
target/riscv
If there are no QEMU plugin memory callback functions registered,
checking before calling the qemu_plugin_vcpu_mem_cb function can reduce
the function call overhead.
Signed-off-by: Max Chou
---
accel/tcg/ldst_common.c.inc | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a
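The guard can be sketched with a standalone structure (the data layout is illustrative; in QEMU the check would be against the vCPU's registered plugin memory callbacks before calling qemu_plugin_vcpu_mem_cb): an empty callback list turns the notification into a cheap early return instead of an out-of-line call.

#include <stddef.h>
#include <stdint.h>

typedef void mem_cb_fn(uint64_t vaddr, uint32_t meminfo);

struct plugin_mem_cbs {
    mem_cb_fn **cbs;     /* NULL or empty when no plugin hooks are installed */
    size_t n;
};

static void notify_mem_access(const struct plugin_mem_cbs *s,
                              uint64_t vaddr, uint32_t meminfo)
{
    if (s->cbs == NULL || s->n == 0) {
        return;                          /* cheap early-out, no call overhead */
    }
    for (size_t i = 0; i < s->n; i++) {
        s->cbs[i](vaddr, meminfo);
    }
}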
new interface to direct access host memory
The original element load/store interface is replaced by new element
load/store functions with _tlb & _host suffixes, which perform the
element load/store through the original softmmu flow and through direct
access to host memory, respectively.
Signed-off-by:
In the vector unit-stride load/store helper functions, the vext_ldst_us
& vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead and improves
the helper function performance.
Signed-off-by: Max Chou
---
target/r
.1276105-1-max.c...@sifive.com/
Max Chou (5):
accel/tcg: Avoid unnecessary call overhead from
qemu_plugin_vcpu_mem_cb
target/riscv: rvv: Provide a fast path using direct access to host ram
for unmasked unit-stride load/store
target/riscv: rvv: Provide a fast path using direct access t
The vector unit-stride whole register load/store instructions are
similar to the unmasked unit-stride load/store instructions and are
likewise suitable for optimization with a fast path that accesses host
RAM directly.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 185
The vector unmasked unit-stride and whole register load/store
instructions load/store continuous memory. If the host and guest
architectures have the same endianness, we can group the element
loads/stores to transfer more data at a time.
Signed-off-by: Max Chou
---
target/riscv
If there are no QEMU plugin memory callback functions registered,
checking before calling the qemu_plugin_vcpu_mem_cb function can reduce
the function call overhead.
Signed-off-by: Max Chou
---
accel/tcg/ldst_common.c.inc | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a
ore vector ld/st functions
Previous version:
- v1: https://lore.kernel.org/all/20240215192823.729209-1-max.c...@sifive.com/
- v2: https://lore.kernel.org/all/20240531174504.281461-1-max.c...@sifive.com/
Max Chou (5):
accel/tcg: Avoid unnecessary call overhead from
qemu_plugin_vcpu_mem_cb
ta
The vector unit-stride whole register load/store instructions are
similar to the unmasked unit-stride load/store instructions and are
likewise suitable for optimization with a fast path that accesses host
RAM directly.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 185
new interface to direct access host memory
The original element load/store interface is replaced by new element
load/store functions with _tlb & _host suffixes, which perform the
element load/store through the original softmmu flow and through direct
access to host memory, respectively.
Signed-off-by:
In the vector unit-stride load/store helper functions, the vext_ldst_us
& vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead and improves
the helper function performance.
Signed-off-by: Max Chou
---
target/r
ch is more correct (if we were to support a different accel).
>
> Reported-by: Anton Johansson
> Signed-off-by: Philippe Mathieu-Daudé
> ---
> target/xtensa/Kconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Acked-by: Max Filippov
--
Thanks.
-- Max
ions that suggested in tcg-op doc).
I will provide the next version with a helper function implementation like
sve_ldN_r in the ARM target.
Thank you,
Max
On 2024/6/3 1:45 AM, Richard Henderson wrote:
On 5/31/24 12:44, Max Chou wrote:
The vector unit-stride load/store instructions (e.g. vle8.v/vs
endian
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 196 +++-
1 file changed, 194 insertions(+), 2 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target/riscv/insn_trans/trans_rvv.c.inc
index bbac73bb12b..44763ccec06 100644
--- a
This commit separates the helper function implementations of the vector
segment load/store instructions from the other vector load/store
instructions.
This can improve performance by avoiding unnecessary segment operations
when NF = 1.
Signed-off-by: Max Chou
---
target/riscv/helper.h
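A standalone sketch of the split (layout assumes LMUL = 1 and a contiguous destination buffer; these are not the QEMU helpers): with NF = 1 there is no field interleaving, so the plain per-element copy can skip the segment bookkeeping entirely.

#include <stdint.h>
#include <string.h>

static void ldst_unit_stride(uint8_t *vd, const uint8_t *mem,
                             uint32_t vl, uint32_t esz)
{
    memcpy(vd, mem, (size_t)vl * esz);           /* NF == 1 fast case */
}

static void ldst_segment(uint8_t *vd, const uint8_t *mem, uint32_t vl,
                         uint32_t esz, uint32_t nf, uint32_t vlenb)
{
    for (uint32_t i = 0; i < vl; i++) {
        for (uint32_t f = 0; f < nf; f++) {      /* interleave the fields */
            memcpy(vd + (size_t)f * vlenb + (size_t)i * esz,
                   mem + ((size_t)i * nf + f) * esz, esz);
        }
    }
}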
The helper_check_probe_[read|write] functions wrap the probe_pages
function to perform virtual address resolution for continuous vector
load/store instructions.
Signed-off-by: Max Chou
---
target/riscv/helper.h| 4
target/riscv/vector_helper.c | 12
2 files changed
* Without mask
* Without tail agnostic
* Both host and target are little endian
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 197 +++-
1 file changed, 195 insertions(+), 2 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target/riscv
In the vector unit-stride load/store helper functions, the vext_ldst_us
function accounts for most of the execution time. Inlining the function
avoids the function call overhead and improves the helper function
performance.
Signed-off-by: Max Chou
Reviewed-by: Richard Henderson
---
target
QEMU user mode.
PS: This RFC patch set only focuses on the vle8.v/vse8.v/vl8re8.v/vs8r.v
instructions. The next version will try to complete other instructions.
Series based on riscv-to-apply.next branch (commit 1806da7).
Max Chou (6):
target/riscv: Separate vector segment ld/st instructions
If there are no QEMU plugin memory callback functions registered,
checking before calling the qemu_plugin_vcpu_mem_cb function can reduce
the function call overhead.
Signed-off-by: Max Chou
---
accel/tcg/ldst_common.c.inc | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a
Reviewed-by: Max Chou
Max
On 2024/5/11 7:26 PM, Yangyu Chen wrote:
This code has a typo that writes zvkb to zvkg, so users can't
enable zvkb through the config. This patch fixes that.
Signed-off-by: Yangyu Chen
Fixes: ea61ef7097d0 ("target/riscv: Move vector crypto ext
nner
> that is consistent with how the changes to other CPUs have been documented so
> far?
>
> If so, I indeed desire such an account.
I've created an account ZackBuhman for you and sent the password off-list.
--
Thanks.
-- Max
-sta...@nongnu.org
Signed-off-by: Max Filippov
---
Changes v1->v2:
- split into a separate patch
- add PPC, SPARC and big-endian MIPS
linux-user/syscall.c | 20 +++-
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
in
sysv IPC structures")
Signed-off-by: Max Filippov
---
Changes v1->v2:
- split into a separate patch
linux-user/syscall.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index e384e1424890..d9bfd31c1cad 100644
---
On Fri, Mar 29, 2024 at 5:48 AM Philippe Mathieu-Daudé
wrote:
>
> Hi Max,
>
> On 29/3/24 07:31, Max Filippov wrote:
> > - target_ipc_perm::mode and target_ipc_perm::__seq fields are 32-bit wide
> >on xtensa and thus need to use tswap32
> > - target_msqid_ds::msg_
http://nsz.repo.hu/git/?p=libc-test
Cc: qemu-sta...@nongnu.org
Fixes: a3da8be5126b ("target/xtensa: linux-user: fix sysv IPC structures")
Signed-off-by: Max Filippov
---
linux-user/syscall.c | 19 +++
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/
d, 991 insertions(+), 918 deletions(-)
> create mode 100644 target/xtensa/mmu.h
> create mode 100644 target/xtensa/mmu.c
--
Thanks.
-- Max
[PATCH] target/riscv: rvv: Check single width operator for vector fp
widen instructions
[PATCH] target/riscv: rvv: Check single width operator for
vfncvt.rod.f.f.w
[PATCH] target/riscv: rvv: Remove redundant SEW checking for vector fp
narrow/widen instructions
Max Chou (4
The opfv_narrow_check function needs to check the single width float
operator with require_rvf.
Signed-off-by: Max Chou
Reviewed-by: Daniel Henrique Barboza
---
target/riscv/insn_trans/trans_rvv.c.inc | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target
width float, so the opfxv_widen_check function doesn’t
need require_rvf for the single width operator (integer).
Signed-off-by: Max Chou
Reviewed-by: Daniel Henrique Barboza
---
target/riscv/insn_trans/trans_rvv.c.inc | 5 +
1 file changed, 5 insertions(+)
diff --git a/target/riscv
According to v spec section 18.4, only the vfwcvt.f.f.v and vfncvt.f.f.w
instructions are affected by the Zvfhmin extension.
And the vfwcvt.f.f.v and vfncvt.f.f.w instructions only support the
conversions of:
* From 1*SEW (16/32) to 2*SEW (32/64)
* From 2*SEW (32/64) to 1*SEW (16/32)
Signed-off-by: Max Chou
If the checking functions check both the single and double width
operators at the same time, then the single width operator checking
functions (require_rvf[min]) will check whether the SEW is 8.
Signed-off-by: Max Chou
Reviewed-by: Daniel Henrique Barboza
---
target/riscv/insn_trans
Thanks for the notification.
I'll resend this series and rebase on the riscv-to-apply.next branch.
Max
On 2024/3/22 12:12 PM, Alistair Francis wrote:
On Wed, Mar 20, 2024 at 5:28 PM Max Chou wrote:
When SEW is 16, we need to check whether the Zvfhmin is enabled for the
single width ope
According to the Zvfbfmin definition in the RISC-V BF16 extensions spec,
the Zvfbfmin extension only requires either the V extension or the
Zve32f extension.
Signed-off-by: Max Chou
---
target/riscv/tcg/tcg-cpu.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/target/riscv/tcg/tcg-cpu.c
The opfv_narrow_check function needs to check the single width float
operator with require_rvf.
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target/riscv/insn_trans/trans_rvv.c.inc
index
If the checking functions check both the single and double width
operators at the same time, then the single width operator checking
functions (require_rvf[min]) will check whether the SEW is 8.
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 16
1 file
width float, so the opfxv_widen_check function doesn’t
need require_rvf for the single width operator (integer).
Signed-off-by: Max Chou
---
target/riscv/insn_trans/trans_rvv.c.inc | 5 +
1 file changed, 5 insertions(+)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc
b/target/riscv
instructions.
Max Chou (4):
target/riscv: rvv: Fix Zvfhmin checking for vfwcvt.f.f.v and
vfncvt.f.f.w instructions
target/riscv: rvv: Check single width operator for vector fp widen
instructions
target/riscv: rvv: Check single width operator for vfncvt.rod.f.f.w
target/riscv: rvv: Remove
According to v spec section 18.4, only the vfwcvt.f.f.v and vfncvt.f.f.w
instructions are affected by the Zvfhmin extension.
And the vfwcvt.f.f.v and vfncvt.f.f.w instructions only support the
conversions of:
* From 1*SEW (16/32) to 2*SEW (32/64)
* From 2*SEW (32/64) to 1*SEW (16/32)
Signed-off-by: Max Chou
Reviewed-by: Max Chou
On 2024/3/15 1:56 AM, Daniel Henrique Barboza wrote:
Commit 8ff8ac6329 added a conditional to guard the vext_ldst_whole()
helper if vstart >= evl. But by skipping the helper we're also not
setting vstart = 0 at the end of the insns, which is incorrect.
We
em_mask(v0, i)) { \
/* set masked-off elements to 1s */ \
@@ -4772,6 +4844,8 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong s1,
void *vs2, \
uint32_t vma = vext_vma(desc); \
target_ulong offset = s1, i_min, i;
Hi Daniel,
According to the v spec, sections 15.2 & 15.3:
"The vcpop.m instruction writes x[rd] even if vl=0 (with the value 0,
since no mask elements are active).
Traps on vcpop.m are always reported with a vstart of 0. The vcpop.m
instruction will raise an illegal instruction exception if vstar
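A simplified standalone sketch of that rule (illustrative names, with mask bit i stored at bit i of the mask register; not the QEMU helper): the population-count loop simply executes zero times when vl == 0, but the result is still returned, so x[rd] gets written with 0.

#include <stdint.h>

static int mask_bit(const uint8_t *m, uint32_t i)
{
    return (m[i / 8] >> (i % 8)) & 1;
}

static uint64_t vcpop_m_sketch(const uint8_t *vs2, const uint8_t *v0,
                               uint32_t vl, int vm)
{
    uint64_t cnt = 0;
    for (uint32_t i = 0; i < vl; i++) {                /* 0 iterations if vl == 0 */
        if ((vm || mask_bit(v0, i)) && mask_bit(vs2, i)) {
            cnt++;
        }
    }
    return cnt;        /* written to x[rd] unconditionally, even when vl == 0 */
}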
Looks like I missed this one.
Thank you Daniel
Max.
On 2024/3/7 1:17 AM, Daniel Henrique Barboza wrote:
On 3/6/24 13:10, Max Chou wrote:
When vlmul is larger than 5, the original fractional LMUL checking may
get an unexpected result.
Signed-off-by: Max Chou
---
There's alre
When vlmul is larger than 5, the original fractional LMUL checking may
get an unexpected result.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index
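The arithmetic behind the fix can be sketched standalone (names and layout are illustrative, not the QEMU code): the 3-bit vlmul field is a signed value, so 5, 6 and 7 must decode to -3, -2 and -1 (LMUL 1/8, 1/4, 1/2) before being used to scale VLMAX.

#include <stdint.h>

static int32_t vlmul_signed(uint32_t vlmul)     /* decode 3-bit two's complement */
{
    return (vlmul & 4) ? (int32_t)vlmul - 8 : (int32_t)vlmul;
}

static uint32_t vlmax(uint32_t vlenb, uint32_t log2_sew_bytes, uint32_t vlmul)
{
    int32_t lmul = vlmul_signed(vlmul);         /* 5, 6, 7 -> -3, -2, -1 */
    uint32_t elems = vlenb >> log2_sew_bytes;   /* VLEN / SEW at LMUL = 1 */
    return lmul < 0 ? elems >> -lmul : elems << lmul;
}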
until we can make sure
that these patches benefit all combinations without side effects.
I'll focus on avoiding over-use of the full out-of-line load/store
routines for the next version.
Thanks for the suggestion and question,
Max
[1]
https://inbox.sourceware.org/libc-alpha/202305
Hi Daniel,
Thank you for the information and suggestion.
Yes, we can do it better if we load/store more bytes at a time.
I'll try to improve the RFC on this way.
Thanks,
Max
On 2024/2/16 5:11 AM, Daniel Henrique Barboza wrote:
On 2/15/24 16:28, Max Chou wrote:
In the vector unit-s
Hi Richard,
Thank you for the suggestion and the reference.
I'm trying to follow the reference to implement it and I'll send another
version for this.
Thanks a lot,
Max
On 2024/2/16 4:24 AM, Richard Henderson wrote:
On 2/15/24 09:28, Max Chou wrote:
Hi all,
When glibc with RVV
more experiment results to check the status of other
plugin callbacks.
Thanks,
Max
On 2024/2/16 4:21 AM, Daniel Henrique Barboza wrote:
On 2/15/24 16:28, Max Chou wrote:
If there are no QEMU plugin memory callback functions registered, checking
before calling the qemu_plugin_vcpu_mem_cb function can r
Hi Richard,
Thank you for the suggestion. I'll do a v2 with this.
Thanks,
Max
On 2024/2/16 4:03 AM, Richard Henderson wrote:
On 2/15/24 09:28, Max Chou wrote:
If there are no QEMU plugin memory callback functions registered, checking
before calling the qemu_plugin_vcpu_mem_cb function can r
Signed-off-by: Max Chou
---
accel/tcg/user-exec.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 803c271df11..9ef35a22279 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1050,8 +1050,9 @@ static
If there are no QEMU plugin memory callback functions registered,
checking before calling the qemu_plugin_vcpu_mem_cb function can reduce
the function call overhead.
Signed-off-by: Max Chou
---
accel/tcg/ldst_common.c.inc | 40 +++--
1 file changed, 30 insertions
In the vector unit-stride load/store helper functions, the vext_ldst_us
function accounts for most of the execution time. Inlining the function
avoids the function call overhead and improves the helper function
performance.
Signed-off-by: Max Chou
---
target/riscv/vector_helper.c | 30
Signed-off-by: Max Chou
---
accel/tcg/user-exec.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 68b252cb8e8..c5453810eee 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -942,8 +942,11 @@ void
Signed-off-by: Max Chou
---
accel/tcg/user-exec.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index c5453810eee..803c271df11 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -963,8 +963,9 @@ static inline