do_fill_tlb_entry is used to fill a tlb entry.
Signed-off-by: Song Gao
---
target/loongarch/tcg/tlb_helper.c | 43 ++-
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c
b/target/loongarch/tcg/tlb_helper.c
index 3c3452b3
Loongson-3A6000 and newer processors have hardware page table walker
(PTW) support. The PTW can handle all fastpaths of PIL/PIS/PIF/PIE
exceptions in hardware.
V2:
- Remove the '21' magic value, patch1;
- Add a flag is_debug for debug access, patch5;
- Use qatomic_cmpxchg to change the new pte_val, pa
Add hardware page table walker (HPTW) feature for la664.
Set CPUCFG2.HPTW = 1 to indicate that HPTW is implemented on this CPU.
Set PWCH.HPTW_EN = 1 to enable HPTW.
Signed-off-by: Song Gao
---
target/loongarch/cpu-csr.h| 3 +
target/loongarch/cpu.c| 1 +
target/loongarch/
Add a new LoongArch cpu type la664. The la664 has many new features,
such as new atomic instructions, hardware page table walk, etc.
We will implement them later.
Signed-off-by: Song Gao
---
target/loongarch/cpu.c | 50 +-
1 file changed, 35 insertions(+),
get_random_tlb_index() is used to get a random tlb index.
Signed-off-by: Song Gao
---
target/loongarch/tcg/tlb_helper.c | 34 +--
1 file changed, 23 insertions(+), 11 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c
b/target/loongarch/tcg/tlb_helper.c
ind
do_lddir is used for accessing directory entries during page table
walking; do_ldpte is used for page table entry accesses during page
table walking.
Signed-off-by: Song Gao
---
target/loongarch/tcg/tlb_helper.c | 53 ---
1 file changed, 34 insertions(+), 19 deletions
On 09.10.24 23:53, Fabiano Rosas wrote:
Vladimir Sementsov-Ogievskiy writes:
On 30.09.24 17:07, Andrey Drobyshev wrote:
On 9/30/24 12:25 PM, Vladimir Sementsov-Ogievskiy wrote:
[add migration maintainers]
On 24.09.24 15:56, Andrey Drobyshev wrote:
[...]
I doubt that this a correct way to
During the hot-unplugging of vhost-user-net type network cards,
the vhost_user_cleanup function may add the same rcu node to
the rcu linked list.
The function call relationship in this case is as follows:
vhost_user_cleanup
->vhost_user_host_notifier_remove
->call_rcu(n, vhost_user_hos
The second if-condition can be true only if the first one above is true.
Enclose the latter within the former to avoid an unnecessary check when
the first condition fails.
Reviewed-by: BALATON Zoltan
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/helper_regs.c | 6 +++---
ppc_excp_apply_ail has multiple if-checks for ail, which are unnecessary.
Combine them as appropriate.
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/excp_helper.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/target/ppc/excp_helper
Most of the p8 exception handling code accesses env->pending_interrupts
and env->spr[SPR_LPCR] in multiple places. Passing them directly as local
variables simplifies the code and avoids repeated indirect accesses.
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/excp_help
Currently, p9 exception handling has multiple if-condition checks where
it indirectly accesses pending_interrupts and LPCR via env.
Pass the values in on entry to avoid multiple indirect accesses.
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/excp_helper.
Cache env->spr[SPR_POWER_MMCR0] in a local variable, as it is used in
multiple conditions, to avoid repeated indirect accesses.
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/helper_regs.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/target
As previously done for the arch specific handlers, simplify variable
usage in ppc_next_unmasked_interrupt by caching env->pending_interrupts
and env->spr[SPR_LPCR] in local variables and using them later in
multiple places.
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/excp_
This is a set of miscellaneous ppc arch specific code
improvements/optimizations. Most of the improvements are in the
Power9/8/7 exception handling code, along with some code cleanups.
Since patch 7/7 of v2 series have been picked by Aditya in his patchset
for P11 support, I have excluded that patch in this se
hreg_compute_hflags_value already stores msr locally for use in most
of the routine's logic; however, some instances still use env->msr,
which is unnecessary. Use the locally stored value where available.
Reviewed-by: Nicholas Piggin
Reviewed-by: BALATON Zoltan
Signed-off-by: Harsh Prateek B
Like p8 and p9, simplify the p7 exception handling routines to avoid
unnecessary repeated indirect accesses to env->pending_interrupts and
env->spr[SPR_LPCR].
Reviewed-by: Nicholas Piggin
Signed-off-by: Harsh Prateek Bora
---
target/ppc/excp_helper.c | 46 ++-
Historically, the registration of SPRs has been inherited along with
every new Power arch being added, leading to a lot of code
duplication. It's time to do the necessary cleanups now to avoid
further duplication as newer arch support is added.
Signed-off-by: Harsh Prateek Bora
Reviewed-b
On 10/10/2024 03.37, Jared Rossi wrote:
On 10/9/24 8:48 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
From: Jared Rossi
Add two new qtests to verify that a valid IPL device can successfully
boot after
failed IPL attempts from one or more invalid devices.
cdrom-t
On 10/10/2024 03.37, Jared Rossi wrote:
On 10/9/24 6:53 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
[snip...]
load_eckd_segments() returns a value of type block_number_t which is an
unsigned type, so returning a negative error value will likely not work as
expe
>-Original Message-
>From: Jason Wang
>Subject: Re: [PATCH] intel_iommu: Remove Transient Mapping (TM) field
>from second-level page-tables
>
>On Mon, Sep 30, 2024 at 2:56 PM Zhenzhong Duan
> wrote:
>>
>> VT-d spec removed Transient Mapping (TM) field from second-level page-
>tables
>> a
bind() is called without checking its return value. Add an assert for it.
Signed-off-by: Kunwu
---
tests/unit/test-io-channel-socket.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/tests/unit/test-io-channel-socket.c
b/tests/unit/test-io-channel-socket.c
index b964bb202d..dc7
Add Get/Set Response Message Limit commands.
Signed-off-by: Davidlohr Bueso
---
hw/cxl/cxl-mailbox-utils.c | 68 --
1 file changed, 65 insertions(+), 3 deletions(-)
diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index c2d776bc96eb..98416
On 10/9/24 5:46 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
From: Jared Rossi
Remove panic-on-error from IPL ISO El Torito specific functions so
that error
recovery may be possible in the future.
Functions that would previously panic now provide a return code.
On 10/9/24 7:35 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
From: Jared Rossi
Remove panic-on-error from Netboot specific functions so that error
recovery
may be possible in the future.
Functions that would previously panic now provide a return code.
Signed-o
On 10/9/24 8:48 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
From: Jared Rossi
Add two new qtests to verify that a valid IPL device can successfully
boot after
failed IPL attempts from one or more invalid devices.
cdrom-test/as-fallback-device: Defines the prim
On 10/9/24 7:18 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
+ if (!vs_io_assert(virtio_run(vdev, VR_REQUEST, cmd) == 0, title)) {
+ puts(title);
Should there be a "return" with a non-0 value here? ...
+ }
+
+ return 0;
}
/* SCSI protocol i
On 10/9/24 6:53 AM, Thomas Huth wrote:
On 08/10/2024 03.15, jro...@linux.ibm.com wrote:
[snip...]
load_eckd_segments() returns a value of type block_number_t which is
an unsigned type, so returning a negative error value will likely not
work as expected...
...
@@ -317,21 +352,28 @@ stat
On 10/9/24 16:05, Pierrick Bouvier wrote:
@@ -720,13 +728,10 @@ static void tlb_flush_range_locked(CPUState *cpu, int
midx,
return;
}
+ tlbfast_flush_range_locked(d, f, addr, len, mask);
+
for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
vaddr page = addr +
> And I think this series is ready to merge once the tree re-opens.
Hi, is there any remaining blocker toward merging the patch set?
On 10/9/24 10:10, Richard Henderson wrote:
On 10/9/24 09:27, BALATON Zoltan wrote:
On Wed, 9 Oct 2024, Richard Henderson wrote:
Based-on: 20241009000453.315652-1-richard.hender...@linaro.org
("[PATCH v3 00/20] accel/tcg: Introduce tlb_fill_align hook")
The initial idea was: how much can we do
On 10/9/24 08:08, Richard Henderson wrote:
Now that all targets have been converted to tlb_fill_align,
remove the tlb_fill hook.
Signed-off-by: Richard Henderson
---
include/hw/core/tcg-cpu-ops.h | 10 --
accel/tcg/cputlb.c| 19 ---
2 files changed, 4 ins
On 10/9/24 08:08, Richard Henderson wrote:
This array is now write-only, and may be removed.
Signed-off-by: Richard Henderson
---
include/hw/core/cpu.h | 1 -
accel/tcg/cputlb.c| 39 ---
2 files changed, 8 insertions(+), 32 deletions(-)
diff --git a/
On 10/9/24 08:08, Richard Henderson wrote:
Link from the fast tlb entry to the interval tree node.
Signed-off-by: Richard Henderson
---
include/exec/tlb-common.h | 2 ++
accel/tcg/cputlb.c| 59 ++-
2 files changed, 23 insertions(+), 38 deletions(
On 10/9/24 08:08, Richard Henderson wrote:
Because translation is special, we don't need the speed
of the direct-mapped softmmu tlb. We cache lookups in
DisasContextBase within the translator loop anyway.
Drop the addr_code comparator from CPUTLBEntry.
Go directly to the IntervalTree for MMU_
On Sun, 06 Oct 2024 08:17:58 +0100 David Woodhouse wrote:
> +config PTP_1588_CLOCK_VMCLOCK
> + tristate "Virtual machine PTP clock"
> + depends on X86_TSC || ARM_ARCH_TIMER
> + depends on PTP_1588_CLOCK && ACPI && ARCH_SUPPORTS_INT128
> + default y
Why default to enabled? Linus wil
On 10/9/24 08:08, Richard Henderson wrote:
Ensure a common entry point for all code lookups.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 59ee766d51..61daa89e0
On 10/9/24 08:08, Richard Henderson wrote:
Remove force_mmio and place the expression into the IF
expression, behind the short-circuit logic expressions
that might eliminate its computation.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 12
1 file changed, 8 insertio
On 10/9/24 08:08, Richard Henderson wrote:
CPUTLBEntryFull structures are no longer directly included within
the CPUState structure. Move the structure definition out of cpu.h
to reduce visibility.
Signed-off-by: Richard Henderson
---
include/exec/tlb-common.h | 63 ++
On 10/9/24 08:08, Richard Henderson wrote:
This has been functionally replaced by the IntervalTree.
Signed-off-by: Richard Henderson
---
include/hw/core/cpu.h | 8 --
accel/tcg/cputlb.c| 64 ---
2 files changed, 72 deletions(-)
diff --git a/
On 10/9/24 08:08, Richard Henderson wrote:
Change from a linear search on the victim tlb
to a balanced binary tree search on the interval tree.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 62 +++---
1 file changed, 31 insertions(+), 31 de
On 10/9/24 08:08, Richard Henderson wrote:
Update the addr_write copy within an interval tree node.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 95f
On 10/9/24 08:08, Richard Henderson wrote:
Update the addr_write copy within each interval tree node.
Tidy the iteration within the other two loops as well.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 19 +++
1 file changed, 11 insertions(+), 8 deletions(-)
dif
On 10/9/24 08:08, Richard Henderson wrote:
Flush a masked range of pages from the IntervalTree cache.
When the mask is not used there is a redundant comparison,
but that is better than duplicating code at this point.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 25 +++
On 10/9/24 08:08, Richard Henderson wrote:
Flush a page from the IntervalTree cache.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d964e1b2e8..772656c
From: Hao Xiang
* DSA device open and close.
* DSA group contains multiple DSA devices.
* DSA group configure/start/stop/clean.
Signed-off-by: Hao Xiang
Signed-off-by: Bryan Zhang
Signed-off-by: Yichen Wang
---
include/qemu/dsa.h | 103 +
util/dsa.c | 282
From: Hao Xiang
* Create a dedicated thread for DSA task completion.
* The DSA completion thread runs a loop and polls for completed tasks.
* Start and stop the DSA completion thread during DSA device start/stop.
User space applications can directly submit tasks to the Intel DSA
accelerator by writing to DSA's
On 10/9/24 08:08, Richard Henderson wrote:
Add or replace an entry in the IntervalTree for each
page installed into softmmu. We do not yet use the
tree for anything else.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 34 --
1 file changed, 28 inse
From: Hao Xiang
Multifd sender path gets an array of pages queued by the migration
thread. It performs zero page checking on every page in the array.
The pages are classified as either a zero page or a normal page. This
change uses Intel DSA to offload the zero page checking from CPU to
the DSA ac
From: Hao Xiang
* Use a thread-safe queue for DSA task enqueue/dequeue.
* Implement DSA task submission.
* Implement DSA batch task submission.
Signed-off-by: Hao Xiang
Signed-off-by: Yichen Wang
---
include/qemu/dsa.h | 29 +++
util/dsa.c | 202 ++
From: Hao Xiang
Intel DSA offloading is an optional feature that is enabled when the
proper hardware and software stack is available. Turn on DSA
offloading in multifd live migration by setting:
zero-page-detection=dsa-accel
dsa-accel-path=[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]
This fea
From: Hao Xiang
* Add test case to start and complete multifd live migration with DSA
offloading enabled.
* Add test case to start and cancel multifd live migration with DSA
offloading enabled.
Signed-off-by: Bryan Zhang
Signed-off-by: Hao Xiang
Signed-off-by: Yichen Wang
---
tests/qtest/mig
From: Hao Xiang
Create DSA tasks with operation code DSA_OPCODE_COMPVAL.
Here we create two types of DSA tasks: a single DSA task and
a batch DSA task. A batch DSA task reduces task submission overhead
and hence should be the default option. However, due to the way the DSA
hardware works, a DSA batch ta
From: Hao Xiang
During live migration, if the latency between sender and receiver is
high and bandwidth is also high (a long and fat pipe), using a bigger
packet size can help reduce total migration time. The current multifd
packet size is 128 * 4kb. In addition, Intel DSA offloading performs
bet
From: Hao Xiang
* Add a DSA task completion callback.
* The DSA completion thread will call the task's completion callback
on every task/batch task completion.
* Make the DSA submission path wait for completion.
* Implement CPU fallback if DSA is not able to complete the task.
Signed-off-by: Hao Xiang
S
v6
* Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
* Refactor code to have clean history on all commits;
* Add comments on DSA specific defines about how the value is picked;
* Address all comments from v5 reviews about api defines, questions, etc.;
v5
* Rebase on top of 39a032cea23e5
From: Hao Xiang
* Test DSA start and stop path.
* Test DSA configure and cleanup path.
* Test DSA task submission and completion path.
Signed-off-by: Bryan Zhang
Signed-off-by: Hao Xiang
Signed-off-by: Yichen Wang
---
tests/unit/meson.build | 6 +
tests/unit/test-dsa.c | 503 +
Signed-off-by: Yichen Wang
---
scripts/update-linux-headers.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/update-linux-headers.sh b/scripts/update-linux-headers.sh
index c34ac6454e..5aba95d9cb 100755
--- a/scripts/update-linux-headers.sh
+++ b/scripts/update-linux
From: Hao Xiang
Enable instruction set enqcmd in build.
Signed-off-by: Hao Xiang
Signed-off-by: Yichen Wang
---
meson.build | 14 ++
meson_options.txt | 2 ++
scripts/meson-buildoptions.sh | 3 +++
3 files changed, 19 insertions(+)
diff --git a/mes
On 10/9/24 08:08, Richard Henderson wrote:
Add the data structures for tracking softmmu pages via
a balanced interval tree. So far, only initialize and
destroy the data structure.
Signed-off-by: Richard Henderson
---
include/hw/core/cpu.h | 3 +++
accel/tcg/cputlb.c| 11 +++
2
On 10/9/24 08:08, Richard Henderson wrote:
We expect masked address spaces to be quite large, e.g. 56 bits
for AArch64 top-byte-ignore mode. We do not expect addr+len to
wrap around, but it is possible with AArch64 guest flush range
instructions.
Convert this unlikely case to a full tlb flush.
On 10/9/24 08:08, Richard Henderson wrote:
Probably never happens, but next patches will assume non-zero length.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fd8da8586f..93b42d1
On 10/9/24 08:08, Richard Henderson wrote:
The INVALID bit should only be auto-cleared when we have
just called tlb_fill, not along the victim_tlb_hit path.
In atomic_mmu_lookup, rename tlb_addr to flags, as that
is what we're actually carrying around.
Signed-off-by: Richard Henderson
---
ac
Signed-off-by: Atish Patra
---
target/riscv/cpu.h | 25 +
1 file changed, 25 insertions(+)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 2ac391a7cf74..53426710f73e 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -189,6 +189,28 @@ typedef struct PM
The current PMU events are defined by the SBI PMU
specification. As there is no standard event encoding
scheme, the virt machine chooses to use the SBI PMU encoding.
A platform may choose to implement a completely different
event encoding scheme.
Rename the event names to reflect reality.
No functio
The pmu implementation requires hashtable lookup operations sprinkled
throughout the file. Add a helper function that consolidates the
implementation and makes it easy to extend in the future.
Signed-off-by: Atish Patra
---
target/riscv/pmu.c | 56 ++
If the software programs an invalid hpmevent or selects a invalid
counter mapping, the hashtable entry should be updated accordingly.
Otherwise, the user may get stale value from the old mapped counter.
Signed-off-by: Atish Patra
---
target/riscv/pmu.c | 39 +
Add a read/write lock to protect the hashtable access operations
in multi-threaded scenarios.
Signed-off-by: Atish Patra
---
target/riscv/cpu.h | 1 +
target/riscv/pmu.c | 10 +-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index
As per the latest privileged specification v1.13[1], sscofpmf
only reserves the first 8 bits of hpmeventX. Update the corresponding
masks accordingly.
[1]https://github.com/riscv/riscv-isa-manual/issues/1578
Signed-off-by: Atish Patra
---
target/riscv/cpu_bits.h | 4 ++--
1 file changed, 2 inser
The virt machine implementation relies on the SBI PMU extension.
The OpenSBI implementation requires a PMU-specific DT node that
currently encodes the counter and PMU event mapping.
As the PMU DT node encodes platform specific event encodings,
it should be implemented in platform specific cod
The virt PMU related implementation should belong to the virt
machine file rather than the common pmu.c, which can be used
for other implementations.
Make pmu.c generic by moving all the virt PMU event related
structures to their appropriate place.
Signed-off-by: Atish Patra
---
hw/riscv/virt.c| 81 +++
We now have TLB related event callbacks available. Invoke
them from the generic cpu helper code so that other machines can
implement them as well in the future. The virt machine is
the only user for now, though.
Signed-off-by: Atish Patra
---
target/riscv/cpu_helper.c | 21 +++--
tar
The event ID can be up to a 56-bit value when sscofpmf is implemented.
Change the event-to-counter hashtable to store the keys as 64-bit
values instead of uint.
Signed-off-by: Atish Patra
---
target/riscv/pmu.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/targ
Hello,
Michael Tokarev, le sam. 05 oct. 2024 10:07:53 +0300, a ecrit:
> libslirp introduced new typedef after 4.8.0, slirp_os_socket, which
> is defined to SOCKET on windows, which, in turn, is a 64bit number.
> qemu uses int, so callback function prototypes changed.
I have fixed the code in upst
On 10/9/24 08:08, Richard Henderson wrote:
While this may at present be overly complicated for use
by single page flushes, do so with the expectation that
this will eventually allow simplification of large pages.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 61 +++
On 10/9/24 08:08, Richard Henderson wrote:
Often we already have the CPUTLBDescFast structure pointer.
Allows future code simplification.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/accel/tcg/cp
On 10/9/24 08:08, Richard Henderson wrote:
We will have a need to flush only the "fast" portion
of the tlb, allowing re-fill from the "full" portion.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/accel/tcg
On 10/9/24 08:08, Richard Henderson wrote:
Provide a general-purpose release-all-nodes operation that allows
the IntervalTreeNode to be embedded within a larger structure.
Signed-off-by: Richard Henderson
---
include/qemu/interval-tree.h | 11 +++
util/interval-tree.c | 2
On 9/25/24 13:48, Pierrick Bouvier wrote:
Contrib plugins have been built out of tree so far, thanks to a Makefile.
However, it is quite inconvenient for maintenance, as we may break them,
especially for specific architectures.
First patches are fixing warnings for existing plugins, then we add
Peter Xu writes:
> On Wed, Oct 09, 2024 at 04:18:31PM -0400, Steven Sistare wrote:
>> Yes, I am also brainstorming along these lines, looking for more gotcha's,
>> but its a big design change. I don't love it so far.
>>
>> These issues all creep in because of transfer mode. Exec mode did not ha
On Tue, 2024-10-08 at 11:17 -0700, Richard Henderson wrote:
> On 10/5/24 13:35, Ilya Leoshkevich wrote:
> > > How can we handle the long-running syscalls?
> > > Just waiting sounds unsatisfying.
> > > Sending a reserved host signal may alter the guest's behaviour if
> > > a
> > > syscall like pause
On 10/9/24 02:04, Richard Henderson wrote:
Convert hppa_cpu_tlb_fill to hppa_cpu_tlb_fill_align so that we
can recognize alignment exceptions in the correct priority order.
Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=219339
Signed-off-by: Richard Henderson
Reviewed-by: Helge Deller
On 10/9/24 02:04, Richard Henderson wrote:
When we have a tlb miss, defer the alignment check to
the new tlb_fill_align hook. Move the existing alignment
check so that we only perform it with a tlb hit.
Signed-off-by: Richard Henderson
Reviewed-by: Helge Deller
On Wed, Oct 09, 2024 at 04:18:31PM -0400, Steven Sistare wrote:
> Yes, I am also brainstorming along these lines, looking for more gotcha's,
> but its a big design change. I don't love it so far.
>
> These issues all creep in because of transfer mode. Exec mode did not have
> this
> problem, as
Vladimir Sementsov-Ogievskiy writes:
> On 30.09.24 17:07, Andrey Drobyshev wrote:
>> On 9/30/24 12:25 PM, Vladimir Sementsov-Ogievskiy wrote:
>>> [add migration maintainers]
>>>
>>> On 24.09.24 15:56, Andrey Drobyshev wrote:
[...]
>>>
>>> I doubt that this a correct way to go.
>>>
>>> As far
On Wed, Oct 09, 2024 at 04:09:45PM -0400, Steven Sistare wrote:
> On 10/9/2024 3:06 PM, Peter Xu wrote:
> > On Wed, Oct 09, 2024 at 02:43:44PM -0400, Steven Sistare wrote:
> > > On 10/8/2024 3:48 PM, Peter Xu wrote:
> > > > On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
> > > > > As
On 10/9/2024 3:59 PM, Peter Xu wrote:
On Wed, Oct 09, 2024 at 03:06:53PM -0400, Peter Xu wrote:
On Wed, Oct 09, 2024 at 02:43:44PM -0400, Steven Sistare wrote:
On 10/8/2024 3:48 PM, Peter Xu wrote:
On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
As of half an hour ago =) We cou
On 10/9/2024 3:06 PM, Peter Xu wrote:
On Wed, Oct 09, 2024 at 02:43:44PM -0400, Steven Sistare wrote:
On 10/8/2024 3:48 PM, Peter Xu wrote:
On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
As of half an hour ago =) We could put a feature branch up and work
together, if you have m
On Wed, Oct 09, 2024 at 03:06:53PM -0400, Peter Xu wrote:
> On Wed, Oct 09, 2024 at 02:43:44PM -0400, Steven Sistare wrote:
> > On 10/8/2024 3:48 PM, Peter Xu wrote:
> > > On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
> > > > As of half an hour ago =) We could put a feature branch
On 10/9/24 02:04, Richard Henderson wrote:
Add a new callback to handle softmmu paging. Return the page
details directly, instead of passing them indirectly to
tlb_set_page. Handle alignment simultaneously with paging so
that faults are handled with target-specific priority.
Route all calls th
On Tue, 8 Oct 2024 at 15:16, Thomas Huth wrote:
>
> On 08/10/2024 16.13, Peter Maydell wrote:
> > The qmp-cmd-test test takes typically about 15s on my local machine.
> > On the k8s runners it takes usually 20s but sometimes about 60s,
> > because the k8s runners have wildly variable execution tim
On Tue, 8 Oct 2024 at 19:51, Richard Henderson
wrote:
>
> The following changes since commit 2af37e791906cfda42cb9604a16d218e56994bb1:
>
> Merge tag 'pull-request-2024-10-07' of https://gitlab.com/thuth/qemu into
> staging (2024-10-07 12:55:02 +0100)
>
> are available in the Git repository at:
On Wed, 9 Oct 2024 at 09:39, wrote:
>
> From: Marc-André Lureau
>
> The following changes since commit 2af37e791906cfda42cb9604a16d218e56994bb1:
>
> Merge tag 'pull-request-2024-10-07' of https://gitlab.com/thuth/qemu into
> staging (2024-10-07 12:55:02 +0100)
>
> are available in the Git repo
On Wed, Oct 09, 2024 at 02:43:44PM -0400, Steven Sistare wrote:
> On 10/8/2024 3:48 PM, Peter Xu wrote:
> > On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
> > > As of half an hour ago =) We could put a feature branch up and work
> > > together, if you have more concrete thoughts on
On 8/10/24 21:04, Richard Henderson wrote:
Split out of mmu_lookup.
Reviewed-by: Helge Deller
Reviewed-by: Peter Maydell
Signed-off-by: Richard Henderson
---
include/exec/memop.h | 24
accel/tcg/cputlb.c | 16 ++--
2 files changed, 26 insertions(+),
On 9/10/24 12:08, Richard Henderson wrote:
We will have a need to flush only the "fast" portion
of the tlb, allowing re-fill from the "full" portion.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
Reviewed-by: Philippe
On 9/10/24 12:08, Richard Henderson wrote:
Probably never happens, but next patches will assume non-zero length.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fd8da8586f..93b42d1
On 9/10/24 12:08, Richard Henderson wrote:
Ensure a common entry point for all code lookups.
Signed-off-by: Richard Henderson
---
accel/tcg/cputlb.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé
On 10/8/2024 3:48 PM, Peter Xu wrote:
On Tue, Oct 08, 2024 at 04:11:38PM -0300, Fabiano Rosas wrote:
As of half an hour ago =) We could put a feature branch up and work
together, if you have more concrete thoughts on how this would look like
let me know.
[I'll hijack this thread with one more
Hi Roman
On Wed, Oct 9, 2024 at 9:47 PM Roman Penyaev wrote:
> Mux is a character backend (host side) device, which multiplexes
> multiple frontends with one backend device. The following is a
> few lines from the QEMU manpage [1]:
>
> A multiplexer is a "1:N" device, and here the "1" end is y