ping
Reminder: this is to support secure (measured) boot with AMD SEV with
QEMU's -kernel/-initrd/-append switches.
The OVMF side of the implementation is under review (with some changes
requested), but so far no functional changes are expected from the QEMU
side, on top of this proposed patch.
Use a local variable instead of referencing BlockCopyState through a
BlockCopyCallState or BlockCopyTask every time.
This is in preparation for the next patches.
No functional change intended.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-copy.c | 14 --
1 file changed, 8 in
This series of patches aims to reduce the usage of the
AioContext lock in block-copy, by introducing smaller-granularity
locks, thus making the block layer thread safe.
This series depends on my previous series that brings thread safety
to the smaller APIs used by block-copy, like ratelimit, progress
From: Paolo Bonzini
Put the logic to determine the copy size in a separate function, so
that there is a simple state machine for the possible methods of
copying data from one BlockDriverState to the other.
Use .method instead of .copy_range as in-out argument, and
include also .zeroes as an addi
Moving this function into task_end ensures that the progress is
updated anyway, even if there is an error.
It also helps in the next patch, allowing task_end to have only
one critical section.
Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-copy.c | 6 +
As done in BlockCopyCallState, categorize BlockCopyTask
and BlockCopyState into IN, State and OUT fields.
This is just to understand which fields have to be protected with a lock.
.sleep_state is handled in the series "coroutine: new sleep/wake API"
and thus here left as TODO.
Signed-off-by: Emanuele
Add a CoMutex to protect concurrent access to block-copy
data structures.
This mutex also protects .copy_bitmap, because its thread-safe
API does not prevent it from assigning two tasks to the same
bitmap region.
.finished, .cancelled and reads to .ret and .error_is_read will be
protected in the
By adding acquire/release pairs, we ensure that .ret and .error_is_read
fields are written by block_copy_dirty_clusters before .finished is true.
The atomics here are necessary because the fields are also modified
concurrently outside coroutines.
Signed-off-by: Emanuele Giuseppe Esposito
---
blo
On 14/06/2021 06:42, Philippe Mathieu-Daudé wrote:
On 6/13/21 12:26 PM, Mark Cave-Ayland wrote:
Commit 4e78f3bf35 "esp: defer command completion interrupt on incoming data
transfers" added a version check for use with VMSTATE_*_TEST macros to allow
migration from older QEMU versions. Unfortunat
Gerd Hoffmann writes:
> Add QAPI schema for the module info database.
>
> Signed-off-by: Gerd Hoffmann
> ---
> qapi/meson.build | 1 +
> qapi/modules.json | 36
> qapi/qapi-schema.json | 1 +
> 3 files changed, 38 insertions(+)
> create mode 1006
On 14/06/2021 06:36, Philippe Mathieu-Daudé wrote:
Cc'ing Finn & Laurent.
On 6/13/21 6:37 PM, Mark Cave-Ayland wrote:
Here is the next set of patches from my attempts to boot MacOS under QEMU's
Q800 machine related to the Sonic network adapter.
Patches 1 and 2 sort out checkpatch and convert
From: Paolo Bonzini
Both users of RateLimit, block-copy.c and blockjob.c, treat
a speed of zero as unlimited, while RateLimit treats it as
"as slow as possible". The latter is nicer from the code
point of view but pretty useless, so disable rate limiting
if a speed of zero is provided.
Reviewed
From: Paolo Bonzini
Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 13 +++--
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index dc1d9e0e46..22e5bb9b1f 100644
--
From: Paolo Bonzini
Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-copy.c | 28 +++-
1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.
This series of patches brings thread safety to the smaller APIs used by
block-copy, namely ratelimit, progressmeter, co-shared-resource
and aiotask.
The end goal is to reduce the usage of the AioContext lock in block-copy,
by introducing smaller-granularity locks, thus making the block layer
thread safe.
co-shared-resource is currently not thread-safe, as also reported
in co-shared-resource.h. Add a QemuMutex because co_try_get_from_shres
can also be invoked from non-coroutine context.
Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/co-shared
On 13/06/21 12:40, Mark Cave-Ayland wrote:
Unfortunately the VMSTATE_*_V() macros don't work in ESPState because
ESPState is currently embedded in both sysbusespscsi and pciespscsi
using VMSTATE_STRUCT() where the version of the vmstate_esp
VMStateDescription does not match those in the vmsta
Progressmeter is protected by the AioContext mutex, which
is taken by the block jobs and their caller (like blockdev).
We would like to remove the dependency of block layer code on the
AioContext mutex, since most drivers and the core I/O code are already
not relying on it.
Create a new C file to
On 13/06/21 12:26, Mark Cave-Ayland wrote:
Commit 4e78f3bf35 "esp: defer command completion interrupt on incoming data
transfers" added a version check for use with VMSTATE_*_TEST macros to allow
migration from older QEMU versions. Unfortunately the version check fails to
work in its current form
On 14/06/2021 10:11, Emanuele Giuseppe Esposito wrote:
This series of patches brings thread safety to the smaller APIs used by
block-copy, namely ratelimit, progressmeter, co-shared-resource
and aiotask.
The end goal is to reduce the usage of the AioContext lock in block-copy,
by introducing smaller g
From: Paolo Bonzini
Both users of RateLimit, block-copy.c and blockjob.c, treat
a speed of zero as unlimited, while RateLimit treats it as
"as slow as possible". The latter is nicer from the code
point of view but pretty useless, so disable rate limiting
if a speed of zero is provided.
Reviewed
On Mon, 7 Jun 2021 21:03:06 +
Eric DeVolder wrote:
> Igor,
> Thanks for the information/feedback. I am working to implement all your
> suggestions; from my perspective, there were two big changes requested, and
> the use of hostmem-file was the first, and the conversion to PCI the second.
From: Paolo Bonzini
Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-copy.c | 28 +++-
1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.
Please discard this thread, I had an issue with git send-email and patch
3-5 are missing.
Thank you,
Emanuele
On 14/06/2021 10:08, Emanuele Giuseppe Esposito wrote:
This series of patches brings thread safety to the smaller APIs used by
block-copy, namely ratelimit, progressmeter, co-shared-reso
Stefan Berger writes:
> Cc: M: Michael S. Tsirkin
Pasto; drop the "M: ".
> Cc: Igor Mammedov
> Signed-off-by: Stefan Berger
This series of patches brings thread safety to the smaller APIs used by
block-copy, namely ratelimit, progressmeter, co-shared-resource
and aiotask.
The end goal is to reduce the usage of the AioContext lock in block-copy,
by introducing smaller-granularity locks, thus making the block layer
thread safe.
When qemu_coroutine_enter is executed in a loop
(even QEMU_FOREACH_SAFE), the new routine can modify the list,
for example removing an element, causing problems when control
is given back to the caller, which continues iterating over the same list.
Patch 1 solves the issue in blkdebug_debug_resume by
Extract to a separate function. Do not rely on FOREACH_SAFE, which is
only "safe" if the *current* node is removed---not if another node is
removed. Instead, just walk the entire list from the beginning when
asked to resume all suspended requests with a given tag.
Co-developed-by: Paolo Bonzini
Add a counter for each action that a rule can trigger.
This is mainly used to keep track of how many coroutine_yield()
we need to perform after processing all rules in the list.
Co-developed-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
--
That would be unsafe in case a rule other than the current one
is removed while the coroutine has yielded.
Keep FOREACH_SAFE because suspend_request deletes the current rule.
After this patch, *all* matching rules are deleted before suspending
the coroutine, rather than just one.
This doesn't affe
On 10/06/21 15:13, Daniel P. Berrangé wrote:
On Thu, Jun 10, 2021 at 03:04:24PM +0200, Gerd Hoffmann wrote:
Hi Paolo,
+if config_host.has_key('CONFIG_MODULES')
+ qemu_modinfo = executable('qemu-modinfo', files('qemu-modinfo.c') + genh,
+ dependencies: [glib, qe
We want to move qemu_coroutine_yield() after the loop on rules,
because QLIST_FOREACH_SAFE is wrong if the rule list is modified
while the coroutine has yielded. Therefore move the suspended
request to the heap and clean it up from the remove side.
All that is left is for blkdebug_debug_event to h
Stefan Berger writes:
> The following patches entirely eliminate TPM related code if CONFIG_TPM
> is not set.
>
> Stefan
I believe this is on top of Philippe's "[PATCH v2 2/2] tpm: Return QMP
error when TPM is disabled in build"
Based-on: <20210609184955.1193081-3-phi...@redhat.com>
However,
There seems to be no benefit in using a field. Replace it with a local
variable, and move the state update before the yields.
The state update has to be done before the yields because, now that
a local variable is used, the updated state would not otherwise be
visible to the other yields.
Signed-off-by:
Signed-off-by: Richard Henderson
---
tcg/aarch64/tcg-target.c.inc | 12
1 file changed, 12 insertions(+)
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 27cde314a9..f72218b036 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
On Mon, Jun 14, 2021 at 07:26:23AM +0200, Philippe Mathieu-Daudé wrote:
> Commit 7de2e856533 made migration/qemu-file-channel.c include
> "io/channel-tls.h" but forgot to add the new GNUTLS dependency
> on Meson, leading to build failure on OSX:
>
> [2/35] Compiling C object libmigration.fa.p/mi
First, categorize the structure fields to identify what needs
to be protected and what doesn't.
We essentially need to protect only .state, and the 3 lists in
BDRVBlkdebugState.
Then, add the lock and mark the functions accordingly.
Co-developed-by: Paolo Bonzini
Signed-off-by: Emanuele Giusepp
This has been on my to-do list for several years, and I've
finally spent a rainy weekend doing something about it.
The current tcg bswap opcode is fairly strict: for swaps smaller
than the TCGv size, it requires zero-extended input and provides
zero-extended output.
This has meant that various tc
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 28
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 64c24382a8..f0e42e4b88 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg
Merge tcg_out_bswap16 and tcg_out_bswap16s. Use the flags
in the internal uses for loads and stores.
Signed-off-by: Richard Henderson
---
tcg/mips/tcg-target.c.inc | 60 ++-
1 file changed, 28 insertions(+), 32 deletions(-)
diff --git a/tcg/mips/tcg-target.c
We will shortly require sari in another context;
split both out for cleanliness' sake.
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 15 ---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index aa35ff8
This will eventually simplify front-end usage, and will allow
backends to unset TCG_TARGET_HAS_MEMORY_BSWAP without loss of
optimization.
The argument is added during expansion, not currently exposed
to the front end translators. Non-zero values are not yet
supported by any backends.
Signed-off-
Notice when the input is known to be zero-extended and force
the TCG_BSWAP_IZ flag on. Honor the TCG_BSWAP_OS bit during
constant folding. Propagate the input to the output mask.
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 56 +-
1 file
For INDEX_op_bswap32_i32, pass 0 for flags: input not zero-extended,
output does not need extension within the host 64-bit register.
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 38 +-
1 file changed, 25 insertions(+), 13 deletions(-)
diff
Retain the current rorw bswap16 expansion for the zero-in/zero-out case.
Otherwise, perform a wider bswap plus a right-shift or extend.
Signed-off-by: Richard Henderson
---
tcg/i386/tcg-target.c.inc | 20 +++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/tcg/i386
With the use of a suitable temporary, we can use the same
algorithm when src overlaps dst. The result is the same
number of instructions either way.
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 26 +++---
1 file changed, 11 insertions(+), 15 deletions(-)
The existing interpreter zero-extends, ignoring high bits.
Simply add a separate sign-extension opcode if required.
Ensure that the interpreter supports ext16s when bswap16 is enabled.
Signed-off-by: Richard Henderson
---
tcg/tci.c| 3 ++-
tcg/tci/tcg-target.c.inc | 23 +
Use a break instead of an ifdefed else.
There's no need to move the values through s->T0.
Remove TCG_BSWAP_IZ and the preceding zero-extension.
Cc: Paolo Bonzini
Cc: Eduardo Habkost
Signed-off-by: Richard Henderson
---
target/i386/tcg/translate.c | 14 --
1 file changed, 4 insertio
Combine the three bswap16 routines, and differentiate via the flags.
Use the correct flags combination from the load/store routines, and
pass along the constant parameter from tcg_out_op.
Signed-off-by: Richard Henderson
---
tcg/arm/tcg-target.c.inc | 78
By removing TCG_BSWAP_IZ we indicate that the input is
not zero-extended, and thus can remove an explicit extend.
By removing TCG_BSWAP_OZ, we allow the implementation to
leave high bits set, which will be ignored by the store.
Signed-off-by: Richard Henderson
---
tcg/tcg-op.c | 9 +++--
1 f
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 34 ++
1 file changed, 34 insertions(+)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index e868417168..af87643f54 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.i
Implement the new semantics in the fallback expansion.
Change all callers to supply the flags that keep the
semantics unchanged locally.
Signed-off-by: Richard Henderson
---
include/tcg/tcg-op.h| 8 +--
target/arm/translate-a64.c | 12 ++--
target/arm/translate.c |
We will shortly require these in another context;
make the expansion as clear as possible.
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 31 +--
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-targ
Remove TCG_BSWAP_IZ and the preceding zero-extension.
Cc: Yoshinori Sato
Signed-off-by: Richard Henderson
---
target/sh4/translate.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index 147219759b..f45515952f 100644
--- a/ta
For INDEX_op_bswap16_i64, use 64-bit instructions so that we can
easily provide the extension to 64-bits. Drop the special case,
previously used, where the input is already zero-extended -- the
minor code size savings is not worth the complication.
Signed-off-by: Richard Henderson
---
tcg/s390/
The memory bswap support in the aarch64 backend merely dates from
a time when it was required. There is nothing special about the
backend support that could not have been provided by the middle-end
even prior to the introduction of the bswap flags.
Signed-off-by: Richard Henderson
---
tcg/aarch
Signed-off-by: Richard Henderson
---
tcg/ppc/tcg-target.c.inc | 51 +---
1 file changed, 21 insertions(+), 30 deletions(-)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index f0e42e4b88..690c77b4da 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b
For the sf version, we are performing two 32-bit bswaps
in either half of the register. This is equivalent to
performing one 64-bit bswap followed by a rotate.
For the non-sf version, we can remove TCG_BSWAP_IZ
and the preceding zero-extension.
Cc: Peter Maydell
Signed-off-by: Richard Henderson
Now that the middle-end can replicate the same tricks as tcg/arm
used for optimizing bswap for signed loads and for stores, do not
pretend to have these memory ops in the backend.
Signed-off-by: Richard Henderson
---
tcg/arm/tcg-target.h | 2 +-
tcg/arm/tcg-target.c.inc | 214 +
Merge tcg_out_bswap32 and tcg_out_bswap32s. Use the flags
in the internal uses for loads and stores.
Signed-off-by: Richard Henderson
---
tcg/mips/tcg-target.c.inc | 39 ---
1 file changed, 16 insertions(+), 23 deletions(-)
diff --git a/tcg/mips/tcg-target.c
TCG_TARGET_HAS_MEMORY_BSWAP is already unset for this backend,
which means that MO_BSWAP will be handled by the middle-end and
will never be seen by the backend. Thus the indexes used with
qemu_{ld,st}_helpers will always be zero.
Tidy the comments and asserts in tcg_out_qemu_{ld,st}_direct.
It is not
We can perform any required sign-extension via TCG_BSWAP_OS.
Signed-off-by: Richard Henderson
---
tcg/tcg-op.c | 24 ++--
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 3763285bb0..702da7afb7 100644
--- a/tcg/tcg-op.c
+++ b/t
On 04/06/2021 11:17, Emanuele Giuseppe Esposito wrote:
This series adds the option to attach gdbserver and valgrind
to the QEMU binary running in qemu_iotests.
It also allows to redirect QEMU binaries output of the python tests
to the stdout, instead of a log file.
Patches 1-9 introduce the -
On 6/14/21 9:44 AM, Mark Cave-Ayland wrote:
> On 14/06/2021 06:42, Philippe Mathieu-Daudé wrote:
>
>> On 6/13/21 12:26 PM, Mark Cave-Ayland wrote:
>>> Commit 4e78f3bf35 "esp: defer command completion interrupt on
>>> incoming data
>>> transfers" added a version check for use with VMSTATE_*_TEST ma
We can eliminate the requirement for a zero-extended output,
because the following store will ignore any garbage high bits.
Cc: Peter Maydell
Signed-off-by: Richard Henderson
---
target/arm/translate-a64.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/target/arm/tra
Define the new system registers that MTE introduces and context switch
them. The MTE feature is still hidden from the ID register as it isn't
supported in a VM yet.
Reviewed-by: Catalin Marinas
Signed-off-by: Steven Price
---
arch/arm64/include/asm/kvm_arm.h | 3 +-
arch/arm64/includ
The VMM may not wish to have its own mapping of guest memory mapped
with PROT_MTE because this causes problems if the VMM has tag checking
enabled (the guest controls the tags in physical RAM and it's unlikely
the tags are correct for the VMM).
Instead add a new ioctl which allows the VMM to easi
There were two bugs here: (1) the required endianness was
not present in the MemOp, and (2) we were not providing a
zero-extended input to the bswap as semantics required.
The best fix is to fold the bswap into the memory operation,
producing the desired result directly.
Cc: Philippe Mathieu-Daud
A new capability (KVM_CAP_ARM_MTE) identifies that the kernel supports
granting a guest access to the tags, and provides a mechanism for the
VMM to enable it.
A new ioctl (KVM_ARM_MTE_COPY_TAGS) provides a simple way for a VMM to
access the tags of a guest without having to maintain a PROT_MTE map
The new bswap flags can implement the semantics exactly.
Cc: Peter Maydell
Signed-off-by: Richard Henderson
---
target/arm/translate.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 6b88163e3a..46d95d75ae 100644
---
mte_sync_tags() used test_and_set_bit() to set the PG_mte_tagged flag
before restoring/zeroing the MTE tags. However if another thread were to
race and attempt to sync the tags on the same page before the first
thread had completed restoring/zeroing then it would see the flag is
already set and con
A KVM guest could store tags in a page even if the VMM hasn't mapped
the page with PROT_MTE. So when restoring pages from swap we will
need to check to see if there are any saved tags even if !pte_tagged().
However don't check pages for which pte_access_permitted() returns false
as these will not
Am 14.06.21 um 07:26 schrieb Philippe Mathieu-Daudé:
Commit 7de2e856533 made migration/qemu-file-channel.c include
"io/channel-tls.h" but forgot to add the new GNUTLS dependency
on Meson, leading to build failure on OSX:
[2/35] Compiling C object libmigration.fa.p/migration_qemu-file-channel
Log instruction execution and memory access to a file.
This plugin can be used for reverse engineering or for side-channel analysis
using QEMU.
Signed-off-by: Alexandre Iooss
---
MAINTAINERS | 1 +
contrib/plugins/Makefile | 1 +
contrib/plugins/execlog.c | 112 +++
It's now safe for the VMM to enable MTE in a guest, so expose the
capability to user space.
Reviewed-by: Catalin Marinas
Signed-off-by: Steven Price
---
arch/arm64/kvm/arm.c | 9 +
arch/arm64/kvm/reset.c| 3 ++-
arch/arm64/kvm/sys_regs.c | 3 +++
3 files changed, 14 insertions(
This series adds support for using the Arm Memory Tagging Extensions
(MTE) in a KVM guest.
I realise there are still open questions[1] around the performance of
this series (the 'big lock', tag_sync_lock, introduced in the first
patch). But there should be no impact on non-MTE workloads and until
Add a new VM feature 'KVM_ARM_CAP_MTE' which enables memory tagging
for a VM. This will expose the feature to the guest and automatically
tag memory pages touched by the VM as PG_mte_tagged (and clear the tag
storage) to ensure that the guest cannot see stale tags, and so that
the tags are correctl
On 6/14/21 10:37 AM, Richard Henderson wrote:
> For the sf version, we are performing two 32-bit bswaps
> in either half of the register. This is equivalent to
> performing one 64-bit bswap followed by a rotate.
>
> For the non-sf version, we can remove TCG_BSWAP_IZ
> and the preceding zero-exten
On 6/14/21 10:37 AM, Richard Henderson wrote:
> This will eventually simplify front-end usage, and will allow
> backends to unset TCG_TARGET_HAS_MEMORY_BSWAP without loss of
> optimization.
>
> The argument is added during expansion, not currently exposed
> to the front end translators. Non-zero
On Mon, 14 Jun 2021 at 02:37, Richard Henderson
wrote:
>
> On 6/13/21 10:10 AM, Peter Maydell wrote:
> > Also on x86-64 host, this failure in check-tcg:
> >
> > make[2]: Leaving directory
> > '/home/petmay01/linaro/qemu-for-merges/build/all-linux-static/tests/tcg/hppa-linux-user'
> > make[2]: Ente
On 6/14/21 10:37 AM, Richard Henderson wrote:
> Merge tcg_out_bswap32 and tcg_out_bswap32s. Use the flags
> in the internal uses for loads and stores.
>
> Signed-off-by: Richard Henderson
> ---
> tcg/mips/tcg-target.c.inc | 39 ---
> 1 file changed, 16 insert
On 6/14/21 10:37 AM, Richard Henderson wrote:
> The existing interpreter zero-extends, ignoring high bits.
> Simply add a separate sign-extension opcode if required.
> Ensure that the interpreter supports ext16s when bswap16 is enabled.
>
> Signed-off-by: Richard Henderson
> ---
> tcg/tci.c
Richard Henderson writes:
> On 6/13/21 10:10 AM, Peter Maydell wrote:
>> Also on x86-64 host, this failure in check-tcg:
>> make[2]: Leaving directory
>> '/home/petmay01/linaro/qemu-for-merges/build/all-linux-static/tests/tcg/hppa-linux-user'
>> make[2]: Entering directory
>> '/home/petmay01/li
On 6/14/21 10:37 AM, Richard Henderson wrote:
> Implement the new semantics in the fallback expansion.
> Change all callers to supply the flags that keep the
> semantics unchanged locally.
>
> Signed-off-by: Richard Henderson
> ---
> include/tcg/tcg-op.h| 8 +--
> target/arm/transl
On 6/14/21 10:37 AM, Richard Henderson wrote:
> The new bswap flags can implement the semantics exactly.
>
> Cc: Peter Maydell
> Signed-off-by: Richard Henderson
> ---
> target/arm/translate.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé
On 6/12/21 3:21 AM, Stefan Berger wrote:
> Signed-off-by: Stefan Berger
> ---
> include/sysemu/tpm.h | 6 +-
> include/sysemu/tpm_backend.h | 6 +-
> 2 files changed, 10 insertions(+), 2 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé
> > > +VFIO_USER_DMA_UNMAP
> > > +---
> > > +
> > > +This command message is sent by the client to the server to inform
> > > +it that a DMA region, previously made available via a
> > > +VFIO_USER_DMA_MAP command message, is no longer available for
> DMA.
> > > +It typically occurs
On 6/12/21 3:21 AM, Stefan Berger wrote:
> Cc: M: Michael S. Tsirkin
> Cc: Igor Mammedov
> Signed-off-by: Stefan Berger
> ---
> hw/acpi/aml-build.c | 2 ++
> hw/arm/virt-acpi-build.c | 2 ++
> hw/i386/acpi-build.c | 20
> include/hw/acpi/tpm.h| 4
> stu
On 6/12/21 9:33 PM, BALATON Zoltan wrote:
> Hello,
>
> On Tue, 23 Jun 2020, Philippe Mathieu-Daudé wrote:
>> This is v2 of Zoltan's patch:
>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg714711.html
>>
>> - rebased
>> - added docstring
>> - include hw/misc/auxbus.c fix
>>
>> Supersedes: <
> > Are there rules for avoiding deadlock between client->server and
> > server->client messages? For example, the client sends
> > VFIO_USER_REGION_WRITE and the server sends
> VFIO_USER_VM_INTERRUPT
> > before replying to the write message.
> >
> > Multi-threaded clients and servers could end up
On Jun 13 17:29, Gollu Appalanaidu wrote:
On Wed, Jun 09, 2021 at 10:22:49PM +0200, Klaus Jensen wrote:
On Jun 1 20:32, Gollu Appalanaidu wrote:
Add the controller identifiers list CNS 0x13, available list of ctrls
in NVM Subsystem that may or may not be attached to namespaces.
In Identify Ct
On Mon, 14 Jun 2021, Philippe Mathieu-Daudé wrote:
On 6/12/21 9:33 PM, BALATON Zoltan wrote:
Hello,
On Tue, 23 Jun 2020, Philippe Mathieu-Daudé wrote:
This is v2 of Zoltan's patch:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg714711.html
- rebased
- added docstring
- include hw/misc/
> -Original Message-
> From: Stefan Hajnoczi
> Sent: 04 May 2021 14:52
> To: Thanos Makatos
> Cc: qemu-devel@nongnu.org; John Levon ;
> John G Johnson ;
> benjamin.wal...@intel.com; Elena Ufimtseva
> ; jag.ra...@oracle.com;
> james.r.har...@intel.com; Swapnil Ingle ;
> konrad.w...@orac
Following the APM2 I added some checks to
resolve the following tests in kvm-unit-tests for svm:
* vmrun_intercept_check
* asid_zero
* sel_cr0_bug
* CR0 CD=0,NW=1: a0010011
* CR0 63:32: 180010011
* CR0 63:32: 1080010011
* CR0 63:32: 10080010011
* CR0 63:32: 100080010011
* CR0 63
When the selective CR0 write intercept is set, all writes to bits in
CR0 other than CR0.TS or CR0.MP cause a VMEXIT.
Signed-off-by: Lara Lazier
---
target/i386/cpu.h| 2 ++
target/i386/tcg/sysemu/misc_helper.c | 9 +
2 files changed, 11 insertions(+)
diff --git a/tar
Zero VMRUN intercept and ASID should cause an immediate VMEXIT
during the consistency checks performed by VMRUN.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier
---
target/i386/svm.h | 2 ++
target/i386/tcg/sysemu/svm_helper.c | 10 ++
2 f
The combination of unset CD and set NW bit in CR0 is illegal.
CR0[63:32] are also reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier
---
target/i386/cpu.h | 2 ++
target/i386/svm.h | 1 +
target/i386/t
On Wed, 9 Jun 2021 at 02:05, Richard Henderson
wrote:
>
> On 6/7/21 9:57 AM, Peter Maydell wrote:
> > +#define DO_LDAVH(OP, ESIZE, TYPE, H, XCHG, EVENACC, ODDACC, TO128) \
> > +uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
> > +v
On 04/06/2021 11:17, Emanuele Giuseppe Esposito wrote:
Attaching gdbserver implies that the qmp socket
should wait indefinitely for an answer from QEMU.
For Timeout class, create a @contextmanager that
switches Timeout with NoTimeout (empty context manager)
so that if --gdb is set, no timeout
Richard Henderson writes:
> We had a single ATOMIC_MMU_LOOKUP macro that probed for
> read+write on all atomic ops. This is incorrect for
> plain atomic load and atomic store.
>
> For user-only, we rely on the host page permissions.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3