Widen the length field of NBDRequest to 64 bits, although we can
assert that all current uses are still under 32 bits. Move the
request magic number to nbd.h, to live alongside the reply magic
number. Add the necessary bools that will eventually track whether
the client successfully negotiated ex
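A minimal sketch of the widened request struct described above, with field names assumed for illustration (not qemu's actual NBDRequest definition), plus the guard asserting that current callers still fit in 32 bits:

#include <assert.h>
#include <stdint.h>

typedef struct SketchNBDRequest {
    uint64_t cookie;   /* opaque handle echoed back by the server */
    uint64_t from;     /* offset into the export */
    uint64_t len;      /* widened from uint32_t to uint64_t */
    uint16_t flags;    /* NBD_CMD_FLAG_* */
    uint16_t type;     /* NBD_CMD_* */
} SketchNBDRequest;

void sketch_send_request(const SketchNBDRequest *req)
{
    /* Until 64-bit lengths are actually negotiated, every current
     * caller is still expected to stay within the old 32-bit limit. */
    assert(req->len <= UINT32_MAX);
    /* ... serialize and send ... */
}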
In order to more easily add a third reply type with an even larger
header, but where the payload will look the same for both structured
and extended replies, it is nicer if simple and structured replies are
nested inside the same layer of sbuf.reply.hdr. While at it, note
that while .or and .sr ar
When extended headers are in use, the server can send us 64-bit
extents, even for a 32-bit query (if the server knows the entire image
is data, for example, or if the metacontext has a status definition
that uses more than 32 bits). Also, while most contexts only have
32-bit flags, a server is all
This is the NBD spec series; there are matching qemu and libnbd
patches that implement the changes in this series. I'm happy to drop
the RFC patches from all three, but wanted the conversation on whether
it makes sense to have 64-bit holes during NBD_CMD_READ first (it
would make more sense if we
Add the magic numbers and new structs necessary to implement the NBD
protocol extension of extended headers providing 64-bit lengths. This
corresponds to upstream nbd commits XXX-XXX[*].
---
[*] FIXME update commit ids before pushing
---
lib/nbd-protocol.h | 66 ++
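A sketch of the kind of definitions such a header gains: 32-byte extended request and reply structs carrying 64-bit offsets and lengths. The field order and the magic values are assumptions to be checked against the spec commits referenced above.

#include <stdint.h>

#define SKETCH_NBD_EXTENDED_REQUEST_MAGIC 0x21e41c71u  /* assumed value */
#define SKETCH_NBD_EXTENDED_REPLY_MAGIC   0x6e8a278cu  /* assumed value */

struct sketch_nbd_extended_request {
    uint32_t magic;    /* SKETCH_NBD_EXTENDED_REQUEST_MAGIC */
    uint16_t flags;    /* NBD_CMD_FLAG_* */
    uint16_t type;     /* NBD_CMD_* */
    uint64_t cookie;   /* opaque, echoed in the reply */
    uint64_t offset;
    uint64_t length;   /* 64-bit effect length */
} __attribute__((packed));

struct sketch_nbd_extended_reply {
    uint32_t magic;    /* SKETCH_NBD_EXTENDED_REPLY_MAGIC */
    uint16_t flags;    /* NBD_REPLY_FLAG_* */
    uint16_t type;     /* NBD_REPLY_TYPE_* */
    uint64_t cookie;   /* matches the request */
    uint64_t offset;   /* client's requested offset, echoed back */
    uint64_t length;   /* length of payload that follows */
} __attribute__((packed));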
Add a new negotiation feature where the client and server agree to use
larger packet headers on every packet sent during transmission phase.
This has two purposes: first, it makes it possible to perform
operations like trim, write zeroes, and block status on more than 2^32
bytes in a single command
This series implements the spec changes in a counterpart NBD series,
and has been tested to be interoperable with libnbd implementing the
same spec. I'm not too happy with the RFC patch at the end, but
implemented it for discussion. Given the release timing, this would
be qemu 8.0 material if we
Support a server giving us a 64-bit extent. Note that the protocol
says a server should not give a 64-bit answer when extended headers
are not negotiated; we can handle that by reporting EPROTO but
otherwise accepting the information. Meanwhile, when extended headers
are in effect, even a 32-bit
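A small sketch of that leniency, with names assumed for illustration: flag the protocol violation but still report the extent.

#include <errno.h>
#include <stdint.h>

int sketch_accept_extent(int extended_negotiated, uint64_t length, int *error)
{
    if (!extended_negotiated && length > UINT32_MAX && *error == 0) {
        *error = EPROTO;   /* server broke the protocol... */
    }
    return 0;              /* ...but keep and report the information */
}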
Commit 9f30fedb improved the spec to allow non-payload requests that
exceed any advertised maximum block size. Take this one step further
by permitting the server to use NBD_EOVERFLOW as a hint to the client
when a request is oversize (while permitting NBD_EINVAL for
back-compat), and by rewording
All the pieces are in place for a client to finally request extended
headers. Note that we must not request extended headers when qemu-nbd
is used to connect to the kernel module (as nbd.ko does not expect
them), but there is no harm in all other clients requesting them.
Extended headers are not
Very similar to the recent addition of nbd_opt_structured_reply,
giving us fine-grained control over an extended headers request.
Because nbdkit does not yet support extended headers, testsuite
coverage is limited to interop testing with qemu-nbd. It shows that
extended headers imply structured r
On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote:
> On 11/1/22 16:19, Michael Roth wrote:
> > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote:
> >> >
> >> > 1) restoring kernel directmap:
> >> >
> >> > Currently SNP (and I believe TDX) need to either split or remov
On 08.11.22 13:37, Kevin Wolf wrote:
We want to change .bdrv_co_drained_begin/end() back to be non-coroutine
callbacks, so in preparation, avoid yielding in their implementation.
This does almost the same as the existing logic in bdrv_drain_invoke(),
by creating and entering coroutines internall
Although our use of "base:allocation" doesn't require the use of the
64-bit API for flags, we might perform slightly faster for a server
that does give us 64-bit extent lengths and honors larger nbd_zero
lengths.
---
copy/nbd-ops.c | 22 +++---
1 file changed, 11 insertions(+), 11
As part of adding extended headers, the NBD spec debated about adding
support for reading 64-bit holes. It was documented in a separate
upstream commit XXX[*] to make it easier to decide whether 64-bit
holes should be required of all clients supporting extended headers,
or whether it is an unneede
Part of NBD's 64-bit headers extension involves passing the client's
requested offset back as part of the reply header (one reason for this
change: converting absolute offsets stored in
NBD_REPLY_TYPE_OFFSET_DATA to relative offsets within the buffer is
easier if the absolute offset of the buffer i
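The arithmetic being referred to reduces to one subtraction once the reply echoes the request's absolute offset; a tiny illustrative sketch:

#include <assert.h>
#include <stdint.h>

uint64_t sketch_chunk_buffer_index(uint64_t request_offset,
                                   uint64_t chunk_absolute_offset)
{
    /* NBD_REPLY_TYPE_OFFSET_DATA carries an absolute offset; with the
     * request offset at hand, the buffer index is a direct difference. */
    assert(chunk_absolute_offset >= request_offset);
    return chunk_absolute_offset - request_offset;
}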
On Mon, Nov 14, 2022 at 05:53:41PM +, Jonathan Cameron wrote:
> Hi Gregory,
>
> I've not been rushing on this purely because we are after the feature
> freeze for this QEMU cycle so no great rush to line up new features
> (and there was some fun with the pull request the previous set of QEMU
>
As part of extending NBD to support 64-bit lengths, the protocol also
added an option for servers to allow clients to request filtered
responses to NBD_CMD_BLOCK_STATUS when more than one meta-context is
negotiated (see NBD commit XXX[*]). At the same time as this patch,
qemu-nbd was taught to sup
Because we use NBD_CMD_FLAG_REQ_ONE with NBD_CMD_BLOCK_STATUS, a
client in narrow mode should not be able to provoke a server into
sending a block status result larger than the client's 32-bit request.
But in extended mode, a 64-bit status request must be able to handle a
64-bit status result, once
In the recent NBD protocol extensions to add 64-bit commands, an
additional option was added to allow NBD_CMD_BLOCK_STATUS pass a
client payload instructing the server to filter its answers (mainly
useful when the client requests more than one meta context with
NBD_OPT_SET_META_CONTEXT). This patc
Even though the NBD spec has been altered to allow us to accept
NBD_CMD_READ larger than the max payload size (provided our response
is a hole or broken up over more than one data chunk), we are not
planning to take advantage of that, and continue to cap NBD_CMD_READ
to 32M regardless of header siz
Peter Xu wrote:
> Now with rs->pss we can already cache channels in pss->pss_channels. That
> pss_channel contains more information than rs->f because it's per-channel.
> So rs->f could be replaced by rss->pss[RAM_CHANNEL_PRECOPY].pss_channel,
> while rs->f itself is a bit vague now.
>
> Note tha
Prove that we can round-trip a block status request larger than 4G
through a new-enough qemu-nbd. Also serves as a unit test of our shim
for converting internal 64-bit representation back to the older 32-bit
nbd_block_status callback interface.
---
interop/Makefile.am | 6 ++
interop/large-
From: Marc-André Lureau
../hw/usb/ccid-card-emulated.c: In function 'handle_apdu_thread':
../hw/usb/ccid-card-emulated.c:251:24: error: cast from pointer to integer of
different size [-Werror=pointer-to-int-cast]
251 | assert((unsigned long)event > 1000);
Signed-off-by: Marc-A
For 32-bit block status, we were able to cheat and use an array with
an odd number of elements, with array[0] holding the context id, and
passing &array[1] to the user's callback. But once we have 64-bit
extents, we can no longer abuse array element 0 like that, for two
reasons: 64-bit extents con
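A sketch of the contrast being drawn, with all names illustrative: the 32-bit path could smuggle the context id into element 0 of the same uint32_t array, while 64-bit extents need a separate struct and an explicit context id.

#include <stddef.h>
#include <stdint.h>

/* Old trick: array[0] holds the context id, &array[1] is handed to the
 * callback as count/2 (length, flags) pairs. */
uint32_t sketch_old_context_id(const uint32_t *array)
{
    return array[0];
}

/* 64-bit extents no longer fit that scheme, so use a real struct and
 * pass the context id separately. */
typedef struct {
    uint64_t length;
    uint64_t flags;
} sketch_extent64;

typedef int (*sketch_extent64_cb)(void *user_data, uint32_t context_id,
                                  const sketch_extent64 *entries,
                                  size_t nr_entries);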
On 14/11/2022 at 09:40, Philippe Mathieu-Daudé wrote:
On 11/11/22 13:19, Frédéric Pétrot wrote:
Commit 40244040 changed the way the S irqs are numbered. This breaks when
40244040a7 in case?
Seems reasonable, indeed, I'll even align with what git blame shows
(11 chars, so 40244040a7a).
On 08.11.22 13:37, Kevin Wolf wrote:
ignore_bds_parents is now ignored, so we can just remove it.
Signed-off-by: Kevin Wolf
---
include/block/block-io.h | 10 ++
block.c | 4 +--
block/io.c | 78 +++-
3 files changed,
Add another bit of overall server information, as well as a '--can
extended-headers' silent query. For now, the testsuite is written
assuming that when nbdkit finally adds extended headers support, it
will also add a --no-eh kill switch comparable to its existing --no-sr
switch.
---
info/nbdinfo.
Thanks all,
I will send v4 patch to fix the 80 characters limitation issue.
On Sat, Nov 12, 2022 at 6:05 AM Gavin Shan wrote:
>
> On 11/11/22 6:54 PM, Igor Mammedov wrote:
> > On Fri, 11 Nov 2022 17:34:04 +0800
> > Gavin Shan wrote:
> >> On 11/11/22 5:13 PM, Igor Mammedov wrote:
> >>> On Fri, 11
On Sat, Nov 12, 2022 at 11:36 PM Conor Dooley wrote:
>
> From: Conor Dooley
>
> On PolarFire SoC, some peripherals (eg the PCI root port) are clocked by
> "Clock Conditioning Circuitry" in the FPGA. The specific clock depends
> on the FPGA bitstream & can be locked to one particular {D,P}LL - in
Hi Alex and Richard,
The aarch64 GitLab CI runner is down again. Are you able to restart it?
Any idea why it disconnects sometimes?
Thanks,
Stefan
On 08.11.22 13:37, Kevin Wolf wrote:
All callers of bdrv_parent_drained_begin_single() pass poll=false now,
so we don't need the parameter any more.
Signed-off-by: Kevin Wolf
---
include/block/block-io.h | 5 ++---
block.c | 4 ++--
block/io.c | 7 ++-
3
Although we usually map "base:allocation" which doesn't require the
use of the 64-bit API for flags, this application IS intended to map
out other metacontexts that might have 64-bit flags. And when
extended headers are in use, we might as well ask for the server to
give us extents as large as it
ling xu wrote:
> This commit updates code of avx512 support for xbzrle_encode_buffer
> function to accelerate xbzrle encoding speed. Runtime check of avx512
> support and benchmark for this feature are added. Compared with C
> version of xbzrle_encode_buffer function, avx512 version can achieve
>
On 2022/11/11 3:17, Michael S. Tsirkin wrote:
On Sun, Oct 30, 2022 at 09:52:39PM +0800, huang...@chinatelecom.cn wrote:
From: Hyman Huang(黄勇)
Save the acked_features once it is configured by the guest
virtio driver so it can't miss any features.
Note that this patch also changes the features saving
Support receiving headers for 64-bit replies if extended headers were
negotiated. We already insist that the server not send us too much
payload in one reply, so we can exploit that and merge the 64-bit
length back into a normalized 32-bit field for the rest of the payload
length calculations. Th
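A minimal sketch of that normalization, assuming an illustrative 32M payload cap: any 64-bit length that survives the existing payload check fits back into the compact 32-bit field.

#include <errno.h>
#include <stdint.h>

#define SKETCH_MAX_PAYLOAD (32u << 20)   /* assumed 32M cap */

int sketch_normalize_reply_length(uint64_t wire_length, uint32_t *out)
{
    if (wire_length > SKETCH_MAX_PAYLOAD) {
        return -EPROTO;            /* more payload than we ever accept */
    }
    *out = (uint32_t)wire_length;  /* safe: bounded by the check above */
    return 0;
}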
Leonardo Brás wrote:
> On Tue, 2022-08-02 at 08:39 +0200, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index 89811619d8..54acdc004c 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -667,8 +667,8 @@ static
On 14/11/22 04:24, Zhenyu Zhang wrote:
Commit ffac16fab3 "hostmem: introduce "prealloc-threads" property"
(v5.0.0) changed the default number of threads from number of CPUs
to 1. This was deemed a regression, and fixed in commit f8d426a685
"hostmem: default the amount of prealloc-threads to smp-
On 08.11.22 13:37, Kevin Wolf wrote:
We only need to call both the BlockDriver's callback and the parent
callbacks when going from undrained to drained or vice versa. A second
drain section doesn't make a difference for the driver or the parent,
they weren't supposed to send new requests before a
The _guarded() calls are required in BHs, timers, fd read/write
callbacks, etc because we're no longer in the memory region dispatch
code with the reentrancy guard set. It's not clear to me whether the
_guarded() calls are actually required in most of these patches
though? Do you plan to convert ev
On Mon, Nov 14, 2022 at 11:35:30PM +0800, Hyman wrote:
>
>
> On 2022/11/11 3:17, Michael S. Tsirkin wrote:
> > On Sun, Oct 30, 2022 at 09:52:39PM +0800, huang...@chinatelecom.cn wrote:
> > > From: Hyman Huang(黄勇)
> > >
> > > Save the acked_features once it is configured by the guest
> > > virtio driver
Peter Xu wrote:
> The major change is to replace "!save_page_use_compression()" with
> "xbzrle_enabled" to make it clear.
>
> Reasonings:
>
> (1) When compression enabled, "!save_page_use_compression()" is exactly the
> same as checking "xbzrle_enabled".
>
> (2) When compression disabled, "!sa
On 08.11.22 13:37, Kevin Wolf wrote:
Subtree drains are not used any more. Remove them.
After this, BdrvChildClass.attach/detach() don't poll any more.
Signed-off-by: Kevin Wolf
---
include/block/block-io.h | 18 +--
include/block/block_int-common.h | 1 -
include/block/block_in
Peter Xu wrote:
> Since we already have bitmap_mutex to protect either the dirty bitmap or
> the clear log bitmap, we don't need atomic operations to set/clear/test on
> the clear log bitmap. Switching all ops from atomic to non-atomic
> versions, meanwhile touch up the comments to show which loc
On 11/14/22 11:24 AM, Zhenyu Zhang wrote:
Commit ffac16fab3 "hostmem: introduce "prealloc-threads" property"
(v5.0.0) changed the default number of threads from number of CPUs
to 1. This was deemed a regression, and fixed in commit f8d426a685
"hostmem: default the amount of prealloc-threads to s
Update the client code to be able to send an extended request, and
parse an extended header from the server. Note that since we reject
any structured reply with a too-large payload, we can always normalize
a valid header back into the compact form, so that the caller need not
deal with two branche
On Sun, 23 Oct 2022 at 16:37, wrote:
>
> From: Tobias Röhmel
>
> ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even
> though they don't have the TTBCR register.
> See ARM Architecture Reference Manual Supplement - ARMv8, for the ARMv8-R
> AArch32 architecture profile Version:A.c section
On Fri, 11 Nov 2022 at 13:23, Philippe Mathieu-Daudé wrote:
>
> On 31/10/22 14:03, Peter Maydell wrote:
> > On Mon, 31 Oct 2022 at 12:08, Philippe Mathieu-Daudé
> > wrote:
> >>
> >> On 4/10/22 16:54, Peter Maydell wrote:
> >>> On Tue, 4 Oct 2022 at 14:33, Alex Bennée wrote:
>
>
>
On 14/11/22 11:34, Frédéric Pétrot wrote:
On 14/11/2022 at 09:40, Philippe Mathieu-Daudé wrote:
On 11/11/22 13:19, Frédéric Pétrot wrote:
Eventually we could unify the style:
-- >8 --
@@ -476,11 +476,11 @@ DeviceState *sifive_plic_create(hwaddr addr,
char *hart_config,
CPUStat
On Sat, 12 Nov 2022 at 21:49, Strahinja Jankovic
wrote:
>
> Trying to run U-Boot for Cubieboard (Allwinner A10) fails because it cannot
> access SD card. The problem is that FIFO register in current
> allwinner-sdhost implementation is at the address corresponding to
> Allwinner H3, but not A10.
>
From: Paolo Bonzini
When translating code that is using LAHF and SAHF in combination with the
REX prefix, the instructions should not use any other register than AH;
however, QEMU selects SPL (SP being register 4, just like AH) if the
REX prefix is present. To fix this, use deposit directly with
On Mon, 14 Nov 2022 12:47:40 +0100
quint...@redhat.com wrote:
> Hi
>
> Please, send any topic that you are interested in covering.
>
> We already have some topics:
> Re agenda, see below topics our team would like to discuss:
>
>- QEMU support for kernel/vfio V2 live migration patches
>
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/7.2 for any
user-visible changes.
The spec was silent on how many extents a server could reply with.
However, both qemu and nbdkit (the two server implementations known to
have implemented the NBD_CMD_BLOCK_STATUS extension) implement a hard
cap, and will truncate the amount of extents in a reply to avoid
sending a client a reply s
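A one-function sketch of such a cap; the limit of 1024 entries is an illustrative assumption, not a value taken from qemu or nbdkit.

#include <stddef.h>

#define SKETCH_MAX_EXTENTS 1024   /* assumed per-reply cap */

size_t sketch_clamp_extent_count(size_t nr_extents)
{
    /* Truncate rather than send the client an unbounded reply. */
    return nr_extents > SKETCH_MAX_EXTENTS ? SKETCH_MAX_EXTENTS : nr_extents;
}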
Hi,
Thank you for your reply.
On Mon, Nov 14, 2022 at 4:42 PM Peter Maydell wrote:
>
> On Sat, 12 Nov 2022 at 21:49, Strahinja Jankovic
> wrote:
> >
> > Trying to run U-Boot for Cubieboard (Allwinner A10) fails because it cannot
> > access SD card. The problem is that FIFO register in current
>
On Sun, 23 Oct 2022 at 16:37, wrote:
>
> From: Tobias Röhmel
>
> The v8R PMSAv8 has a two-stage MPU translation process, but, unlike
> VMSAv8, the stage 2 attributes are in the same format as the stage 1
> attributes (8-bit MAIR format). Rather than converting the MAIR
> format to the format used
Am 08.11.22 um 10:23 schrieb Alex Bennée:
The previous fix to virtio_device_started revealed a problem in its
use by both the core and the device code. The core code should be able
to handle the device "starting" while the VM isn't running to handle
the restoration of migration state. To solve th
On Sat, Nov 12, 2022 at 10:40 PM Longpeng(Mike) wrote:
>
> From: Longpeng
>
> Signed-off-by: Longpeng
> ---
> .../devices/vhost-vdpa-generic-device.rst | 46 +++
> 1 file changed, 46 insertions(+)
> create mode 100644 docs/system/devices/vhost-vdpa-generic-device.rst
>
> di
v1:
https://lists.nongnu.org/archive/html/qemu-devel/2022-10/msg01824.html
https://lists.nongnu.org/archive/html/qemu-devel/2022-11/msg01073.html
v1 -> v2:
* Use QEMU_LOCK_GUARD (Alex).
* Handle TARGET_TB_PCREL (Alex).
* Support ELF -kernels, add a note about this (Alex). Tested with
qemu-system
This is the culmination of the previous patches' preparation work for
using extended headers when possible. The new states in the state
machine are copied extensively from our handling of
OPT_STRUCTURED_REPLY. The next patch will then expose a new API
nbd_opt_extended_headers() for manual control
Chao Peng writes:
> Introduction
>
> KVM userspace being able to crash the host is horrible. Under current
> KVM architecture, all guest memory is inherently accessible from KVM
> userspace and is exposed to the mentioned crash issue. The goal of this
> series is to provide a solu
The existing nbd_block_status() callback is permanently stuck with an
array of uint32_t pairs (len/2 extents), which is both constrained on
maximum extent size (no larger than 4G) and on the status flags (must
fit in 32 bits). While the "base:allocation" metacontext will never
exceed 32 bits, it i
On Sat, Nov 12, 2022 at 11:37 PM Conor Dooley wrote:
>
> From: Conor Dooley
>
> The Fabric Interconnect Controllers provide interfaces between the FPGA
> fabric and the core complex. There are 5 FICs on PolarFire SoC, numbered
> 0 through 4. FIC2 is an AXI4 slave interface from the FPGA fabric an
14.11.2022 11:58, Daniel P. Berrangé wrote:
..
On current systems, using <capstone/capstone.h> works
now (despite the pkg-config-supplied -I/usr/include/capstone) -
since on all systems capstone headers are put into capstone/
subdirectory of a system include dir. So this change is
compatible with both the obsolete wa
On 11/10/22 20:29, Daniel Henrique Barboza wrote:
On 11/10/22 11:57, Jan Richter wrote:
On 11/10/22 00:26, Philippe Mathieu-Daudé wrote:
On 9/11/22 16:39, Daniel Henrique Barboza wrote:
On 10/27/22 06:01, Daniel P. Berrangé wrote:
On Thu, Oct 27, 2022 at 09:46:29AM +0200, Thomas Huth w
On Mon, 14 Nov 2022 14:25:02 +0100
Thomas Huth wrote:
> The "loadparm" machine property is useful for selecting alternative
> kernels on the disk of the guest, but so far we do not tell the users
> yet how to use it. Add some documentation to fill this gap.
>
> Buglink: https://bugzilla.redhat.c
The new NBD_OPT_EXTENDED_HEADERS feature is worth using by default,
but there may be cases where the user explicitly wants to stick with
the older 32-bit headers. nbd_set_request_extended_headers() will let
the client override the default, nbd_get_request_extended_headers()
determines the current
Since our example program for 32-bit extents is inherently limited to
32-bit lengths, it is also worth demonstrating the 64-bit extent API,
including the difference in the array indexing being saner.
---
ocaml/examples/Makefile.am | 3 ++-
ocaml/examples/extents64.ml | 42 +++
Although our use of "base:allocation" doesn't require the use of the
64-bit API for flags, we might perform slightly faster for a server
that does give us 64-bit extent lengths.
---
dump/dump.c | 27 ++-
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/dump/d
On 11.11.22 20:20, Stefan Hajnoczi wrote:
On Fri, 11 Nov 2022 at 10:29, Kevin Wolf wrote:
The following changes since commit 2ccad61746ca7de5dd3e25146062264387e43bd4:
Merge tag 'pull-tcg-20221109' of https://gitlab.com/rth7680/qemu into
staging (2022-11-09 13:26:45 -0500)
are available in
Peter Xu wrote:
> When starting ram saving procedure (especially at the completion phase),
> always set last_seen_block to non-NULL to make sure we can always correctly
> detect the case where "we've migrated all the dirty pages".
>
> Then we'll guarantee both last_seen_block and pss.block will be
Hi
Please, send any topic that you are interested in covering.
We already have some topics:
Re agenda, see below topics our team would like to discuss:
- QEMU support for kernel/vfio V2 live migration patches
- acceptance of changes required for Grace/Hopper passthrough and vGPU
suppo
On 08.11.22 13:37, Kevin Wolf wrote:
The subtree drain was introduced in commit b1e1af394d9 as a way to avoid
graph changes between finding the base node and changing the block graph
as necessary on completion of the image streaming job.
The block graph could change between these two points beca
Zhenyu Zhang writes:
> Commit ffac16fab3 "hostmem: introduce "prealloc-threads" property"
> (v5.0.0) changed the default number of threads from number of CPUs
> to 1. This was deemed a regression, and fixed in commit f8d426a685
> "hostmem: default the amount of prealloc-threads to smp-cpus".
> E
Overcome the inherent 32-bit limitation of our existing
nbd_block_status command by adding a 64-bit variant. The command sent
to the server does not change, but the user's callback is now handed
64-bit information regardless of whether the server replies with 32-
or 64-bit extents.
Unit tests pro
Support sending 64-bit requests if extended headers were negotiated.
This includes setting NBD_CMD_FLAG_PAYLOAD_LEN any time we send an
extended NBD_CMD_WRITE; this is such a fundamental part of the
protocol that for now it is easier to silently ignore whatever value
the user passes in for that bit
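A sketch of that flag handling, with the flag bit and names assumed for illustration: force the bit on for extended writes, strip it otherwise.

#include <stdbool.h>
#include <stdint.h>

#define SKETCH_CMD_WRITE             1
#define SKETCH_CMD_FLAG_PAYLOAD_LEN  (1u << 5)   /* assumed bit position */

uint16_t sketch_effective_flags(bool extended_headers, uint16_t type,
                                uint16_t user_flags)
{
    if (extended_headers && type == SKETCH_CMD_WRITE) {
        /* Fundamental to the extended protocol: always set it. */
        user_flags |= SKETCH_CMD_FLAG_PAYLOAD_LEN;
    } else {
        /* Never let the bit leak onto the wire otherwise. */
        user_flags &= ~SKETCH_CMD_FLAG_PAYLOAD_LEN;
    }
    return user_flags;
}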
Chao Peng writes:
> In memory encryption usage, guest memory may be encrypted with special
> key and can be accessed only by the guest itself. We call such memory
> private memory. It's valueless and can sometimes cause problems to allow
> userspace to access guest private memory. This new KVM m
On Mon, Nov 14, 2022 at 5:30 AM Jason Wang wrote:
>
>
> 在 2022/11/11 21:12, Eugenio Perez Martin 写道:
> > On Fri, Nov 11, 2022 at 8:49 AM Jason Wang wrote:
> >>
> >> 在 2022/11/10 21:47, Eugenio Perez Martin 写道:
> >>> On Thu, Nov 10, 2022 at 7:01 AM Jason Wang wrote:
> On Wed, Nov 9, 2022 at
This series is posted alongside a spec change to NBD, and
interoperable with changes posted to qemu-nbd/qemu-storage-daemon.
The RFC patch at the end is optional; interoperability with qemu works
only when either both projects omit the RFC patch, or when both
projects include it (if only one of the
Peter Xu wrote:
> In qemu_file_shutdown(), there's a possible race with the current order of
> operations. There are two major things to do:
>
> (1) Do real shutdown() (e.g. shutdown() syscall on socket)
> (2) Update qemufile's last_error
>
> We must do (2) before (1) otherwise there can be a ra
ling xu wrote:
> Unit test code is in test-xbzrle.c, and benchmark code is in xbzrle-bench.c
> for performance benchmarking.
>
> Signed-off-by: ling xu
> Co-authored-by: Zhou Zhao
> Co-authored-by: Jun Jin
Reviewed-by: Juan Quintela
queued.
On Mon, Nov 14, 2022 at 5:26 AM Jason Wang wrote:
>
>
> 在 2022/11/11 20:58, Eugenio Perez Martin 写道:
> > On Fri, Nov 11, 2022 at 9:07 AM Jason Wang wrote:
> >> On Fri, Nov 11, 2022 at 3:56 PM Eugenio Perez Martin
> >> wrote:
> >>> On Fri, Nov 11, 2022 at 8:34 AM Jason Wang wrote:
>
>
Peter Xu wrote:
> The 2nd check on RAM_SAVE_FLAG_CONTINUE is a bit redundant. Use a boolean
> to be clearer.
>
> Reviewed-by: Dr. David Alan Gilbert
> Signed-off-by: Peter Xu
Reviewed-by: Juan Quintela
On Mon, Nov 14, 2022 at 05:18:53PM +0100, Christian Borntraeger wrote:
> Am 08.11.22 um 10:23 schrieb Alex Bennée:
> > The previous fix to virtio_device_started revealed a problem in its
> > use by both the core and the device code. The core code should be able
> > to handle the device "starting" w
One of the benefits of extended replies is that we can do a
fixed-length read for the entire header of every server reply, which
is fewer syscalls than the split-read approach required by structured
replies. But one of the drawbacks of doing a large read is that if
the server is non-compliant (not
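The syscall-count point can be sketched as follows; the 32-byte fixed header size is an assumption about the extension's layout.

#include <unistd.h>

enum { SKETCH_EXTENDED_HDR_LEN = 32 };   /* assumed fixed header size */

ssize_t sketch_read_reply_header(int fd,
                                 unsigned char buf[SKETCH_EXTENDED_HDR_LEN])
{
    /* One fixed-length read covers the whole header; structured replies
     * instead need a short read plus a second, type-dependent read.
     * A real client would loop on short reads and EINTR. */
    return read(fd, buf, SKETCH_EXTENDED_HDR_LEN);
}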
/dev/mem is used for two purposes:
- reading PCI_MSIX_ENTRY_CTRL_MASKBIT
- reading the Pending Bit Array (PBA)
The first one was originally done because Xen did not send all
vector ctrl writes to the device model, so QEMU might have an outdated
register value. This has been changed in Xen,
On 11/11/2022 19.38, Stefan Weil wrote:
Am 11.11.22 um 19:28 schrieb Thomas Huth:
Fix typos (discovered with the 'codespell' utility).
Signed-off-by: Thomas Huth
---
hw/s390x/ipl.h | 2 +-
pc-bios/s390-ccw/cio.h | 2 +-
pc-bios/s390-ccw/iplb.h
Allow a client to request a subset of negotiated meta contexts. For
example, a client may ask to use a single connection to learn about
both block status and dirty bitmaps, but where the dirty bitmap
queries only need to be performed on a subset of the disk; forcing the
server to compute that info
On Thu, Oct 27, 2022 at 3:53 PM Mayuresh Chitale
wrote:
>
> Set the state of each ISA extension on the vcpu depending on what
> is set in the CPU property and what is allowed by KVM for that extension.
>
> Signed-off-by: Mayuresh Chitale
Reviewed-by: Alistair Francis
Alistair
> ---
> target/
On Fri, 11 Nov 2022 at 14:55, Alex Bennée wrote:
>
> a66a24585f (hw/intc/arm_gic: Implement read of GICC_IIDR) implemented
> this for the CPU interface register. The fact we don't implement it
> shows up when running Xen with -d guest_error which is definitely
> wrong because the guest is perfectl
The previous patch handled extended headers by truncating large block
status requests from the client back to 32 bits. But this is not
ideal; for cases where we can truly determine the status of the entire
image quickly (for example, when reporting the entire image as
non-sparse because we lack th
The next commit will add support for the new addition of
NBD_CMD_FLAG_PAYLOAD during NBD_CMD_BLOCK_STATUS, where the client can
request that the server only return a subset of negotiated contexts,
rather than all contexts. To make that task easier, this patch
populates the list of contexts to retu
On 2022/11/11 21:12, Eugenio Perez Martin wrote:
On Fri, Nov 11, 2022 at 8:49 AM Jason Wang wrote:
On 2022/11/10 21:47, Eugenio Perez Martin wrote:
On Thu, Nov 10, 2022 at 7:01 AM Jason Wang wrote:
On Wed, Nov 9, 2022 at 1:08 AM Eugenio Pérez wrote:
The memory listener that tells the device ho
* zhenwei pi (pizhen...@bytedance.com) wrote:
> Example of this command:
> # virsh qemu-monitor-command vm --hmp info cryptodev
> cryptodev1: service=[akcipher|mac|hash|cipher]
> queue 0: type=builtin
> cryptodev0: service=[akcipher]
> queue 0: type=lkcf
>
> Signed-off-by: zhenwei pi
> -
Peter Xu wrote:
> Removing referencing to RAMState.f in compress_page_with_multi_thread() and
> flush_compressed_data().
>
> Compression code by default isn't compatible with having >1 channels (or it
> won't currently know which channel to flush the compressed data), so to
> make it simple we alw
Our existing use of structured replies either reads into a qiov capped
at 32M (NBD_CMD_READ) or caps allocation to 1000 bytes (see
NBD_MAX_MALLOC_PAYLOAD in block/nbd.c). But the existing length
checks are rather late; if we encounter a buggy (or malicious) server
that sends a super-large payload
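A minimal sketch of the earlier check being argued for, with both caps assumed for illustration: reject an advertised payload length before allocating or reading it.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_MAX_MALLOC_PAYLOAD 1000u         /* assumed small-chunk cap */
#define SKETCH_MAX_READ_PAYLOAD   (32u << 20)   /* assumed 32M read cap */

int sketch_check_chunk_length(bool is_data_chunk, uint64_t length)
{
    uint64_t cap = is_data_chunk ? SKETCH_MAX_READ_PAYLOAD
                                 : SKETCH_MAX_MALLOC_PAYLOAD;
    return length > cap ? -EPROTO : 0;   /* fail before malloc/read */
}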
:06 -0500)
are available in the Git repository at:
https://git.linaro.org/people/pmaydell/qemu-arm.git
tags/pull-target-arm-20221114
for you to fetch changes up to d9721f19cd05a382f4f5a7093c80d1c4a8a1aa82:
hw/intc/arm_gicv3: fix prio masking on pmr write (2022-11-14
Upcoming additions to support NBD 64-bit effect lengths allow for the
possibility to distinguish between payload length (capped at 32M) and
effect length (up to 63 bits). Without that extension, only the
NBD_CMD_WRITE request has a payload; but with the extension, it makes
sense to allow at least
On Mon, 14 Nov 2022 at 13:33, Jens Wiklander wrote:
>
> With commit 39f29e599355 ("hw/intc/arm_gicv3: Use correct number of
> priority bits for the CPU") the number of priority bits was changed from
> the maximum value 8 to typically 5. As a consequence a few of the lowest
> bits in ICC_PMR_EL1 be