--- Begin Message ---
Package: release.debian.org
Severity: normal
Tags: bookworm
User: release.debian....@packages.debian.org
Usertags: pu
X-Debbugs-Cc: q...@packages.debian.org, pkg-qemu-de...@lists.alioth.debian.org
Control: affects -1 + src:qemu
[ Reason ]
There's a new upstream qemu stable/bugfix release which, as usual,
fixes a number of issues here and there, including a security
issue, CVE-2024-7409 (qemu NBD server).
[ Tests ]
This release passes both the extensive upstream testsuite and
my usual set of tests with multiple commonly used guests.
[ Risks ]
There are obviously some risks, but we try hard to minimize them
by carefully selecting which changes to pick and how to apply them.
[ Checklist ]
[X] *all* changes are documented in the d/changelog
[X] I reviewed all changes and I approve them
[X] attach debdiff against the package in (old)stable
[X] the issue is verified as fixed in unstable
[ Changes ]
All changes except one come from the upstream repository,
which is also mirrored on salsa:
https://salsa.debian.org/qemu-team/qemu/-/commits/stable-7.2/
Specifically, this update covers the changes between the v7.2.13
and v7.2.14 tags. The complete changelog is below (as part of the debdiff).
[ Other info ]
Historically, qemu in Debian has been built from the base upstream
release plus stable/bugfix patches (7.2.orig.tar.gz corresponds to
upstream 7.2.0, with the 7.2.1, 7.2.2, 7.2.3 etc changes applied as
patches). I don't remember why it was done this way; this has been
corrected for trixie, which uses the complete 3-component upstream
version tarball, but for bookworm we keep using the historical scheme.
In the debdiff below there's a single new .diff file, v7.2.14.diff,
which is the output of `git diff v7.2.13..v7.2.14`: the difference
between the version currently in Debian (which is based on 7.2.13)
and upstream 7.2.14 (the latest upstream version in the 7.2 line at
this time).
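For reference, a minimal sketch of how such a patch could be regenerated from a local clone of the upstream qemu repository (the variable names and the debian/patches path are illustrative; the actual workflow may differ):

```shell
# Sketch: regenerate the stable-update patch from upstream tags.
# Assumes a local qemu git checkout with the release tags fetched.
OLD=v7.2.13
NEW=v7.2.14
PATCH="debian/patches/${NEW}.diff"
echo "Generating ${PATCH} from ${OLD}..${NEW}"
# Run inside the qemu checkout; the result is then appended to
# debian/patches/series so quilt applies it on top of the orig tarball:
# git diff "${OLD}..${NEW}" > "${PATCH}"
```
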
[ Debdiff ]
diff -Nru qemu-7.2+dfsg/debian/changelog qemu-7.2+dfsg/debian/changelog
--- qemu-7.2+dfsg/debian/changelog 2024-07-17 14:27:14.000000000 +0300
+++ qemu-7.2+dfsg/debian/changelog 2024-11-01 16:50:46.000000000 +0300
@@ -1,3 +1,59 @@
+qemu (1:7.2+dfsg-7+deb12u8) bookworm; urgency=medium
+
+ * update to upstream 7.2.14 stable/bugfix release, v7.2.14.diff,
+ https://gitlab.com/qemu-project/qemu/-/commits/v7.2.14 :
+ - Update version for 7.2.14 release
+ - hw/intc/arm_gic: fix spurious level triggered interrupts
+ - tests/docker: remove debian-armel-cross
+ - hw/display/vhost-user-gpu.c: fix vhost_user_gpu_chr_read()
+ - crypto: check gnutls & gcrypt support the requested pbkdf hash
+ - crypto: run qcrypto_pbkdf2_count_iters in a new thread
+ - softmmu/physmem: fix memory leak in dirty_memory_extend()
+ - gitlab: migrate the s390x custom machine to 22.04
+ - crypto/tlscredspsk: Free username on finalize
+ - module: Prevent crash by resetting local_err in module_load_qom_all()
+ - target/i386: Do not apply REX to MMX operands
+ - block/blkio: use FUA flag on write zeroes only if supported
+ - hw/core/ptimer: fix timer zero period condition for freq > 1GHz
+ - nbd/server: CVE-2024-7409: Avoid use-after-free when closing server
+ - nbd/server: CVE-2024-7409: Close stray clients at server-stop
+ - nbd/server: CVE-2024-7409: Drop non-negotiating clients
+ - nbd/server: CVE-2024-7409: Cap default max-connections to 100
+ - nbd/server: Plumb in new args to nbd_client_add()
+ - iotests: Add `vvfat` tests
+ - vvfat: Fix reading files with non-continuous clusters
+ - vvfat: Fix wrong checks for cluster mappings invariant
+ - vvfat: Fix usage of `info.file.offset`
+ - vvfat: Fix bug in writing to middle of file
+ - hw/sd/sdhci: Reset @data_count index on invalid ADMA transfers
+ - virtio-net: Fix network stall at the host side waiting for kick
+ - virtio-net: Ensure queue index fits with RSS
+ - target/arm: Handle denormals correctly for FMOPA (widening)
+ - hw/arm/mps2-tz.c: fix RX/TX interrupts order
+ - hw/i386/amd_iommu: Don't leak memory in amdvi_update_iotlb()
+ - docs/sphinx/depfile.py: Handle env.doc2path() returning a Path not a str
+ - target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
+ - target/arm: Avoid shifts by -1 in tszimm_shr() and tszimm_shl()
+ - target/arm: Fix UMOPA/UMOPS of 16-bit values
+ - target/arm: Don't assert for 128-bit tile accesses when SVL is 128
+ - hw/misc/bcm2835_property: Fix handling of FRAMEBUFFER_SET_PALETTE
+ - hw/char/bcm2835_aux: Fix assert when receive FIFO fills up
+ - target/rx: Use target_ulong for address in LI
+ - hw/virtio: Fix the de-initialization of vhost-user devices
+ - util/async.c: Forbid negative min/max
+ in aio_context_set_thread_pool_params()
+ - hw/intc/loongson_ipi: Access memory in little endian
+ - chardev/char-win-stdio.c: restore old console mode
+ - target/i386: do not crash if microvm guest uses SGX CPUID leaves
+ - intel_iommu: fix FRCD construction macro
+ - hw/cxl/cxl-host: Fix segmentation fault when getting cxl-fmw property
+ - hw/nvme: fix memory leak in nvme_dsm
+ - target/arm: Use FPST_F16 for SME FMOPA (widening)
+ - target/arm: Use float_status copy in sme_fmopa_s
+ - qapi/qom: Document feature unstable of @x-vfio-user-server
+
+ -- Michael Tokarev <m...@tls.msk.ru> Fri, 01 Nov 2024 16:50:46 +0300
+
qemu (1:7.2+dfsg-7+deb12u7) bookworm; urgency=medium
* update to upstream 7.2.13 stable/bugfix release, v7.2.13.diff,
diff -Nru qemu-7.2+dfsg/debian/patches/series
qemu-7.2+dfsg/debian/patches/series
--- qemu-7.2+dfsg/debian/patches/series 2024-07-17 14:27:14.000000000 +0300
+++ qemu-7.2+dfsg/debian/patches/series 2024-11-01 16:49:20.000000000 +0300
@@ -11,6 +11,7 @@
v7.2.11.diff
v7.2.12.diff
v7.2.13.diff
+v7.2.14.diff
microvm-default-machine-type.patch
skip-meson-pc-bios.diff
linux-user-binfmt-P.diff
diff -Nru qemu-7.2+dfsg/debian/patches/v7.2.14.diff
qemu-7.2+dfsg/debian/patches/v7.2.14.diff
--- qemu-7.2+dfsg/debian/patches/v7.2.14.diff 1970-01-01 03:00:00.000000000
+0300
+++ qemu-7.2+dfsg/debian/patches/v7.2.14.diff 2024-11-01 16:50:46.000000000
+0300
@@ -0,0 +1,3173 @@
+Subject: v7.2.14
+Date: Wed Sep 18 19:14:56 2024 +0300
+From: Michael Tokarev <m...@tls.msk.ru>
+Forwarded: not-needed
+
+This is a difference between upstream qemu v7.2.13
+and upstream qemu v7.2.14.
+
+ .gitlab-ci.d/container-cross.yml | 6 -
+ .gitlab-ci.d/crossbuilds.yml | 14 -
+ .gitlab-ci.d/custom-runners.yml | 2 +-
+ ...untu-20.04-s390x.yml => ubuntu-22.04-s390x.yml} | 28 +-
+ VERSION | 2 +-
+ block/blkio.c | 6 +-
+ block/monitor/block-hmp-cmds.c | 3 +-
+ block/vvfat.c | 27 +-
+ blockdev-nbd.c | 59 +-
+ chardev/char-win-stdio.c | 5 +
+ crypto/pbkdf-gcrypt.c | 2 +-
+ crypto/pbkdf-gnutls.c | 2 +-
+ crypto/pbkdf.c | 53 +-
+ crypto/tlscredspsk.c | 1 +
+ docs/sphinx/depfile.py | 2 +-
+ hw/arm/mps2-tz.c | 6 +-
+ hw/char/bcm2835_aux.c | 2 +-
+ hw/core/ptimer.c | 4 +-
+ hw/cxl/cxl-host.c | 3 +-
+ hw/display/vhost-user-gpu.c | 2 +-
+ hw/i386/amd_iommu.c | 8 +-
+ hw/i386/intel_iommu_internal.h | 2 +-
+ hw/i386/sgx.c | 6 +-
+ hw/intc/arm_gic.c | 11 +-
+ hw/intc/loongarch_ipi.c | 9 +-
+ hw/misc/bcm2835_property.c | 27 +-
+ hw/net/virtio-net.c | 31 +-
+ hw/nvme/ctrl.c | 1 +
+ hw/sd/sdhci.c | 1 +
+ hw/virtio/virtio.c | 64 +-
+ include/block/nbd.h | 18 +-
+ include/exec/ramlist.h | 1 +
+ include/hw/virtio/virtio.h | 27 +-
+ meson.build | 4 +
+ nbd/server.c | 46 +-
+ nbd/trace-events | 1 +
+ qapi/block-export.json | 4 +-
+ qapi/qom.json | 3 +-
+ qemu-nbd.c | 4 +-
+ softmmu/physmem.c | 35 +-
+ target/arm/helper-sme.h | 2 +-
+ target/arm/helper.c | 2 +-
+ target/arm/sme_helper.c | 49 +-
+ target/arm/translate-sme.c | 43 +-
+ target/arm/translate-sve.c | 18 +-
+ target/i386/tcg/decode-new.c.inc | 5 +-
+ target/rx/translate.c | 3 +-
+ tests/docker/dockerfiles/debian-armel-cross.docker | 170 -----
+ tests/lcitool/refresh | 5 -
+ tests/qemu-iotests/check | 2 +-
+ tests/qemu-iotests/fat16.py | 690 +++++++++++++++++++++
+ tests/qemu-iotests/testenv.py | 2 +-
+ tests/qemu-iotests/tests/vvfat | 485 +++++++++++++++
+ tests/qemu-iotests/tests/vvfat.out | 5 +
+ tests/unit/ptimer-test.c | 33 +
+ util/async.c | 2 +-
+ util/module.c | 2 +-
+ 57 files changed, 1683 insertions(+), 367 deletions(-)
+
+diff --git a/.gitlab-ci.d/container-cross.yml
b/.gitlab-ci.d/container-cross.yml
+index 24343192ac..f4c8642f5a 100644
+--- a/.gitlab-ci.d/container-cross.yml
++++ b/.gitlab-ci.d/container-cross.yml
+@@ -22,12 +22,6 @@ arm64-debian-cross-container:
+ variables:
+ NAME: debian-arm64-cross
+
+-armel-debian-cross-container:
+- extends: .container_job_template
+- stage: containers
+- variables:
+- NAME: debian-armel-cross
+-
+ armhf-debian-cross-container:
+ extends: .container_job_template
+ stage: containers
+diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
+index c4cd96433d..ba2971ec96 100644
+--- a/.gitlab-ci.d/crossbuilds.yml
++++ b/.gitlab-ci.d/crossbuilds.yml
+@@ -1,20 +1,6 @@
+ include:
+ - local: '/.gitlab-ci.d/crossbuild-template.yml'
+
+-cross-armel-system:
+- extends: .cross_system_build_job
+- needs:
+- job: armel-debian-cross-container
+- variables:
+- IMAGE: debian-armel-cross
+-
+-cross-armel-user:
+- extends: .cross_user_build_job
+- needs:
+- job: armel-debian-cross-container
+- variables:
+- IMAGE: debian-armel-cross
+-
+ cross-armhf-system:
+ extends: .cross_system_build_job
+ needs:
+diff --git a/.gitlab-ci.d/custom-runners.yml b/.gitlab-ci.d/custom-runners.yml
+index 97f99e29c2..94414457f1 100644
+--- a/.gitlab-ci.d/custom-runners.yml
++++ b/.gitlab-ci.d/custom-runners.yml
+@@ -14,7 +14,7 @@ variables:
+ GIT_STRATEGY: clone
+
+ include:
+- - local: '/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml'
++ - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml'
+ - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml'
+ - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml'
+ - local: '/.gitlab-ci.d/custom-runners/centos-stream-8-x86_64.yml'
+diff --git a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
b/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml
+similarity index 89%
+rename from .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
+rename to .gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml
+index 0c835939db..12c6e21119 100644
+--- a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
++++ b/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml
+@@ -1,12 +1,12 @@
+-# All ubuntu-20.04 jobs should run successfully in an environment
++# All ubuntu-22.04 jobs should run successfully in an environment
+ # setup by the scripts/ci/setup/build-environment.yml task
+-# "Install basic packages to build QEMU on Ubuntu 20.04/20.04"
++# "Install basic packages to build QEMU on Ubuntu 22.04"
+
+-ubuntu-20.04-s390x-all-linux-static:
++ubuntu-22.04-s390x-all-linux-static:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ rules:
+ - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~
/^staging/'
+@@ -24,11 +24,11 @@ ubuntu-20.04-s390x-all-linux-static:
+ - make --output-sync -j`nproc` check-tcg V=1
+ || { cat meson-logs/testlog.txt; exit 1; } ;
+
+-ubuntu-20.04-s390x-all:
++ubuntu-22.04-s390x-all:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ timeout: 75m
+ rules:
+@@ -43,11 +43,11 @@ ubuntu-20.04-s390x-all:
+ - make --output-sync -j`nproc` check V=1
+ || { cat meson-logs/testlog.txt; exit 1; } ;
+
+-ubuntu-20.04-s390x-alldbg:
++ubuntu-22.04-s390x-alldbg:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ rules:
+ - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~
/^staging/'
+@@ -66,11 +66,11 @@ ubuntu-20.04-s390x-alldbg:
+ - make --output-sync -j`nproc` check V=1
+ || { cat meson-logs/testlog.txt; exit 1; } ;
+
+-ubuntu-20.04-s390x-clang:
++ubuntu-22.04-s390x-clang:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ rules:
+ - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~
/^staging/'
+@@ -88,11 +88,11 @@ ubuntu-20.04-s390x-clang:
+ - make --output-sync -j`nproc` check V=1
+ || { cat meson-logs/testlog.txt; exit 1; } ;
+
+-ubuntu-20.04-s390x-tci:
++ubuntu-22.04-s390x-tci:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ rules:
+ - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~
/^staging/'
+@@ -108,11 +108,11 @@ ubuntu-20.04-s390x-tci:
+ || { cat config.log meson-logs/meson-log.txt; exit 1; }
+ - make --output-sync -j`nproc`
+
+-ubuntu-20.04-s390x-notcg:
++ubuntu-22.04-s390x-notcg:
+ needs: []
+ stage: build
+ tags:
+- - ubuntu_20.04
++ - ubuntu_22.04
+ - s390x
+ rules:
+ - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~
/^staging/'
+diff --git a/VERSION b/VERSION
+index c0d5d580b2..0755f425a1 100644
+--- a/VERSION
++++ b/VERSION
+@@ -1 +1 @@
+-7.2.13
++7.2.14
+diff --git a/block/blkio.c b/block/blkio.c
+index cb66160268..1fd47c434c 100644
+--- a/block/blkio.c
++++ b/block/blkio.c
+@@ -808,8 +808,10 @@ static int blkio_file_open(BlockDriverState *bs, QDict
*options, int flags,
+ }
+
+ bs->supported_write_flags = BDRV_REQ_FUA | BDRV_REQ_REGISTERED_BUF;
+- bs->supported_zero_flags = BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP |
+- BDRV_REQ_NO_FALLBACK;
++ bs->supported_zero_flags = BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK;
++#ifdef CONFIG_BLKIO_WRITE_ZEROS_FUA
++ bs->supported_zero_flags |= BDRV_REQ_FUA;
++#endif
+
+ qemu_mutex_init(&s->blkio_lock);
+ qemu_co_mutex_init(&s->bounce_lock);
+diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
+index cf21b5e40a..d564d8d234 100644
+--- a/block/monitor/block-hmp-cmds.c
++++ b/block/monitor/block-hmp-cmds.c
+@@ -413,7 +413,8 @@ void hmp_nbd_server_start(Monitor *mon, const QDict *qdict)
+ goto exit;
+ }
+
+- nbd_server_start(addr, NULL, NULL, 0, &local_err);
++ nbd_server_start(addr, NULL, NULL, NBD_DEFAULT_MAX_CONNECTIONS,
++ &local_err);
+ qapi_free_SocketAddress(addr);
+ if (local_err != NULL) {
+ goto exit;
+diff --git a/block/vvfat.c b/block/vvfat.c
+index 723c91216e..eb844c3134 100644
+--- a/block/vvfat.c
++++ b/block/vvfat.c
+@@ -1368,8 +1368,9 @@ static int open_file(BDRVVVFATState* s,mapping_t*
mapping)
+ return -1;
+ vvfat_close_current_file(s);
+ s->current_fd = fd;
+- s->current_mapping = mapping;
+ }
++
++ s->current_mapping = mapping;
+ return 0;
+ }
+
+@@ -1407,7 +1408,9 @@ read_cluster_directory:
+
+ assert(s->current_fd);
+
+-
offset=s->cluster_size*(cluster_num-s->current_mapping->begin)+s->current_mapping->info.file.offset;
++ offset = s->cluster_size *
++ ((cluster_num - s->current_mapping->begin)
++ + s->current_mapping->info.file.offset);
+ if(lseek(s->current_fd, offset, SEEK_SET)!=offset)
+ return -3;
+ s->cluster=s->cluster_buffer;
+@@ -1877,7 +1880,6 @@ static uint32_t
get_cluster_count_for_direntry(BDRVVVFATState* s,
+
+ uint32_t cluster_num = begin_of_direntry(direntry);
+ uint32_t offset = 0;
+- int first_mapping_index = -1;
+ mapping_t* mapping = NULL;
+ const char* basename2 = NULL;
+
+@@ -1928,8 +1930,9 @@ static uint32_t
get_cluster_count_for_direntry(BDRVVVFATState* s,
+ (mapping->mode & MODE_DIRECTORY) == 0) {
+
+ /* was modified in qcow */
+- if (offset != mapping->info.file.offset + s->cluster_size
+- * (cluster_num - mapping->begin)) {
++ if (offset != s->cluster_size
++ * ((cluster_num - mapping->begin)
++ + mapping->info.file.offset)) {
+ /* offset of this cluster in file chain has changed */
+ abort();
+ copy_it = 1;
+@@ -1938,14 +1941,9 @@ static uint32_t
get_cluster_count_for_direntry(BDRVVVFATState* s,
+
+ if (strcmp(basename, basename2))
+ copy_it = 1;
+- first_mapping_index = array_index(&(s->mapping),
mapping);
+- }
+-
+- if (mapping->first_mapping_index != first_mapping_index
+- && mapping->info.file.offset > 0) {
+- abort();
+- copy_it = 1;
+ }
++ assert(mapping->first_mapping_index == -1
++ || mapping->info.file.offset > 0);
+
+ /* need to write out? */
+ if (!was_modified && is_file(direntry)) {
+@@ -2402,7 +2400,7 @@ static int commit_mappings(BDRVVVFATState* s,
+ (mapping->end - mapping->begin);
+ } else
+ next_mapping->info.file.offset = mapping->info.file.offset +
+- mapping->end - mapping->begin;
++ (mapping->end - mapping->begin);
+
+ mapping = next_mapping;
+ }
+@@ -2522,8 +2520,9 @@ static int commit_one_file(BDRVVVFATState* s,
+ return -1;
+ }
+
+- for (i = s->cluster_size; i < offset; i += s->cluster_size)
++ for (i = 0; i < offset; i += s->cluster_size) {
+ c = modified_fat_get(s, c);
++ }
+
+ fd = qemu_open_old(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
+ if (fd < 0) {
+diff --git a/blockdev-nbd.c b/blockdev-nbd.c
+index 012256bb02..e06c26b0af 100644
+--- a/blockdev-nbd.c
++++ b/blockdev-nbd.c
+@@ -21,12 +21,18 @@
+ #include "io/channel-socket.h"
+ #include "io/net-listener.h"
+
++typedef struct NBDConn {
++ QIOChannelSocket *cioc;
++ QLIST_ENTRY(NBDConn) next;
++} NBDConn;
++
+ typedef struct NBDServerData {
+ QIONetListener *listener;
+ QCryptoTLSCreds *tlscreds;
+ char *tlsauthz;
+ uint32_t max_connections;
+ uint32_t connections;
++ QLIST_HEAD(, NBDConn) conns;
+ } NBDServerData;
+
+ static NBDServerData *nbd_server;
+@@ -51,6 +57,14 @@ int nbd_server_max_connections(void)
+
+ static void nbd_blockdev_client_closed(NBDClient *client, bool ignored)
+ {
++ NBDConn *conn = nbd_client_owner(client);
++
++ assert(qemu_in_main_thread() && nbd_server);
++
++ object_unref(OBJECT(conn->cioc));
++ QLIST_REMOVE(conn, next);
++ g_free(conn);
++
+ nbd_client_put(client);
+ assert(nbd_server->connections > 0);
+ nbd_server->connections--;
+@@ -60,31 +74,56 @@ static void nbd_blockdev_client_closed(NBDClient *client,
bool ignored)
+ static void nbd_accept(QIONetListener *listener, QIOChannelSocket *cioc,
+ gpointer opaque)
+ {
++ NBDConn *conn = g_new0(NBDConn, 1);
++
++ assert(qemu_in_main_thread() && nbd_server);
+ nbd_server->connections++;
++ object_ref(OBJECT(cioc));
++ conn->cioc = cioc;
++ QLIST_INSERT_HEAD(&nbd_server->conns, conn, next);
+ nbd_update_server_watch(nbd_server);
+
+ qio_channel_set_name(QIO_CHANNEL(cioc), "nbd-server");
+- nbd_client_new(cioc, nbd_server->tlscreds, nbd_server->tlsauthz,
+- nbd_blockdev_client_closed);
++ /* TODO - expose handshake timeout as QMP option */
++ nbd_client_new(cioc, NBD_DEFAULT_HANDSHAKE_MAX_SECS,
++ nbd_server->tlscreds, nbd_server->tlsauthz,
++ nbd_blockdev_client_closed, conn);
+ }
+
+ static void nbd_update_server_watch(NBDServerData *s)
+ {
+- if (!s->max_connections || s->connections < s->max_connections) {
+- qio_net_listener_set_client_func(s->listener, nbd_accept, NULL, NULL);
+- } else {
+- qio_net_listener_set_client_func(s->listener, NULL, NULL, NULL);
++ if (s->listener) {
++ if (!s->max_connections || s->connections < s->max_connections) {
++ qio_net_listener_set_client_func(s->listener, nbd_accept, NULL,
++ NULL);
++ } else {
++ qio_net_listener_set_client_func(s->listener, NULL, NULL, NULL);
++ }
+ }
+ }
+
+ static void nbd_server_free(NBDServerData *server)
+ {
++ NBDConn *conn, *tmp;
++
+ if (!server) {
+ return;
+ }
+
++ /*
++ * Forcefully close the listener socket, and any clients that have
++ * not yet disconnected on their own.
++ */
+ qio_net_listener_disconnect(server->listener);
+ object_unref(OBJECT(server->listener));
++ server->listener = NULL;
++ QLIST_FOREACH_SAFE(conn, &server->conns, next, tmp) {
++ qio_channel_shutdown(QIO_CHANNEL(conn->cioc),
QIO_CHANNEL_SHUTDOWN_BOTH,
++ NULL);
++ }
++
++ AIO_WAIT_WHILE_UNLOCKED(NULL, server->connections > 0);
++
+ if (server->tlscreds) {
+ object_unref(OBJECT(server->tlscreds));
+ }
+@@ -168,6 +207,10 @@ void nbd_server_start(SocketAddress *addr, const char
*tls_creds,
+
+ void nbd_server_start_options(NbdServerOptions *arg, Error **errp)
+ {
++ if (!arg->has_max_connections) {
++ arg->max_connections = NBD_DEFAULT_MAX_CONNECTIONS;
++ }
++
+ nbd_server_start(arg->addr, arg->tls_creds, arg->tls_authz,
+ arg->max_connections, errp);
+ }
+@@ -180,6 +223,10 @@ void qmp_nbd_server_start(SocketAddressLegacy *addr,
+ {
+ SocketAddress *addr_flat = socket_address_flatten(addr);
+
++ if (!has_max_connections) {
++ max_connections = NBD_DEFAULT_MAX_CONNECTIONS;
++ }
++
+ nbd_server_start(addr_flat, tls_creds, tls_authz, max_connections, errp);
+ qapi_free_SocketAddress(addr_flat);
+ }
+diff --git a/chardev/char-win-stdio.c b/chardev/char-win-stdio.c
+index eb830eabd9..6e59db84dd 100644
+--- a/chardev/char-win-stdio.c
++++ b/chardev/char-win-stdio.c
+@@ -33,6 +33,7 @@
+ struct WinStdioChardev {
+ Chardev parent;
+ HANDLE hStdIn;
++ DWORD dwOldMode;
+ HANDLE hInputReadyEvent;
+ HANDLE hInputDoneEvent;
+ HANDLE hInputThread;
+@@ -159,6 +160,7 @@ static void qemu_chr_open_stdio(Chardev *chr,
+ }
+
+ is_console = GetConsoleMode(stdio->hStdIn, &dwMode) != 0;
++ stdio->dwOldMode = dwMode;
+
+ if (is_console) {
+ if (qemu_add_wait_object(stdio->hStdIn,
+@@ -221,6 +223,9 @@ static void char_win_stdio_finalize(Object *obj)
+ {
+ WinStdioChardev *stdio = WIN_STDIO_CHARDEV(obj);
+
++ if (stdio->hStdIn != INVALID_HANDLE_VALUE) {
++ SetConsoleMode(stdio->hStdIn, stdio->dwOldMode);
++ }
+ if (stdio->hInputReadyEvent != INVALID_HANDLE_VALUE) {
+ CloseHandle(stdio->hInputReadyEvent);
+ }
+diff --git a/crypto/pbkdf-gcrypt.c b/crypto/pbkdf-gcrypt.c
+index a8d8e64f4d..bc0719c831 100644
+--- a/crypto/pbkdf-gcrypt.c
++++ b/crypto/pbkdf-gcrypt.c
+@@ -33,7 +33,7 @@ bool qcrypto_pbkdf2_supports(QCryptoHashAlgorithm hash)
+ case QCRYPTO_HASH_ALG_SHA384:
+ case QCRYPTO_HASH_ALG_SHA512:
+ case QCRYPTO_HASH_ALG_RIPEMD160:
+- return true;
++ return qcrypto_hash_supports(hash);
+ default:
+ return false;
+ }
+diff --git a/crypto/pbkdf-gnutls.c b/crypto/pbkdf-gnutls.c
+index 2dfbbd382c..911b565bea 100644
+--- a/crypto/pbkdf-gnutls.c
++++ b/crypto/pbkdf-gnutls.c
+@@ -33,7 +33,7 @@ bool qcrypto_pbkdf2_supports(QCryptoHashAlgorithm hash)
+ case QCRYPTO_HASH_ALG_SHA384:
+ case QCRYPTO_HASH_ALG_SHA512:
+ case QCRYPTO_HASH_ALG_RIPEMD160:
+- return true;
++ return qcrypto_hash_supports(hash);
+ default:
+ return false;
+ }
+diff --git a/crypto/pbkdf.c b/crypto/pbkdf.c
+index 8d198c152c..d1c06ef3ed 100644
+--- a/crypto/pbkdf.c
++++ b/crypto/pbkdf.c
+@@ -19,6 +19,7 @@
+ */
+
+ #include "qemu/osdep.h"
++#include "qemu/thread.h"
+ #include "qapi/error.h"
+ #include "crypto/pbkdf.h"
+ #ifndef _WIN32
+@@ -85,12 +86,28 @@ static int qcrypto_pbkdf2_get_thread_cpu(unsigned long
long *val_ms,
+ #endif
+ }
+
+-uint64_t qcrypto_pbkdf2_count_iters(QCryptoHashAlgorithm hash,
+- const uint8_t *key, size_t nkey,
+- const uint8_t *salt, size_t nsalt,
+- size_t nout,
+- Error **errp)
++typedef struct CountItersData {
++ QCryptoHashAlgorithm hash;
++ const uint8_t *key;
++ size_t nkey;
++ const uint8_t *salt;
++ size_t nsalt;
++ size_t nout;
++ uint64_t iterations;
++ Error **errp;
++} CountItersData;
++
++static void *threaded_qcrypto_pbkdf2_count_iters(void *data)
+ {
++ CountItersData *iters_data = (CountItersData *) data;
++ QCryptoHashAlgorithm hash = iters_data->hash;
++ const uint8_t *key = iters_data->key;
++ size_t nkey = iters_data->nkey;
++ const uint8_t *salt = iters_data->salt;
++ size_t nsalt = iters_data->nsalt;
++ size_t nout = iters_data->nout;
++ Error **errp = iters_data->errp;
++
+ uint64_t ret = -1;
+ g_autofree uint8_t *out = g_new(uint8_t, nout);
+ uint64_t iterations = (1 << 15);
+@@ -114,7 +131,10 @@ uint64_t qcrypto_pbkdf2_count_iters(QCryptoHashAlgorithm
hash,
+
+ delta_ms = end_ms - start_ms;
+
+- if (delta_ms > 500) {
++ if (delta_ms == 0) { /* sanity check */
++ error_setg(errp, "Unable to get accurate CPU usage");
++ goto cleanup;
++ } else if (delta_ms > 500) {
+ break;
+ } else if (delta_ms < 100) {
+ iterations = iterations * 10;
+@@ -129,5 +149,24 @@ uint64_t qcrypto_pbkdf2_count_iters(QCryptoHashAlgorithm
hash,
+
+ cleanup:
+ memset(out, 0, nout);
+- return ret;
++ iters_data->iterations = ret;
++ return NULL;
++}
++
++uint64_t qcrypto_pbkdf2_count_iters(QCryptoHashAlgorithm hash,
++ const uint8_t *key, size_t nkey,
++ const uint8_t *salt, size_t nsalt,
++ size_t nout,
++ Error **errp)
++{
++ CountItersData data = {
++ hash, key, nkey, salt, nsalt, nout, 0, errp
++ };
++ QemuThread thread;
++
++ qemu_thread_create(&thread, "pbkdf2", threaded_qcrypto_pbkdf2_count_iters,
++ &data, QEMU_THREAD_JOINABLE);
++ qemu_thread_join(&thread);
++
++ return data.iterations;
+ }
+diff --git a/crypto/tlscredspsk.c b/crypto/tlscredspsk.c
+index 546cad1c5a..0d6b71a37c 100644
+--- a/crypto/tlscredspsk.c
++++ b/crypto/tlscredspsk.c
+@@ -243,6 +243,7 @@ qcrypto_tls_creds_psk_finalize(Object *obj)
+ QCryptoTLSCredsPSK *creds = QCRYPTO_TLS_CREDS_PSK(obj);
+
+ qcrypto_tls_creds_psk_unload(creds);
++ g_free(creds->username);
+ }
+
+ static void
+diff --git a/docs/sphinx/depfile.py b/docs/sphinx/depfile.py
+index afdcbcec6e..e74be6af98 100644
+--- a/docs/sphinx/depfile.py
++++ b/docs/sphinx/depfile.py
+@@ -19,7 +19,7 @@
+
+ def get_infiles(env):
+ for x in env.found_docs:
+- yield env.doc2path(x)
++ yield str(env.doc2path(x))
+ yield from ((os.path.join(env.srcdir, dep)
+ for dep in env.dependencies[x]))
+ for mod in sys.modules.values():
+diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
+index 284c09c91d..334cd836c3 100644
+--- a/hw/arm/mps2-tz.c
++++ b/hw/arm/mps2-tz.c
+@@ -427,7 +427,7 @@ static MemoryRegion *make_uart(MPS2TZMachineState *mms,
void *opaque,
+ const char *name, hwaddr size,
+ const int *irqs, const PPCExtraData *extradata)
+ {
+- /* The irq[] array is tx, rx, combined, in that order */
++ /* The irq[] array is rx, tx, combined, in that order */
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
+ CMSDKAPBUART *uart = opaque;
+ int i = uart - &mms->uart[0];
+@@ -439,8 +439,8 @@ static MemoryRegion *make_uart(MPS2TZMachineState *mms,
void *opaque,
+ qdev_prop_set_uint32(DEVICE(uart), "pclk-frq", mmc->apb_periph_frq);
+ sysbus_realize(SYS_BUS_DEVICE(uart), &error_fatal);
+ s = SYS_BUS_DEVICE(uart);
+- sysbus_connect_irq(s, 0, get_sse_irq_in(mms, irqs[0]));
+- sysbus_connect_irq(s, 1, get_sse_irq_in(mms, irqs[1]));
++ sysbus_connect_irq(s, 0, get_sse_irq_in(mms, irqs[1]));
++ sysbus_connect_irq(s, 1, get_sse_irq_in(mms, irqs[0]));
+ sysbus_connect_irq(s, 2, qdev_get_gpio_in(orgate_dev, i * 2));
+ sysbus_connect_irq(s, 3, qdev_get_gpio_in(orgate_dev, i * 2 + 1));
+ sysbus_connect_irq(s, 4, get_sse_irq_in(mms, irqs[2]));
+diff --git a/hw/char/bcm2835_aux.c b/hw/char/bcm2835_aux.c
+index 96410b1ff8..0f1b28547e 100644
+--- a/hw/char/bcm2835_aux.c
++++ b/hw/char/bcm2835_aux.c
+@@ -138,7 +138,7 @@ static uint64_t bcm2835_aux_read(void *opaque, hwaddr
offset, unsigned size)
+ res = 0x30e; /* space in the output buffer, empty tx fifo, idle tx/rx
*/
+ if (s->read_count > 0) {
+ res |= 0x1; /* data in input buffer */
+- assert(s->read_count < BCM2835_AUX_RX_FIFO_LEN);
++ assert(s->read_count <= BCM2835_AUX_RX_FIFO_LEN);
+ res |= ((uint32_t)s->read_count) << 16; /* rx fifo fill level */
+ }
+ return res;
+diff --git a/hw/core/ptimer.c b/hw/core/ptimer.c
+index eb5ba1aff7..f1f8109385 100644
+--- a/hw/core/ptimer.c
++++ b/hw/core/ptimer.c
+@@ -83,7 +83,7 @@ static void ptimer_reload(ptimer_state *s, int delta_adjust)
+ delta = s->delta = s->limit;
+ }
+
+- if (s->period == 0) {
++ if (s->period == 0 && s->period_frac == 0) {
+ if (!qtest_enabled()) {
+ fprintf(stderr, "Timer with period zero, disabling\n");
+ }
+@@ -309,7 +309,7 @@ void ptimer_run(ptimer_state *s, int oneshot)
+
+ assert(s->in_transaction);
+
+- if (was_disabled && s->period == 0) {
++ if (was_disabled && s->period == 0 && s->period_frac == 0) {
+ if (!qtest_enabled()) {
+ fprintf(stderr, "Timer with period zero, disabling\n");
+ }
+diff --git a/hw/cxl/cxl-host.c b/hw/cxl/cxl-host.c
+index 0fc3e57138..3253874322 100644
+--- a/hw/cxl/cxl-host.c
++++ b/hw/cxl/cxl-host.c
+@@ -282,7 +282,8 @@ static void machine_set_cxl(Object *obj, Visitor *v, const
char *name,
+ static void machine_get_cfmw(Object *obj, Visitor *v, const char *name,
+ void *opaque, Error **errp)
+ {
+- CXLFixedMemoryWindowOptionsList **list = opaque;
++ CXLState *state = opaque;
++ CXLFixedMemoryWindowOptionsList **list = &state->cfmw_list;
+
+ visit_type_CXLFixedMemoryWindowOptionsList(v, name, list, errp);
+ }
+diff --git a/hw/display/vhost-user-gpu.c b/hw/display/vhost-user-gpu.c
+index 19c0e20103..7dee566cfe 100644
+--- a/hw/display/vhost-user-gpu.c
++++ b/hw/display/vhost-user-gpu.c
+@@ -335,7 +335,7 @@ vhost_user_gpu_chr_read(void *opaque)
+ }
+
+ msg->request = request;
+- msg->flags = size;
++ msg->flags = flags;
+ msg->size = size;
+
+ if (request == VHOST_USER_GPU_CURSOR_UPDATE ||
+diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
+index a20f3e1d50..02597db1e1 100644
+--- a/hw/i386/amd_iommu.c
++++ b/hw/i386/amd_iommu.c
+@@ -346,12 +346,12 @@ static void amdvi_update_iotlb(AMDVIState *s, uint16_t
devid,
+ uint64_t gpa, IOMMUTLBEntry to_cache,
+ uint16_t domid)
+ {
+- AMDVIIOTLBEntry *entry = g_new(AMDVIIOTLBEntry, 1);
+- uint64_t *key = g_new(uint64_t, 1);
+- uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
+-
+ /* don't cache erroneous translations */
+ if (to_cache.perm != IOMMU_NONE) {
++ AMDVIIOTLBEntry *entry = g_new(AMDVIIOTLBEntry, 1);
++ uint64_t *key = g_new(uint64_t, 1);
++ uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
++
+ trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
+ PCI_FUNC(devid), gpa, to_cache.translated_addr);
+
+diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
+index e4d43ce48c..830e319e34 100644
+--- a/hw/i386/intel_iommu_internal.h
++++ b/hw/i386/intel_iommu_internal.h
+@@ -267,7 +267,7 @@
+ /* For the low 64-bit of 128-bit */
+ #define VTD_FRCD_FI(val) ((val) & ~0xfffULL)
+ #define VTD_FRCD_PV(val) (((val) & 0xffffULL) << 40)
+-#define VTD_FRCD_PP(val) (((val) & 0x1) << 31)
++#define VTD_FRCD_PP(val) (((val) & 0x1ULL) << 31)
+
+ /* DMA Remapping Fault Conditions */
+ typedef enum VTDFaultReason {
+diff --git a/hw/i386/sgx.c b/hw/i386/sgx.c
+index 09d9c7c73d..f64987c6dd 100644
+--- a/hw/i386/sgx.c
++++ b/hw/i386/sgx.c
+@@ -268,10 +268,12 @@ void hmp_info_sgx(Monitor *mon, const QDict *qdict)
+
+ bool sgx_epc_get_section(int section_nr, uint64_t *addr, uint64_t *size)
+ {
+- PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
++ PCMachineState *pcms =
++ (PCMachineState *)object_dynamic_cast(qdev_get_machine(),
++ TYPE_PC_MACHINE);
+ SGXEPCDevice *epc;
+
+- if (pcms->sgx_epc.size == 0 || pcms->sgx_epc.nr_sections <= section_nr) {
++ if (!pcms || pcms->sgx_epc.size == 0 || pcms->sgx_epc.nr_sections <=
section_nr) {
+ return true;
+ }
+
+diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
+index 47f01e45e3..b8a4364b7f 100644
+--- a/hw/intc/arm_gic.c
++++ b/hw/intc/arm_gic.c
+@@ -1263,9 +1263,14 @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
+ trace_gic_enable_irq(irq + i);
+ }
+ GIC_DIST_SET_ENABLED(irq + i, cm);
+- /* If a raised level triggered IRQ enabled then mark
+- is as pending. */
+- if (GIC_DIST_TEST_LEVEL(irq + i, mask)
++ /*
++ * If a raised level triggered IRQ enabled then mark
++ * it as pending on 11MPCore. For other GIC revisions we
++ * handle the "level triggered and line asserted" check
++ * at the other end in gic_test_pending().
++ */
++ if (s->revision == REV_11MPCORE
++ && GIC_DIST_TEST_LEVEL(irq + i, mask)
+ && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
+ DPRINTF("Set %d pending mask %x\n", irq + i, mask);
+ GIC_DIST_SET_PENDING(irq + i, mask);
+diff --git a/hw/intc/loongarch_ipi.c b/hw/intc/loongarch_ipi.c
+index 40e98af2ce..a4079e3732 100644
+--- a/hw/intc/loongarch_ipi.c
++++ b/hw/intc/loongarch_ipi.c
+@@ -12,6 +12,7 @@
+ #include "qapi/error.h"
+ #include "qemu/log.h"
+ #include "exec/address-spaces.h"
++#include "exec/memory.h"
+ #include "hw/loongarch/virt.h"
+ #include "migration/vmstate.h"
+ #include "target/loongarch/internals.h"
+@@ -59,8 +60,8 @@ static void send_ipi_data(CPULoongArchState *env,
target_ulong val, target_ulong
+ * if the mask is 0, we need not to do anything.
+ */
+ if ((val >> 27) & 0xf) {
+- data = address_space_ldl(&env->address_space_iocsr, addr,
+- MEMTXATTRS_UNSPECIFIED, NULL);
++ data = address_space_ldl_le(&env->address_space_iocsr, addr,
++ MEMTXATTRS_UNSPECIFIED, NULL);
+ for (i = 0; i < 4; i++) {
+ /* get mask for byte writing */
+ if (val & (0x1 << (27 + i))) {
+@@ -71,8 +72,8 @@ static void send_ipi_data(CPULoongArchState *env, target_ulong val, target_ulong
+
+ data &= mask;
+ data |= (val >> 32) & ~mask;
+- address_space_stl(&env->address_space_iocsr, addr,
+- data, MEMTXATTRS_UNSPECIFIED, NULL);
++ address_space_stl_le(&env->address_space_iocsr, addr,
++ data, MEMTXATTRS_UNSPECIFIED, NULL);
+ }
+
+ static void ipi_send(uint64_t val)
+diff --git a/hw/misc/bcm2835_property.c b/hw/misc/bcm2835_property.c
+index de056ea2df..c7834d3fc7 100644
+--- a/hw/misc/bcm2835_property.c
++++ b/hw/misc/bcm2835_property.c
+@@ -26,8 +26,6 @@ static void bcm2835_property_mbox_push(BCM2835PropertyState *s, uint32_t value)
+ uint32_t tot_len;
+ size_t resplen;
+ uint32_t tmp;
+- int n;
+- uint32_t offset, length, color;
+
+ /*
+ * Copy the current state of the framebuffer config; we will update
+@@ -258,18 +256,25 @@ static void bcm2835_property_mbox_push(BCM2835PropertyState *s, uint32_t value)
+ resplen = 16;
+ break;
+ case 0x0004800b: /* Set palette */
+- offset = ldl_le_phys(&s->dma_as, value + 12);
+- length = ldl_le_phys(&s->dma_as, value + 16);
+- n = 0;
+- while (n < length - offset) {
+- color = ldl_le_phys(&s->dma_as, value + 20 + (n << 2));
+- stl_le_phys(&s->dma_as,
+- s->fbdev->vcram_base + ((offset + n) << 2), color);
+- n++;
++ {
++ uint32_t offset = ldl_le_phys(&s->dma_as, value + 12);
++ uint32_t length = ldl_le_phys(&s->dma_as, value + 16);
++ int resp;
++
++ if (offset > 255 || length < 1 || length > 256) {
++ resp = 1; /* invalid request */
++ } else {
++ for (uint32_t e = 0; e < length; e++) {
++ uint32_t color = ldl_le_phys(&s->dma_as, value + 20 + (e << 2));
++ stl_le_phys(&s->dma_as,
++ s->fbdev->vcram_base + ((offset + e) << 2), color);
++ }
++ resp = 0;
+ }
+- stl_le_phys(&s->dma_as, value + 12, 0);
++ stl_le_phys(&s->dma_as, value + 12, resp);
+ resplen = 4;
+ break;
++ }
+ case 0x00040013: /* Get number of displays */
+ stl_le_phys(&s->dma_as, value + 12, 1);
+ resplen = 4;
+diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
+index beadea5bf8..925a5c319e 100644
+--- a/hw/net/virtio-net.c
++++ b/hw/net/virtio-net.c
+@@ -1597,24 +1597,28 @@ static bool virtio_net_can_receive(NetClientState *nc)
+
+ static int virtio_net_has_buffers(VirtIONetQueue *q, int bufsize)
+ {
++ int opaque;
++ unsigned int in_bytes;
+ VirtIONet *n = q->n;
+- if (virtio_queue_empty(q->rx_vq) ||
+- (n->mergeable_rx_bufs &&
+- !virtqueue_avail_bytes(q->rx_vq, bufsize, 0))) {
+- virtio_queue_set_notification(q->rx_vq, 1);
+-
+- /* To avoid a race condition where the guest has made some buffers
+- * available after the above check but before notification was
+- * enabled, check for available buffers again.
+- */
+- if (virtio_queue_empty(q->rx_vq) ||
+- (n->mergeable_rx_bufs &&
+- !virtqueue_avail_bytes(q->rx_vq, bufsize, 0))) {
++
++ while (virtio_queue_empty(q->rx_vq) || n->mergeable_rx_bufs) {
++ opaque = virtqueue_get_avail_bytes(q->rx_vq, &in_bytes, NULL,
++ bufsize, 0);
++ /* Buffer is enough, disable notification */
++ if (bufsize <= in_bytes) {
++ break;
++ }
++
++ if (virtio_queue_enable_notification_and_check(q->rx_vq, opaque)) {
++ /* Guest has added some buffers, try again */
++ continue;
++ } else {
+ return 0;
+ }
+ }
+
+ virtio_queue_set_notification(q->rx_vq, 0);
++
+ return 1;
+ }
+
+@@ -1846,7 +1850,8 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
+ if (!no_rss && n->rss_data.enabled && n->rss_data.enabled_software_rss) {
+ int index = virtio_net_process_rss(nc, buf, size);
+ if (index >= 0) {
+- NetClientState *nc2 = qemu_get_subqueue(n->nic, index);
++ NetClientState *nc2 =
++ qemu_get_subqueue(n->nic, index % n->curr_queue_pairs);
+ return virtio_net_receive_rcu(nc2, buf, size, true);
+ }
+ }
+diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
+index 027d67f10b..ed56ad40b3 100644
+--- a/hw/nvme/ctrl.c
++++ b/hw/nvme/ctrl.c
+@@ -2465,6 +2465,7 @@ next:
+ done:
+ iocb->aiocb = NULL;
+ iocb->common.cb(iocb->common.opaque, iocb->ret);
++ g_free(iocb->range);
+ qemu_aio_unref(iocb);
+ }
+
+diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
+index abd503d168..c4a9b5956d 100644
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -846,6 +846,7 @@ static void sdhci_do_adma(SDHCIState *s)
+ }
+ }
+ if (res != MEMTX_OK) {
++ s->data_count = 0;
+ if (s->errintstsen & SDHC_EISEN_ADMAERR) {
+ trace_sdhci_error("Set ADMA error flag");
+ s->errintsts |= SDHC_EIS_ADMAERR;
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+index 1227e3d692..d0d13f4766 100644
+--- a/hw/virtio/virtio.c
++++ b/hw/virtio/virtio.c
+@@ -1153,6 +1153,60 @@ int virtio_queue_empty(VirtQueue *vq)
+ }
+ }
+
++static bool virtio_queue_split_poll(VirtQueue *vq, unsigned shadow_idx)
++{
++ if (unlikely(!vq->vring.avail)) {
++ return false;
++ }
++
++ return (uint16_t)shadow_idx != vring_avail_idx(vq);
++}
++
++static bool virtio_queue_packed_poll(VirtQueue *vq, unsigned shadow_idx)
++{
++ VRingPackedDesc desc;
++ VRingMemoryRegionCaches *caches;
++
++ if (unlikely(!vq->vring.desc)) {
++ return false;
++ }
++
++ caches = vring_get_region_caches(vq);
++ if (!caches) {
++ return false;
++ }
++
++ vring_packed_desc_read(vq->vdev, &desc, &caches->desc,
++ shadow_idx, true);
++
++ return is_desc_avail(desc.flags, vq->shadow_avail_wrap_counter);
++}
++
++static bool virtio_queue_poll(VirtQueue *vq, unsigned shadow_idx)
++{
++ if (virtio_device_disabled(vq->vdev)) {
++ return false;
++ }
++
++ if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED)) {
++ return virtio_queue_packed_poll(vq, shadow_idx);
++ } else {
++ return virtio_queue_split_poll(vq, shadow_idx);
++ }
++}
++
++bool virtio_queue_enable_notification_and_check(VirtQueue *vq,
++ int opaque)
++{
++ virtio_queue_set_notification(vq, 1);
++
++ if (opaque >= 0) {
++ return virtio_queue_poll(vq, (unsigned)opaque);
++ } else {
++ return false;
++ }
++}
++
+ static void virtqueue_unmap_sg(VirtQueue *vq, const VirtQueueElement *elem,
+ unsigned int len)
+ {
+@@ -1727,9 +1781,9 @@ err:
+ goto done;
+ }
+
+-void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
+- unsigned int *out_bytes,
+- unsigned max_in_bytes, unsigned max_out_bytes)
++int virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
++ unsigned int *out_bytes, unsigned max_in_bytes,
++ unsigned max_out_bytes)
+ {
+ uint16_t desc_size;
+ VRingMemoryRegionCaches *caches;
+@@ -1762,7 +1816,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned
int *in_bytes,
+ caches);
+ }
+
+- return;
++ return (int)vq->shadow_avail_idx;
+ err:
+ if (in_bytes) {
+ *in_bytes = 0;
+@@ -1770,6 +1824,8 @@ err:
+ if (out_bytes) {
+ *out_bytes = 0;
+ }
++
++ return -1;
+ }
+
+ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
+diff --git a/include/block/nbd.h b/include/block/nbd.h
+index 4ede3b2bd0..88be104e31 100644
+--- a/include/block/nbd.h
++++ b/include/block/nbd.h
+@@ -27,6 +27,19 @@
+
+ extern const BlockExportDriver blk_exp_nbd;
+
++/*
++ * NBD_DEFAULT_HANDSHAKE_MAX_SECS: Number of seconds in which client must
++ * succeed at NBD_OPT_GO before being forcefully dropped as too slow.
++ */
++#define NBD_DEFAULT_HANDSHAKE_MAX_SECS 10
++
++/*
++ * NBD_DEFAULT_MAX_CONNECTIONS: Number of client sockets to allow at
++ * once; must be large enough to allow a MULTI_CONN-aware client like
++ * nbdcopy to create its typical number of 8-16 sockets.
++ */
++#define NBD_DEFAULT_MAX_CONNECTIONS 100
++
+ /* Handshake phase structs - this struct is passed on the wire */
+
+ struct NBDOption {
+@@ -338,9 +351,12 @@ AioContext *nbd_export_aio_context(NBDExport *exp);
+ NBDExport *nbd_export_find(const char *name);
+
+ void nbd_client_new(QIOChannelSocket *sioc,
++ uint32_t handshake_max_secs,
+ QCryptoTLSCreds *tlscreds,
+ const char *tlsauthz,
+- void (*close_fn)(NBDClient *, bool));
++ void (*close_fn)(NBDClient *, bool),
++ void *owner);
++void *nbd_client_owner(NBDClient *client);
+ void nbd_client_get(NBDClient *client);
+ void nbd_client_put(NBDClient *client);
+
+diff --git a/include/exec/ramlist.h b/include/exec/ramlist.h
+index 2ad2a81acc..d9cfe530be 100644
+--- a/include/exec/ramlist.h
++++ b/include/exec/ramlist.h
+@@ -50,6 +50,7 @@ typedef struct RAMList {
+ /* RCU-enabled, writes protected by the ramlist lock. */
+ QLIST_HEAD(, RAMBlock) blocks;
+ DirtyMemoryBlocks *dirty_memory[DIRTY_MEMORY_NUM];
++ unsigned int num_dirty_blocks;
+ uint32_t version;
+ QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
+ } RAMList;
+diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
+index c1a7c9bd3b..ab3eb182f4 100644
+--- a/include/hw/virtio/virtio.h
++++ b/include/hw/virtio/virtio.h
+@@ -237,9 +237,13 @@ void qemu_put_virtqueue_element(VirtIODevice *vdev, QEMUFile *f,
+ VirtQueueElement *elem);
+ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
+ unsigned int out_bytes);
+-void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
+- unsigned int *out_bytes,
+- unsigned max_in_bytes, unsigned max_out_bytes);
++/**
++ * Return <0 on error or an opaque >=0 to pass to
++ * virtio_queue_enable_notification_and_check on success.
++ */
++int virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
++ unsigned int *out_bytes, unsigned max_in_bytes,
++ unsigned max_out_bytes);
+
+ void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq);
+ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
+@@ -266,6 +270,15 @@ int virtio_queue_ready(VirtQueue *vq);
+
+ int virtio_queue_empty(VirtQueue *vq);
+
++/**
++ * Enable notification and check whether guest has added some
++ * buffers since last call to virtqueue_get_avail_bytes.
++ *
++ * @opaque: value returned from virtqueue_get_avail_bytes
++ */
++bool virtio_queue_enable_notification_and_check(VirtQueue *vq,
++ int opaque);
++
+ /* Host binding interface. */
+
+ uint32_t virtio_config_readb(VirtIODevice *vdev, uint32_t addr);
+@@ -425,9 +438,9 @@ static inline bool virtio_device_started(VirtIODevice *vdev, uint8_t status)
+ * @vdev - the VirtIO device
+ * @status - the devices status bits
+ *
+- * This is similar to virtio_device_started() but also encapsulates a
+- * check on the VM status which would prevent a device starting
+- * anyway.
++ * This is similar to virtio_device_started() but ignores vdev->started
++ * and also encapsulates a check on the VM status which would prevent a
++ * device from starting anyway.
+ */
+ static inline bool virtio_device_should_start(VirtIODevice *vdev, uint8_t status)
+ {
+@@ -435,7 +448,7 @@ static inline bool virtio_device_should_start(VirtIODevice *vdev, uint8_t status
+ return false;
+ }
+
+- return virtio_device_started(vdev, status);
++ return status & VIRTIO_CONFIG_S_DRIVER_OK;
+ }
+
+ static inline void virtio_set_started(VirtIODevice *vdev, bool started)
+diff --git a/meson.build b/meson.build
+index 787f91855e..16dc9627e0 100644
+--- a/meson.build
++++ b/meson.build
+@@ -1831,6 +1831,10 @@ config_host_data.set('CONFIG_LZO', lzo.found())
+ config_host_data.set('CONFIG_MPATH', mpathpersist.found())
+ config_host_data.set('CONFIG_MPATH_NEW_API', mpathpersist_new_api)
+ config_host_data.set('CONFIG_BLKIO', blkio.found())
++if blkio.found()
++ config_host_data.set('CONFIG_BLKIO_WRITE_ZEROS_FUA',
++ blkio.version().version_compare('>=1.4.0'))
++endif
+ config_host_data.set('CONFIG_CURL', curl.found())
+ config_host_data.set('CONFIG_CURSES', curses.found())
+ config_host_data.set('CONFIG_GBM', gbm.found())
+diff --git a/nbd/server.c b/nbd/server.c
+index 74edb2815b..bfa8d47dad 100644
+--- a/nbd/server.c
++++ b/nbd/server.c
+@@ -120,10 +120,12 @@ typedef struct NBDExportMetaContexts {
+ struct NBDClient {
+ int refcount;
+ void (*close_fn)(NBDClient *client, bool negotiated);
++ void *owner;
+
+ NBDExport *exp;
+ QCryptoTLSCreds *tlscreds;
+ char *tlsauthz;
++ uint32_t handshake_max_secs;
+ QIOChannelSocket *sioc; /* The underlying data channel */
+ QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
+
+@@ -2748,33 +2750,63 @@ static void nbd_client_receive_next_request(NBDClient *client)
+ }
+ }
+
++static void nbd_handshake_timer_cb(void *opaque)
++{
++ QIOChannel *ioc = opaque;
++
++ trace_nbd_handshake_timer_cb();
++ qio_channel_shutdown(ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
++}
++
+ static coroutine_fn void nbd_co_client_start(void *opaque)
+ {
+ NBDClient *client = opaque;
+ Error *local_err = NULL;
++ QEMUTimer *handshake_timer = NULL;
+
+ qemu_co_mutex_init(&client->send_lock);
+
++ /*
++ * Create a timer to bound the time spent in negotiation. If the
++ * timer expires, it is likely nbd_negotiate will fail because the
++ * socket was shutdown.
++ */
++ if (client->handshake_max_secs > 0) {
++ handshake_timer = aio_timer_new(qemu_get_aio_context(),
++ QEMU_CLOCK_REALTIME,
++ SCALE_NS,
++ nbd_handshake_timer_cb,
++ client->sioc);
++ timer_mod(handshake_timer,
++ qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
++ client->handshake_max_secs * NANOSECONDS_PER_SECOND);
++ }
++
+ if (nbd_negotiate(client, &local_err)) {
+ if (local_err) {
+ error_report_err(local_err);
+ }
++ timer_free(handshake_timer);
+ client_close(client, false);
+ return;
+ }
+
++ timer_free(handshake_timer);
+ nbd_client_receive_next_request(client);
+ }
+
+ /*
+- * Create a new client listener using the given channel @sioc.
++ * Create a new client listener using the given channel @sioc and @owner.
+ * Begin servicing it in a coroutine. When the connection closes, call
+- * @close_fn with an indication of whether the client completed negotiation.
++ * @close_fn with an indication of whether the client completed negotiation
++ * within @handshake_max_secs seconds (0 for unbounded).
+ */
+ void nbd_client_new(QIOChannelSocket *sioc,
++ uint32_t handshake_max_secs,
+ QCryptoTLSCreds *tlscreds,
+ const char *tlsauthz,
+- void (*close_fn)(NBDClient *, bool))
++ void (*close_fn)(NBDClient *, bool),
++ void *owner)
+ {
+ NBDClient *client;
+ Coroutine *co;
+@@ -2786,12 +2818,20 @@ void nbd_client_new(QIOChannelSocket *sioc,
+ object_ref(OBJECT(client->tlscreds));
+ }
+ client->tlsauthz = g_strdup(tlsauthz);
++ client->handshake_max_secs = handshake_max_secs;
+ client->sioc = sioc;
+ object_ref(OBJECT(client->sioc));
+ client->ioc = QIO_CHANNEL(sioc);
+ object_ref(OBJECT(client->ioc));
+ client->close_fn = close_fn;
++ client->owner = owner;
+
+ co = qemu_coroutine_create(nbd_co_client_start, client);
+ qemu_coroutine_enter(co);
+ }
++
++void *
++nbd_client_owner(NBDClient *client)
++{
++ return client->owner;
++}
+diff --git a/nbd/trace-events b/nbd/trace-events
+index b7032ca277..675f880fa1 100644
+--- a/nbd/trace-events
++++ b/nbd/trace-events
+@@ -73,6 +73,7 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t type, const char *n
+ nbd_co_receive_request_payload_received(uint64_t handle, uint32_t len) "Payload received: handle = %" PRIu64 ", len = %" PRIu32
+ nbd_co_receive_align_compliance(const char *op, uint64_t from, uint32_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx32 ", align=0x%" PRIx32
+ nbd_trip(void) "Reading request"
++nbd_handshake_timer_cb(void) "client took too long to negotiate"
+
+ # client-connection.c
+ nbd_connect_thread_sleep(uint64_t timeout) "timeout %" PRIu64
+diff --git a/qapi/block-export.json b/qapi/block-export.json
+index 4627bbc4e6..67d2337f91 100644
+--- a/qapi/block-export.json
++++ b/qapi/block-export.json
+@@ -24,7 +24,7 @@
+ # @max-connections: The maximum number of connections to allow at the same
+ # time, 0 for unlimited. Setting this to 1 also stops
+ # the server from advertising multiple client support
+-# (since 5.2; default: 0)
++# (since 5.2; default: 100)
+ #
+ # Since: 4.2
+ ##
+@@ -55,7 +55,7 @@
+ # @max-connections: The maximum number of connections to allow at the same
+ # time, 0 for unlimited. Setting this to 1 also stops
+ # the server from advertising multiple client support
+-# (since 5.2; default: 0).
++# (since 5.2; default: 100).
+ #
+ # Returns: error if the server is already running.
+ #
+diff --git a/qapi/qom.json b/qapi/qom.json
+index 30e76653ad..694bb81948 100644
+--- a/qapi/qom.json
++++ b/qapi/qom.json
+@@ -860,7 +860,8 @@
+ # @ObjectType:
+ #
+ # Features:
+-# @unstable: Member @x-remote-object is experimental.
++# @unstable: Members @x-remote-object and @x-vfio-user-server are
++# experimental.
+ #
+ # Since: 6.0
+ ##
+diff --git a/qemu-nbd.c b/qemu-nbd.c
+index f71f5125d8..16b220bdad 100644
+--- a/qemu-nbd.c
++++ b/qemu-nbd.c
+@@ -369,7 +369,9 @@ static void nbd_accept(QIONetListener *listener, QIOChannelSocket *cioc,
+
+ nb_fds++;
+ nbd_update_server_watch();
+- nbd_client_new(cioc, tlscreds, tlsauthz, nbd_client_closed);
++ /* TODO - expose handshake timeout as command line option */
++ nbd_client_new(cioc, NBD_DEFAULT_HANDSHAKE_MAX_SECS,
++ tlscreds, tlsauthz, nbd_client_closed, NULL);
+ }
+
+ static void nbd_update_server_watch(void)
+diff --git a/softmmu/physmem.c b/softmmu/physmem.c
+index 1b606a3002..5b176581f6 100644
+--- a/softmmu/physmem.c
++++ b/softmmu/physmem.c
+@@ -1660,18 +1660,6 @@ static ram_addr_t find_ram_offset(ram_addr_t size)
+ return offset;
+ }
+
+-static unsigned long last_ram_page(void)
+-{
+- RAMBlock *block;
+- ram_addr_t last = 0;
+-
+- RCU_READ_LOCK_GUARD();
+- RAMBLOCK_FOREACH(block) {
+- last = MAX(last, block->offset + block->max_length);
+- }
+- return last >> TARGET_PAGE_BITS;
+-}
+-
+ static void qemu_ram_setup_dump(void *addr, ram_addr_t size)
+ {
+ int ret;
+@@ -1919,13 +1907,11 @@ void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length)
+ }
+
+ /* Called with ram_list.mutex held */
+-static void dirty_memory_extend(ram_addr_t old_ram_size,
+- ram_addr_t new_ram_size)
++static void dirty_memory_extend(ram_addr_t new_ram_size)
+ {
+- ram_addr_t old_num_blocks = DIV_ROUND_UP(old_ram_size,
+- DIRTY_MEMORY_BLOCK_SIZE);
+- ram_addr_t new_num_blocks = DIV_ROUND_UP(new_ram_size,
+- DIRTY_MEMORY_BLOCK_SIZE);
++ unsigned int old_num_blocks = ram_list.num_dirty_blocks;
++ unsigned int new_num_blocks = DIV_ROUND_UP(new_ram_size,
++ DIRTY_MEMORY_BLOCK_SIZE);
+ int i;
+
+ /* Only need to extend if block count increased */
+@@ -1957,6 +1943,8 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
+ g_free_rcu(old_blocks, rcu);
+ }
+ }
++
++ ram_list.num_dirty_blocks = new_num_blocks;
+ }
+
+ static void ram_block_add(RAMBlock *new_block, Error **errp)
+@@ -1965,11 +1953,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
+ const bool shared = qemu_ram_is_shared(new_block);
+ RAMBlock *block;
+ RAMBlock *last_block = NULL;
+- ram_addr_t old_ram_size, new_ram_size;
++ ram_addr_t ram_size;
+ Error *err = NULL;
+
+- old_ram_size = last_ram_page();
+-
+ qemu_mutex_lock_ramlist();
+ new_block->offset = find_ram_offset(new_block->max_length);
+
+@@ -1997,11 +1983,8 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
+ }
+ }
+
+- new_ram_size = MAX(old_ram_size,
+- (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS);
+- if (new_ram_size > old_ram_size) {
+- dirty_memory_extend(old_ram_size, new_ram_size);
+- }
++ ram_size = (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS;
++ dirty_memory_extend(ram_size);
+ /* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
+ * QLIST (which has an RCU-friendly variant) does not have insertion at
+ * tail, so save the last element in last_block.
+diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
+index d2d544a696..d33fbcd8fd 100644
+--- a/target/arm/helper-sme.h
++++ b/target/arm/helper-sme.h
+@@ -122,7 +122,7 @@ DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+ DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+ DEF_HELPER_FLAGS_7(sme_fmopa_h, TCG_CALL_NO_RWG,
+- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
++ void, ptr, ptr, ptr, ptr, ptr, env, i32)
+ DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
+diff --git a/target/arm/helper.c b/target/arm/helper.c
+index acc0470e86..5c22626b80 100644
+--- a/target/arm/helper.c
++++ b/target/arm/helper.c
+@@ -6335,7 +6335,7 @@ uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm)
+ if (el <= 1 && !el_is_in_host(env, el)) {
+ len = MIN(len, 0xf & (uint32_t)cr[1]);
+ }
+- if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
++ if (el <= 2 && arm_is_el2_enabled(env)) {
+ len = MIN(len, 0xf & (uint32_t)cr[2]);
+ }
+ if (arm_feature(env, ARM_FEATURE_EL3)) {
+diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
+index d592c78ec9..98a4840970 100644
+--- a/target/arm/sme_helper.c
++++ b/target/arm/sme_helper.c
+@@ -949,7 +949,7 @@ void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
+ if (pb & 1) {
+ uint32_t *a = vza_row + H1_4(col);
+ uint32_t *m = vzm + H1_4(col);
+- *a = float32_muladd(n, *m, *a, 0, vst);
++ *a = float32_muladd(n, *m, *a, 0, &fpst);
+ }
+ col += 4;
+ pb >>= 4;
+@@ -1009,12 +1009,23 @@ static inline uint32_t f16mop_adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
+ }
+
+ static float32 f16_dotadd(float32 sum, uint32_t e1, uint32_t e2,
+- float_status *s_std, float_status *s_odd)
++ float_status *s_f16, float_status *s_std,
++ float_status *s_odd)
+ {
+- float64 e1r = float16_to_float64(e1 & 0xffff, true, s_std);
+- float64 e1c = float16_to_float64(e1 >> 16, true, s_std);
+- float64 e2r = float16_to_float64(e2 & 0xffff, true, s_std);
+- float64 e2c = float16_to_float64(e2 >> 16, true, s_std);
++ /*
++ * We need three different float_status for different parts of this
++ * operation:
++ * - the input conversion of the float16 values must use the
++ * f16-specific float_status, so that the FPCR.FZ16 control is applied
++ * - operations on float32 including the final accumulation must use
++ * the normal float_status, so that FPCR.FZ is applied
++ * - we have pre-set-up copy of s_std which is set to round-to-odd,
++ * for the multiply (see below)
++ */
++ float64 e1r = float16_to_float64(e1 & 0xffff, true, s_f16);
++ float64 e1c = float16_to_float64(e1 >> 16, true, s_f16);
++ float64 e2r = float16_to_float64(e2 & 0xffff, true, s_f16);
++ float64 e2c = float16_to_float64(e2 >> 16, true, s_f16);
+ float64 t64;
+ float32 t32;
+
+@@ -1036,20 +1047,23 @@ static float32 f16_dotadd(float32 sum, uint32_t e1, uint32_t e2,
+ }
+
+ void HELPER(sme_fmopa_h)(void *vza, void *vzn, void *vzm, void *vpn,
+- void *vpm, void *vst, uint32_t desc)
++ void *vpm, CPUARMState *env, uint32_t desc)
+ {
+ intptr_t row, col, oprsz = simd_maxsz(desc);
+ uint32_t neg = simd_data(desc) * 0x80008000u;
+ uint16_t *pn = vpn, *pm = vpm;
+- float_status fpst_odd, fpst_std;
++ float_status fpst_odd, fpst_std, fpst_f16;
+
+ /*
+- * Make a copy of float_status because this operation does not
+- * update the cumulative fp exception status. It also produces
+- * default nans. Make a second copy with round-to-odd -- see above.
++ * Make copies of fp_status and fp_status_f16, because this operation
++ * does not update the cumulative fp exception status. It also
++ * produces default NaNs. We also need a second copy of fp_status with
++ * round-to-odd -- see above.
+ */
+- fpst_std = *(float_status *)vst;
++ fpst_f16 = env->vfp.fp_status_f16;
++ fpst_std = env->vfp.fp_status;
+ set_default_nan_mode(true, &fpst_std);
++ set_default_nan_mode(true, &fpst_f16);
+ fpst_odd = fpst_std;
+ set_float_rounding_mode(float_round_to_odd, &fpst_odd);
+
+@@ -1069,7 +1083,8 @@ void HELPER(sme_fmopa_h)(void *vza, void *vzn, void *vzm, void *vpn,
+ uint32_t m = *(uint32_t *)(vzm + H1_4(col));
+
+ m = f16mop_adj_pair(m, pcol, 0);
+- *a = f16_dotadd(*a, n, m, &fpst_std, &fpst_odd);
++ *a = f16_dotadd(*a, n, m,
++ &fpst_f16, &fpst_std, &fpst_odd);
+ }
+ col += 4;
+ pcol >>= 4;
+@@ -1167,10 +1182,10 @@ static uint64_t NAME(uint64_t n, uint64_t m, uint64_t a, uint8_t p, bool neg) \
+ uint64_t sum = 0; \
+ /* Apply P to N as a mask, making the inactive elements 0. */ \
+ n &= expand_pred_h(p); \
+- sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
+- sum += (NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
+- sum += (NTYPE)(n >> 32) * (MTYPE)(m >> 32); \
+- sum += (NTYPE)(n >> 48) * (MTYPE)(m >> 48); \
++ sum += (int64_t)(NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
++ sum += (int64_t)(NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
++ sum += (int64_t)(NTYPE)(n >> 32) * (MTYPE)(m >> 32); \
++ sum += (int64_t)(NTYPE)(n >> 48) * (MTYPE)(m >> 48); \
+ return neg ? a - sum : a + sum; \
+ }
+
+diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
+index 65f8495bdd..c864bd016c 100644
+--- a/target/arm/translate-sme.c
++++ b/target/arm/translate-sme.c
+@@ -56,7 +56,15 @@ static TCGv_ptr get_tile_rowcol(DisasContext *s, int esz, int rs,
+ /* Prepare a power-of-two modulo via extraction of @len bits. */
+ len = ctz32(streaming_vec_reg_size(s)) - esz;
+
+- if (vertical) {
++ if (!len) {
++ /*
++ * SVL is 128 and the element size is 128. There is exactly
++ * one 128x128 tile in the ZA storage, and so we calculate
++ * (Rs + imm) MOD 1, which is always 0. We need to special case
++ * this because TCG doesn't allow deposit ops with len 0.
++ */
++ tcg_gen_movi_i32(tmp, 0);
++ } else if (vertical) {
+ /*
+ * Compute the byte offset of the index within the tile:
+ * (index % (svl / size)) * size
+@@ -340,6 +348,7 @@ static bool do_outprod(DisasContext *s, arg_op *a, MemOp esz,
+ }
+
+ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
++ ARMFPStatusFlavour e_fpst,
+ gen_helper_gvec_5_ptr *fn)
+ {
+ int svl = streaming_vec_reg_size(s);
+@@ -355,7 +364,7 @@ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
+ zm = vec_full_reg_ptr(s, a->zm);
+ pn = pred_full_reg_ptr(s, a->pn);
+ pm = pred_full_reg_ptr(s, a->pm);
+- fpst = fpstatus_ptr(FPST_FPCR);
++ fpst = fpstatus_ptr(e_fpst);
+
+ fn(za, zn, zm, pn, pm, fpst, tcg_constant_i32(desc));
+
+@@ -367,9 +376,33 @@ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
+ return true;
+ }
+
+-TRANS_FEAT(FMOPA_h, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_h)
+-TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_s)
+-TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a, MO_64, gen_helper_sme_fmopa_d)
++static bool do_outprod_env(DisasContext *s, arg_op *a, MemOp esz,
++ gen_helper_gvec_5_ptr *fn)
++{
++ int svl = streaming_vec_reg_size(s);
++ uint32_t desc = simd_desc(svl, svl, a->sub);
++ TCGv_ptr za, zn, zm, pn, pm;
++
++ if (!sme_smza_enabled_check(s)) {
++ return true;
++ }
++
++ za = get_tile(s, esz, a->zad);
++ zn = vec_full_reg_ptr(s, a->zn);
++ zm = vec_full_reg_ptr(s, a->zm);
++ pn = pred_full_reg_ptr(s, a->pn);
++ pm = pred_full_reg_ptr(s, a->pm);
++
++ fn(za, zn, zm, pn, pm, cpu_env, tcg_constant_i32(desc));
++ return true;
++}
++
++TRANS_FEAT(FMOPA_h, aa64_sme, do_outprod_env, a,
++ MO_32, gen_helper_sme_fmopa_h)
++TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst, a,
++ MO_32, FPST_FPCR, gen_helper_sme_fmopa_s)
++TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a,
++ MO_64, FPST_FPCR, gen_helper_sme_fmopa_d)
+
+ /* TODO: FEAT_EBF16 */
+ TRANS_FEAT(BFMOPA, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_bfmopa)
+diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
+index 7388e1dbc7..034e816491 100644
+--- a/target/arm/translate-sve.c
++++ b/target/arm/translate-sve.c
+@@ -61,13 +61,27 @@ static int tszimm_esz(DisasContext *s, int x)
+
+ static int tszimm_shr(DisasContext *s, int x)
+ {
+- return (16 << tszimm_esz(s, x)) - x;
++ /*
++ * We won't use the tszimm_shr() value if tszimm_esz() returns -1 (the
++ * trans function will check for esz < 0), so we can return any
++ * value we like from here in that case as long as we avoid UB.
++ */
++ int esz = tszimm_esz(s, x);
++ if (esz < 0) {
++ return esz;
++ }
++ return (16 << esz) - x;
+ }
+
+ /* See e.g. LSL (immediate, predicated). */
+ static int tszimm_shl(DisasContext *s, int x)
+ {
+- return x - (8 << tszimm_esz(s, x));
++ /* As with tszimm_shr(), value will be unused if esz < 0 */
++ int esz = tszimm_esz(s, x);
++ if (esz < 0) {
++ return esz;
++ }
++ return x - (8 << esz);
+ }
+
+ /* The SH bit is in bit 8. Extract the low 8 and shift. */
+diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
+index 1dfc368456..88de92ed16 100644
+--- a/target/i386/tcg/decode-new.c.inc
++++ b/target/i386/tcg/decode-new.c.inc
+@@ -1176,7 +1176,10 @@ static bool decode_op(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode,
+ op->unit = X86_OP_SSE;
+ }
+ get_reg:
+- op->n = ((get_modrm(s, env) >> 3) & 7) | REX_R(s);
++ op->n = ((get_modrm(s, env) >> 3) & 7);
++ if (op->unit != X86_OP_MMX) {
++ op->n |= REX_R(s);
++ }
+ break;
+
+ case X86_TYPE_E: /* ALU modrm operand */
+diff --git a/target/rx/translate.c b/target/rx/translate.c
+index 87a3f54adb..4233622c4e 100644
+--- a/target/rx/translate.c
++++ b/target/rx/translate.c
+@@ -83,7 +83,8 @@ static uint32_t decode_load_bytes(DisasContext *ctx, uint32_t insn,
+
+ static uint32_t li(DisasContext *ctx, int sz)
+ {
+- int32_t tmp, addr;
++ target_ulong addr;
++ uint32_t tmp;
+ CPURXState *env = ctx->env;
+ addr = ctx->base.pc_next;
+
+diff --git a/tests/docker/dockerfiles/debian-armel-cross.docker b/tests/docker/dockerfiles/debian-armel-cross.docker
+deleted file mode 100644
+index d5c08714e4..0000000000
+--- a/tests/docker/dockerfiles/debian-armel-cross.docker
++++ /dev/null
+@@ -1,170 +0,0 @@
+-# THIS FILE WAS AUTO-GENERATED
+-#
+-# $ lcitool dockerfile --layers all --cross armv6l debian-11 qemu
+-#
+-# https://gitlab.com/libvirt/libvirt-ci
+-
+-FROM docker.io/library/debian:11-slim
+-
+-RUN export DEBIAN_FRONTEND=noninteractive && \
+- apt-get update && \
+- apt-get install -y eatmydata && \
+- eatmydata apt-get dist-upgrade -y && \
+- eatmydata apt-get install --no-install-recommends -y \
+- bash \
+- bc \
+- bison \
+- bsdextrautils \
+- bzip2 \
+- ca-certificates \
+- ccache \
+- dbus \
+- debianutils \
+- diffutils \
+- exuberant-ctags \
+- findutils \
+- flex \
+- gcovr \
+- genisoimage \
+- gettext \
+- git \
+- hostname \
+- libglib2.0-dev \
+- libpcre2-dev \
+- libsndio-dev \
+- libspice-protocol-dev \
+- llvm \
+- locales \
+- make \
+- meson \
+- ncat \
+- ninja-build \
+- openssh-client \
+- perl-base \
+- pkgconf \
+- python3 \
+- python3-numpy \
+- python3-opencv \
+- python3-pillow \
+- python3-pip \
+- python3-sphinx \
+- python3-sphinx-rtd-theme \
+- python3-venv \
+- python3-yaml \
+- rpm2cpio \
+- sed \
+- sparse \
+- tar \
+- tesseract-ocr \
+- tesseract-ocr-eng \
+- texinfo && \
+- eatmydata apt-get autoremove -y && \
+- eatmydata apt-get autoclean -y && \
+- sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+- dpkg-reconfigure locales
+-
+-ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
+-ENV LANG "en_US.UTF-8"
+-ENV MAKE "/usr/bin/make"
+-ENV NINJA "/usr/bin/ninja"
+-ENV PYTHON "/usr/bin/python3"
+-
+-RUN export DEBIAN_FRONTEND=noninteractive && \
+- dpkg --add-architecture armel && \
+- eatmydata apt-get update && \
+- eatmydata apt-get dist-upgrade -y && \
+- eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+- eatmydata apt-get install --no-install-recommends -y \
+- g++-arm-linux-gnueabi \
+- gcc-arm-linux-gnueabi \
+- libaio-dev:armel \
+- libasan5:armel \
+- libasound2-dev:armel \
+- libattr1-dev:armel \
+- libbpf-dev:armel \
+- libbrlapi-dev:armel \
+- libbz2-dev:armel \
+- libc6-dev:armel \
+- libcacard-dev:armel \
+- libcap-ng-dev:armel \
+- libcapstone-dev:armel \
+- libcmocka-dev:armel \
+- libcurl4-gnutls-dev:armel \
+- libdaxctl-dev:armel \
+- libdrm-dev:armel \
+- libepoxy-dev:armel \
+- libfdt-dev:armel \
+- libffi-dev:armel \
+- libfuse3-dev:armel \
+- libgbm-dev:armel \
+- libgcrypt20-dev:armel \
+- libglib2.0-dev:armel \
+- libglusterfs-dev:armel \
+- libgnutls28-dev:armel \
+- libgtk-3-dev:armel \
+- libibumad-dev:armel \
+- libibverbs-dev:armel \
+- libiscsi-dev:armel \
+- libjemalloc-dev:armel \
+- libjpeg62-turbo-dev:armel \
+- libjson-c-dev:armel \
+- liblttng-ust-dev:armel \
+- liblzo2-dev:armel \
+- libncursesw5-dev:armel \
+- libnfs-dev:armel \
+- libnuma-dev:armel \
+- libpam0g-dev:armel \
+- libpixman-1-dev:armel \
+- libpng-dev:armel \
+- libpulse-dev:armel \
+- librbd-dev:armel \
+- librdmacm-dev:armel \
+- libsasl2-dev:armel \
+- libsdl2-dev:armel \
+- libsdl2-image-dev:armel \
+- libseccomp-dev:armel \
+- libselinux1-dev:armel \
+- libslirp-dev:armel \
+- libsnappy-dev:armel \
+- libspice-server-dev:armel \
+- libssh-gcrypt-dev:armel \
+- libsystemd-dev:armel \
+- libtasn1-6-dev:armel \
+- libubsan1:armel \
+- libudev-dev:armel \
+- liburing-dev:armel \
+- libusb-1.0-0-dev:armel \
+- libusbredirhost-dev:armel \
+- libvdeplug-dev:armel \
+- libvirglrenderer-dev:armel \
+- libvte-2.91-dev:armel \
+- libzstd-dev:armel \
+- nettle-dev:armel \
+- systemtap-sdt-dev:armel \
+- xfslibs-dev:armel \
+- zlib1g-dev:armel && \
+- eatmydata apt-get autoremove -y && \
+- eatmydata apt-get autoclean -y && \
+- mkdir -p /usr/local/share/meson/cross && \
+- echo "[binaries]\n\
+-c = '/usr/bin/arm-linux-gnueabi-gcc'\n\
+-ar = '/usr/bin/arm-linux-gnueabi-gcc-ar'\n\
+-strip = '/usr/bin/arm-linux-gnueabi-strip'\n\
+-pkgconfig = '/usr/bin/arm-linux-gnueabi-pkg-config'\n\
+-\n\
+-[host_machine]\n\
+-system = 'linux'\n\
+-cpu_family = 'arm'\n\
+-cpu = 'arm'\n\
+-endian = 'little'" > /usr/local/share/meson/cross/arm-linux-gnueabi && \
+- dpkg-query --showformat '${Package}_${Version}_${Architecture}\n' --show > /packages.txt && \
+- mkdir -p /usr/libexec/ccache-wrappers && \
+- ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/arm-linux-gnueabi-c++ && \
+- ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/arm-linux-gnueabi-cc && \
+- ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/arm-linux-gnueabi-g++ && \
+- ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/arm-linux-gnueabi-gcc
+-
+-ENV ABI "arm-linux-gnueabi"
+-ENV MESON_OPTS "--cross-file=arm-linux-gnueabi"
+-ENV QEMU_CONFIGURE_OPTS --cross-prefix=arm-linux-gnueabi-
+-ENV DEF_TARGET_LIST arm-softmmu,arm-linux-user,armeb-linux-user
+diff --git a/tests/lcitool/refresh b/tests/lcitool/refresh
+index 7a4cd6fd32..6ef732521d 100755
+--- a/tests/lcitool/refresh
++++ b/tests/lcitool/refresh
+@@ -131,11 +131,6 @@ try:
+ trailer=cross_build("aarch64-linux-gnu-",
+ "aarch64-softmmu,aarch64-linux-user"))
+
+- generate_dockerfile("debian-armel-cross", "debian-11",
+- cross="armv6l",
+- trailer=cross_build("arm-linux-gnueabi-",
+- "arm-softmmu,arm-linux-user,armeb-linux-user"))
+-
+ generate_dockerfile("debian-armhf-cross", "debian-11",
+ cross="armv7l",
+ trailer=cross_build("arm-linux-gnueabihf-",
+diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
+index 75de1b4691..4da95cff2a 100755
+--- a/tests/qemu-iotests/check
++++ b/tests/qemu-iotests/check
+@@ -70,7 +70,7 @@ def make_argparser() -> argparse.ArgumentParser:
+ p.set_defaults(imgfmt='raw', imgproto='file')
+
+ format_list = ['raw', 'bochs', 'cloop', 'parallels', 'qcow', 'qcow2',
+- 'qed', 'vdi', 'vpc', 'vhdx', 'vmdk', 'luks', 'dmg']
++ 'qed', 'vdi', 'vpc', 'vhdx', 'vmdk', 'luks', 'dmg', 'vvfat']
+ g_fmt = p.add_argument_group(
+ ' image format options',
+ 'The following options set the IMGFMT environment variable. '
+diff --git a/tests/qemu-iotests/fat16.py b/tests/qemu-iotests/fat16.py
+new file mode 100644
+index 0000000000..7d2d052413
+--- /dev/null
++++ b/tests/qemu-iotests/fat16.py
+@@ -0,0 +1,690 @@
++# A simple FAT16 driver that is used to test the `vvfat` driver in QEMU.
++#
++# Copyright (C) 2024 Amjad Alsharafi <amjadsharaf...@gmail.com>
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program. If not, see <http://www.gnu.org/licenses/>.
++
++from typing import Callable, List, Optional, Protocol, Set
++import string
++
++SECTOR_SIZE = 512
++DIRENTRY_SIZE = 32
++ALLOWED_FILE_CHARS = set(
++ "!#$%&'()-@^_`{}~" + string.digits + string.ascii_uppercase
++)
++
++
++class MBR:
++ def __init__(self, data: bytes):
++ assert len(data) == 512
++ self.partition_table = []
++ for i in range(4):
++ partition = data[446 + i * 16 : 446 + (i + 1) * 16]
++ self.partition_table.append(
++ {
++ "status": partition[0],
++ "start_head": partition[1],
++ "start_sector": partition[2] & 0x3F,
++ "start_cylinder": ((partition[2] & 0xC0) << 2)
++ | partition[3],
++ "type": partition[4],
++ "end_head": partition[5],
++ "end_sector": partition[6] & 0x3F,
++ "end_cylinder": ((partition[6] & 0xC0) << 2)
++ | partition[7],
++ "start_lba": int.from_bytes(partition[8:12], "little"),
++ "size": int.from_bytes(partition[12:16], "little"),
++ }
++ )
++
++ def __str__(self):
++ return "\n".join(
++ [
++ f"{i}: {partition}"
++ for i, partition in enumerate(self.partition_table)
++ ]
++ )
++
++
++class FatBootSector:
++ # pylint: disable=too-many-instance-attributes
++ def __init__(self, data: bytes):
++ assert len(data) == 512
++ self.bytes_per_sector = int.from_bytes(data[11:13], "little")
++ self.sectors_per_cluster = data[13]
++ self.reserved_sectors = int.from_bytes(data[14:16], "little")
++ self.fat_count = data[16]
++ self.root_entries = int.from_bytes(data[17:19], "little")
++ total_sectors_16 = int.from_bytes(data[19:21], "little")
++ self.media_descriptor = data[21]
++ self.sectors_per_fat = int.from_bytes(data[22:24], "little")
++ self.sectors_per_track = int.from_bytes(data[24:26], "little")
++ self.heads = int.from_bytes(data[26:28], "little")
++ self.hidden_sectors = int.from_bytes(data[28:32], "little")
++ total_sectors_32 = int.from_bytes(data[32:36], "little")
++ assert (
++ total_sectors_16 == 0 or total_sectors_32 == 0
++ ), "Both total sectors (16 and 32) fields are non-zero"
++ self.total_sectors = total_sectors_16 or total_sectors_32
++ self.drive_number = data[36]
++ self.volume_id = int.from_bytes(data[39:43], "little")
++ self.volume_label = data[43:54].decode("ascii").strip()
++ self.fs_type = data[54:62].decode("ascii").strip()
++
++ def root_dir_start(self):
++ """
++ Calculate the start sector of the root directory.
++ """
++ return self.reserved_sectors + self.fat_count * self.sectors_per_fat
++
++ def root_dir_size(self):
++ """
++ Calculate the size of the root directory in sectors.
++ """
++ return (
++ self.root_entries * DIRENTRY_SIZE + self.bytes_per_sector - 1
++ ) // self.bytes_per_sector
++
++ def data_sector_start(self):
++ """
++ Calculate the start sector of the data region.
++ """
++ return self.root_dir_start() + self.root_dir_size()
++
++ def first_sector_of_cluster(self, cluster: int) -> int:
++ """
++ Calculate the first sector of the given cluster.
++ """
++ return (
++ self.data_sector_start() + (cluster - 2) * self.sectors_per_cluster
++ )
++
++ def cluster_bytes(self):
++ """
++ Calculate the number of bytes in a cluster.
++ """
++ return self.bytes_per_sector * self.sectors_per_cluster
++
++ def __str__(self):
++ return (
++ f"Bytes per sector: {self.bytes_per_sector}\n"
++ f"Sectors per cluster: {self.sectors_per_cluster}\n"
++ f"Reserved sectors: {self.reserved_sectors}\n"
++ f"FAT count: {self.fat_count}\n"
++ f"Root entries: {self.root_entries}\n"
++ f"Total sectors: {self.total_sectors}\n"
++ f"Media descriptor: {self.media_descriptor}\n"
++ f"Sectors per FAT: {self.sectors_per_fat}\n"
++ f"Sectors per track: {self.sectors_per_track}\n"
++ f"Heads: {self.heads}\n"
++ f"Hidden sectors: {self.hidden_sectors}\n"
++ f"Drive number: {self.drive_number}\n"
++ f"Volume ID: {self.volume_id}\n"
++ f"Volume label: {self.volume_label}\n"
++ f"FS type: {self.fs_type}\n"
++ )
++
++
++class FatDirectoryEntry:
++ # pylint: disable=too-many-instance-attributes
++ def __init__(self, data: bytes, sector: int, offset: int):
++ self.name = data[0:8].decode("ascii").strip()
++ self.ext = data[8:11].decode("ascii").strip()
++ self.attributes = data[11]
++ self.reserved = data[12]
++ self.create_time_tenth = data[13]
++ self.create_time = int.from_bytes(data[14:16], "little")
++ self.create_date = int.from_bytes(data[16:18], "little")
++ self.last_access_date = int.from_bytes(data[18:20], "little")
++ high_cluster = int.from_bytes(data[20:22], "little")
++ self.last_mod_time = int.from_bytes(data[22:24], "little")
++ self.last_mod_date = int.from_bytes(data[24:26], "little")
++ low_cluster = int.from_bytes(data[26:28], "little")
++ self.cluster = (high_cluster << 16) | low_cluster
++ self.size_bytes = int.from_bytes(data[28:32], "little")
++
++ # extra (to help write back to disk)
++ self.sector = sector
++ self.offset = offset
++
++ def as_bytes(self) -> bytes:
++ return (
++ self.name.ljust(8, " ").encode("ascii")
++ + self.ext.ljust(3, " ").encode("ascii")
++ + self.attributes.to_bytes(1, "little")
++ + self.reserved.to_bytes(1, "little")
++ + self.create_time_tenth.to_bytes(1, "little")
++ + self.create_time.to_bytes(2, "little")
++ + self.create_date.to_bytes(2, "little")
++ + self.last_access_date.to_bytes(2, "little")
++ + (self.cluster >> 16).to_bytes(2, "little")
++ + self.last_mod_time.to_bytes(2, "little")
++ + self.last_mod_date.to_bytes(2, "little")
++ + (self.cluster & 0xFFFF).to_bytes(2, "little")
++ + self.size_bytes.to_bytes(4, "little")
++ )
++
++ def whole_name(self):
++ if self.ext:
++ return f"{self.name}.{self.ext}"
++ else:
++ return self.name
++
++ def __str__(self):
++ return (
++ f"Name: {self.name}\n"
++ f"Ext: {self.ext}\n"
++ f"Attributes: {self.attributes}\n"
++ f"Reserved: {self.reserved}\n"
++ f"Create time tenth: {self.create_time_tenth}\n"
++ f"Create time: {self.create_time}\n"
++ f"Create date: {self.create_date}\n"
++ f"Last access date: {self.last_access_date}\n"
++ f"Last mod time: {self.last_mod_time}\n"
++ f"Last mod date: {self.last_mod_date}\n"
++ f"Cluster: {self.cluster}\n"
++ f"Size: {self.size_bytes}\n"
++ )
++
++ def __repr__(self):
++ # convert to dict
++ return str(vars(self))
++
++
++class SectorReader(Protocol):
++ def __call__(self, start_sector: int, num_sectors: int = 1) -> bytes: ...
++
++# pylint: disable=broad-exception-raised
++class Fat16:
++ def __init__(
++ self,
++ start_sector: int,
++ size: int,
++ sector_reader: SectorReader,
++ sector_writer: Callable[[int, bytes], None]
++ ):
++ self.start_sector = start_sector
++ self.size_in_sectors = size
++ self.sector_reader = sector_reader
++ self.sector_writer = sector_writer
++
++ self.boot_sector = FatBootSector(self.sector_reader(start_sector, 1))
++
++ fat_size_in_sectors = (
++ self.boot_sector.sectors_per_fat * self.boot_sector.fat_count
++ )
++ self.fats = self.read_sectors(
++ self.boot_sector.reserved_sectors, fat_size_in_sectors
++ )
++ self.fats_dirty_sectors: Set[int] = set()
++
++ def read_sectors(self, start_sector: int, num_sectors: int) -> bytes:
++ return self.sector_reader(start_sector + self.start_sector,
++ num_sectors)
++
++ def write_sectors(self, start_sector: int, data: bytes) -> None:
++ return self.sector_writer(start_sector + self.start_sector, data)
++
++ def directory_from_bytes(
++ self, data: bytes, start_sector: int
++ ) -> List[FatDirectoryEntry]:
++ """
++ Convert `bytes` into a list of `FatDirectoryEntry` objects.
++ Will ignore long file names.
++ Will stop when it encounters a 0x00 byte.
++ """
++
++ entries = []
++ for i in range(0, len(data), DIRENTRY_SIZE):
++ entry = data[i : i + DIRENTRY_SIZE]
++
++ current_sector = start_sector + (i // SECTOR_SIZE)
++ current_offset = i % SECTOR_SIZE
++
++ if entry[0] == 0:
++ break
++
++ if entry[0] == 0xE5:
++ # Deleted file
++ continue
++
++ if entry[11] & 0xF == 0xF:
++ # Long file name
++ continue
++
++ entries.append(
++ FatDirectoryEntry(entry, current_sector, current_offset)
++ )
++ return entries
++
++ def read_root_directory(self) -> List[FatDirectoryEntry]:
++ root_dir = self.read_sectors(
++ self.boot_sector.root_dir_start(), self.boot_sector.root_dir_size()
++ )
++ return self.directory_from_bytes(
++ root_dir, self.boot_sector.root_dir_start()
++ )
++
++ def read_fat_entry(self, cluster: int) -> int:
++ """
++ Read the FAT entry for the given cluster.
++ """
++ fat_offset = cluster * 2 # FAT16
++ return int.from_bytes(self.fats[fat_offset : fat_offset + 2], "little")
++
++ def write_fat_entry(self, cluster: int, value: int) -> None:
++ """
++ Write the FAT entry for the given cluster.
++ """
++ fat_offset = cluster * 2
++ self.fats = (
++ self.fats[:fat_offset]
++ + value.to_bytes(2, "little")
++ + self.fats[fat_offset + 2 :]
++ )
++ self.fats_dirty_sectors.add(fat_offset // SECTOR_SIZE)
++
++ def flush_fats(self) -> None:
++ """
++ Write the FATs back to the disk.
++ """
++ for sector in self.fats_dirty_sectors:
++ data = self.fats[sector * SECTOR_SIZE : (sector + 1) * SECTOR_SIZE]
++ sector = self.boot_sector.reserved_sectors + sector
++ self.write_sectors(sector, data)
++ self.fats_dirty_sectors = set()
++
++ def next_cluster(self, cluster: int) -> Optional[int]:
++ """
++ Get the next cluster in the chain.
++ If its `None`, then its the last cluster.
++ The function will crash if the next cluster
++ is `FREE` (unexpected) or invalid entry.
++ """
++ fat_entry = self.read_fat_entry(cluster)
++ if fat_entry == 0:
++ raise Exception("Unexpected: FREE cluster")
++ if fat_entry == 1:
++ raise Exception("Unexpected: RESERVED cluster")
++ if fat_entry >= 0xFFF8:
++ return None
++ if fat_entry >= 0xFFF7:
++ raise Exception("Invalid FAT entry")
++
++ return fat_entry
++
++ def next_free_cluster(self) -> int:
++ """
++ Find the next free cluster.
++ """
++ # simple linear search
++ for i in range(2, 0xFFFF):
++ if self.read_fat_entry(i) == 0:
++ return i
++ raise Exception("No free clusters")
++
++ def next_free_cluster_non_continuous(self) -> int:
++ """
++ Find the next free cluster, but makes sure
++ that the cluster before and after it are not allocated.
++ """
++ # simple linear search
++ before = False
++ for i in range(2, 0xFFFF):
++ if self.read_fat_entry(i) == 0:
++ if before and self.read_fat_entry(i + 1) == 0:
++ return i
++ else:
++ before = True
++ else:
++ before = False
++
++ raise Exception("No free clusters")
++
++ def read_cluster(self, cluster: int) -> bytes:
++ """
++ Read the cluster at the given cluster.
++ """
++ return self.read_sectors(
++ self.boot_sector.first_sector_of_cluster(cluster),
++ self.boot_sector.sectors_per_cluster,
++ )
++
++ def write_cluster(self, cluster: int, data: bytes) -> None:
++ """
++ Write the cluster at the given cluster.
++ """
++ assert len(data) == self.boot_sector.cluster_bytes()
++ self.write_sectors(
++ self.boot_sector.first_sector_of_cluster(cluster),
++ data,
++ )
++
++ def read_directory(
++ self, cluster: Optional[int]
++ ) -> List[FatDirectoryEntry]:
++ """
++ Read the directory at the given cluster.
++ """
++ entries = []
++ while cluster is not None:
++ data = self.read_cluster(cluster)
++ entries.extend(
++ self.directory_from_bytes(
++ data, self.boot_sector.first_sector_of_cluster(cluster)
++ )
++ )
++ cluster = self.next_cluster(cluster)
++ return entries
++
++ def add_direntry(
++ self, cluster: Optional[int], name: str, ext: str, attributes: int
++ ) -> FatDirectoryEntry:
++ """
++ Add a new directory entry to the given cluster.
++ If the cluster is `None`, then it will be added to the root directory.
++ """
++
++ def find_free_entry(data: bytes) -> Optional[int]:
++ for i in range(0, len(data), DIRENTRY_SIZE):
++ entry = data[i : i + DIRENTRY_SIZE]
++ if entry[0] == 0 or entry[0] == 0xE5:
++ return i
++ return None
++
++ assert len(name) <= 8, "Name must be 8 characters or less"
++ assert len(ext) <= 3, "Ext must be 3 characters or less"
++ assert attributes % 0x15 != 0x15, "Invalid attributes"
++
++ # initial dummy data
++ new_entry = FatDirectoryEntry(b"\0" * 32, 0, 0)
++ new_entry.name = name.ljust(8, " ")
++ new_entry.ext = ext.ljust(3, " ")
++ new_entry.attributes = attributes
++ new_entry.reserved = 0
++ new_entry.create_time_tenth = 0
++ new_entry.create_time = 0
++ new_entry.create_date = 0
++ new_entry.last_access_date = 0
++ new_entry.last_mod_time = 0
++ new_entry.last_mod_date = 0
++ new_entry.cluster = self.next_free_cluster()
++ new_entry.size_bytes = 0
++
++ # mark as EOF
++ self.write_fat_entry(new_entry.cluster, 0xFFFF)
++
++ if cluster is None:
++ for i in range(self.boot_sector.root_dir_size()):
++ sector_data = self.read_sectors(
++ self.boot_sector.root_dir_start() + i, 1
++ )
++ offset = find_free_entry(sector_data)
++ if offset is not None:
++ new_entry.sector = self.boot_sector.root_dir_start() + i
++ new_entry.offset = offset
++ self.update_direntry(new_entry)
++ return new_entry
++ else:
++ while cluster is not None:
++ data = self.read_cluster(cluster)
++ offset = find_free_entry(data)
++ if offset is not None:
++ new_entry.sector = (
++ self.boot_sector.first_sector_of_cluster(cluster)
++ + (offset // SECTOR_SIZE))
++ new_entry.offset = offset % SECTOR_SIZE
++ self.update_direntry(new_entry)
++ return new_entry
++ cluster = self.next_cluster(cluster)
++
++ raise Exception("No free directory entries")
++
++ def update_direntry(self, entry: FatDirectoryEntry) -> None:
++ """
++ Write the directory entry back to the disk.
++ """
++ sector = self.read_sectors(entry.sector, 1)
++ sector = (
++ sector[: entry.offset]
++ + entry.as_bytes()
++ + sector[entry.offset + DIRENTRY_SIZE :]
++ )
++ self.write_sectors(entry.sector, sector)
++
++ def find_direntry(self, path: str) -> Optional[FatDirectoryEntry]:
++ """
++ Find the directory entry for the given path.
++ """
++ assert path[0] == "/", "Path must start with /"
++
++ path = path[1:] # remove the leading /
++ parts = path.split("/")
++ directory = self.read_root_directory()
++
++ current_entry = None
++
++ for i, part in enumerate(parts):
++ is_last = i == len(parts) - 1
++
++ for entry in directory:
++ if entry.whole_name() == part:
++ current_entry = entry
++ break
++ if current_entry is None:
++ return None
++
++ if is_last:
++ return current_entry
++
++ if current_entry.attributes & 0x10 == 0:
++ raise Exception(
++ f"{current_entry.whole_name()} is not a directory"
++ )
++
++ directory = self.read_directory(current_entry.cluster)
++
++ assert False, "Exited loop with is_last == False"
++
++ def read_file(self, entry: Optional[FatDirectoryEntry]) -> Optional[bytes]:
++ """
++ Read the content of the file at the given path.
++ """
++ if entry is None:
++ return None
++ if entry.attributes & 0x10 != 0:
++ raise Exception(f"{entry.whole_name()} is a directory")
++
++ data = b""
++ cluster: Optional[int] = entry.cluster
++ while cluster is not None and len(data) <= entry.size_bytes:
++ data += self.read_cluster(cluster)
++ cluster = self.next_cluster(cluster)
++ return data[: entry.size_bytes]
++
++ def truncate_file(
++ self,
++ entry: FatDirectoryEntry,
++ new_size: int,
++ allocate_non_continuous: bool = False,
++ ) -> None:
++ """
++ Truncate the file at the given path to the new size.
++ """
++ if entry is None:
++ raise Exception("entry is None")
++ if entry.attributes & 0x10 != 0:
++ raise Exception(f"{entry.whole_name()} is a directory")
++
++ def clusters_from_size(size: int) -> int:
++ return (
++ size + self.boot_sector.cluster_bytes() - 1
++ ) // self.boot_sector.cluster_bytes()
++
++ # First, allocate new FATs if we need to
++ required_clusters = clusters_from_size(new_size)
++ current_clusters = clusters_from_size(entry.size_bytes)
++
++ affected_clusters = set()
++
++ # Keep at least one cluster, easier to manage this way
++ if required_clusters == 0:
++ required_clusters = 1
++ if current_clusters == 0:
++ current_clusters = 1
++
++ cluster: Optional[int]
++
++ if required_clusters > current_clusters:
++ # Allocate new clusters
++ cluster = entry.cluster
++ to_add = required_clusters
++ for _ in range(current_clusters - 1):
++ to_add -= 1
++ assert cluster is not None, "Cluster is None"
++ affected_clusters.add(cluster)
++ cluster = self.next_cluster(cluster)
++ assert required_clusters > 0, "No new clusters to allocate"
++ assert cluster is not None, "Cluster is None"
++ assert (
++ self.next_cluster(cluster) is None
++ ), "Cluster is not the last cluster"
++
++ # Allocate new clusters
++ for _ in range(to_add - 1):
++ if allocate_non_continuous:
++ new_cluster = self.next_free_cluster_non_continuous()
++ else:
++ new_cluster = self.next_free_cluster()
++ self.write_fat_entry(cluster, new_cluster)
++ self.write_fat_entry(new_cluster, 0xFFFF)
++ cluster = new_cluster
++
++ elif required_clusters < current_clusters:
++ # Truncate the file
++ cluster = entry.cluster
++ for _ in range(required_clusters - 1):
++ assert cluster is not None, "Cluster is None"
++ cluster = self.next_cluster(cluster)
++ assert cluster is not None, "Cluster is None"
++
++ next_cluster = self.next_cluster(cluster)
++ # mark last as EOF
++ self.write_fat_entry(cluster, 0xFFFF)
++ # free the rest
++ while next_cluster is not None:
++ cluster = next_cluster
++ next_cluster = self.next_cluster(next_cluster)
++ self.write_fat_entry(cluster, 0)
++
++ self.flush_fats()
++
++ # verify number of clusters
++ cluster = entry.cluster
++ count = 0
++ while cluster is not None:
++ count += 1
++ affected_clusters.add(cluster)
++ cluster = self.next_cluster(cluster)
++ assert (
++ count == required_clusters
++ ), f"Expected {required_clusters} clusters, got {count}"
++
++ # update the size
++ entry.size_bytes = new_size
++ self.update_direntry(entry)
++
++ # trigger every affected cluster
++ for cluster in affected_clusters:
++ first_sector = self.boot_sector.first_sector_of_cluster(cluster)
++ first_sector_data = self.read_sectors(first_sector, 1)
++ self.write_sectors(first_sector, first_sector_data)
++
++ def write_file(self, entry: FatDirectoryEntry, data: bytes) -> None:
++ """
++ Write the content of the file at the given path.
++ """
++ if entry is None:
++ raise Exception("entry is None")
++ if entry.attributes & 0x10 != 0:
++ raise Exception(f"{entry.whole_name()} is a directory")
++
++ data_len = len(data)
++
++ self.truncate_file(entry, data_len)
++
++ cluster: Optional[int] = entry.cluster
++ while cluster is not None:
++ data_to_write = data[: self.boot_sector.cluster_bytes()]
++ if len(data_to_write) < self.boot_sector.cluster_bytes():
++ old_data = self.read_cluster(cluster)
++ data_to_write += old_data[len(data_to_write) :]
++
++ self.write_cluster(cluster, data_to_write)
++ data = data[self.boot_sector.cluster_bytes() :]
++ if len(data) == 0:
++ break
++ cluster = self.next_cluster(cluster)
++
++ assert (
++ len(data) == 0
++ ), "Data was not written completely, clusters missing"
++
++ def create_file(self, path: str) -> Optional[FatDirectoryEntry]:
++ """
++ Create a new file at the given path.
++ """
++ assert path[0] == "/", "Path must start with /"
++
++ path = path[1:] # remove the leading /
++
++ parts = path.split("/")
++
++ directory_cluster = None
++ directory = self.read_root_directory()
++
++ parts, filename = parts[:-1], parts[-1]
++
++ for _, part in enumerate(parts):
++ current_entry = None
++ for entry in directory:
++ if entry.whole_name() == part:
++ current_entry = entry
++ break
++ if current_entry is None:
++ return None
++
++ if current_entry.attributes & 0x10 == 0:
++ raise Exception(
++ f"{current_entry.whole_name()} is not a directory"
++ )
++
++ directory = self.read_directory(current_entry.cluster)
++ directory_cluster = current_entry.cluster
++
++ # add new entry to the directory
++
++ filename, ext = filename.split(".")
++
++ if len(ext) > 3:
++ raise Exception("Ext must be 3 characters or less")
++ if len(filename) > 8:
++ raise Exception("Name must be 8 characters or less")
++
++ for c in filename + ext:
++
++ if c not in ALLOWED_FILE_CHARS:
++ raise Exception("Invalid character in filename")
++
++ return self.add_direntry(directory_cluster, filename, ext, 0)
+diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
+index a864c74b12..23307fa2f4 100644
+--- a/tests/qemu-iotests/testenv.py
++++ b/tests/qemu-iotests/testenv.py
+@@ -250,7 +250,7 @@ def __init__(self, imgfmt: str, imgproto: str, aiomode: str,
+ self.qemu_img_options = os.getenv('QEMU_IMG_OPTIONS')
+ self.qemu_nbd_options = os.getenv('QEMU_NBD_OPTIONS')
+
+- is_generic = self.imgfmt not in ['bochs', 'cloop', 'dmg']
++ is_generic = self.imgfmt not in ['bochs', 'cloop', 'dmg', 'vvfat']
+ self.imgfmt_generic = 'true' if is_generic else 'false'
+
+ self.qemu_io_options = f'--cache {self.cachemode} --aio {self.aiomode}'
+diff --git a/tests/qemu-iotests/tests/vvfat b/tests/qemu-iotests/tests/vvfat
+new file mode 100755
+index 0000000000..acdc6ce8ff
+--- /dev/null
++++ b/tests/qemu-iotests/tests/vvfat
+@@ -0,0 +1,485 @@
++#!/usr/bin/env python3
++# group: rw vvfat
++#
++# Test vvfat driver implementation
++# Here, we use a simple FAT16 implementation and check the behavior of
++# the vvfat driver.
++#
++# Copyright (C) 2024 Amjad Alsharafi <amjadsharaf...@gmail.com>
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program. If not, see <http://www.gnu.org/licenses/>.
++
++import os
++import shutil
++import iotests
++from iotests import imgfmt, QMPTestCase
++from fat16 import MBR, Fat16, DIRENTRY_SIZE
++
++filesystem = os.path.join(iotests.test_dir, "filesystem")
++
++nbd_sock = iotests.file_path("nbd.sock", base_dir=iotests.sock_dir)
++nbd_uri = "nbd+unix:///disk?socket=" + nbd_sock
++
++SECTOR_SIZE = 512
++
++
++class TestVVFatDriver(QMPTestCase):
++ # pylint: disable=broad-exception-raised
++ def setUp(self) -> None:
++ if os.path.exists(filesystem):
++ if os.path.isdir(filesystem):
++ shutil.rmtree(filesystem)
++ else:
++ raise Exception(f"{filesystem} exists and is not a directory")
++
++ os.mkdir(filesystem)
++
++ # Add some text files to the filesystem
++ for i in range(10):
++ with open(os.path.join(filesystem, f"file{i}.txt"),
++ "w", encoding="ascii") as f:
++ f.write(f"Hello, world! {i}\n")
++
++ # Add 2 large files, above the cluster size (8KB)
++ with open(os.path.join(filesystem, "large1.txt"), "wb") as f:
++ # write 'A' * 1KB, 'B' * 1KB, 'C' * 1KB, ...
++ for i in range(8 * 2): # two clusters
++ f.write(bytes([0x41 + i] * 1024))
++
++ with open(os.path.join(filesystem, "large2.txt"), "wb") as f:
++ # write 'A' * 1KB, 'B' * 1KB, 'C' * 1KB, ...
++ for i in range(8 * 3): # 3 clusters
++ f.write(bytes([0x41 + i] * 1024))
++
++ self.vm = iotests.VM()
++
++ self.vm.add_blockdev(
++ self.vm.qmp_to_opts(
++ {
++ "driver": imgfmt,
++ "node-name": "disk",
++ "rw": "true",
++ "fat-type": "16",
++ "dir": filesystem,
++ }
++ )
++ )
++
++ self.vm.launch()
++
++ self.vm.qmp_log("block-dirty-bitmap-add", **{
++ "node": "disk",
++ "name": "bitmap0",
++ })
++
++ # attach nbd server
++ self.vm.qmp_log(
++ "nbd-server-start",
++ **{"addr": {"type": "unix", "data": {"path": nbd_sock}}},
++ filters=[],
++ )
++
++ self.vm.qmp_log(
++ "nbd-server-add",
++ **{"device": "disk", "writable": True, "bitmap": "bitmap0"},
++ )
++
++ self.qio = iotests.QemuIoInteractive("-f", "raw", nbd_uri)
++
++ def tearDown(self) -> None:
++ self.qio.close()
++ self.vm.shutdown()
++ # print(self.vm.get_log())
++ shutil.rmtree(filesystem)
++
++ def read_sectors(self, sector: int, num: int = 1) -> bytes:
++ """
++ Read `num` sectors starting from `sector` from the `disk`.
++ This uses `QemuIoInteractive` to read the sectors into `stdout` and
++ then parse the output.
++ """
++ self.assertGreater(num, 0)
++
++ # The output contains the content of the sector in hex dump format
++ # We need to extract the content from it
++ output = self.qio.cmd(
++ f"read -v {sector * SECTOR_SIZE} {num * SECTOR_SIZE}")
++
++ # Each row is 16 bytes long, and we are writing `num` sectors
++ rows = num * SECTOR_SIZE // 16
++ output_rows = output.split("\n")[:rows]
++
++ hex_content = "".join(
++ [(row.split(": ")[1]).split(" ")[0] for row in output_rows]
++ )
++ bytes_content = bytes.fromhex(hex_content)
++
++ self.assertEqual(len(bytes_content), num * SECTOR_SIZE)
++
++ return bytes_content
++
++ def write_sectors(self, sector: int, data: bytes) -> None:
++ """
++ Write `data` to the `disk` starting from `sector`.
++ This uses `QemuIoInteractive` to write the data into the disk.
++ """
++
++ self.assertGreater(len(data), 0)
++ self.assertEqual(len(data) % SECTOR_SIZE, 0)
++
++ temp_file = os.path.join(iotests.test_dir, "temp.bin")
++ with open(temp_file, "wb") as f:
++ f.write(data)
++
++ self.qio.cmd(
++ f"write -s {temp_file} {sector * SECTOR_SIZE} {len(data)}"
++ )
++
++ os.remove(temp_file)
++
++ def init_fat16(self):
++ mbr = MBR(self.read_sectors(0))
++ return Fat16(
++ mbr.partition_table[0]["start_lba"],
++ mbr.partition_table[0]["size"],
++ self.read_sectors,
++ self.write_sectors,
++ )
++
++ # Tests
++
++ def test_fat_filesystem(self):
++ """
++ Test that vvfat produce a valid FAT16 and MBR sectors
++ """
++ mbr = MBR(self.read_sectors(0))
++
++ self.assertEqual(mbr.partition_table[0]["status"], 0x80)
++ self.assertEqual(mbr.partition_table[0]["type"], 6)
++
++ fat16 = Fat16(
++ mbr.partition_table[0]["start_lba"],
++ mbr.partition_table[0]["size"],
++ self.read_sectors,
++ self.write_sectors,
++ )
++ self.assertEqual(fat16.boot_sector.bytes_per_sector, 512)
++ self.assertEqual(fat16.boot_sector.volume_label, "QEMU VVFAT")
++
++ def test_read_root_directory(self):
++ """
++ Test the content of the root directory
++ """
++ fat16 = self.init_fat16()
++
++ root_dir = fat16.read_root_directory()
++
++ self.assertEqual(len(root_dir), 13) # 12 + 1 special file
++
++ files = {
++ "QEMU VVF.AT": 0, # special empty file
++ "FILE0.TXT": 16,
++ "FILE1.TXT": 16,
++ "FILE2.TXT": 16,
++ "FILE3.TXT": 16,
++ "FILE4.TXT": 16,
++ "FILE5.TXT": 16,
++ "FILE6.TXT": 16,
++ "FILE7.TXT": 16,
++ "FILE8.TXT": 16,
++ "FILE9.TXT": 16,
++ "LARGE1.TXT": 0x2000 * 2,
++ "LARGE2.TXT": 0x2000 * 3,
++ }
++
++ for entry in root_dir:
++ self.assertIn(entry.whole_name(), files)
++ self.assertEqual(entry.size_bytes, files[entry.whole_name()])
++
++ def test_direntry_as_bytes(self):
++ """
++ Test if we can convert Direntry back to bytes, so that we can write it
++ back to the disk safely.
++ """
++ fat16 = self.init_fat16()
++
++ root_dir = fat16.read_root_directory()
++ first_entry_bytes = fat16.read_sectors(
++ fat16.boot_sector.root_dir_start(), 1)
++
++ # The first entry won't be deleted, so we can compare it with the first
++ # entry in the root directory
++ self.assertEqual(root_dir[0].as_bytes(),
++ first_entry_bytes[:DIRENTRY_SIZE])
++
++ def test_read_files(self):
++ """
++ Test reading the content of the files
++ """
++ fat16 = self.init_fat16()
++
++ for i in range(10):
++ file = fat16.find_direntry(f"/FILE{i}.TXT")
++ self.assertIsNotNone(file)
++ self.assertEqual(
++ fat16.read_file(file), f"Hello, world! {i}\n".encode("ascii")
++ )
++
++ # test large files
++ large1 = fat16.find_direntry("/LARGE1.TXT")
++ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
++ self.assertEqual(fat16.read_file(large1), f.read())
++
++ large2 = fat16.find_direntry("/LARGE2.TXT")
++ self.assertIsNotNone(large2)
++ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
++ self.assertEqual(fat16.read_file(large2), f.read())
++
++ def test_write_file_same_content_direct(self):
++ """
++ Similar to `test_write_file_in_same_content`, but we write the file
++ clusters directly, and thus don't go through modifying the
++ direntry.
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/FILE0.TXT")
++ self.assertIsNotNone(file)
++
++ data = fat16.read_cluster(file.cluster)
++ fat16.write_cluster(file.cluster, data)
++
++ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
++ self.assertEqual(fat16.read_file(file), f.read())
++
++ def test_write_file_in_same_content(self):
++ """
++ Test writing the same content back to the file
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/FILE0.TXT")
++ self.assertIsNotNone(file)
++
++ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
++
++ fat16.write_file(file, b"Hello, world! 0\n")
++ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
++
++ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
++ self.assertEqual(f.read(), b"Hello, world! 0\n")
++
++ def test_modify_content_same_clusters(self):
++ """
++ Test modifying the content of the file without changing the number of
++ clusters
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/FILE0.TXT")
++ self.assertIsNotNone(file)
++
++ new_content = b"Hello, world! Modified\n"
++ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
++
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_truncate_file_same_clusters_less(self):
++ """
++ Test truncating the file without changing the number of clusters
++ Test decreasing the file size
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/FILE0.TXT")
++ self.assertIsNotNone(file)
++
++ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
++
++ fat16.truncate_file(file, 5)
++ new_content = fat16.read_file(file)
++ self.assertEqual(new_content, b"Hello")
++
++ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_truncate_file_same_clusters_more(self):
++ """
++ Test truncating the file without changing the number of clusters
++ Test increasing the file size
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/FILE0.TXT")
++ self.assertIsNotNone(file)
++
++ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
++
++ fat16.truncate_file(file, 20)
++ new_content = fat16.read_file(file)
++ self.assertIsNotNone(new_content)
++
++ # a random pattern will be appended to the file, and it's not always
++ # the same
++ self.assertEqual(new_content[:16], b"Hello, world! 0\n")
++ self.assertEqual(len(new_content), 20)
++
++ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_write_large_file(self):
++ """
++ Test writing a large file
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE1.TXT")
++ self.assertIsNotNone(file)
++
++ # The content of LARGE1 is A * 1KB, B * 1KB, C * 1KB, ..., P * 1KB
++ # Let's change it to be Z * 1KB, Y * 1KB, X * 1KB, ..., K * 1KB
++ # without changing the number of clusters or filesize
++ new_content = b"".join([bytes([0x5A - i] * 1024) for i in range(16)])
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_truncate_file_change_clusters_less(self):
++ """
++ Test truncating a file by reducing the number of clusters
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE1.TXT")
++ self.assertIsNotNone(file)
++
++ fat16.truncate_file(file, 1)
++ self.assertEqual(fat16.read_file(file), b"A")
++
++ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
++ self.assertEqual(f.read(), b"A")
++
++ def test_write_file_change_clusters_less(self):
++ """
++ Test writing a file so that it reduces the number of clusters
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE2.TXT")
++ self.assertIsNotNone(file)
++
++ new_content = b"X" * 8 * 1024 + b"Y" * 8 * 1024
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_write_file_change_clusters_more(self):
++ """
++ Test writing a file so that it increases the number of clusters
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE2.TXT")
++ self.assertIsNotNone(file)
++
++ # from 3 clusters to 4 clusters
++ new_content = (
++ b"W" * 8 * 1024 +
++ b"X" * 8 * 1024 +
++ b"Y" * 8 * 1024 +
++ b"Z" * 8 * 1024
++ )
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_write_file_change_clusters_more_non_contiguous_2_mappings(self):
++ """
++ Test truncating a file by increasing the number of clusters. Here we
++ allocate the new clusters in a way that makes them non-contiguous, so
++ that we will get 2 cluster mappings for the file
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE1.TXT")
++ self.assertIsNotNone(file)
++
++ # from 2 clusters to 3 clusters with non-contiguous allocation
++ fat16.truncate_file(file, 3 * 0x2000, allocate_non_continuous=True)
++ new_content = b"X" * 8 * 1024 + b"Y" * 8 * 1024 + b"Z" * 8 * 1024
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_write_file_change_clusters_more_non_contiguous_3_mappings(self):
++ """
++ Test truncating a file by increasing the number of clusters. Here we
++ allocate the new clusters in a way that makes them non-contiguous, so
++ that we will get 3 cluster mappings for the file
++ """
++ fat16 = self.init_fat16()
++
++ file = fat16.find_direntry("/LARGE1.TXT")
++ self.assertIsNotNone(file)
++
++ # from 2 clusters to 4 clusters with non-contiguous allocation
++ fat16.truncate_file(file, 4 * 0x2000, allocate_non_continuous=True)
++ new_content = (
++ b"W" * 8 * 1024 +
++ b"X" * 8 * 1024 +
++ b"Y" * 8 * 1024 +
++ b"Z" * 8 * 1024
++ )
++ fat16.write_file(file, new_content)
++ self.assertEqual(fat16.read_file(file), new_content)
++
++ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ def test_create_file(self):
++ """
++ Test creating a new file
++ """
++ fat16 = self.init_fat16()
++
++ new_file = fat16.create_file("/NEWFILE.TXT")
++
++ self.assertIsNotNone(new_file)
++ self.assertEqual(new_file.size_bytes, 0)
++
++ new_content = b"Hello, world! New file\n"
++ fat16.write_file(new_file, new_content)
++ self.assertEqual(fat16.read_file(new_file), new_content)
++
++ with open(os.path.join(filesystem, "newfile.txt"), "rb") as f:
++ self.assertEqual(f.read(), new_content)
++
++ # TODO: support deleting files
++
++
++if __name__ == "__main__":
++ # This is a test specific to the vvfat driver
++ iotests.main(supported_fmts=["vvfat"], supported_protocols=["file"])
+diff --git a/tests/qemu-iotests/tests/vvfat.out b/tests/qemu-iotests/tests/vvfat.out
+new file mode 100755
+index 0000000000..b6f257674e
+--- /dev/null
++++ b/tests/qemu-iotests/tests/vvfat.out
+@@ -0,0 +1,5 @@
++................
++----------------------------------------------------------------------
++Ran 16 tests
++
++OK
+diff --git a/tests/unit/ptimer-test.c b/tests/unit/ptimer-test.c
+index 04b5f4e3d0..08240594bb 100644
+--- a/tests/unit/ptimer-test.c
++++ b/tests/unit/ptimer-test.c
+@@ -763,6 +763,33 @@ static void check_oneshot_with_load_0(gconstpointer arg)
+ ptimer_free(ptimer);
+ }
+
++static void check_freq_more_than_1000M(gconstpointer arg)
++{
++ const uint8_t *policy = arg;
++ ptimer_state *ptimer = ptimer_init(ptimer_trigger, NULL, *policy);
++ bool no_round_down = (*policy & PTIMER_POLICY_NO_COUNTER_ROUND_DOWN);
++
++ triggered = false;
++
++ ptimer_transaction_begin(ptimer);
++ ptimer_set_freq(ptimer, 2000000000);
++ ptimer_set_limit(ptimer, 8, 1);
++ ptimer_run(ptimer, 1);
++ ptimer_transaction_commit(ptimer);
++
++ qemu_clock_step(3);
++
++ g_assert_cmpuint(ptimer_get_count(ptimer), ==, no_round_down ? 3 : 2);
++ g_assert_false(triggered);
++
++ qemu_clock_step(1);
++
++ g_assert_cmpuint(ptimer_get_count(ptimer), ==, 0);
++ g_assert_true(triggered);
++
++ ptimer_free(ptimer);
++}
++
+ static void add_ptimer_tests(uint8_t policy)
+ {
+ char policy_name[256] = "";
+@@ -857,6 +884,12 @@ static void add_ptimer_tests(uint8_t policy)
+ policy_name),
+ g_memdup2(&policy, 1), check_oneshot_with_load_0, g_free);
+ g_free(tmp);
++
++ g_test_add_data_func_full(
++ tmp = g_strdup_printf("/ptimer/freq_more_than_1000M policy=%s",
++ policy_name),
++ g_memdup2(&policy, 1), check_freq_more_than_1000M, g_free);
++ g_free(tmp);
+ }
+
+ static void add_all_ptimer_policies_comb_tests(void)
+diff --git a/util/async.c b/util/async.c
+index a1f07fc5a7..0cc3037e0c 100644
+--- a/util/async.c
++++ b/util/async.c
+@@ -744,7 +744,7 @@ void aio_context_set_thread_pool_params(AioContext *ctx, int64_t min,
+ int64_t max, Error **errp)
+ {
+
+- if (min > max || !max || min > INT_MAX || max > INT_MAX) {
++ if (min > max || max <= 0 || min < 0 || min > INT_MAX || max > INT_MAX) {
+ error_setg(errp, "bad thread-pool-min/thread-pool-max values");
+ return;
+ }
+diff --git a/util/module.c b/util/module.c
+index 32e263163c..3eb0f06df1 100644
+--- a/util/module.c
++++ b/util/module.c
+@@ -354,13 +354,13 @@ int module_load_qom(const char *type, Error **errp)
+ void module_load_qom_all(void)
+ {
+ const QemuModinfo *modinfo;
+- Error *local_err = NULL;
+
+ if (module_loaded_qom_all) {
+ return;
+ }
+
+ for (modinfo = module_info; modinfo->name != NULL; modinfo++) {
++ Error *local_err = NULL;
+ if (!modinfo->objs) {
+ continue;
+ }
--- End Message ---