Re: [PATCH qemu 1/4] drive-mirror: add support for sync=bitmap mode=never
Fabian Grünbichler writes:

> From: John Snow
>
> This patch adds support for the "BITMAP" sync mode to drive-mirror and
> blockdev-mirror. It adds support only for the BitmapSyncMode "never,"
> because it's the simplest mode.
>
> This mode simply uses a user-provided bitmap as an initial copy
> manifest, and then does not clear any bits in the bitmap at the
> conclusion of the operation.
>
> Any new writes dirtied during the operation are copied out, in contrast
> to backup. Note that whether these writes are reflected in the bitmap
> at the conclusion of the operation depends on whether that bitmap is
> actually recording!
>
> This patch was originally based on one by Ma Haocong, but it has since
> been modified pretty heavily.
>
> Suggested-by: Ma Haocong
> Signed-off-by: Ma Haocong
> Signed-off-by: John Snow
> Signed-off-by: Fabian Grünbichler
> ---

[...]

> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 2d94873ca0..dac5497084 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1961,10 +1961,19 @@
>  #         (all the disk, only the sectors allocated in the topmost image, or
>  #         only new I/O).
>  #
> +# @bitmap: The name of a bitmap to use for sync=bitmap mode. This argument must
> +#          be present for bitmap mode and absent otherwise. The bitmap's

What is "bitmap mode"?  Do you mean "present when @bitmap-mode is, else
absent"?

> +#          granularity is used instead of @granularity (since 5.2).
> +#
> +# @bitmap-mode: Specifies the type of data the bitmap should contain after
> +#               the operation concludes. Must be present if sync is "bitmap".
> +#               Must NOT be present otherwise. (Since 5.2)
> +#
>  # @granularity: granularity of the dirty bitmap, default is 64K
>  #               if the image format doesn't have clusters, 4K if the clusters
>  #               are smaller than that, else the cluster size. Must be a
> -#               power of 2 between 512 and 64M (since 1.4).
> +#               power of 2 between 512 and 64M. Must not be specified if
> +#               @bitmap is present (since 1.4).
>  #

Is @granularity forbidden with @bitmap because it makes no sense?  If
yes, then it doesn't actually default to anything then, does it?

Looks like we have

                 sync='bitmap'   anything else
    bitmap       required        forbidden
    bitmap-mode  required        forbidden
    granularity  forbidden       optional

Correct?

>  # @buf-size: maximum amount of data in flight from source to
>  #            target (since 1.4).
> @@ -2002,7 +2011,9 @@
>  { 'struct': 'DriveMirror',
>    'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
>              '*format': 'str', '*node-name': 'str', '*replaces': 'str',
> -            'sync': 'MirrorSyncMode', '*mode': 'NewImageMode',
> +            'sync': 'MirrorSyncMode', '*bitmap': 'str',
> +            '*bitmap-mode': 'BitmapSyncMode',
> +            '*mode': 'NewImageMode',
>              '*speed': 'int', '*granularity': 'uint32',
>              '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
>              '*on-target-error': 'BlockdevOnError',

[Same for blockdev-mirror...]
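For readers following the review, the corresponding QMP usage would look something like this (illustrative only; the job, device, target, and bitmap names are made up, and @granularity is omitted because the bitmap's own granularity applies):

```json
{ "execute": "blockdev-mirror",
  "arguments": {
      "job-id": "mirror0",
      "device": "drive0",
      "target": "target0",
      "sync": "bitmap",
      "bitmap": "bitmap0",
      "bitmap-mode": "never" } }
```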
Re: [PATCH v2 1/3] util/cutils: Introduce freq_to_str() to display Hertz units
On 18:43 Thu 01 Oct , Philippe Mathieu-Daudé wrote:
> Introduce freq_to_str() to convert frequency values in human
> friendly units using the SI units for Hertz.
>
> Suggested-by: Luc Michel
> Signed-off-by: Philippe Mathieu-Daudé

Reviewed-by: Luc Michel

> ---
>  include/qemu/cutils.h | 12 ++++++++++++
>  util/cutils.c         | 14 ++++++++++++++
>  2 files changed, 26 insertions(+)
>
> diff --git a/include/qemu/cutils.h b/include/qemu/cutils.h
> index 3a86ec0321..4bbf4834ea 100644
> --- a/include/qemu/cutils.h
> +++ b/include/qemu/cutils.h
> @@ -158,6 +158,18 @@ int qemu_strtosz_metric(const char *nptr, const char **end, uint64_t *result);
>  
>  char *size_to_str(uint64_t val);
>  
> +/**
> + * freq_to_str:
> + * @freq_hz: frequency to stringify
> + *
> + * Return human readable string for frequency @freq_hz.
> + * Use SI units like KHz, MHz, and so forth.
> + *
> + * The caller is responsible for releasing the value returned
> + * with g_free() after use.
> + */
> +char *freq_to_str(uint64_t freq_hz);
> +
>  /* used to print char* safely */
>  #define STR_OR_NULL(str) ((str) ? (str) : "null")
>  
> diff --git a/util/cutils.c b/util/cutils.c
> index 8da34e04b0..be4e43a9ef 100644
> --- a/util/cutils.c
> +++ b/util/cutils.c
> @@ -885,6 +885,20 @@ char *size_to_str(uint64_t val)
>      return g_strdup_printf("%0.3g %sB", (double)val / div, suffixes[i]);
>  }
>  
> +char *freq_to_str(uint64_t freq_hz)
> +{
> +    static const char *const suffixes[] = { "", "K", "M", "G", "T", "P", "E" };
> +    double freq = freq_hz;
> +    size_t idx = 0;
> +
> +    while (freq >= 1000.0 && idx < ARRAY_SIZE(suffixes)) {
> +        freq /= 1000.0;
> +        idx++;
> +    }
> +
> +    return g_strdup_printf("%0.3g %sHz", freq, suffixes[idx]);
> +}
> +
>  int qemu_pstrcmp0(const char **str1, const char **str2)
>  {
>      return g_strcmp0(*str1, *str2);
> -- 
> 2.26.2
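For anyone who wants to poke at the conversion logic outside of QEMU, here is a glib-free sketch of the same algorithm: snprintf() into a caller-provided buffer stands in for g_strdup_printf(), and the bounds check is written as `idx + 1 < ...` so the index can never run past the table.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* SI prefixes for Hz, mirroring the suffix table in the patch above */
static const char *const si_suffixes[] = { "", "K", "M", "G", "T", "P", "E" };

/* Format @freq_hz into @buf using the same divide-by-1000 loop and
 * "%0.3g %sHz" format as the patch; no allocation involved. */
static void freq_to_buf(uint64_t freq_hz, char *buf, size_t len)
{
    double freq = freq_hz;
    size_t idx = 0;

    while (freq >= 1000.0 &&
           idx + 1 < sizeof(si_suffixes) / sizeof(si_suffixes[0])) {
        freq /= 1000.0;
        idx++;
    }
    snprintf(buf, len, "%0.3g %sHz", freq, si_suffixes[idx]);
}
```

With %0.3g the value keeps at most three significant digits, so e.g. the NTSC colorburst frequency 3579545 Hz renders as "3.58 MHz".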
Re: [PATCH] ppc/pnv: Increase max firware size
On Fri, 2 Oct 2020 08:15:46 +0200
Cédric Le Goater wrote:

> That was sent a bit fast. Can you please add in the commit log :
>
> Builds enabling GCOV can be bigger than 4MB and the limit on FSP
> systems is 16MB.
>
> Thanks

And also s/firware/firmware in the title.

Cheers,

-- 
Greg

> C.
>
> On 10/2/20 8:13 AM, Cédric Le Goater wrote:
> > Signed-off-by: Cédric Le Goater
> > ---
> >  hw/ppc/pnv.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
> > index 6670967e26a9..d9e52873ea70 100644
> > --- a/hw/ppc/pnv.c
> > +++ b/hw/ppc/pnv.c
> > @@ -61,7 +61,7 @@
> >
> >  #define FW_FILE_NAME            "skiboot.lid"
> >  #define FW_LOAD_ADDR            0x0
> > -#define FW_MAX_SIZE             (4 * MiB)
> > +#define FW_MAX_SIZE             (16 * MiB)
> >
> >  #define KERNEL_LOAD_ADDR        0x2000
> >  #define KERNEL_MAX_SIZE         (256 * MiB)
Re: io_uring possibly the culprit for qemu hang (linux-5.4.y)
Hi Ju,

On Thu, Oct 01, 2020 at 11:30:14PM +0900, Ju Hyung Park wrote:
> Hi Stefano,
>
> On Thu, Oct 1, 2020 at 5:59 PM Stefano Garzarella wrote:
> > Please, can you share the qemu command line that you are using?
> > This can be useful for the analysis.
>
> Sure.

Thanks for sharing.

The issue seems related to io_uring and the new io_uring fd monitoring
implementation available from QEMU 5.0.

I'll try to reproduce. For now, as a workaround, you can rebuild qemu
by disabling io-uring support:

    ../configure --disable-linux-io-uring ...

Thanks,
Stefano

> QEMU:
> /usr/bin/qemu-system-x86_64 -name guest=win10,debug-threads=on -S
> -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-win10/master-key.aes
> -blockdev {"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}
> -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}
> -blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/win10_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}
> -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}
> -machine pc-q35-5.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format
> -cpu Skylake-Client-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,pdpe1gb=on,ibpb=on,amd-ssbd=on,fma=off,avx=off,f16c=off,rdrand=off,bmi1=off,hle=off,avx2=off,bmi2=off,rtm=off,rdseed=off,adx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-runtime,hv-synic,hv-stimer,hv-reset
> -m 8192 -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu/1-win10
> -overcommit mem-lock=off -smp 4,sockets=1,dies=1,cores=2,threads=2
> -uuid 7ccc3031-1dab-4267-b72a-d60065b5ff7f -display none
> -no-user-config -nodefaults
> -chardev socket,id=charmonitor,fd=32,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control
> -rtc base=localtime,driftfix=slew
> -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown
> -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1
> -boot menu=off,strict=on
> -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1
> -device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1
> -device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2
> -device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3
> -device pcie-pci-bridge,id=pci.5,bus=pci.2,addr=0x0
> -device qemu-xhci,id=usb,bus=pci.1,addr=0x0
> -blockdev {"driver":"host_device","filename":"/dev/disk/by-partuuid/05c3750b-060f-4703-95ea-6f5e546bf6e9","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}
> -blockdev {"node-name":"libvirt-1-format","read-only":false,"discard":"unmap","detect-zeroes":"unmap","cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-1-storage"}
> -device virtio-blk-pci,scsi=off,bus=pcie.0,addr=0xa,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=on
> -netdev tap,fd=34,id=hostnet0
> -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:c6:bb:bc,bus=pcie.0,addr=0x3
> -device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x4
> -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0
> -device vfio-pci,host=:00:02.0,id=hostdev0,bus=pcie.0,addr=0x2,rombar=0
> -device virtio-balloon-pci,id=balloon0,bus=pcie.0,addr=0x8
> -object rng-random,id=objrng0,filename=/dev/urandom
> -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pcie.0,addr=0x9
> -msg timestamp=on
>
> And I use libvirt 6.3.0 to manage the VM. Here's an xml of my VM.
> > > win10 > 7ccc3031-1dab-4267-b72a-d60065b5ff7f > > xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0";> > http://microsoft.com/win/10"/> > > > 8388608 > 8388608 > > > > > 4 > > > > > > > > hvm > /usr/share/OVMF/OVMF_CODE.fd > /var/lib/libvirt/qemu/nvram/win10_VARS.fd > > > > > > > > > > > > > > > > > > > > > > > > > > > > destroy > restart > destroy > > > > > > /usr/bin/qemu-system-x86_64 > >detect_zeroes="unmap"/> >dev="/dev/disk/by-partuuid/05c3750b-060f-4703-95ea-6f5e546bf6e9"/> > >function="0x0"/> > > > > > >function="0x0" multifunction="on"/> > > > > >function="0x1"/> > > > > >function="0x2"/> > > > > >function="0x3"/> > > > >funct
[PATCH v4 1/1] qapi: Restrict code generated for user-mode
A lot of QAPI generated code is never used by user-mode.

Split out qapi_system_modules and qapi_system_or_tools_modules from
the qapi_all_modules array. We now have 4 groups:

- always used
- only used by system-mode
- not used by user-mode
- not used by tools

Signed-off-by: Philippe Mathieu-Daudé
---
 qapi/meson.build | 51 ++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/qapi/meson.build b/qapi/meson.build
index 7c4a89a882..10cf01ef65 100644
--- a/qapi/meson.build
+++ b/qapi/meson.build
@@ -14,39 +14,60 @@ util_ss.add(files(
 ))
 
 qapi_all_modules = [
+  'common',
+  'introspect',
+  'misc',
+]
+
+qapi_system_modules = [
   'acpi',
   'audio',
+  'dump',
+  'machine-target',
+  'migration',
+  'misc-target',
+  'net',
+  'pci',
+  'rdma',
+  'rocker',
+  'tpm',
+  'trace',
+]
+
+qapi_system_or_user_modules = [
+  'machine', # X86CPUFeatureWordInfo
+  'qdev',
+]
+
+qapi_system_or_tools_modules = [
   'authz',
   'block-core',
   'block',
   'char',
-  'common',
   'control',
   'crypto',
-  'dump',
   'error',
-  'introspect',
   'job',
-  'machine',
-  'machine-target',
-  'migration',
-  'misc',
-  'misc-target',
-  'net',
   'pragma',
-  'qdev',
-  'pci',
   'qom',
-  'rdma',
-  'rocker',
   'run-state',
   'sockets',
-  'tpm',
-  'trace',
   'transaction',
   'ui',
 ]
 
+if have_system
+  qapi_all_modules += qapi_system_modules
+endif
+
+if have_system or have_user
+  qapi_all_modules += qapi_system_or_user_modules
+endif
+
+if have_system or have_tools
+  qapi_all_modules += qapi_system_or_tools_modules
+endif
+
 qapi_storage_daemon_modules = [
   'block-core',
   'char',
-- 
2.26.2
[PATCH v4 0/1] user-mode: Prune build dependencies (part 3)
This is the third part of a series reducing user-mode dependencies. By
stripping out unused code, the build and testing time is reduced (as is
space used by objects).

Part 3:
- Reduce user-mode QAPI generated files

Since v3:
- Keep qdev.json in user-mode (no need for qdev-system stub for
  qapi_event_send_device_deleted, Paolo)
- Keep machine.json in user-mode (no need to restrict X86CPUFeatureWord
  to x86 architecture, Eduardo)

Since v2:
- Fixed UuidInfo placed in incorrect json
- Rebased on Meson
- Include X86CPUFeatureWord unmerged from part 2

Since v1:
- Addressed Richard and Paolo review comments

v3: https://www.mail-archive.com/qemu-devel@nongnu.org/msg746423.html
v2: https://www.mail-archive.com/qemu-devel@nongnu.org/msg688879.html
v1: https://www.mail-archive.com/qemu-devel@nongnu.org/msg688486.html

Based on https://github.com/ehabkost/qemu.git machine-next

Philippe Mathieu-Daudé (1):
  qapi: Restrict code generated for user-mode

 qapi/meson.build | 51 ++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 15 deletions(-)

-- 
2.26.2
Re: [PATCH v2] hw/arm: Restrict APEI tables generation to the 'virt' machine
On 10/01/20 18:22, Philippe Mathieu-Daudé wrote:
> While APEI is a generic ACPI feature (usable by X86 and ARM64), only
> the 'virt' machine uses it, by enabling the RAS Virtualization. See
> commit 2afa8c8519: "hw/arm/virt: Introduce a RAS machine option").
>
> Restrict the APEI tables generation code to the single user: the virt
> machine. If another machine wants to use it, it simply has to 'select
> ACPI_APEI' in its Kconfig.
>
> Fixes: aa16508f1d ("ACPI: Build related register address fields via hardware
> error fw_cfg blob")
> Acked-by: Michael S. Tsirkin
> Reviewed-by: Dongjiu Geng
> Signed-off-by: Philippe Mathieu-Daudé
> ---
> v2: Reworded
>
> Cc: Laszlo Ersek
> Cc: Xiang Zheng
> Cc: Jonathan Cameron
> Cc: Igor Mammedov
> ---
>  default-configs/arm-softmmu.mak | 1 -
>  hw/arm/Kconfig                  | 1 +
>  2 files changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index 9a94ebd0be..08a32123b4 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -43,4 +43,3 @@ CONFIG_FSL_IMX7=y
>  CONFIG_FSL_IMX6UL=y
>  CONFIG_SEMIHOSTING=y
>  CONFIG_ALLWINNER_H3=y
> -CONFIG_ACPI_APEI=y
> diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
> index f303c6bead..7d040827af 100644
> --- a/hw/arm/Kconfig
> +++ b/hw/arm/Kconfig
> @@ -26,6 +26,7 @@ config ARM_VIRT
>      select ACPI_MEMORY_HOTPLUG
>      select ACPI_HW_REDUCED
>      select ACPI_NVDIMM
> +    select ACPI_APEI
>  
>  config CHEETAH
>      bool

Acked-by: Laszlo Ersek
Re: [PATCH v7 03/13] monitor: Use getter/setter functions for cur_mon
Additional nitpick detail on Kevin's request.

Kevin Wolf writes:

> cur_mon really needs to be coroutine-local as soon as we move monitor
> command handlers to coroutines and let them yield. As a first step, just
> remove all direct accesses to cur_mon so that we can implement this in
> the getter function later.
>
> Signed-off-by: Kevin Wolf

[...]

> diff --git a/tests/test-util-sockets.c b/tests/test-util-sockets.c
> index af9f5c0c70..6c50dbf051 100644
> --- a/tests/test-util-sockets.c
> +++ b/tests/test-util-sockets.c
> @@ -52,6 +52,7 @@ static void test_fd_is_socket_good(void)
>  
>  static int mon_fd = -1;
>  static const char *mon_fdname;
> +__thread Monitor *cur_mon;
>  
>  int monitor_get_fd(Monitor *mon, const char *fdname, Error **errp)
>  {
> @@ -66,15 +67,12 @@ int monitor_get_fd(Monitor *mon, const char *fdname, Error **errp)
>  
>  /*
>   * Syms of stubs in libqemuutil.a are discarded at .o file granularity.
> - * To replace monitor_get_fd() we must ensure everything in
> - * stubs/monitor.c is defined, to make sure monitor.o is discarded
> - * otherwise we get duplicate syms at link time.
> + * To replace monitor_get_fd() and monitor_cur(), we must ensure that we also
> + * replace any other symbol that is used in the binary and would be taken from
> + * the same stub object file, otherwise we get duplicate syms at link time.

Wrapping the comment around column 70 or so would make it easier to
read.  File has no maintainers.  Up to you.

>   */
> -__thread Monitor *cur_mon;
> +Monitor *monitor_cur(void) { return cur_mon; }
>  int monitor_vprintf(Monitor *mon, const char *fmt, va_list ap) { abort(); }
> -void monitor_init_qmp(Chardev *chr, bool pretty, Error **errp) {}
> -void monitor_init_hmp(Chardev *chr, bool use_readline, Error **errp) {}
> -
>  
>  static void test_socket_fd_pass_name_good(void)
>  {

[...]
Re: [PATCH] qcow2: Use L1E_SIZE in qcow2_write_l1_entry()
On 28.09.2020 at 18:23, Alberto Garcia wrote:
> We overlooked these in 02b1ecfa100e7ecc2306560cd27a4a2622bfeb04
>
> Signed-off-by: Alberto Garcia

Thanks, applied to the block branch.

Kevin
Re: [PATCH v7 07/13] monitor: Make current monitor a per-coroutine property
Additional nitpick detail on Kevin's request.

Kevin Wolf writes:

> This way, a monitor command handler will still be able to access the
> current monitor, but when it yields, all other code will correctly
> get NULL from monitor_cur().
>
> This uses a hash table to map the coroutine pointer to the current
> monitor of that coroutine. Outside of coroutine context, we associate
> the current monitor with the leader coroutine of the current thread.
>
> Approaches to implement some form of coroutine local storage directly in
> the coroutine core code have been considered and discarded because they
> didn't end up being much more generic than the hash table and their
> performance impact on coroutines not using coroutine local storage was
> unclear. As the block layer uses a coroutine per I/O request, this is a
> fast path and we have to be careful. It's safest to just stay out of
> this path with code only used by the monitor.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Eric Blake
> ---
>  include/monitor/monitor.h |  2 +-
>  monitor/hmp.c             |  4 ++--
>  monitor/monitor.c         | 34 +++++++++++++++++++++++++++-------
>  qapi/qmp-dispatch.c       |  4 ++--
>  stubs/monitor-core.c      |  2 +-
>  5 files changed, 33 insertions(+), 13 deletions(-)
>
> diff --git a/include/monitor/monitor.h b/include/monitor/monitor.h
> index 7b0ad1de12..026f8a31b2 100644
> --- a/include/monitor/monitor.h
> +++ b/include/monitor/monitor.h
> @@ -13,7 +13,7 @@ typedef struct MonitorOptions MonitorOptions;
>  extern QemuOptsList qemu_mon_opts;
>  
>  Monitor *monitor_cur(void);
> -Monitor *monitor_set_cur(Monitor *mon);
> +Monitor *monitor_set_cur(Coroutine *co, Monitor *mon);
>  bool monitor_cur_is_qmp(void);
>  
>  void monitor_init_globals(void);
> diff --git a/monitor/hmp.c b/monitor/hmp.c
> index 896c670183..4b66ca1cd6 100644
> --- a/monitor/hmp.c
> +++ b/monitor/hmp.c
> @@ -1081,9 +1081,9 @@ void handle_hmp_command(MonitorHMP *mon, const char *cmdline)
>      }
>  
>      /* old_mon is non-NULL when called from qmp_human_monitor_command() */
> -    old_mon = monitor_set_cur(&mon->common);
> +    old_mon = monitor_set_cur(qemu_coroutine_self(), &mon->common);
>      cmd->cmd(&mon->common, qdict);
> -    monitor_set_cur(old_mon);
> +    monitor_set_cur(qemu_coroutine_self(), old_mon);
>  
>      qobject_unref(qdict);
>  }
> diff --git a/monitor/monitor.c b/monitor/monitor.c
> index be3839a7aa..629aa073ee 100644
> --- a/monitor/monitor.c
> +++ b/monitor/monitor.c
> @@ -58,29 +58,48 @@ IOThread *mon_iothread;
>  /* Bottom half to dispatch the requests received from I/O thread */
>  QEMUBH *qmp_dispatcher_bh;
>  
> -/* Protects mon_list, monitor_qapi_event_state, monitor_destroyed. */
> +/*
> + * Protects mon_list, monitor_qapi_event_state, coroutine_mon,
> + * monitor_destroyed.
> + */
>  QemuMutex monitor_lock;
>  static GHashTable *monitor_qapi_event_state;
> +static GHashTable *coroutine_mon; /* Maps Coroutine* to Monitor* */
>  
>  MonitorList mon_list;
>  int mon_refcount;
>  static bool monitor_destroyed;
>  
> -static __thread Monitor *cur_monitor;
> -
>  Monitor *monitor_cur(void)
>  {
> -    return cur_monitor;
> +    Monitor *mon;
> +
> +    qemu_mutex_lock(&monitor_lock);
> +    mon = g_hash_table_lookup(coroutine_mon, qemu_coroutine_self());
> +    qemu_mutex_unlock(&monitor_lock);
> +
> +    return mon;
>  }
>  
>  /**
>   * Sets a new current monitor and returns the old one.
> + *
> + * If a non-NULL monitor is set for a coroutine, another call resetting it to
> + * NULL is required before the coroutine terminates, otherwise a stale entry
> + * would remain in the hash table.

Wrapping the comment around column 70 or so would make it easier to
read.

>   */
> -Monitor *monitor_set_cur(Monitor *mon)
> +Monitor *monitor_set_cur(Coroutine *co, Monitor *mon)
>  {
> -    Monitor *old_monitor = cur_monitor;
> +    Monitor *old_monitor = monitor_cur();
> +
> +    qemu_mutex_lock(&monitor_lock);
> +    if (mon) {
> +        g_hash_table_replace(coroutine_mon, co, mon);
> +    } else {
> +        g_hash_table_remove(coroutine_mon, co);
> +    }
> +    qemu_mutex_unlock(&monitor_lock);
>  
> -    cur_monitor = mon;
>      return old_monitor;
>  }

[...]
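The getter/setter pair is easy to model outside QEMU. Below is a self-contained sketch in which a fixed-size linear table stands in for the GHashTable, opaque void pointers stand in for the Coroutine and Monitor types, and locking is elided (the real code holds monitor_lock around both helpers):

```c
#include <assert.h>
#include <stddef.h>

enum { MAP_SLOTS = 16 };

/* coroutine -> current monitor; a NULL .co marks a free slot */
static struct { void *co; void *mon; } co_mon_map[MAP_SLOTS];

/* monitor_cur() analogue: NULL when no monitor is set for @co */
static void *mon_cur(void *co)
{
    for (size_t i = 0; i < MAP_SLOTS; i++) {
        if (co_mon_map[i].co == co) {
            return co_mon_map[i].mon;
        }
    }
    return NULL;
}

/*
 * monitor_set_cur() analogue: install (mon != NULL) or remove
 * (mon == NULL) the entry for @co, returning the previous monitor.
 * As in the patch, a coroutine that sets a monitor must reset it to
 * NULL before it terminates, or a stale entry stays behind.
 */
static void *mon_set_cur(void *co, void *mon)
{
    void *old = mon_cur(co);
    size_t free_slot = MAP_SLOTS;

    for (size_t i = 0; i < MAP_SLOTS; i++) {
        if (co_mon_map[i].co == co) {
            co_mon_map[i].co = mon ? co : NULL;
            co_mon_map[i].mon = mon;
            return old;
        }
        if (co_mon_map[i].co == NULL && free_slot == MAP_SLOTS) {
            free_slot = i;
        }
    }
    if (mon && free_slot < MAP_SLOTS) {
        co_mon_map[free_slot].co = co;
        co_mon_map[free_slot].mon = mon;
    }
    return old;
}
```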
Re: [PATCH v7 08/13] qapi: Add a 'coroutine' flag for commands
Additional nitpick detail on Kevin's request.

Kevin Wolf writes:

> This patch adds a new 'coroutine' flag to QMP command definitions that
> tells the QMP dispatcher that the command handler is safe to be run in a
> coroutine.
>
> The documentation of the new flag pretends that this flag is already
> used as intended, which it isn't yet after this patch. We'll implement
> this in another patch in this series.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Marc-André Lureau
> Reviewed-by: Markus Armbruster

[...]

> diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
> index 2942520399..928cd1eb5c 100644
> --- a/scripts/qapi/expr.py
> +++ b/scripts/qapi/expr.py
> @@ -88,10 +88,16 @@ def check_flags(expr, info):
>          if key in expr and expr[key] is not False:
>              raise QAPISemError(
>                  info, "flag '%s' may only use false value" % key)
> -    for key in ['boxed', 'allow-oob', 'allow-preconfig']:
> +    for key in ['boxed', 'allow-oob', 'allow-preconfig', 'coroutine']:
>          if key in expr and expr[key] is not True:
>              raise QAPISemError(
>                  info, "flag '%s' may only use true value" % key)
> +    if 'allow-oob' in expr and 'coroutine' in expr:
> +        # This is not necessarily a fundamental incompatibility, but we don't
> +        # have a use case and the desired semantics isn't obvious. The simplest
> +        # solution is to forbid it until we get a use case for it.
> +        raise QAPISemError(info, "flags 'allow-oob' and 'coroutine' "
> +                           "are incompatible")

PEP 8: For flowing long blocks of text with fewer structural
restrictions (docstrings or comments), the line length should be
limited to 72 characters.

PEP 8: You should use two spaces after a sentence-ending period in
multi-sentence comments, except after the final sentence.

>  
>  
>  def check_if(expr, info, source):

[...]
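The validation rule under discussion is easy to model in isolation. A simplified Python sketch (ValueError stands in for QAPISemError, and the flag lists are abbreviated; the real code lives in scripts/qapi/expr.py):

```python
# Flags that may only be given with an explicit false / true value.
FALSE_ONLY_FLAGS = ['gen', 'success-response']
TRUE_ONLY_FLAGS = ['boxed', 'allow-oob', 'allow-preconfig', 'coroutine']


def check_flags(expr):
    """Validate the flag members of a QAPI command expression."""
    for key in FALSE_ONLY_FLAGS:
        if key in expr and expr[key] is not False:
            raise ValueError("flag '%s' may only use false value" % key)
    for key in TRUE_ONLY_FLAGS:
        if key in expr and expr[key] is not True:
            raise ValueError("flag '%s' may only use true value" % key)
    # No use case for an out-of-band coroutine handler yet, so the
    # combination is forbidden outright.
    if 'allow-oob' in expr and 'coroutine' in expr:
        raise ValueError("flags 'allow-oob' and 'coroutine' are incompatible")
```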
[PATCH] hw/block/nvme: Simplify timestamp sum
As the 'timestamp' variable is declared as a 48-bit bitfield, we do not
need to wrap the sum result.

Signed-off-by: Philippe Mathieu-Daudé
---
 hw/block/nvme.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 63078f6009..44fa5b9076 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1280,12 +1280,7 @@ static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n)
 
     union nvme_timestamp ts;
     ts.all = 0;
-
-    /*
-     * If the sum of the Timestamp value set by the host and the elapsed
-     * time exceeds 2^48, the value returned should be reduced modulo 2^48.
-     */
-    ts.timestamp = (n->host_timestamp + elapsed_time) & 0xffffffffffff;
+    ts.timestamp = n->host_timestamp + elapsed_time;
 
     /* If the host timestamp is non-zero, set the timestamp origin */
     ts.origin = n->host_timestamp ? 0x01 : 0x00;
-- 
2.26.2
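The claim in the patch can be checked in isolation: storing into an unsigned 48-bit bitfield already reduces the value modulo 2^48, so the explicit mask is redundant. A minimal demonstration (the field layout here is invented for illustration, not the actual NVMe timestamp structure):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a 48-bit timestamp field inside a 64-bit word */
union ts_sketch {
    struct {
        uint64_t timestamp : 48;
        uint64_t rsvd      : 16;
    };
    uint64_t all;
};

/* Assigning to the bitfield truncates to 48 bits; for unsigned
 * bitfields this modulo reduction is well-defined in C. */
static uint64_t store_timestamp(uint64_t host_ts, uint64_t elapsed)
{
    union ts_sketch ts = { .all = 0 };
    ts.timestamp = host_ts + elapsed;   /* wraps mod 2^48 implicitly */
    return ts.timestamp;
}
```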
Re: [PATCH v7 08/13] qapi: Add a 'coroutine' flag for commands
Hit send too fast.

Kevin Wolf writes:

> This patch adds a new 'coroutine' flag to QMP command definitions that
> tells the QMP dispatcher that the command handler is safe to be run in a
> coroutine.
>
> The documentation of the new flag pretends that this flag is already
> used as intended, which it isn't yet after this patch. We'll implement
> this in another patch in this series.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Marc-André Lureau
> Reviewed-by: Markus Armbruster

[...]

> diff --git a/scripts/qapi/schema.py b/scripts/qapi/schema.py
> index 78309a00f0..c44d391c3f 100644
> --- a/scripts/qapi/schema.py
> +++ b/scripts/qapi/schema.py
> @@ -128,7 +128,7 @@ class QAPISchemaVisitor:
>  
>      def visit_command(self, name, info, ifcond, features,
>                        arg_type, ret_type, gen, success_response, boxed,
> -                      allow_oob, allow_preconfig):
> +                      allow_oob, allow_preconfig, coroutine):
>          pass
>  
>      def visit_event(self, name, info, ifcond, features, arg_type, boxed):
> @@ -713,7 +713,8 @@ class QAPISchemaCommand(QAPISchemaEntity):
>  
>      def __init__(self, name, info, doc, ifcond, features,
>                   arg_type, ret_type,
> -                 gen, success_response, boxed, allow_oob, allow_preconfig):
> +                 gen, success_response, boxed, allow_oob, allow_preconfig,
> +                 coroutine):
>          super().__init__(name, info, doc, ifcond, features)
>          assert not arg_type or isinstance(arg_type, str)
>          assert not ret_type or isinstance(ret_type, str)
> @@ -726,6 +727,7 @@ class QAPISchemaCommand(QAPISchemaEntity):
>          self.boxed = boxed
>          self.allow_oob = allow_oob
>          self.allow_preconfig = allow_preconfig
> +        self.coroutine = coroutine
>  
>      def check(self, schema):
>          super().check(schema)
> @@ -768,7 +770,7 @@ class QAPISchemaCommand(QAPISchemaEntity):
>          visitor.visit_command(
>              self.name, self.info, self.ifcond, self.features,
>              self.arg_type, self.ret_type, self.gen, self.success_response,
> -            self.boxed, self.allow_oob, self.allow_preconfig)
> +            self.boxed, self.allow_oob, self.allow_preconfig, self.coroutine)

Recommend to break the line after preconfig, like you do elsewhere.

>  
>  
>  class QAPISchemaEvent(QAPISchemaEntity):
> @@ -1074,6 +1076,7 @@ class QAPISchema:
>          boxed = expr.get('boxed', False)
>          allow_oob = expr.get('allow-oob', False)
>          allow_preconfig = expr.get('allow-preconfig', False)
> +        coroutine = expr.get('coroutine', False)
>          ifcond = expr.get('if')
>          features = self._make_features(expr.get('features'), info)
>          if isinstance(data, OrderedDict):
> @@ -1086,7 +1089,8 @@ class QAPISchema:
>          self._def_entity(QAPISchemaCommand(name, info, doc, ifcond, features,
>                                             data, rets,
>                                             gen, success_response,
> -                                           boxed, allow_oob, allow_preconfig))
> +                                           boxed, allow_oob, allow_preconfig,
> +                                           coroutine))

Preexisting: the arguments are kind of squeezed onto the right margin.
Hanging indent would avoid that.  Feel free to ignore.

>  
>      def _def_event(self, expr, info, doc):
>          name = expr['event']

[...]
Re: [PATCH v7 09/13] qmp: Move dispatcher to a coroutine
Additional nitpick detail on Kevin's request.

Kevin Wolf writes:

> This moves the QMP dispatcher to a coroutine and runs all QMP command
> handlers that declare 'coroutine': true in coroutine context so they
> can avoid blocking the main loop while doing I/O or waiting for other
> events.
>
> For commands that are not declared safe to run in a coroutine, the
> dispatcher drops out of coroutine context by calling the QMP command
> handler from a bottom half.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Markus Armbruster

[...]

> diff --git a/monitor/monitor.c b/monitor/monitor.c
> index 629aa073ee..ac2722bf91 100644
> --- a/monitor/monitor.c
> +++ b/monitor/monitor.c
> @@ -55,8 +55,32 @@ typedef struct {
>  /* Shared monitor I/O thread */
>  IOThread *mon_iothread;
>  
> -/* Bottom half to dispatch the requests received from I/O thread */
> -QEMUBH *qmp_dispatcher_bh;
> +/* Coroutine to dispatch the requests received from I/O thread */
> +Coroutine *qmp_dispatcher_co;
> +
> +/* Set to true when the dispatcher coroutine should terminate */
> +bool qmp_dispatcher_co_shutdown;
> +
> +/*
> + * qmp_dispatcher_co_busy is used for synchronisation between the
> + * monitor thread and the main thread to ensure that the dispatcher
> + * coroutine never gets scheduled a second time when it's already
> + * scheduled (scheduling the same coroutine twice is forbidden).
> + *
> + * It is true if the coroutine is active and processing requests.
> + * Additional requests may then be pushed onto mon->qmp_requests,
> + * and @qmp_dispatcher_co_shutdown may be set without further ado.
> + * @qmp_dispatcher_co_busy must not be woken up in this case.
> + *
> + * If false, you also have to set @qmp_dispatcher_co_busy to true and
> + * wake up @qmp_dispatcher_co after pushing the new requests.
> + *
> + * The coroutine will automatically change this variable back to false
> + * before it yields. Nobody else may set the variable to false.
> + *
> + * Access must be atomic for thread safety.
> + */
> +bool qmp_dispatcher_co_busy;
>  
>  /*
>   * Protects mon_list, monitor_qapi_event_state, coroutine_mon,
> @@ -623,9 +647,24 @@ void monitor_cleanup(void)
>      }
>      qemu_mutex_unlock(&monitor_lock);
>  
> -    /* QEMUBHs needs to be deleted before destroying the I/O thread */
> -    qemu_bh_delete(qmp_dispatcher_bh);
> -    qmp_dispatcher_bh = NULL;
> +    /*
> +     * The dispatcher needs to stop before destroying the I/O thread.
> +     *
> +     * We need to poll both qemu_aio_context and iohandler_ctx to make
> +     * sure that the dispatcher coroutine keeps making progress and
> +     * eventually terminates. qemu_aio_context is automatically
> +     * polled by calling AIO_WAIT_WHILE on it, but we must poll
> +     * iohandler_ctx manually.
> +     */
> +    qmp_dispatcher_co_shutdown = true;
> +    if (!atomic_xchg(&qmp_dispatcher_co_busy, true)) {
> +        aio_co_wake(qmp_dispatcher_co);
> +    }
> +
> +    AIO_WAIT_WHILE(qemu_get_aio_context(),
> +                   (aio_poll(iohandler_get_aio_context(), false),
> +                    atomic_mb_read(&qmp_dispatcher_co_busy)));
> +
>      if (mon_iothread) {
>          iothread_destroy(mon_iothread);
>          mon_iothread = NULL;
> @@ -649,9 +688,9 @@ void monitor_init_globals_core(void)
>       * have commands assuming that context. It would be nice to get
>       * rid of those assumptions.
>       */
> -    qmp_dispatcher_bh = aio_bh_new(iohandler_get_aio_context(),
> -                                   monitor_qmp_bh_dispatcher,
> -                                   NULL);
> +    qmp_dispatcher_co = qemu_coroutine_create(monitor_qmp_dispatcher_co, NULL);

Rather long line, caused by rather long identifiers.  Not your fault;
you imitated the existing pattern static variable qmp_dispatcher_bh /
extern function monitor_qmp_bh_dispatcher().  But the prefix
monitor_qmp_ is awfully long, and not just here.  Let's leave this for
another day.

> +    atomic_mb_set(&qmp_dispatcher_co_busy, true);
> +    aio_co_schedule(iohandler_get_aio_context(), qmp_dispatcher_co);
>  }
>  
>  int monitor_init(MonitorOptions *opts, bool allow_hmp, Error **errp)

[...]

> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index f7f13ebfc2..30bb21d699 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -15,6 +15,7 @@
>  
>  #include "qemu/osdep.h"
>  #include "block/block.h"
> +#include "qemu/main-loop.h"
>  #include "qemu/rcu.h"
>  #include "qemu/rcu_queue.h"
>  #include "qemu/sockets.h"
> @@ -558,8 +559,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>       * There cannot be two concurrent aio_poll calls for the same AioContext (or
>       * an aio_poll concurrent with a GSource prepare/check/dispatch callback).
>       * We rely on this below to avoid slow locked accesses to ctx->notify_me.
> +     *
> +     * aio_poll() may only be called in the AioContext's thread. iohandler_ctx
> +     * is special in that it runs in the main thread, but that thread's
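The busy-flag handshake that the big comment describes boils down to "whoever flips the flag from false to true does the wake-up". A standalone sketch using C11 atomics, with a counter standing in for aio_co_wake() and plain functions standing in for the coroutine:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* atomic_exchange() returns the previous value, so exactly one caller
 * can observe the false -> true transition and become responsible for
 * waking the dispatcher; everyone else sees "already busy". */
static atomic_bool dispatcher_busy;
static int wake_count;

static void kick_dispatcher(void)
{
    if (!atomic_exchange(&dispatcher_busy, true)) {
        wake_count++;   /* aio_co_wake(qmp_dispatcher_co) in the real code */
    }
}

static void dispatcher_goes_idle(void)
{
    /* Only the dispatcher itself resets the flag, before yielding. */
    atomic_store(&dispatcher_busy, false);
}
```

The exchange is what prevents the forbidden double-scheduling: two threads may race in kick_dispatcher(), but only one of them gets false back.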
Re: [PATCH v7 10/13] hmp: Add support for coroutine command handlers
Additional nitpick detail on Kevin's request.

Kevin Wolf writes:

> Often, QMP command handlers are not only called to handle QMP commands,
> but also from a corresponding HMP command handler. In order to give them
> a consistent environment, optionally run HMP command handlers in a
> coroutine, too.
>
> The implementation is a lot simpler than in QMP because for HMP, we
> still block the VM while the coroutine is running.
>
> Signed-off-by: Kevin Wolf
> ---
>  monitor/monitor-internal.h |  1 +
>  monitor/hmp.c              | 37 ++++++++++++++++++++++++++++++++-----
>  2 files changed, 33 insertions(+), 5 deletions(-)
>
> diff --git a/monitor/monitor-internal.h b/monitor/monitor-internal.h
> index b55d6df07f..ad2e64be13 100644
> --- a/monitor/monitor-internal.h
> +++ b/monitor/monitor-internal.h
> @@ -74,6 +74,7 @@ typedef struct HMPCommand {
>      const char *help;
>      const char *flags; /* p=preconfig */
>      void (*cmd)(Monitor *mon, const QDict *qdict);
> +    bool coroutine;
>      /*
>       * @sub_table is a list of 2nd level of commands. If it does not exist,
>       * cmd should be used. If it exists, sub_table[?].cmd should be
> diff --git a/monitor/hmp.c b/monitor/hmp.c
> index 4b66ca1cd6..b858b0dbde 100644
> --- a/monitor/hmp.c
> +++ b/monitor/hmp.c
> @@ -1056,12 +1056,26 @@ fail:
>      return NULL;
>  }
>  
> +typedef struct HandleHmpCommandCo {
> +    Monitor *mon;
> +    const HMPCommand *cmd;
> +    QDict *qdict;
> +    bool done;
> +} HandleHmpCommandCo;
> +
> +static void handle_hmp_command_co(void *opaque)
> +{
> +    HandleHmpCommandCo *data = opaque;
> +    data->cmd->cmd(data->mon, data->qdict);
> +    monitor_set_cur(qemu_coroutine_self(), NULL);
> +    data->done = true;
> +}
> +
>  void handle_hmp_command(MonitorHMP *mon, const char *cmdline)
>  {
>      QDict *qdict;
>      const HMPCommand *cmd;
>      const char *cmd_start = cmdline;
> -    Monitor *old_mon;
>  
>      trace_handle_hmp_command(mon, cmdline);
>  
> @@ -1080,10 +1094,23 @@ void handle_hmp_command(MonitorHMP *mon, const char *cmdline)
>          return;
>      }
>  
> -    /* old_mon is non-NULL when called from qmp_human_monitor_command() */
> -    old_mon = monitor_set_cur(qemu_coroutine_self(), &mon->common);
> -    cmd->cmd(&mon->common, qdict);
> -    monitor_set_cur(qemu_coroutine_self(), old_mon);
> +    if (!cmd->coroutine) {
> +        /* old_mon is non-NULL when called from qmp_human_monitor_command() */
> +        Monitor *old_mon = monitor_set_cur(qemu_coroutine_self(), &mon->common);

Long line.  David seems fine with it, and he's the maintainer.

> +        cmd->cmd(&mon->common, qdict);
> +        monitor_set_cur(qemu_coroutine_self(), old_mon);
> +    } else {
> +        HandleHmpCommandCo data = {
> +            .mon = &mon->common,
> +            .cmd = cmd,
> +            .qdict = qdict,
> +            .done = false,
> +        };
> +        Coroutine *co = qemu_coroutine_create(handle_hmp_command_co, &data);
> +        monitor_set_cur(co, &mon->common);
> +        aio_co_enter(qemu_get_aio_context(), co);
> +        AIO_WAIT_WHILE(qemu_get_aio_context(), !data.done);
> +    }
>  
>      qobject_unref(qdict);
>  }
Re: [PATCH] hw/block/nvme: Simplify timestamp sum
On Oct 2 09:57, Philippe Mathieu-Daudé wrote: > As the 'timestamp' variable is declared as a 48-bit bitfield, > we do not need to wrap the sum result. > > Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Klaus Jensen > --- > hw/block/nvme.c | 7 +-- > 1 file changed, 1 insertion(+), 6 deletions(-) > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c > index 63078f6009..44fa5b9076 100644 > --- a/hw/block/nvme.c > +++ b/hw/block/nvme.c > @@ -1280,12 +1280,7 @@ static inline uint64_t nvme_get_timestamp(const > NvmeCtrl *n) > > union nvme_timestamp ts; > ts.all = 0; > - > -/* > - * If the sum of the Timestamp value set by the host and the elapsed > - * time exceeds 2^48, the value returned should be reduced modulo 2^48. > - */ > -ts.timestamp = (n->host_timestamp + elapsed_time) & 0xffffffffffff; > +ts.timestamp = n->host_timestamp + elapsed_time; > > /* If the host timestamp is non-zero, set the timestamp origin */ > ts.origin = n->host_timestamp ? 0x01 : 0x00; > -- > 2.26.2
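The claim in the patch can be checked in isolation: a sum assigned to a 48-bit bitfield is reduced modulo 2^48 with no explicit mask. A minimal sketch — the union layout and field names here are illustrative stand-ins, not QEMU's exact `union nvme_timestamp` definition:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for a 48-bit timestamp bitfield: assigning the
 * sum to the field already reduces it modulo 2^48, so an explicit
 * "& 0xffffffffffff" mask is redundant. */
union ts48 {
    uint64_t all;
    struct {
        uint64_t timestamp : 48;
        uint64_t rsvd      : 14;
        uint64_t origin    : 2;
    };
};

static uint64_t ts48_sum(uint64_t host_timestamp, uint64_t elapsed)
{
    union ts48 ts = { .all = 0 };
    ts.timestamp = host_timestamp + elapsed;   /* truncates to 48 bits */
    return ts.timestamp;
}
```

The interesting case is a sum crossing 2^48, which wraps exactly as the removed comment described.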
Re: [PATCH v7 11/13] util/async: Add aio_co_reschedule_self()
Additional nitpick detail on Kevin's request. Kevin Wolf writes: > Add a function that can be used to move the currently running coroutine > to a different AioContext (and therefore potentially a different > thread). > > Signed-off-by: Kevin Wolf > --- > include/block/aio.h | 10 ++ > util/async.c| 30 ++ > 2 files changed, 40 insertions(+) > > diff --git a/include/block/aio.h b/include/block/aio.h > index b2f703fa3f..c37617b404 100644 > --- a/include/block/aio.h > +++ b/include/block/aio.h > @@ -17,6 +17,7 @@ > #ifdef CONFIG_LINUX_IO_URING > #include > #endif > +#include "qemu/coroutine.h" > #include "qemu/queue.h" > #include "qemu/event_notifier.h" > #include "qemu/thread.h" > @@ -654,6 +655,15 @@ static inline bool aio_node_check(AioContext *ctx, bool > is_external) > */ > void aio_co_schedule(AioContext *ctx, struct Coroutine *co); > > +/** > + * aio_co_reschedule_self: > + * @new_ctx: the new context > + * > + * Move the currently running coroutine to new_ctx. If the coroutine is > already > + * running in new_ctx, do nothing. Wrapping the comment around column 70 or so would make it easier to read. Up to you. 
> + */ > +void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx); > + > /** > * aio_co_wake: > * @co: the coroutine > diff --git a/util/async.c b/util/async.c > index 4266745dee..a609e18693 100644 > --- a/util/async.c > +++ b/util/async.c > @@ -569,6 +569,36 @@ void aio_co_schedule(AioContext *ctx, Coroutine *co) > aio_context_unref(ctx); > } > > +typedef struct AioCoRescheduleSelf { > +Coroutine *co; > +AioContext *new_ctx; > +} AioCoRescheduleSelf; > + > +static void aio_co_reschedule_self_bh(void *opaque) > +{ > +AioCoRescheduleSelf *data = opaque; > +aio_co_schedule(data->new_ctx, data->co); > +} > + > +void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx) > +{ > +AioContext *old_ctx = qemu_get_current_aio_context(); > + > +if (old_ctx != new_ctx) { > +AioCoRescheduleSelf data = { > +.co = qemu_coroutine_self(), > +.new_ctx = new_ctx, > +}; > +/* > + * We can't directly schedule the coroutine in the target context > + * because this would be racy: The other thread could try to enter > the > + * coroutine before it has yielded in this one. > + */ Likewise. > +aio_bh_schedule_oneshot(old_ctx, aio_co_reschedule_self_bh, &data); > +qemu_coroutine_yield(); > +} > +} > + > void aio_co_wake(struct Coroutine *co) > { > AioContext *ctx;
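The race the in-code comment guards against can be modelled without QEMU's coroutine machinery. In this toy sketch (every name is invented for illustration), a "context" is just a FIFO of callbacks; queuing the hand-off on the *old* context guarantees it cannot run before the current "coroutine" has yielded:

```c
#include <assert.h>

typedef void (*cb_fn)(void *);

typedef struct {
    cb_fn fns[4];
    void *args[4];
    int head, tail;
} ToyCtx;

typedef struct {
    int yielded;
    int resumed_in_new_ctx;
} ToyCoro;

static ToyCtx old_ctx, new_ctx;

static void ctx_schedule(ToyCtx *ctx, cb_fn fn, void *arg)
{
    ctx->fns[ctx->tail] = fn;
    ctx->args[ctx->tail] = arg;
    ctx->tail++;
}

static void ctx_run(ToyCtx *ctx)
{
    while (ctx->head < ctx->tail) {
        int i = ctx->head++;
        ctx->fns[i](ctx->args[i]);
    }
}

static void enter_co(void *opaque)
{
    ToyCoro *co = opaque;
    assert(co->yielded);        /* entering before the yield would be the race */
    co->resumed_in_new_ctx = 1;
}

static void reschedule_bh(void *opaque)
{
    /* stands in for aio_co_schedule(new_ctx, co) from the BH */
    ctx_schedule(&new_ctx, enter_co, opaque);
}

static int reschedule_demo(void)
{
    ToyCoro co = { 0, 0 };
    ctx_schedule(&old_ctx, reschedule_bh, &co); /* aio_bh_schedule_oneshot() */
    co.yielded = 1;                             /* qemu_coroutine_yield() */
    ctx_run(&old_ctx);                          /* BH fires after the yield */
    ctx_run(&new_ctx);                          /* coroutine resumes there */
    return co.resumed_in_new_ctx;
}
```

Scheduling `enter_co` on the new context directly, before `co.yielded` is set, would trip the assertion — the ordering the bottom half provides.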
Re: [PATCH v7 06/13] qmp: Call monitor_set_cur() only in qmp_dispatch()
Markus Armbruster writes: > Kevin Wolf writes: [...] >> I'll just change this one in the next version. Changing a single >> well-known instance not a big problem. It's just unfortunate that there >> are "A few more in PATCH 08-11" and I don't know how to identify them. > > When I do that, and you'd rather have a complete list, just ask. Out of > time for today, but I can get it for you first thing tomorrow. > > [...] Done, with the detail level cranked up to "lots" ;)
[PATCH] hw/arm/fsl-imx25: Fix a typo
Signed-off-by: Philippe Mathieu-Daudé --- include/hw/arm/fsl-imx25.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/hw/arm/fsl-imx25.h b/include/hw/arm/fsl-imx25.h index 971f35dd16..c1603b2ac2 100644 --- a/include/hw/arm/fsl-imx25.h +++ b/include/hw/arm/fsl-imx25.h @@ -179,7 +179,7 @@ struct FslIMX25State { * 0xBB00_ 0xBB00_0FFF 4 Kbytes NAND flash main area buffer * 0xBB00_1000 0xBB00_11FF 512 BNAND flash spare area buffer * 0xBB00_1200 0xBB00_1DFF 3 Kbytes Reserved - * 0xBB00_1E00 0xBB00_1FFF 512 BNAND flash control regisers + * 0xBB00_1E00 0xBB00_1FFF 512 BNAND flash control registers * 0xBB01_2000 0xBFFF_ 96 Mbytes (minus 8 Kbytes) Reserved * 0xC000_ 0x_ 1024 Mbytes Reserved */ -- 2.26.2
[PATCH v5 1/1] accel/tcg: Fix computing of is_write for MIPS
Detect all MIPS store instructions in cpu_signal_handler for all available MIPS versions, and set is_write if encountering such store instructions. This fixed the error while dealing with self-modified code for MIPS. Reviewed-by: Richard Henderson Signed-off-by: Kele Huang Signed-off-by: Xu Zou --- accel/tcg/user-exec.c | 39 ++- 1 file changed, 38 insertions(+), 1 deletion(-) diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 5c96819ded..88eccf7900 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -702,6 +702,10 @@ int cpu_signal_handler(int host_signum, void *pinfo, #elif defined(__mips__) +#if defined(__misp16) || defined(__mips_micromips) +#error "Unsupported encoding" +#endif + int cpu_signal_handler(int host_signum, void *pinfo, void *puc) { @@ -709,9 +713,42 @@ int cpu_signal_handler(int host_signum, void *pinfo, ucontext_t *uc = puc; greg_t pc = uc->uc_mcontext.pc; int is_write; +uint32_t insn; -/* XXX: compute is_write */ +/* Detect all store instructions at program counter. */ is_write = 0; +insn = *(uint32_t *)pc; +switch((insn >> 26) & 077) { +case 050: /* SB */ +case 051: /* SH */ +case 052: /* SWL */ +case 053: /* SW */ +case 054: /* SDL */ +case 055: /* SDR */ +case 056: /* SWR */ +case 070: /* SC */ +case 071: /* SWC1 */ +case 074: /* SCD */ +case 075: /* SDC1 */ +case 077: /* SD */ +#if !defined(__mips_isa_rev) || __mips_isa_rev < 6 +case 072: /* SWC2 */ +case 076: /* SDC2 */ +#endif +is_write = 1; +break; +case 023: /* COP1X */ +/* Required in all versions of MIPS64 since + MIPS64r1 and subsequent versions of MIPS32r2. */ +switch (insn & 077) { +case 010: /* SWXC1 */ +case 011: /* SDXC1 */ +case 015: /* SUXC1 */ +is_write = 1; +} +break; +} + return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask); } -- 2.17.1
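The decoding scheme the patch relies on is that bits 31:26 of a classic (non-MIPS16, non-microMIPS) instruction hold the major opcode, which the `(insn >> 26) & 077` expression extracts in octal. A reduced sketch of that check — only a few of the patch's store opcodes are reproduced here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Major-opcode check as in the patch, abbreviated: the top six bits of
 * the instruction word select the operation. */
static bool mips_insn_is_store(uint32_t insn)
{
    switch ((insn >> 26) & 077) {
    case 050: /* SB */
    case 051: /* SH */
    case 053: /* SW */
    case 077: /* SD */
        return true;
    default:
        return false;
    }
}
```

For example, `sw $zero, 0($zero)` encodes as 0xAC000000 (major opcode 053), while the corresponding load `lw` (major opcode 043, 0x8C000000) is not a store.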
Re: [PATCH v2 1/2] doc: Remove texi referenced in qemu-img-cmds.hx and target/i386/cpu.c
OK, then skip this On Fri, Oct 2, 2020 at 1:15 PM Markus Armbruster wrote: > > Yonggang Luo writes: > > > There is no texi document anymore > > > > Signed-off-by: Yonggang Luo > > --- > > qemu-img-cmds.hx | 2 +- > > target/i386/cpu.c | 2 +- > > 2 files changed, 2 insertions(+), 2 deletions(-) > > > > diff --git a/qemu-img-cmds.hx b/qemu-img-cmds.hx > > index b89c019b76..cab8234235 100644 > > --- a/qemu-img-cmds.hx > > +++ b/qemu-img-cmds.hx > > @@ -1,5 +1,5 @@ > > HXCOMM Keep the list of subcommands sorted by name. > > -HXCOMM Use DEFHEADING() to define headings in both help text and texi > > +HXCOMM Use DEFHEADING() to define headings in both help text and rST > > HXCOMM Text between SRST and ERST are copied to rST version and > > HXCOMM discarded from C version > > HXCOMM DEF(command, callback, arg_string) is used to construct > > diff --git a/target/i386/cpu.c b/target/i386/cpu.c > > index f37eb7b675..f8231f56b6 100644 > > --- a/target/i386/cpu.c > > +++ b/target/i386/cpu.c > > @@ -4127,7 +4127,7 @@ static PropValue tcg_default_props[] = { > > * We resolve CPU model aliases using -v1 when using "-machine > > * none", but this is just for compatibility while libvirt isn't > > * adapted to resolve CPU model versions before creating VMs. > > - * See "Runnability guarantee of CPU models" at * qemu-deprecated.texi. > > + * See "Runnability guarantee of CPU models" at * deprecated.rst. > > */ > > X86CPUVersion default_cpu_version = 1; > > Duplicates my "[PATCH 0/2] Update leftover comments that mention > Texinfo", which Laurent has queued. > -- 此致 礼 罗勇刚 Yours sincerely, Yonggang Luo
Re: [PATCH v3 1/1] accel/tcg: Fix computing of is_write for MIPS
> +case 015: /* SDXC1 */ I just found a comment mistake about SUXC1, and I have rectified it and resent a new patch. On Tue, 29 Sep 2020 at 09:59, Kele Huang wrote: > Thank you so much! > > > On Mon, 28 Sep 2020 at 16:14, Aleksandar Markovic < > aleksandar.qemu.de...@gmail.com> wrote: > >> >> >> On Sunday, September 27, 2020, Kele Huang wrote: >> >>> Sorry about that, I only got maintainers by './scripts/get_maintainer.pl >>> -f accel/tcg/user-exec.c' and missed your advice about maintainers. >>> In another words, I thought I had Cc'ed the TCG MIPS maintainers. 😅 >>> And sorry to maintainers. 😅 >>> >> This is fine, Kele. :) >> >> The granularity of get_maintainer.py is at file level, so this is one of >> the cases where you can use your own judgement and include more email >> addresses, even though get_maintainer.py doesn't list them. >> get_maintainer.py is good most of the time, but not always. But not a big >> deal. >> >> Thanks for the patch! :) >> >> I expect Richard is going to include it in his next tcg queue. >> >> Yours, >> Aleksandar >> >> >>> On Sun, 27 Sep 2020 at 16:41, Philippe Mathieu-Daudé >>> wrote: >>> On 9/27/20 10:20 AM, Kele Huang wrote: > Detect all MIPS store instructions in cpu_signal_handler for all available > MIPS versions, and set is_write if encountering such store instructions. > > This fixed the error while dealing with self-modified code for MIPS. > > Reviewed-by: Richard Henderson > Signed-off-by: Kele Huang > Signed-off-by: Xu Zou I already Cc'ed the TCG MIPS maintainers twice for you, but you don't mind, so this time I won't insist. 
> --- > accel/tcg/user-exec.c | 39 ++- > 1 file changed, 38 insertions(+), 1 deletion(-) > > diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c > index bb039eb32d..9ecda6c0d0 100644 > --- a/accel/tcg/user-exec.c > +++ b/accel/tcg/user-exec.c > @@ -702,6 +702,10 @@ int cpu_signal_handler(int host_signum, void *pinfo, > > #elif defined(__mips__) > > +#if defined(__misp16) || defined(__mips_micromips) > +#error "Unsupported encoding" > +#endif > + > int cpu_signal_handler(int host_signum, void *pinfo, > void *puc) > { > @@ -709,9 +713,42 @@ int cpu_signal_handler(int host_signum, void *pinfo, > ucontext_t *uc = puc; > greg_t pc = uc->uc_mcontext.pc; > int is_write; > +uint32_t insn; > > -/* XXX: compute is_write */ > +/* Detect all store instructions at program counter. */ > is_write = 0; > +insn = *(uint32_t *)pc; > +switch((insn >> 26) & 077) { > +case 050: /* SB */ > +case 051: /* SH */ > +case 052: /* SWL */ > +case 053: /* SW */ > +case 054: /* SDL */ > +case 055: /* SDR */ > +case 056: /* SWR */ > +case 070: /* SC */ > +case 071: /* SWC1 */ > +case 074: /* SCD */ > +case 075: /* SDC1 */ > +case 077: /* SD */ > +#if !defined(__mips_isa_rev) || __mips_isa_rev < 6 > +case 072: /* SWC2 */ > +case 076: /* SDC2 */ > +#endif > +is_write = 1; > +break; > +case 023: /* COP1X */ > +/* Required in all versions of MIPS64 since > + MIPS64r1 and subsequent versions of MIPS32r2. */ > +switch (insn & 077) { > +case 010: /* SWXC1 */ > +case 011: /* SDXC1 */ > +case 015: /* SDXC1 */ > +is_write = 1; > +} > +break; > +} > + > return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask); > } > >
Re: [PATCH qemu 4/4] iotests: add test for bitmap mirror
On October 1, 2020 7:31 pm, Max Reitz wrote: > On 22.09.20 11:14, Fabian Grünbichler wrote: >> heavily based on/practically forked off iotest 257 for bitmap backups, >> but: >> >> - no writes to filter node 'mirror-top' between completion and >> finalization, as those seem to deadlock? >> - no inclusion of not-yet-available full/top sync modes in combination >> with bitmaps >> - extra set of reference/test mirrors to verify that writes in parallel >> with active mirror work >> >> intentionally keeping copyright and ownership of original test case to >> honor provenance. >> >> Signed-off-by: Fabian Grünbichler >> --- >> tests/qemu-iotests/306 | 546 +++ >> tests/qemu-iotests/306.out | 2846 >> tests/qemu-iotests/group |1 + >> 3 files changed, 3393 insertions(+) >> create mode 100755 tests/qemu-iotests/306 >> create mode 100644 tests/qemu-iotests/306.out >> >> diff --git a/tests/qemu-iotests/306 b/tests/qemu-iotests/306 >> new file mode 100755 >> index 00..1bb8bd4138 >> --- /dev/null >> +++ b/tests/qemu-iotests/306 > > [...] > >> +def test_bitmap_sync(bsync_mode, msync_mode='bitmap', failure=None): >> +""" >> +Test bitmap mirror routines. >> + >> +:param bsync_mode: Is the Bitmap Sync mode, and can be any of: >> +- on-success: This is the "incremental" style mode. Bitmaps are >> + synchronized to what was copied out only on success. >> + (Partial images must be discarded.) >> +- never: This is the "differential" style mode. >> + Bitmaps are never synchronized. >> +- always: This is a "best effort" style mode. >> + Bitmaps are always synchronized, regardless of >> failure. >> + (Partial images must be kept.) >> + >> +:param msync_mode: The mirror sync mode to use for the first mirror. >> + Can be any one of: >> +- bitmap: mirrors based on bitmap manifest. >> +- full: Full mirrors. >> +- top:Full mirrors of the top layer only. >> + >> +:param failure: Is the (optional) failure mode, and can be any of: >> +- None: No failure. Test the normative path. Default. 
>> +- simulated:Cancel the job right before it completes. >> +This also tests writes "during" the job. >> +- intermediate: This tests a job that fails mid-process and produces >> +an incomplete mirror. Testing limitations prevent >> +testing competing writes. >> +""" >> +with iotests.FilePath('img', 'bsync1', 'bsync2', 'bsync3', >> +'fmirror0', 'fmirror1', 'fmirror2', 'fmirror3') >> as \ > > The indentation is off now. > >> +(img_path, bsync1, bsync2, bsync3, >> + fmirror0, fmirror1, fmirror2, fmirror3), \ >> + iotests.VM() as vm: > Hm. On tmpfs, this test fails for me: > > ($ TEST_DIR=/tmp/iotest ./check -qcow2 306) > > @@ -170,7 +170,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 458752, > "granularity": 65536, > "persistent": false, > "recording": true, > @@ -464,7 +464,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 458752, > "granularity": 65536, > "persistent": false, > "recording": true, > @@ -760,7 +760,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 393216, > "granularity": 65536, > "persistent": false, > "recording": true, > @@ -1056,7 +1056,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 458752, > "granularity": 65536, > "persistent": false, > "recording": true, > @@ -1350,7 +1350,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 458752, > "granularity": 65536, > "persistent": false, > "recording": true, > @@ -2236,7 +2236,7 @@ > "drive0": [ >{ > "busy": false, > -"count": 262144, > +"count": 458752, > "granularity": 65536, > "persistent": false, > "recording": true, > > Can you see the same? yes, but also only on tmpfs. ext4, xfs, ZFS all work fine.. the numbers for tmpfs vary between runs, each wrong count is sometimes 393216 (256k expected + 128k extra), sometimes 458752 (+192k). it's always the dirty bitmap used by the mirror job which is 'wrong', not the passed-in sync bitmap which verifies correctly. 
the final mirror results also seem correct. for the first diff hunk (bitmap + never + simulated), we did - reference mirror #0 - add sync bitmap 'bitmap
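The mismatching numbers above differ by whole clusters: with the reported 64 KiB granularity, 262144 bytes is 4 dirty clusters, 393216 is 6, and 458752 is 7. A trivial sketch of that arithmetic, assuming @count reports the number of dirty bytes (dirty clusters times granularity):

```c
#include <stdint.h>

/* The @count of a dirty bitmap in query-block output is dirty bytes,
 * i.e. number of dirty clusters times the bitmap granularity. */
static uint64_t dirty_count(uint64_t dirty_clusters, uint64_t granularity)
{
    return dirty_clusters * granularity;
}
```

So the tmpfs runs see two or three extra 64 KiB clusters dirtied in the job's copy bitmap, not a corrupted count.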
Re: [PATCH qemu 2/4] drive-mirror: add support for conditional and always bitmap sync modes
On October 1, 2020 7:01 pm, Max Reitz wrote: > On 22.09.20 11:14, Fabian Grünbichler wrote: >> From: John Snow >> >> Teach mirror two new tricks for using bitmaps: >> >> Always: no matter what, we synchronize the copy_bitmap back to the >> sync_bitmap. In effect, this allows us resume a failed mirror at a later >> date, since the target bdrv should be exactly in the state referenced by >> the bitmap. >> >> Conditional: On success only, we sync the bitmap. This is akin to >> incremental backup modes; we can use this bitmap to later refresh a >> successfully created mirror, or possibly re-try the whole failed mirror >> if we are able to rollback the target to the state before starting the >> mirror. >> >> Based on original work by John Snow. >> >> Signed-off-by: Fabian Grünbichler >> --- >> block/mirror.c | 28 >> blockdev.c | 3 +++ >> 2 files changed, 23 insertions(+), 8 deletions(-) >> >> diff --git a/block/mirror.c b/block/mirror.c >> index d64c8203ef..bc4f4563d9 100644 >> --- a/block/mirror.c >> +++ b/block/mirror.c > > [...] > >> @@ -1781,8 +1793,8 @@ static BlockJob *mirror_start_job( >> } >> >> if (s->sync_mode == MIRROR_SYNC_MODE_BITMAP) { >> -bdrv_merge_dirty_bitmap(s->dirty_bitmap, s->sync_bitmap, >> -NULL, &local_err); >> +bdrv_dirty_bitmap_merge_internal(s->dirty_bitmap, s->sync_bitmap, >> + NULL, true); > > (Sorry for not catching it in the previous version, but) this hunk needs > to go into patch 1, doesn’t it? yes. this was a result of keeping the original patches #1 and #2, and doing the cleanup on-top as separate patches. I missed folding it into the first instead of (now combined) second patch. > Or, rather... Do we need it at all? Is there anything that would > prohibit just moving this merge call to before the set_busy call? > (Which again is probably something that should be done in patch 1.) 
> > (If you decide to keep calling *_internal, I think it would be nice to > add a comment above the call stating why we have to use *_internal here > (because the sync bitmap is busy now), and why it’s safe to do so > (because blockdev.c and/or mirror_start_job() have already checked the > bitmap). But if it’s possible to do the merge before marking the > sync_bitmap busy, we probably should rather do that.) I think it should be possible for this instance. for the other end of the job, merging back happens before setting the bitmap to un-busy, so we need to use _internal there. will add a comment for that one why we are allowed to do so. > >> if (local_err) { >> goto fail; >> } >> diff --git a/blockdev.c b/blockdev.c >> index 6baa1a33f5..0fd30a392d 100644 >> --- a/blockdev.c >> +++ b/blockdev.c >> @@ -3019,6 +3019,9 @@ static void blockdev_mirror_common(const char *job_id, >> BlockDriverState *bs, >> if (bdrv_dirty_bitmap_check(bitmap, BDRV_BITMAP_ALLOW_RO, errp)) { >> return; >> } >> +} else if (has_bitmap_mode) { >> +error_setg(errp, "Cannot specify bitmap sync mode without a >> bitmap"); >> +return; >> } > > This too I would move into patch 1. Ack.
Re: [PATCH qemu 1/4] drive-mirror: add support for sync=bitmap mode=never
On October 2, 2020 9:06 am, Markus Armbruster wrote: > Fabian Grünbichler writes: > >> From: John Snow >> >> This patch adds support for the "BITMAP" sync mode to drive-mirror and >> blockdev-mirror. It adds support only for the BitmapSyncMode "never," >> because it's the simplest mode. >> >> This mode simply uses a user-provided bitmap as an initial copy >> manifest, and then does not clear any bits in the bitmap at the >> conclusion of the operation. >> >> Any new writes dirtied during the operation are copied out, in contrast >> to backup. Note that whether these writes are reflected in the bitmap >> at the conclusion of the operation depends on whether that bitmap is >> actually recording! >> >> This patch was originally based on one by Ma Haocong, but it has since >> been modified pretty heavily. >> >> Suggested-by: Ma Haocong >> Signed-off-by: Ma Haocong >> Signed-off-by: John Snow >> Signed-off-by: Fabian Grünbichler >> --- > [...] >> diff --git a/qapi/block-core.json b/qapi/block-core.json >> index 2d94873ca0..dac5497084 100644 >> --- a/qapi/block-core.json >> +++ b/qapi/block-core.json >> @@ -1961,10 +1961,19 @@ >> #(all the disk, only the sectors allocated in the topmost image, or >> #only new I/O). >> # >> +# @bitmap: The name of a bitmap to use for sync=bitmap mode. This argument >> must >> +# be present for bitmap mode and absent otherwise. The bitmap's > > What is "bitmap mode"? Do you mean "present when @bitmap-mode is, else > absent"? bitmap mode is sync=bitmap , as in the first sentence. if you set sync=bitmap, you must specify a bitmap and a bitmap-mode. if you use another sync mode, you must not specify a bitmap or a bitmap-mode. there is also a 'sugar' sync mode 'incremental' that desugars to sync=bitmap with bitmap-mode=on-success. 
I guess that should also be mentioned somewhere in QAPI, it's mainly there since MirrorSyncMode has it as possible value, its semantics are straightforward to map onto this combination, and it's how the sync modes are known from backup jobs. maybe the following is easier to understand and more aligned with bitmap-mode: The name of the bitmap to use for sync=bitmap/sync=incremental mode. Must be present if sync is "bitmap" or "incremental". Must NOT be present otherwise. >> +# granularity is used instead of @granularity (since 5.2). >> +# >> +# @bitmap-mode: Specifies the type of data the bitmap should contain after >> +# the operation concludes. Must be present if sync is >> "bitmap". >> +# Must NOT be present otherwise. (Since 5.2) Specifies the type of data the bitmap should contain after the operation concludes. Must be present if sync is "bitmap". Must be "on-success" or absent if sync is "incremental". Must NOT be present otherwise. >> +# >> # @granularity: granularity of the dirty bitmap, default is 64K >> # if the image format doesn't have clusters, 4K if the >> clusters >> # are smaller than that, else the cluster size. Must be a >> -# power of 2 between 512 and 64M (since 1.4). >> +# power of 2 between 512 and 64M. Must not be specified if >> +# @bitmap is present (since 1.4). >> # > > Is @granularity forbidden with @bitmap because it makes no sense? yes. > > If yes, then it doesn't actually default to anything then, does it? we must use the same granularity as the sync bitmap passed in via 'bitmap', so the caller can't set a different one. > Looks like we have > > sync 'bitmap' anything else > - > bitmap required forbidden > bitmap-mode required forbidden > granularity forbidden optional > > Correct? yes. with the addition of sync=incremental as subset of sync=bitmap, as described above. > >> # @buf-size: maximum amount of data in flight from source to >> #target (since 1.4).
>> @@ -2002,7 +2011,9 @@ >> { 'struct': 'DriveMirror', >>'data': { '*job-id': 'str', 'device': 'str', 'target': 'str', >> '*format': 'str', '*node-name': 'str', '*replaces': 'str', >> -'sync': 'MirrorSyncMode', '*mode': 'NewImageMode', >> +'sync': 'MirrorSyncMode', '*bitmap': 'str', >> +'*bitmap-mode': 'BitmapSyncMode', >> +'*mode': 'NewImageMode', >> '*speed': 'int', '*granularity': 'uint32', >> '*buf-size': 'int', '*on-source-error': 'BlockdevOnError', >> '*on-target-error': 'BlockdevOnError', > [Same for blockdev-mirror...] > > > >
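The argument matrix worked out in the thread can be written down as a small validity check. This is a sketch with invented names; sync=incremental, being sugar for sync=bitmap with bitmap-mode=on-success, is folded into the bitmap case:

```c
#include <stdbool.h>

/* sync=bitmap requires @bitmap and @bitmap-mode and forbids
 * @granularity; every other sync mode forbids the bitmap arguments
 * and leaves @granularity optional. */
static bool mirror_args_valid(bool sync_is_bitmap, bool has_bitmap,
                              bool has_bitmap_mode, bool has_granularity)
{
    if (sync_is_bitmap) {
        return has_bitmap && has_bitmap_mode && !has_granularity;
    }
    return !has_bitmap && !has_bitmap_mode;
}
```

This mirrors the checks blockdev_mirror_common() is expected to perform before the job is created.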
Re: [PATCH 3/9] hw/block/nvme: support per-namespace smart log
On Sep 30 15:04, Keith Busch wrote: > Let the user specify a specific namespace if they want to get access > stats for a specific namespace. > > Signed-off-by: Keith Busch > --- > hw/block/nvme.c | 66 +++- > include/block/nvme.h | 1 + > 2 files changed, 41 insertions(+), 26 deletions(-) > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c > index 8d2b5be567..41389b2b09 100644 > --- a/hw/block/nvme.c > +++ b/hw/block/nvme.c > static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, > uint64_t off, NvmeRequest *req) > { > uint32_t nsid = le32_to_cpu(req->cmd.nsid); > - > +struct nvme_stats stats = { 0 }; > +NvmeSmartLog smart = { 0 }; > uint32_t trans_len; > +NvmeNamespace *ns; > time_t current_ms; > -uint64_t units_read = 0, units_written = 0; > -uint64_t read_commands = 0, write_commands = 0; > -NvmeSmartLog smart; > - > -if (nsid && nsid != 0xffffffff) { > -return NVME_INVALID_FIELD | NVME_DNR; > -} > > if (off >= sizeof(smart)) { > return NVME_INVALID_FIELD | NVME_DNR; > } > > -for (int i = 1; i <= n->num_namespaces; i++) { > -NvmeNamespace *ns = nvme_ns(n, i); > -if (!ns) { > -continue; > -} > - > -BlockAcctStats *s = blk_get_stats(ns->blkconf.blk); > +if (nsid != 0xffffffff) { > +ns = nvme_ns(n, nsid); > +if (!ns) > +return NVME_INVALID_NSID | NVME_DNR; Btw, this is failing style check (missing braces).
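The behavioural change of the patch — per-namespace counters for a concrete NSID, aggregation only for the broadcast NSID 0xffffffff — can be sketched as follows (type, field, and function names are invented for illustration):

```c
#include <stdint.h>

typedef struct {
    uint64_t units_read;
} ToyNsStats;

/* With a concrete NSID, report that namespace's counter alone; with
 * the broadcast NSID 0xffffffff, sum over all namespaces as before. */
static uint64_t smart_units_read(const ToyNsStats *ns, uint32_t num_ns,
                                 uint32_t nsid)
{
    if (nsid != 0xffffffff) {
        return ns[nsid - 1].units_read;   /* NSIDs are 1-based */
    }
    uint64_t total = 0;
    for (uint32_t i = 0; i < num_ns; i++) {
        total += ns[i].units_read;
    }
    return total;
}
```

The old code only accepted nsid == 0 or the broadcast value; the new path additionally resolves a single namespace and fails with NVME_INVALID_NSID if it does not exist (not modelled here).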
Re: [PATCH v4 1/1] qapi: Restrict code generated for user-mode
On 02/10/20 09:36, Philippe Mathieu-Daudé wrote: > A lot of QAPI generated code is never used by user-mode. > > Split out qapi_system_modules and qapi_system_or_tools_modules > from the qapi_all_modules array. We now have 4 groups: > - always used > - only used by system-mode > - not used by user-mode > - not used by tools > > Signed-off-by: Philippe Mathieu-Daudé > --- > qapi/meson.build | 51 ++-- > 1 file changed, 36 insertions(+), 15 deletions(-) > > diff --git a/qapi/meson.build b/qapi/meson.build > index 7c4a89a882..10cf01ef65 100644 > --- a/qapi/meson.build > +++ b/qapi/meson.build > @@ -14,39 +14,60 @@ util_ss.add(files( > )) > > qapi_all_modules = [ > + 'common', > + 'introspect', > + 'misc', > +] > + > +qapi_system_modules = [ >'acpi', >'audio', > + 'dump', > + 'machine-target', > + 'migration', > + 'misc-target', > + 'net', > + 'pci', > + 'rdma', > + 'rocker', > + 'tpm', > + 'trace', > +] > + > +qapi_system_or_user_modules = [ > + 'machine', # X86CPUFeatureWordInfo > + 'qdev', > +] > + > +qapi_system_or_tools_modules = [ >'authz', >'block-core', >'block', >'char', > - 'common', >'control', >'crypto', > - 'dump', >'error', > - 'introspect', >'job', > - 'machine', > - 'machine-target', > - 'migration', > - 'misc', > - 'misc-target', > - 'net', >'pragma', > - 'qdev', > - 'pci', >'qom', > - 'rdma', > - 'rocker', >'run-state', >'sockets', > - 'tpm', > - 'trace', >'transaction', >'ui', > ] > > +if have_system > + qapi_all_modules += qapi_system_modules > +endif > + > +if have_system or have_user > + qapi_all_modules += qapi_system_or_user_modules > +endif > + > +if have_system or have_tools > + qapi_all_modules += qapi_system_or_tools_modules > +endif > + > qapi_storage_daemon_modules = [ >'block-core', >'char', > Reviewed-by: Paolo Bonzini
Re: [PATCH] elfload: use g_new instead of malloc
Thomas Huth writes: > On 02/10/2020 07.05, Markus Armbruster wrote: >> Elena Afanasova writes: >> >>> Signed-off-by: Elena Afanasova >>> --- >>> bsd-user/elfload.c | 92 +++--- >>> 1 file changed, 30 insertions(+), 62 deletions(-) >>> >>> diff --git a/bsd-user/elfload.c b/bsd-user/elfload.c >>> index 32378af7b2..e10ca54eb7 100644 >>> --- a/bsd-user/elfload.c >>> +++ b/bsd-user/elfload.c >>> @@ -867,18 +867,14 @@ static abi_ulong load_elf_interp(struct elfhdr * >>> interp_elf_ex, >>> if (sizeof(struct elf_phdr) * interp_elf_ex->e_phnum > >>> TARGET_PAGE_SIZE) >>> return ~(abi_ulong)0UL; >>> >>> -elf_phdata = (struct elf_phdr *) >>> -malloc(sizeof(struct elf_phdr) * interp_elf_ex->e_phnum); >>> - >>> -if (!elf_phdata) >>> - return ~((abi_ulong)0UL); >>> +elf_phdata = g_new(struct elf_phdr, interp_elf_ex->e_phnum); >>> >>> /* >>> * If the size of this structure has changed, then punt, since >>> * we will be doing the wrong thing. >>> */ >>> if (interp_elf_ex->e_phentsize != sizeof(struct elf_phdr)) { >>> -free(elf_phdata); >>> +g_free(elf_phdata); >>> return ~((abi_ulong)0UL); >>> } >>> >>> @@ -890,9 +886,8 @@ static abi_ulong load_elf_interp(struct elfhdr * >>> interp_elf_ex, >>> } >>> if (retval < 0) { >>> perror("load_elf_interp"); >>> +g_free(elf_phdata); >>> exit(-1); >>> -free (elf_phdata); >>> -return retval; >> >> Deleting return looks wrong. > > Why? There is an exit(-1) right in front of it, so this is dead code... > well, maybe that should be done in a separate patch, or at least > mentioned in the patch description, though. You're right; I missed the exit(-1). Following the unpleasant odour spread by exit(-1)... Aha, the function's behavior on error is inconsistent: sometimes it returns zero, sometimes it exits. Since the problem is bigger than just one dead return, I recommend leaving it to a separate patch, and keeping this one focused on g_new().
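For readers unfamiliar with the GLib allocators: g_new() aborts on allocation failure instead of returning NULL, which is what lets the patch drop the malloc() NULL checks. An illustrative stand-in (names are invented; this is not GLib's implementation):

```c
#include <stdlib.h>

/* Like g_malloc(): out-of-memory is fatal, so callers never need a
 * NULL check. */
static void *sketch_malloc_abort(size_t size)
{
    void *p = malloc(size);
    if (!p && size) {
        abort();
    }
    return p;
}

/* Like g_new(type, n): typed allocation of n elements. */
#define sketch_new(type, n) \
    ((type *)sketch_malloc_abort(sizeof(type) * (n)))
```

This also explains why the dead `return ~((abi_ulong)0UL)` after the old `malloc()` failure path can simply disappear.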
[Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
Thanks Lee, builds are most reliably done in Launchpad IMHO and the Ubuntu Delta is a quilt stack which isn't as easily bisectable. If we end up searching not between Ubuntu Delta I can help to get you qemu built from git for that. But for some initial probing which range of changes we want to actually look at let me provide you PPAs. Try #1 in PPA [1] is the fix you referred [2] to that avoids the double delete to cause unexpected results. It will also add the warnings you have seen. So giving it a test should be a great first try. Let me know what the results with that are. [1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4294/+packages [2]: https://github.com/qemu/qemu/commit/cce8944cc9efab47d4bf29cfffb3470371c3541b -- You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/1894804 Title: Second DEVICE_DELETED event missing during virtio-blk disk device detach Status in QEMU: New Status in qemu package in Ubuntu: Incomplete Bug description: We are in the process of moving OpenStack CI across to use 20.04 Focal as the underlying OS and encountering the following issue in any test attempting to detach disk devices from running QEMU instances. We can see a single DEVICE_DELETED event raised to libvirtd for the /machine/peripheral/virtio-disk1/virtio-backend device but we do not see a second event for the actual disk. As a result the device is still marked as present in libvirt but QEMU reports it as missing in subsequent attempts to remove the device.
The following log snippets can also be found in this pastebin, which is slightly easier to grok: http://paste.openstack.org/show/797564/

https://review.opendev.org/#/c/746981/ libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260 nova-ceph-multistore

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

ii  libvirt-daemon                 6.0.0-0ubuntu8.3  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu     6.0.0-0ubuntu8.3  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system          6.0.0-0ubuntu8.3  amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd  6.0.0-0ubuntu8.3  amd64  Libvirt daemon configuration files (systemd)
ii  libvirt-dev:amd64              6.0.0-0ubuntu8.3  amd64  development files for the libvirt library
ii  libvirt0:amd64                 6.0.0-0ubuntu8.3  amd64  library for interfacing with different virtualization systems
[..]
ii  qemu-block-extra:amd64         1:4.2-3ubuntu6.4  amd64  extra block backend modules for qemu-system and qemu-utils
ii  qemu-slof                      20191209+dfsg-1   all    Slimline Open Firmware -- QEMU PowerPC version
ii  qemu-system                    1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries
ii  qemu-system-arm                1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (arm)
ii  qemu-system-common             1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-data               1:4.2-3ubuntu6.4  all    QEMU full system emulation (data files)
ii  qemu-system-mips               1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (mips)
ii  qemu-system-misc               1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (miscellaneous)
ii  qemu-system-ppc                1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (ppc)
ii  qemu-system-s390x              1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (s390x)
ii  qemu-system-sparc              1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (sparc)
ii  qemu-system-x86                1:4.2-3ubuntu6.4  amd64  QEMU full system emulation binaries (x86)
ii  qemu-utils                     1:4.2-3ubuntu6.4  amd64  QEMU utilities

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu/instance-003a_log.txt

2020-09-07 19:29:55.021+: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers Thu, 30 Jul 2020 06:40:28 -
KVM call for agenda for 2020-10-06
Hi

Please send any topic that you are interested in covering. At the end of Monday I will send an email with the agenda or the cancellation of the call, so hurry up.

For this call, we have an agenda!! John Snow wants to talk about his new (and exciting) developments with x-configure. Stay tuned.

After discussions at the QEMU Summit, we are going to always have an open KVM call where you can add topics.

Call details: by popular demand, a google calendar public entry with it

https://www.google.com/calendar/embed?src=dG9iMXRqcXAzN3Y4ZXZwNzRoMHE4a3BqcXNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ

(Let me know if you have any problems with the calendar entry. I just gave up on getting CEST, CET, EDT and DST right at the same time.)

If you need phone number details, contact me privately.

Thanks, Juan.
[PATCH v2] ppc/pnv: Increase max firmware size
Builds enabling GCOV can be bigger than 4MB and the limit on FSP systems is 16MB.

Signed-off-by: Cédric Le Goater
---
 hw/ppc/pnv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 6670967e26a9..d9e52873ea70 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -61,7 +61,7 @@
 #define FW_FILE_NAME        "skiboot.lid"
 #define FW_LOAD_ADDR        0x0
-#define FW_MAX_SIZE         (4 * MiB)
+#define FW_MAX_SIZE         (16 * MiB)
 #define KERNEL_LOAD_ADDR    0x2000
 #define KERNEL_MAX_SIZE     (256 * MiB)
--
2.25.4
Re: [PATCH] ppc/pnv: Increase max firware size
On 10/2/20 9:26 AM, Greg Kurz wrote:
> On Fri, 2 Oct 2020 08:15:46 +0200 Cédric Le Goater wrote:
>
>> That was sent a bit fast. Can you please add in the commit log:
>>
>>   Builds enabling GCOV can be bigger than 4MB and the limit on FSP
>>   systems is 16MB.
>>
>> Thanks
>>
>
> And also s/firware/firmware in the title.

bah. sending a v2.

Thanks,

C.
[PATCH] gitlab: split deprecated job into build/check stages
While the job is pretty fast for only a few targets we still want to catch breakage of the build. By splitting the test step we can allow_failure for that while still ensuring we don't miss the build breaking.

Signed-off-by: Alex Bennée
---
 .gitlab-ci.yml | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 346f23acf7..a51c89554f 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -270,9 +270,24 @@ build-deprecated:
   variables:
     IMAGE: debian-all-test-cross
     CONFIGURE_ARGS: --disable-docs --disable-tools
-    MAKE_CHECK_ARGS: check-tcg
+    MAKE_CHECK_ARGS: build-tcg
     TARGETS: ppc64abi32-linux-user tilegx-linux-user lm32-softmmu unicore32-softmmu
+  artifacts:
+    expire_in: 2 days
+    paths:
+      - build
+
+# We split the check-tcg step as test failures are expected but we still
+# want to catch the build breaking.
+check-deprecated:
+  <<: *native_test_job_definition
+  needs:
+    - job: build-deprecated
+      artifacts: true
+  variables:
+    IMAGE: debian-all-test-cross
+    MAKE_CHECK_ARGS: check-tcg
   allow_failure: true

 build-oss-fuzz:
--
2.20.1
Re: [PATCH v4 1/1] qapi: Restrict code generated for user-mode
On 10/2/20 9:36 AM, Philippe Mathieu-Daudé wrote:
> A lot of QAPI generated code is never used by user-mode.
>
> Split out qapi_system_modules and qapi_system_or_tools_modules
> from the qapi_all_modules array. We now have 4 groups:
> - always used
> - only used by system-mode
> - not used by user-mode
> - not used by tools
>
> Signed-off-by: Philippe Mathieu-Daudé
> ---
>  qapi/meson.build | 51 ++--
>  1 file changed, 36 insertions(+), 15 deletions(-)
>
> diff --git a/qapi/meson.build b/qapi/meson.build
> index 7c4a89a882..10cf01ef65 100644
> --- a/qapi/meson.build
> +++ b/qapi/meson.build
> @@ -14,39 +14,60 @@ util_ss.add(files(
>  ))
>
>  qapi_all_modules = [
> +  'common',
> +  'introspect',
> +  'misc',
> +]
> +
> +qapi_system_modules = [
>    'acpi',
>    'audio',
> +  'dump',
> +  'machine-target',
> +  'migration',
> +  'misc-target',
> +  'net',
> +  'pci',
> +  'rdma',
> +  'rocker',
> +  'tpm',
> +  'trace',
> +]
> +
> +qapi_system_or_user_modules = [
> +  'machine', # X86CPUFeatureWordInfo
> +  'qdev',

I forgot the justification for this one:

  'qdev', # DEVICE_DELETED

> +]
> +
> +qapi_system_or_tools_modules = [
>    'authz',
>    'block-core',
>    'block',
>    'char',
> -  'common',
>    'control',
>    'crypto',
> -  'dump',
>    'error',
> -  'introspect',
>    'job',
> -  'machine',
> -  'machine-target',
> -  'migration',
> -  'misc',
> -  'misc-target',
> -  'net',
>    'pragma',
> -  'qdev',
> -  'pci',
>    'qom',
> -  'rdma',
> -  'rocker',
>    'run-state',
>    'sockets',
> -  'tpm',
> -  'trace',
>    'transaction',
>    'ui',
>  ]
>
> +if have_system
> +  qapi_all_modules += qapi_system_modules
> +endif
> +
> +if have_system or have_user
> +  qapi_all_modules += qapi_system_or_user_modules
> +endif
> +
> +if have_system or have_tools
> +  qapi_all_modules += qapi_system_or_tools_modules
> +endif
> +
>  qapi_storage_daemon_modules = [
>    'block-core',
>    'char',
>
Re: [PATCH v4 04/46] qapi: modify docstrings to be sphinx-compatible
John Snow writes:

> On 10/1/20 4:52 AM, Markus Armbruster wrote:
>> John Snow writes:
>>
>>> On 9/30/20 4:47 AM, Markus Armbruster wrote:
>>>> John Snow writes:
>>>>
>>>>> I did not say "sphinx beautiful", just "sphinx compatible". They will
>>>>> not throw errors when parsed and interpreted as ReST.
>>>>
>>>> "Bang on the keyboard until Sphinx doesn't throw errors anymore" might
>>>> be good enough for a certain kind of mathematician, but a constructive
>>>> solution needs a bit more direction. Is there a specification to
>>>> follow? Other useful resources?
>>>
>>> I don't know if you are asking this question rhetorically, or in good
>>> faith.
>>
>> I ask to make sure I understand goals and limitations of your doc string
>> work in this series.
>>
>> Also, even a passing to Sphinx becomes more useful when accompanied by a
>> link to relevant documentation.
>>
>>> Let me preface this by saying: This series, and these 119 patches, are
>>> not about finding a style guide for our docstring utilization or about
>>> proposing one. It is also not about rigorously adding such
>>> documentation or about finding ways to meaningfully publish it with
>>> e.g. Sphinx, or the styling of such pages.
>>>
>>> Why bother to add docstrings at all, then? Because I needed them for my
>>> own sake when learning this code and I felt it would be a waste to just
>>> delete them, and I am of sound mind and able body and believe that some
>>> documentation was better than none. They are useful even just as
>>> plaintext.
>>>
>>> Having said that, let's explore the actual style I tend to use.
>>>
>>> I mentioned before in response to a review comment that there isn't
>>> really any standard for docstrings. There are a few competing "styles",
>>> but none of which are able to be programmatically checked and
>>> validated.
>>>
>>> The primary guide for docstrings is PEP 257, of which I follow some but
>>> not all of the recommendations.
>>>
>>> https://www.python.org/dev/peps/pep-0257/
>>
>> I find PEP 257 frustrating. It leaves me with more questions than
>> answers.
>
> Yeah, sorry. That's what we're dealing with. It's very open-ended.
>
>>> In general,
>>>
>>> - Always use triple-double quotes (unenforced)
>>> - Modules, classes, and functions should have docstrings (pylint)
>>> - No empty lines before or after the docstring (unenforced)
>>> - Multi-line docstrings should take the form (unenforced):
>>>
>>>   """
>>>   single-line summary of function.
>>>
>>>   Additional prose if needed describing the function.
>>>
>>>   :param x: Input such that blah blah.
>>>   :raises y: When input ``x`` is unsuitable because blah blah.
>>>   :returns: A value that blah blah.
>>
>> This paragraph is already not PEP 257.
>>
>
> Right -- well, it isn't NOT PEP 257 either. It just suggests you have to
> describe these features, but it doesn't say HOW.

Yep. Frustrating.

>>>   """
>>>
>>> PEP257 suggests a form where the single-line summary appears on the
>>> same line as the opening triple quotes. I don't like this, and prefer
>>> symmetry. PEP257 *also* suggests that writing it my way is equivalent
>>> to their way, because any docstring processor should trim the first
>>> line. I take this as tacit admission that my way is acceptable and has
>>> merit.
>>
>> I prefer the symmetric form myself.
>>
>>> What about the syntax or markup inside the docstring itself? there is
>>> *absolutely no standard*, but Sphinx autodoc recognizes a few field
>>> lists as significant in its parsing, so I prefer using them:
>>
>> Doc link?
>
> Hard to search for in my opinion;

More reason to provide a link!

> you want to search for "sphinx python domain", and then click on "field
> lists" on the sidebar.
>
> https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#info-field-lists
>
> It has a special understanding for:
>
> param/parameter/arg/argument/key/keyword: I prefer "param" here.
> Possibly key/keyword if we use a **kwargs form with a key that we
> specially recognize, but I've not tested that yet. I know pycharm
> understands "param" in a semantic way, and that's been good enough for
> me.
>
> type: Defines the type of a parameter. In my opinion, do not use this.
> Let native type hints do the lifting.

Agree.

> raises/raise/except/exception: I prefer "raises". "raises ErrorType
> when" is a good sentence.
>
> var/ivar/cvar: Describes a variable, presumably in the body of the
> function below. I've never used this, I always describe it in prose
> instead.
>
> vartype: Defines a type for a variable; I would again defer to the
> native type system instead now.
>
> returns/return: I prefer "returns" for grammatical reasons again.
> ("Returns such-and-such when...")

"Return such-and-such when..." is just as correct: imperative mood. I prefer imperative mood for function contracts.

> rtype: again, type information. Don't use.
>
> meta: For directives to sphinx, e.g. :meta private: or :meta public:
> to tog
[Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
Thanks Christian, I'll confirm that we see the error from [2] with that PPA shortly.

In the meantime I've removed the device detach retry logic from OpenStack Nova and now always see two DEVICE_DELETED events raised by QEMU after a single device_del request from libvirt:

http://paste.openstack.org/show/798637/

# zgrep 'debug.*"event": "DEVICE_DELETED"' libvirtd_log.txt.gz
2020-10-01 21:37:33.180+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588253, "microseconds": 179881}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:37:33.242+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588253, "microseconds": 242565}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]
2020-10-01 21:40:17.062+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588417, "microseconds": 62704}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:40:17.114+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588417, "microseconds": 113888}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]
2020-10-01 21:40:20.911+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588420, "microseconds": 911560}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:40:20.985+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588420, "microseconds": 985753}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]
2020-10-01 21:42:53.528+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588573, "microseconds": 528330}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:42:53.583+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588573, "microseconds": 583063}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]
2020-10-01 21:44:21.156+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588661, "microseconds": 155903}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:44:21.214+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588661, "microseconds": 213714}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]
2020-10-01 21:45:55.422+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588755, "microseconds": 422238}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:45:55.479+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588755, "microseconds": 479250}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]
2020-10-01 21:46:09.001+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588769, "microseconds": 970}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:46:09.059+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588769, "microseconds": 58799}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]
2020-10-01 21:48:05.182+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 160155, "microseconds": 182382}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:48:05.246+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 160155, "microseconds": 246077}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]
2020-10-01 21:48:08.211+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 160158, "microseconds": 211386}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:48:08.269+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 160158, "microseconds": 269371}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]
2020-10-01 21:51:58.384+: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589118, "microseconds": 384455}, "event": "DE
Re: [PATCH] gitlab: split deprecated job into build/check stages
On 02/10/2020 11.15, Alex Bennée wrote: > While the job is pretty fast for only a few targets we still want to > catch breakage of the build. By splitting the test step we can > allow_failures for that while still ensuring we don't miss the build > breaking. Is there already something broken? Otherwise, I'd rather simply remove the "allow_failure: true" tag instead ... these jobs otherwise tend to get ignored. > Signed-off-by: Alex Bennée > --- > .gitlab-ci.yml | 17 - > 1 file changed, 16 insertions(+), 1 deletion(-) > > diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml > index 346f23acf7..a51c89554f 100644 > --- a/.gitlab-ci.yml > +++ b/.gitlab-ci.yml > @@ -270,9 +270,24 @@ build-deprecated: >variables: > IMAGE: debian-all-test-cross > CONFIGURE_ARGS: --disable-docs --disable-tools > -MAKE_CHECK_ARGS: check-tcg > +MAKE_CHECK_ARGS: build-tcg > TARGETS: ppc64abi32-linux-user tilegx-linux-user lm32-softmmu >unicore32-softmmu > + artifacts: > +expire_in: 2 days > +paths: > + - build > + > +# We split the check-tcg step as test failures are expected but we still > +# want to catch the build breaking. > +check-deprecated: > + <<: *native_test_job_definition > + needs: > +- job: build-deprecated > + artifacts: true > + variables: > +IMAGE: debian-all-test-cross > +MAKE_CHECK_ARGS: check-tcg Anyway, that's better than before, so: Reviewed-by: Thomas Huth
Re: [PATCH v4] introduce vfio-user protocol specification
> On Sep 30, 2020, at 3:24 PM, Stefan Hajnoczi wrote: > > On Tue, Sep 29, 2020 at 09:21:54AM -0700, John G Johnson wrote: >>> On Sep 29, 2020, at 3:37 AM, Stefan Hajnoczi wrote: >>> >>> On Mon, Sep 28, 2020 at 09:58:37AM +, Thanos Makatos wrote: > It should be accompanied by a test in tests/. PCI-level testing APIS for > BARs, configuration space, interrupts, etc are available in > tests/qtest/libqos/pci.h. The test case needs to include a vfio-user > device backend interact with QEMU's vfio-user-pci implementation. We plan to use a libmuser-based backend for testing. This, I suppose, will make libmuser a dependency of QEMU (either as a submodule or as a library), which for now can be disabled in the default configuration. Is this acceptable? >>> >>> If there are no other dependencies and libmuser supports all host >>> operating systems that QEMU's -device vfio-user supports, then I think >>> it's a good idea to use libmuser for at least one in-tree test in QEMU. >>> > Also please let us know who is working on what so additional people can > get involved in areas that need work! Swapnil and I will be working on libmuser and the test in QEMU, John and the mp-qemu folks will be working on the patches for implementing --device vfio-user-pci. >>> >>> Great! >>> >>> John: Will mpqemu use libmuser to implement the remote PCI host >>> controller? >>> >> >> >> The vfio-user-pci plan is to use libmuser on the server side. > > Okay. Using libmuser in tests seems like a good choice in that case. > > We'll need to figure out the details of how to do it because the > traditional shared library dependency approach is not well-suited to > in-development code. It would involve shipping libmuser distro packages > so QEMU's build system can declare a library dependency (with details > provided in a pkg-config file). > > Here are approaches that are better for in-development libraries: > 1. Keep the libmuser code in qemu.git. > 2. 
A copy of libmuser in qemu.git with changes being sent upstream > (allows more flexibility in case QEMU-specific issues require > experimentation). > 3. Git submodules. > > #1 if you're happy to use the QEMU development process for merging > libmuser code then it's easiest to officially host the code in qemu.git. > libmuser gets a subdirectory in the qemu.git tree and you (the > maintainers) send pull requests. A libmuser library build target > provides installable static and shared libraries so external > applications can link against libmuser too. The big advantage here is > that QEMU can instantly use the latest libmuser code changes. I think there's a couple of limitations here which we should keep in mind. 1. Does putting it in qemu.git precludes it being BSD-3? There's been evidence of people using (or at least trying out) muser from where it currently lives. That doesn't mean we can't move it, but I'm wondering if it means we have to make it GPL. 2. What about other projects that need libmuser code? What worries me more is projects like SPDK/DPDK wanting to link against the library and having to clone the entire QEMU repo as a submodule. That sounds a lot more expensive than option 3 and probably have further complications if they aren't GPL. > > #2 works best if the library is a small (just a few source files) with > no fancy build system requirements. The risk here is that they go out of sync. There's the same (or even more) maintenance burden as point 3 below, with the added risk that someone could patch the files and make cherry-picks non-trivial. > > #3 is used in QEMU for several other components. Submodules are a pain > to sync (requires sending a qemu.git patch to move to a new commit ID), > so this isn't good for a dependency that moves quickly. I argue this is no worse than option 2. It's what I think aligns best, but let's keep weighing pros/cons and come to a conclusion together. 
The list of maintainers for muser.git should be extended to include more QEMU stakeholders and probably other projects that will use it (as) heavily. The topic has been raised in SPDK's Slack team on whether the client library should live in a repo of its own (eg. libvfio-user.git). Given the reference implementation is in libmuser, I still think muser.git is accurate (but can easily be persuaded otherwise). Cheers, Felipe > > Stefan
Re: ARM semihosting issue
On Thu, 1 Oct 2020 at 22:21, Bruno Prado wrote:
> Thanks for the reply... I am attaching some code and output:
>
> #include <stdio.h>
> int main() {
>     char name[50] = "Nobody";
>     FILE* file = fopen("name", "r");
>     printf("What is your name?\n");
>     fprintf(stdout, "Reading from file...\n");
>     fscanf(file, "%s", name);
>     fscanf(stdin, "%s", name);
>     printf("My name is %s\n", name);
>     fprintf(stderr, "I am alive!!!\n");
>     fclose(file);
>     return 0;
> }

This is not making direct semihosting calls. The behaviour of these function calls will depend on whatever the C standard library implementation you're linking with is doing.

You're not checking for errors from any of your function calls, incidentally.

thanks
-- PMM
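To illustrate that last point, here is a sketch of the same file-reading logic with the missing error checks added. This is plain hosted C, nothing here is semihosting-specific; the helper name read_name() is invented for the example, and the file name "name" is carried over from the test program above.

```c
#include <stdio.h>

/* Read the first whitespace-delimited token from `path` into a
 * caller-supplied 50-byte buffer, checking every call for failure
 * instead of assuming success. Returns 0 on success, -1 on error. */
static int read_name(const char *path, char name[50])
{
    FILE *file = fopen(path, "r");
    if (!file) {                    /* fopen returns NULL on error */
        perror("fopen");
        return -1;
    }
    int ok = (fscanf(file, "%49s", name) == 1);  /* check conversion count */
    fclose(file);
    return ok ? 0 : -1;
}
```

A caller would then fall back to a default on failure rather than passing a NULL FILE* to fscanf(), which is undefined behaviour and a likely crash in the original program when "name" does not exist.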
[PATCH v2] gitlab: move linux-user plugins test across to gitlab
Even with the recent split moving beefier plugins into contrib and dropping them from the check-tcg tests we are still hitting time limits. This possibly points to a slow down of --debug-tcg but seeing as we are migrating stuff to gitlab we might as well move there and bump the timeout.

Signed-off-by: Alex Bennée

---
v2
  - use timeout instead of split build
---
 .gitlab-ci.yml | 11 +++++++++++
 .travis.yml    | 11 -----------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index c265e7f8ab..346f23acf7 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -244,6 +244,17 @@ build-user:
     CONFIGURE_ARGS: --disable-tools --disable-system
     MAKE_CHECK_ARGS: check-tcg

+# Run check-tcg against linux-user (with plugins)
+# we skip sparc64-linux-user until it has been fixed somewhat
+# we skip cris-linux-user as it doesn't use the common run loop
+build-user-plugins:
+  <<: *native_build_job_definition
+  variables:
+    IMAGE: debian-all-test-cross
+    CONFIGURE_ARGS: --disable-tools --disable-system --enable-plugins --enable-debug-tcg --target-list-exclude=sparc64-linux-user,cris-linux-user
+    MAKE_CHECK_ARGS: check-tcg
+  timeout: 1h 30m
+
 build-clang:
   <<: *native_build_job_definition
   variables:
diff --git a/.travis.yml b/.travis.yml
index c255c331a7..519e62432d 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -311,17 +311,6 @@ jobs:
         - CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-debug-tcg"

-    # Run check-tcg against linux-user (with plugins)
-    # we skip sparc64-linux-user until it has been fixed somewhat
-    # we skip cris-linux-user as it doesn't use the common run loop
-    - name: "GCC plugins check-tcg (user)"
-      env:
-        - CONFIG="--disable-system --enable-plugins --enable-debug-tcg --target-list-exclude=sparc64-linux-user,cris-linux-user"
-        - TEST_BUILD_CMD="make build-tcg"
-        - TEST_CMD="make check-tcg"
-        - CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-debug-tcg"
-
     # Run check-tcg against softmmu targets
     - name: "GCC check-tcg (some-softmmu)"
       env:
--
2.20.1
Re: [PATCH v2] gitlab: move linux-user plugins test across to gitlab
On 02/10/2020 12.32, Alex Bennée wrote: > Even with the recent split moving beefier plugins into contrib and > dropping them from the check-tcg tests we are still hitting time > limits. This possibly points to a slow down of --debug-tcg but seeing > as we are migrating stuff to gitlab we might as well move there and > bump the timeout. > > Signed-off-by: Alex Bennée > > --- > v2 > - use timeout instead of split build > --- > .gitlab-ci.yml | 11 +++ > .travis.yml| 11 --- > 2 files changed, 11 insertions(+), 11 deletions(-) Reviewed-by: Thomas Huth
Re: [PULL 00/17] Block patches
On Thu, 1 Oct 2020 at 16:03, Stefan Hajnoczi wrote:
> Please rerun with make -j1 V=1 so the full command is printed. I'm not
> sure what is emitting these errors.

Vladimir's guess was correct:

/usr/bin/python3 /home/pm215/qemu/block/../scripts/block-coroutine-wrapper.py block/block-gen.c ../../block/../include/block/block.h ../../block/coroutines.h && if test -e block/block-gen.c; then printf '%s\n' block/block-gen.c > block/block-gen.c.stamp; fi

YAML:1:83: error: unknown enumerated scalar
{"IndentWidth": 4, "BraceWrapping": {"AfterFunction": true}, "BreakBeforeBraces": "Custom", "SortIncludes": false, "MaxEmptyLinesToKeep": 2}
                                                                                  ^~~~
Error parsing -style: Invalid argument, using LLVM style
YAML:1:83: error: unknown enumerated scalar
{"IndentWidth": 4, "BraceWrapping": {"AfterFunction": true}, "BreakBeforeBraces": "Custom", "SortIncludes": false, "MaxEmptyLinesToKeep": 2}
                                                                                  ^~~~
Error parsing -style: Invalid argument, using LLVM style

thanks
-- PMM
Re: Use of "?" for help has been deprecated for 8 years, can we drop it?
Hi,

>> Did we ever issue a warning when it was used? It's easier to argue that
>> it can be dropped if users had notice of some form or another. That
>> said, I'm not heartbroken if we yank it immediately instead of letting
>> it live for 2 more releases.
>
> How about keeping it as a convenience? I find it easier to type ? than
> help and often use it instead.

Well, ? has the problem that it is a shell glob char[1], which is for me enough reason to avoid it at least standalone ("-device ?"). When querying properties ("-device foo,?") it is much less likely to trip you up.

On the other hand I don't see any harm in keeping it. What would we save when we drop it? Given we already have a helper function to check it is only 2-3 lines of extra code I guess?

take care,
  Gerd

[1] IIRC that was the reason to add "help" in the first place.
[PATCH] meson.build: Don't look for libudev for static builds
commit f01496a314d916 moved the logic for detecting libudev from configure to meson.build, but in the process it dropped the condition that meant we only ask pkg-config about libudev for a non-static build.

This breaks static builds of the system emulators on at least Ubuntu 18.04.4, because on that host there is no static libudev but pkg-config still claims it exists.

Reinstate the logic that we had in the configure check.

Signed-off-by: Peter Maydell
---
We could certainly do something cleverer here, but basic "convert from configure to meson" should in general not also be changing the detection logic IMHO. We can make the logic smarter as a follow-on patch if desired.
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index 3161c1f037a..07da66e1d81 100644
--- a/meson.build
+++ b/meson.build
@@ -271,7 +271,7 @@ if 'CONFIG_CURL' in config_host
                      link_args: config_host['CURL_LIBS'].split())
 endif
 libudev = not_found
-if targetos == 'linux' and (have_system or have_tools)
+if targetos == 'linux' and (have_system or have_tools) and not enable_static
   libudev = dependency('libudev',
                        required: get_option('mpath').enabled(),
                        static: enable_static)
--
2.20.1
Re: ARM semihosting issue
I am including some syscall functions:

int _fstat(int file, struct stat* st) {
    register int value asm("r0");
    uint32_t p[] = { file };
    R0(0x0C); R1(p); BKPT();
    return value;
}

int _read(int file, char* ptr, int len) {
    register int value asm("r0");
    uint32_t p[] = { file, (uint32_t)(ptr), len };
    R0(0x06); R1(p); BKPT();
    return value;
}

int _write(int file, char* ptr, int len) {
    register int value asm("r0");
    uint32_t p[] = { file, (uint32_t)(ptr), len };
    R0(0x05); R1(p); BKPT();
    return value;
}

Also the interruption output from execution:

$ qemu-system-arm -M netduino2 -nographic -semihosting -kernel vp2.bin -d int
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x1
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x1
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x1
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x1
Taking exception 16 [Semihosting call]
...handling as semihosting call 0xc
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x5
What is your name?
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x5
Reading from file...
Taking exception 16 [Semihosting call]
...handling as semihosting call 0xc
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x6
Taking exception 16 [Semihosting call]
...handling as semihosting call 0xc
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x6
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x5
My name is Turing
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x5
I am alive!!!
Taking exception 16 [Semihosting call]
...handling as semihosting call 0xa
Taking exception 16 [Semihosting call]
...handling as semihosting call 0xa
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x2
Taking exception 16 [Semihosting call]
...handling as semihosting call 0x20

Could you please provide any working example using ARM semihosting on stdin?

Thanks, Bruno Prado

On Fri, Oct 2, 2020 at 7:25 AM Peter Maydell wrote:
> On Thu, 1 Oct 2020 at 22:21, Bruno Prado wrote:
> > Thanks for the reply... I am attaching some code and output:
> >
> > #include <stdio.h>
> > int main() {
> >     char name[50] = "Nobody";
> >     FILE* file = fopen("name", "r");
> >     printf("What is your name?\n");
> >     fprintf(stdout, "Reading from file...\n");
> >     fscanf(file, "%s", name);
> >     fscanf(stdin, "%s", name);
> >     printf("My name is %s\n", name);
> >     fprintf(stderr, "I am alive!!!\n");
> >     fclose(file);
> >     return 0;
> > }
>
> This is not making direct semihosting calls. The behaviour
> of these function calls will depend on whatever the C
> standard library implementation you're linking with is doing.
>
> You're not checking for errors from any of your function
> calls, incidentally.
>
> thanks
> -- PMM
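One thing worth double-checking in wrappers like _read above: per the ARM semihosting specification, SYS_READ (0x06) does not leave the byte count in r0. Instead, r0 comes back holding the number of bytes *not* read, so 0 means the whole buffer was filled, and a result equal to the requested length indicates EOF or failure. A newlib-style _read() is expected to return the number of bytes actually read, so the raw r0 value needs converting before it is returned. A host-testable sketch of that conversion (the function name is invented for illustration, not from any particular library):

```c
/* Convert the raw r0 result of a semihosting SYS_READ (0x06) call into
 * the POSIX-style count that a newlib _read() is expected to return.
 * Per the ARM semihosting spec, r0 holds the number of bytes NOT read:
 * 0 means the buffer was completely filled, `requested` means EOF. */
static int sys_read_result_to_count(int requested, int not_read)
{
    if (not_read < 0 || not_read > requested) {
        return -1;                   /* malformed result: report an error */
    }
    return requested - not_read;     /* bytes actually read */
}
```

Returning r0 directly, as the _read() above appears to do, would make stdio believe a full read returned 0 bytes (EOF) and a failed read returned a full buffer, which is one plausible explanation for stdin reads misbehaving.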
Re: [PATCH] meson.build: Don't look for libudev for static builds
So the better way would be for pkg-config to handle static linking properly? On Fri, Oct 2, 2020 at 6:53 PM Peter Maydell wrote: > > commit f01496a314d916 moved the logic for detecting libudev from > configure to meson.build, but in the process it dropped the condition > that meant we only ask pkg-config about libudev for a non-static > build. > > This breaks static builds of the system emulators on at least Ubuntu > 18.04.4, because on that host there is no static libudev but > pkg-config still claims it exists. > > Reinstate the logic that we had in the configure check. > > Signed-off-by: Peter Maydell > --- > We could certainly do something cleverer here, but basic "convert > from configure to meson" should in general not also be changing the > detection logic IMHO. We can make the logic smarter as a follow-on > patch if desired. > --- > meson.build | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/meson.build b/meson.build > index 3161c1f037a..07da66e1d81 100644 > --- a/meson.build > +++ b/meson.build > @@ -271,7 +271,7 @@ if 'CONFIG_CURL' in config_host > link_args: config_host['CURL_LIBS'].split()) > endif > libudev = not_found > -if targetos == 'linux' and (have_system or have_tools) > +if targetos == 'linux' and (have_system or have_tools) and not enable_static > libudev = dependency('libudev', > required: get_option('mpath').enabled(), > static: enable_static) > -- > 2.20.1 > > -- Best regards, 罗勇刚 Yours sincerely, Yonggang Luo
Re: [PATCH v2] gitlab: move linux-user plugins test across to gitlab
On Fri, 2020-10-02 at 11:32 +0100, Alex Bennée wrote: > Even with the recent split moving beefier plugins into contrib and > dropping them from the check-tcg tests we are still hitting time > limits. This possibly points to a slow down of --debug-tcg but seeing > as we are migrating stuff to gitlab we might as well move there and > bump the timeout. > > Signed-off-by: Alex Bennée Hi Alex, Unrelated to the patch: do we have custom runners on gitlab? I'm exploring ideas to run vm based tests there. Fam
Re: [PATCH] gitlab: split deprecated job into build/check stages
Thomas Huth writes: > On 02/10/2020 11.15, Alex Bennée wrote: >> While the job is pretty fast for only a few targets we still want to >> catch breakage of the build. By splitting the test step we can >> allow_failures for that while still ensuring we don't miss the build >> breaking. > > Is there already something broken? > > Otherwise, I'd rather simply remove the "allow_failure: true" tag > instead ... these jobs otherwise tend to get ignored. Yeah I got a recurring ppc64abi32-linux-user failure as I was testing the PR. > >> Signed-off-by: Alex Bennée >> --- >> .gitlab-ci.yml | 17 - >> 1 file changed, 16 insertions(+), 1 deletion(-) >> >> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml >> index 346f23acf7..a51c89554f 100644 >> --- a/.gitlab-ci.yml >> +++ b/.gitlab-ci.yml >> @@ -270,9 +270,24 @@ build-deprecated: >>variables: >> IMAGE: debian-all-test-cross >> CONFIGURE_ARGS: --disable-docs --disable-tools >> -MAKE_CHECK_ARGS: check-tcg >> +MAKE_CHECK_ARGS: build-tcg >> TARGETS: ppc64abi32-linux-user tilegx-linux-user lm32-softmmu >>unicore32-softmmu >> + artifacts: >> +expire_in: 2 days >> +paths: >> + - build >> + >> +# We split the check-tcg step as test failures are expected but we still >> +# want to catch the build breaking. >> +check-deprecated: >> + <<: *native_test_job_definition >> + needs: >> +- job: build-deprecated >> + artifacts: true >> + variables: >> +IMAGE: debian-all-test-cross >> +MAKE_CHECK_ARGS: check-tcg > > Anyway, that's better than before, so: > > Reviewed-by: Thomas Huth -- Alex Bennée
[PATCH v3 1/2] block: drop moderated sheepdog mailing list from MAINTAINERS file
The sheepdog mailing list is set up to stop and queue messages from non-subscribers, pending moderator approval. Unfortunately it seems that the moderation queue is not actively dealt with. Even when messages are approved, the sender is never added to the whitelist, so every future mail from the same sender continues to get stopped for moderation. MAINTAINERS entries should be responsive and not unnecessarily block mails from QEMU contributors, so drop the sheepdog mailing list. Reviewed-by: Thomas Huth Reviewed-by: Vladimir Sementsov-Ogievskiy Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Daniel P. Berrangé --- MAINTAINERS | 1 - 1 file changed, 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index b76fb31861..c7d6601adf 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2874,7 +2874,6 @@ F: block/rbd.c Sheepdog M: Liu Yuan L: qemu-bl...@nongnu.org -L: sheep...@lists.wpkg.org S: Odd Fixes F: block/sheepdog.c -- 2.26.2
[PATCH v3 0/2] block: deprecate the sheepdog driver
2 years back I proposed dropping the sheepdog mailing list from the MAINTAINERS file, but somehow the patch never got picked up: https://lists.gnu.org/archive/html/qemu-block/2018-03/msg01048.html So here I am with the same patch again. This time I go further and deprecate the sheepdog driver entirely. See the rationale in the second patch commit message. Changes in v3: - A few minor text changes - Don't initialize static variable to false Daniel P. Berrangé (2): block: drop moderated sheepdog mailing list from MAINTAINERS file block: deprecate the sheepdog block driver MAINTAINERS| 1 - block/sheepdog.c | 14 ++ configure | 5 +++-- docs/system/deprecated.rst | 9 + 4 files changed, 26 insertions(+), 3 deletions(-) -- 2.26.2
[PATCH v3 2/2] block: deprecate the sheepdog block driver
This thread from a little over a year ago: http://lists.wpkg.org/pipermail/sheepdog/2019-March/thread.html states that sheepdog is no longer actively developed. The only mentioned users are some companies who are said to have it for legacy reasons with plans to replace it by Ceph. There is talk about cutting out existing features to turn it into a simple demo of how to write a distributed block service. There is no evidence of anyone working on that idea: https://github.com/sheepdog/sheepdog/commits/master No real commits to git since Jan 2018, and before then just some minor technical debt cleanup. There is essentially no activity on the mailing list aside from patches to QEMU that get CC'd due to our MAINTAINERS entry. Fedora packages for sheepdog failed to build from upstream source because of the stricter linker that no longer merges duplicate global symbols. Fedora patches it to add the missing "extern" annotations and presumably other distros do too, but upstream source remains broken. There is only basic compile testing, no functional testing of the driver. Since there are no build pre-requisites the sheepdog driver is currently enabled unconditionally. This would result in configure issuing a deprecation warning by default for all users. Thus the configure default is changed to disable it, requiring users to pass --enable-sheepdog to build the driver. Reviewed-by: Markus Armbruster Reviewed-by: Thomas Huth Reviewed-by: Vladimir Sementsov-Ogievskiy Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Daniel P.
Berrangé --- block/sheepdog.c | 14 ++ configure | 5 +++-- docs/system/deprecated.rst | 9 + 3 files changed, 26 insertions(+), 2 deletions(-) diff --git a/block/sheepdog.c b/block/sheepdog.c index 2f5c0eb376..e270f2022c 100644 --- a/block/sheepdog.c +++ b/block/sheepdog.c @@ -242,6 +242,16 @@ typedef struct SheepdogInode { */ #define FNV1A_64_INIT ((uint64_t)0xcbf29ce484222325ULL) +static void deprecation_warning(void) +{ +static bool warned; + +if (!warned) { +warn_report("the sheepdog block driver is deprecated"); +warned = true; +} +} + /* * 64 bit Fowler/Noll/Vo FNV-1a hash code */ @@ -1548,6 +1558,8 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags, char *buf = NULL; QemuOpts *opts; +deprecation_warning(); + s->bs = bs; s->aio_context = bdrv_get_aio_context(bs); @@ -2007,6 +2019,8 @@ static int sd_co_create(BlockdevCreateOptions *options, Error **errp) assert(options->driver == BLOCKDEV_DRIVER_SHEEPDOG); +deprecation_warning(); + s = g_new0(BDRVSheepdogState, 1); /* Steal SocketAddress from QAPI, set NULL to prevent double free */ diff --git a/configure b/configure index ca9b458ea0..c5971fe560 100755 --- a/configure +++ b/configure @@ -533,7 +533,7 @@ vdi="yes" vvfat="yes" qed="yes" parallels="yes" -sheepdog="yes" +sheepdog="no" libxml2="" debug_mutex="no" libpmem="" @@ -1940,7 +1940,7 @@ disabled with --disable-FEATURE, default is enabled if available: vvfat vvfat image format support qed qed image format support parallels parallels image format support - sheepdog  sheepdog block driver support + sheepdog  sheepdog block driver support (deprecated) crypto-afalg  Linux AF_ALG crypto backend driver capstone  capstone disassembler support debug-mutex mutex debugging support @@ -7200,6 +7200,7 @@ if test "$parallels" = "yes" ; then echo "CONFIG_PARALLELS=y" >> $config_host_mak fi if test "$sheepdog" = "yes" ; then + add_to deprecated_features "sheepdog" echo "CONFIG_SHEEPDOG=y" >> $config_host_mak fi if test "$pty_h" = "yes" ; then diff --git
a/docs/system/deprecated.rst b/docs/system/deprecated.rst index da862201ba..fbc67b189b 100644 --- a/docs/system/deprecated.rst +++ b/docs/system/deprecated.rst @@ -376,6 +376,15 @@ The above, converted to the current supported format:: json:{"file.driver":"rbd", "file.pool":"rbd", "file.image":"name"} +``sheepdog`` driver (since 5.2.0) +^ + +The ``sheepdog`` block device driver is deprecated. The corresponding upstream +server project is no longer actively maintained. Users are recommended to switch +to an alternative distributed block device driver such as RBD. The +``qemu-img convert`` command can be used to liberate existing data by moving +it out of sheepdog volumes into an alternative storage backend. + linux-user mode CPUs -- 2.26.2
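[Editor's note: the patch above emits its deprecation warning at most once per process via a function-local static flag. A host-runnable illustration of that warn-once pattern follows; QEMU's warn_report() is replaced by fprintf() so the example is self-contained, and the counter exists only so the behaviour can be checked.]

```c
#include <stdbool.h>
#include <stdio.h>

static int warnings_emitted;   /* counter for demonstration only */

/* Mirrors the shape of deprecation_warning() in the sheepdog patch:
 * the function-local static starts at zero, so the warning fires on
 * the first call and is suppressed on every call after that. */
static void deprecation_warning(void)
{
    static bool warned;        /* implicitly false; per v3, no "= false" */

    if (!warned) {
        fprintf(stderr, "warning: the sheepdog block driver is deprecated\n");
        warnings_emitted++;
        warned = true;
    }
}
```

Note that the static is deliberately left uninitialised (it is zero-initialised by the language), matching the v3 changelog item "Don't initialize static variable to false".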
[PULL 06/14] tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI
From: Thomas Huth According to our support policy, we do not support Debian 9 in QEMU anymore, and we only support building the Windows binaries with a very recent version of the MinGW toolchain. So we should not test the MinGW cross-compilation with Debian 9 anymore, but switch to something newer like Fedora. To do this, we need a separate Fedora container for each build that provides the QEMU_CONFIGURE_OPTS environment variable. Unfortunately, the MinGW 64-bit compiler seems to be a little bit slow, so we also have to disable some features like "capstone" in the build here to make sure that the CI pipelines still finish within a reasonable amount of time. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Message-Id: <20200921174320.46062-2-th...@redhat.com> Message-Id: <20200925154027.12672-10-alex.ben...@linaro.org> diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml index 8c89efeb6d..15e7b564f9 100644 --- a/.gitlab-ci.d/containers.yml +++ b/.gitlab-ci.d/containers.yml @@ -248,6 +248,16 @@ i386-fedora-cross-container: variables: NAME: fedora-i386-cross +win32-fedora-cross-container: + <<: *container_job_definition + variables: +NAME: fedora-win32-cross + +win64-fedora-cross-container: + <<: *container_job_definition + variables: +NAME: fedora-win64-cross + amd64-ubuntu1804-container: <<: *container_job_definition variables: diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml index 4ec7226b5c..510cfec03b 100644 --- a/.gitlab-ci.d/crossbuilds.yml +++ b/.gitlab-ci.d/crossbuilds.yml @@ -105,9 +105,9 @@ cross-s390x-user: cross-win32-system: <<: *cross_system_build_job_definition variables: -IMAGE: debian-win32-cross +IMAGE: fedora-win32-cross cross-win64-system: <<: *cross_system_build_job_definition variables: -IMAGE: debian-win64-cross +IMAGE: fedora-win64-cross diff --git a/tests/docker/dockerfiles/fedora-win32-cross.docker b/tests/docker/dockerfiles/fedora-win32-cross.docker new file mode 100644 index 00..5903e1b0b4 --- 
/dev/null +++ b/tests/docker/dockerfiles/fedora-win32-cross.docker @@ -0,0 +1,42 @@ +FROM fedora:32 + +# Please keep this list sorted alphabetically +ENV PACKAGES \ +bc \ +bzip2 \ +diffutils \ +findutils \ +gcc \ +gettext \ +git \ +hostname \ +make \ +meson \ +mingw32-bzip2 \ +mingw32-curl \ +mingw32-glib2 \ +mingw32-gmp \ +mingw32-gnutls \ +mingw32-gtk3 \ +mingw32-libjpeg-turbo \ +mingw32-libpng \ +mingw32-libtasn1 \ +mingw32-nettle \ +mingw32-nsis \ +mingw32-pixman \ +mingw32-pkg-config \ +mingw32-SDL2 \ +perl \ +perl-Test-Harness \ +python3 \ +python3-PyYAML \ +python3-setuptools \ +tar \ +which + +RUN dnf install -y $PACKAGES +RUN rpm -q $PACKAGES | sort > /packages.txt +ENV FEATURES mingw + +# Specify the cross prefix for this image (see tests/docker/common.rc) +ENV QEMU_CONFIGURE_OPTS --cross-prefix=i686-w64-mingw32- diff --git a/tests/docker/dockerfiles/fedora-win64-cross.docker b/tests/docker/dockerfiles/fedora-win64-cross.docker new file mode 100644 index 00..7f03cd8ffc --- /dev/null +++ b/tests/docker/dockerfiles/fedora-win64-cross.docker @@ -0,0 +1,38 @@ +FROM fedora:32 + +# Please keep this list sorted alphabetically +ENV PACKAGES \ +bc \ +bzip2 \ +diffutils \ +findutils \ +gcc \ +gettext \ +git \ +hostname \ +make \ +meson \ +mingw64-bzip2 \ +mingw64-curl \ +mingw64-glib2 \ +mingw64-gmp \ +mingw64-gtk3 \ +mingw64-libjpeg-turbo \ +mingw64-libpng \ +mingw64-libtasn1 \ +mingw64-pixman \ +mingw64-pkg-config \ +perl \ +perl-Test-Harness \ +python3 \ +python3-PyYAML \ +python3-setuptools \ +tar \ +which + +RUN dnf install -y $PACKAGES +RUN rpm -q $PACKAGES | sort > /packages.txt +ENV FEATURES mingw + +# Specify the cross prefix for this image (see tests/docker/common.rc) +ENV QEMU_CONFIGURE_OPTS --cross-prefix=x86_64-w64-mingw32- --disable-capstone -- 2.20.1
[PULL 01/14] migration: Silence compiler warning in global_state_store_running()
From: Thomas Huth GCC 9.3.0 on Ubuntu complains: In file included from /usr/include/string.h:495, from /home/travis/build/huth/qemu/include/qemu/osdep.h:87, from ../migration/global_state.c:13: In function ‘strncpy’, inlined from ‘global_state_store_running’ at ../migration/global_state.c:47:5: /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: error: ‘__builtin_strncpy’ specified bound 100 equals destination size [-Werror=stringop-truncation] 106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest)); | ^~ ... but we apparently really want to do a strncpy here - the size is already checked with the assert() statement right in front of it. To silence the warning, simply replace it with our strpadcpy() function. Suggested-by: Philippe Mathieu-Daudé (two years ago) Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Philippe Mathieu-Daudé Message-Id: <20200918103430.297167-4-th...@redhat.com> Message-Id: <20200925154027.12672-5-alex.ben...@linaro.org> diff --git a/migration/global_state.c b/migration/global_state.c index 25311479a4..a33947ca32 100644 --- a/migration/global_state.c +++ b/migration/global_state.c @@ -44,8 +44,8 @@ void global_state_store_running(void) { const char *state = RunState_str(RUN_STATE_RUNNING); assert(strlen(state) < sizeof(global_state.runstate)); -strncpy((char *)global_state.runstate, - state, sizeof(global_state.runstate)); +strpadcpy((char *)global_state.runstate, sizeof(global_state.runstate), + state, '\0'); } bool global_state_received(void) -- 2.20.1
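[Editor's note: the fix above swaps strncpy() for QEMU's strpadcpy(), which copies at most buf_size bytes of the source and fills any remaining space with a pad byte, making the truncate-and-pad intent explicit so -Wstringop-truncation has nothing to complain about. The following is a stand-in with what appears to be the same semantics as the helper declared in include/qemu/cutils.h; the real implementation in util/cutils.c may differ in detail.]

```c
#include <string.h>

/* Stand-in for QEMU's strpadcpy(): copy up to buf_size bytes of str
 * into buf, then fill the remaining space with the pad byte. Unlike
 * strncpy() called with a bound equal to the destination size, the
 * truncation is explicit, and the result is padded (not necessarily
 * NUL-terminated unless pad == '\0'). */
static void strpadcpy(char *buf, int buf_size, const char *str, char pad)
{
    int len = strlen(str);
    int copy = len < buf_size ? len : buf_size;   /* truncate if needed */

    memcpy(buf, str, copy);
    memset(buf + copy, pad, buf_size - copy);     /* pad the tail */
}
```

In the migration code, the assert() already guarantees the run-state string fits, so the padding path simply zero-fills the rest of global_state.runstate.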
[PULL 04/14] travis.yml: Drop the superfluous Python 3.6 build
From: Thomas Huth Python 3.6 is already the default Python in the jobs that are based on Ubuntu Bionic, so it does not make much sense to test this again separately. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Message-Id: <20200918103430.297167-7-th...@redhat.com> Message-Id: <20200925154027.12672-8-alex.ben...@linaro.org> diff --git a/.travis.yml b/.travis.yml index 65b825ff64..990dd11e6f 100644 --- a/.travis.yml +++ b/.travis.yml @@ -262,14 +262,6 @@ jobs: python: 3.5 -- name: "GCC Python 3.6 (x86_64-softmmu)" - env: -- CONFIG="--target-list=x86_64-softmmu" -- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default" - language: python - python: 3.6 - - # Using newer GCC with sanitizers - name: "GCC9 with sanitizers (softmmu)" dist: bionic -- 2.20.1
[PULL 00/14] testing updates (python, plugins)
The following changes since commit b5ce42f5d138d7546f9faa2decbd6ee8702243a3: Merge remote-tracking branch 'remotes/jsnow-gitlab/tags/ide-pull-request' into staging (2020-10-01 19:55:10 +0100) are available in the Git repository at: https://github.com/stsquad/qemu.git tags/pull-testing-and-python-021020-1 for you to fetch changes up to 2614670b7585ce4ec503546bc3023844d392f270: gitlab: split deprecated job into build/check stages (2020-10-02 12:31:34 +0100) Python testing updates: - drop python 3.5 test from travis - replace Debian 9 containers with 10 - increase cross build timeout - bump minimum python version in configure - move user plugins tests to gitlab - split deprecated builds into build and test Alex Bennée (2): gitlab: move linux-user plugins test across to gitlab gitlab: split deprecated job into build/check stages Thomas Huth (12): migration: Silence compiler warning in global_state_store_running() travis.yml: Drop the default softmmu builds travis.yml: Update Travis to use Bionic and Focal instead of Xenial travis.yml: Drop the superfluous Python 3.6 build travis.yml: Drop the Python 3.5 build tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI gitlab-ci: Remove the Debian9-based containers and containers-layer3 tests/docker: Update the tricore container to debian 10 shippable.yml: Remove the Debian9-based MinGW cross-compiler tests tests/docker: Remove old Debian 9 containers gitlab-ci: Increase the timeout for the cross-compiler builds configure: Bump the minimum required Python version to 3.6 docs/conf.py | 4 +- configure | 4 +- migration/global_state.c | 4 +- .gitlab-ci.d/containers.yml| 38 - .gitlab-ci.d/crossbuilds.yml | 5 +- .gitlab-ci.yml | 29 +- .shippable.yml | 4 -- .travis.yml| 66 +- tests/docker/Makefile.include | 2 +- .../docker/dockerfiles/debian-tricore-cross.docker | 2 +- tests/docker/dockerfiles/debian-win32-cross.docker | 38 - tests/docker/dockerfiles/debian-win64-cross.docker | 45 --- 
tests/docker/dockerfiles/debian9-mxe.docker| 21 --- tests/docker/dockerfiles/debian9.docker| 32 --- tests/docker/dockerfiles/fedora-win32-cross.docker | 42 ++ tests/docker/dockerfiles/fedora-win64-cross.docker | 38 + tests/qemu-iotests/iotests.py | 2 - 17 files changed, 142 insertions(+), 234 deletions(-) delete mode 100644 tests/docker/dockerfiles/debian-win32-cross.docker delete mode 100644 tests/docker/dockerfiles/debian-win64-cross.docker delete mode 100644 tests/docker/dockerfiles/debian9-mxe.docker delete mode 100644 tests/docker/dockerfiles/debian9.docker create mode 100644 tests/docker/dockerfiles/fedora-win32-cross.docker create mode 100644 tests/docker/dockerfiles/fedora-win64-cross.docker -- 2.20.1
[PULL 02/14] travis.yml: Drop the default softmmu builds
From: Thomas Huth The total runtime of all Travis jobs is very long and we are testing all softmmu targets in the gitlab-CI already - so we can speed up the Travis testing a little bit by not testing the softmmu targets here anymore. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Daniel P. Berrangé Reviewed-by: Cleber Rosa Acked-by: Alex Bennée Acked-by: Philippe Mathieu-Daudé Message-Id: <20200918103430.297167-5-th...@redhat.com> Message-Id: <20200925154027.12672-6-alex.ben...@linaro.org> diff --git a/.travis.yml b/.travis.yml index bd9a6fc06c..b2d492f8c6 100644 --- a/.travis.yml +++ b/.travis.yml @@ -123,20 +123,6 @@ jobs: - CONFIG="--disable-system --static" - CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default" - -# we split the system builds as it takes a while to build them all -- name: "GCC (main-softmmu)" - env: -- CONFIG="--disable-user --target-list=${MAIN_SOFTMMU_TARGETS}" -- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default" - - -- name: "GCC (other-softmmu)" - env: - - CONFIG="--disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}" -- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default" - - # Just build tools and run minimal unit and softfloat checks - name: "GCC check-unit and check-softfloat" env: -- 2.20.1
[PULL 08/14] tests/docker: Update the tricore container to debian 10
From: Thomas Huth We do not support Debian 9 anymore, thus update the Tricore container to Debian 10 now. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Daniel P. Berrangé Reviewed-by: Philippe Mathieu-Daudé Message-Id: <20200921174320.46062-4-th...@redhat.com> Message-Id: <20200925154027.12672-12-alex.ben...@linaro.org> diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml index 6769eef0ff..089cea7c14 100644 --- a/.gitlab-ci.d/containers.yml +++ b/.gitlab-ci.d/containers.yml @@ -210,7 +210,7 @@ sparc64-debian-cross-container: tricore-debian-cross-container: <<: *container_job_definition stage: containers-layer2 - needs: ['amd64-debian9-container'] + needs: ['amd64-debian10-container'] variables: NAME: debian-tricore-cross diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include index 75704268ff..02ec92830b 100644 --- a/tests/docker/Makefile.include +++ b/tests/docker/Makefile.include @@ -137,7 +137,7 @@ docker-image-debian-sparc64-cross: docker-image-debian10 docker-image-travis: NOUSER=1 # Specialist build images, sometimes very limited tools -docker-image-debian-tricore-cross: docker-image-debian9 +docker-image-debian-tricore-cross: docker-image-debian10 docker-image-debian-all-test-cross: docker-image-debian10 docker-image-debian-arm64-test-cross: docker-image-debian11 diff --git a/tests/docker/dockerfiles/debian-tricore-cross.docker b/tests/docker/dockerfiles/debian-tricore-cross.docker index 769d95c77b..985925134c 100644 --- a/tests/docker/dockerfiles/debian-tricore-cross.docker +++ b/tests/docker/dockerfiles/debian-tricore-cross.docker @@ -7,7 +7,7 @@ # # SPDX-License-Identifier: GPL-2.0-or-later # -FROM qemu/debian9 +FROM qemu/debian10 MAINTAINER Philippe Mathieu-Daudé -- 2.20.1
[PULL 03/14] travis.yml: Update Travis to use Bionic and Focal instead of Xenial
From: Thomas Huth According to our support policy, we do not support Xenial anymore. Time to switch the bigger parts of the builds to Focal instead. Some few jobs have to be updated to Bionic instead, since they are currently still failing on Focal otherwise. Also "--disable-pie" is causing linker problems with newer versions of Ubuntu ... so remove that switch from the jobs now (we still test it in a gitlab CI job, so we don't lose much test coverage here). Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Tested-by: Cleber Rosa Reviewed-by: Cleber Rosa Message-Id: <20200918103430.297167-6-th...@redhat.com> Message-Id: <20200925154027.12672-7-alex.ben...@linaro.org> diff --git a/.travis.yml b/.travis.yml index b2d492f8c6..65b825ff64 100644 --- a/.travis.yml +++ b/.travis.yml @@ -2,7 +2,7 @@ # Additional builds with specific requirements for a full VM need to # be added as additional matrix: entries later on os: linux -dist: xenial +dist: focal language: c compiler: - gcc @@ -10,7 +10,7 @@ cache: # There is one cache per branch and compiler version. 
# characteristics of each job are used to identify the cache: # - OS name (currently only linux) - # - OS distribution (for Linux, xenial, trusty, or precise) + # - OS distribution (for Linux, bionic or focal) # - Names and values of visible environment variables set in .travis.yml or Settings panel timeout: 1200 ccache: true @@ -27,7 +27,7 @@ addons: - libattr1-dev - libbrlapi-dev - libcap-ng-dev - - libgcc-4.8-dev + - libgcc-7-dev - libgnutls28-dev - libgtk-3-dev - libiscsi-dev @@ -210,8 +210,10 @@ jobs: # gprof/gcov are GCC features - name: "GCC gprof/gcov" + dist: bionic env: -- CONFIG="--enable-gprof --enable-gcov --disable-pie --target-list=${MAIN_SOFTMMU_TARGETS}" +- CONFIG="--enable-gprof --enable-gcov --disable-libssh + --target-list=${MAIN_SOFTMMU_TARGETS}" after_success: - ${SRC_DIR}/scripts/travis/coverage-summary.sh @@ -270,6 +272,7 @@ jobs: # Using newer GCC with sanitizers - name: "GCC9 with sanitizers (softmmu)" + dist: bionic addons: apt: update: true @@ -285,7 +288,7 @@ jobs: - libattr1-dev - libbrlapi-dev - libcap-ng-dev -- libgnutls-dev +- libgnutls28-dev - libgtk-3-dev - libiscsi-dev - liblttng-ust-dev @@ -293,14 +296,13 @@ jobs: - libncurses5-dev - libnss3-dev - libpixman-1-dev -- libpng12-dev +- libpng-dev - librados-dev - libsdl2-dev - libsdl2-image-dev - libseccomp-dev - libspice-protocol-dev - libspice-server-dev -- libssh-dev - liburcu-dev - libusb-1.0-0-dev - libvte-2.91-dev @@ -310,11 +312,11 @@ jobs: compiler: none env: - COMPILER_NAME=gcc CXX=g++-9 CC=gcc-9 -- CONFIG="--cc=gcc-9 --cxx=g++-9 --disable-pie --disable-linux-user" +- CONFIG="--cc=gcc-9 --cxx=g++-9 --disable-linux-user" - TEST_CMD="" before_script: - mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR} -- ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-g3 -O0 -Wno-error=stringop-truncation -fsanitize=thread" --extra-ldflags="-fuse-ld=gold" || { cat config.log && exit 1; } +- ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-g3 -O0 -fsanitize=thread" || { cat config.log && exit 1; } # Run 
check-tcg against linux-user @@ -356,7 +358,7 @@ jobs: - name: "[aarch64] GCC check-tcg" arch: arm64 - dist: xenial + dist: focal addons: apt_packages: - libaio-dev @@ -389,7 +391,7 @@ jobs: - name: "[ppc64] GCC check-tcg" arch: ppc64le - dist: xenial + dist: focal addons: apt_packages: - libaio-dev -- 2.20.1
[PULL 13/14] gitlab: move linux-user plugins test across to gitlab
Even with the recent split moving beefier plugins into contrib and dropping them from the check-tcg tests we are still hitting time limits. This possibly points to a slow down of --debug-tcg but seeing as we are migrating stuff to gitlab we might as well move there and bump the timeout. Signed-off-by: Alex Bennée Reviewed-by: Thomas Huth Message-Id: <20201002103223.24022-1-alex.ben...@linaro.org> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index c265e7f8ab..346f23acf7 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -244,6 +244,17 @@ build-user: CONFIGURE_ARGS: --disable-tools --disable-system MAKE_CHECK_ARGS: check-tcg +# Run check-tcg against linux-user (with plugins) +# we skip sparc64-linux-user until it has been fixed somewhat +# we skip cris-linux-user as it doesn't use the common run loop +build-user-plugins: + <<: *native_build_job_definition + variables: +IMAGE: debian-all-test-cross +CONFIGURE_ARGS: --disable-tools --disable-system --enable-plugins --enable-debug-tcg --target-list-exclude=sparc64-linux-user,cris-linux-user +MAKE_CHECK_ARGS: check-tcg + timeout: 1h 30m + build-clang: <<: *native_build_job_definition variables: diff --git a/.travis.yml b/.travis.yml index c255c331a7..519e62432d 100644 --- a/.travis.yml +++ b/.travis.yml @@ -311,17 +311,6 @@ jobs: - CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-debug-tcg" -# Run check-tcg against linux-user (with plugins) -# we skip sparc64-linux-user until it has been fixed somewhat -# we skip cris-linux-user as it doesn't use the common run loop -- name: "GCC plugins check-tcg (user)" - env: -- CONFIG="--disable-system --enable-plugins --enable-debug-tcg --target-list-exclude=sparc64-linux-user,cris-linux-user" -- TEST_BUILD_CMD="make build-tcg" -- TEST_CMD="make check-tcg" -- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-debug-tcg" - - # Run check-tcg against softmmu targets - name: "GCC check-tcg (some-softmmu)" env: -- 2.20.1
[PULL 09/14] shippable.yml: Remove the Debian9-based MinGW cross-compiler tests
From: Thomas Huth We're not supporting Debian 9 anymore, and we are now testing MinGW cross-compiler builds in the gitlab-CI, too, so we do not really need these jobs in the shippable.yml anymore. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Message-Id: <20200921174320.46062-5-th...@redhat.com> Message-Id: <20200925154027.12672-13-alex.ben...@linaro.org> diff --git a/.shippable.yml b/.shippable.yml index 0b4fd6df1d..14350e6de8 100644 --- a/.shippable.yml +++ b/.shippable.yml @@ -7,10 +7,6 @@ env: matrix: - IMAGE=debian-amd64 TARGET_LIST=x86_64-softmmu,x86_64-linux-user -- IMAGE=debian-win32-cross - TARGET_LIST=arm-softmmu,i386-softmmu -- IMAGE=debian-win64-cross - TARGET_LIST=aarch64-softmmu,sparc64-softmmu,x86_64-softmmu - IMAGE=debian-armel-cross TARGET_LIST=arm-softmmu,arm-linux-user,armeb-linux-user - IMAGE=debian-armhf-cross -- 2.20.1
[PULL 11/14] gitlab-ci: Increase the timeout for the cross-compiler builds
From: Thomas Huth Some of the cross-compiler builds (the mips build and the win64 build for example) are quite slow and sometimes hit the 1h time limit. Increase the limit a little bit to make sure that we do not get failures in the CI runs just because of some few minutes. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Daniel P. Berrangé Reviewed-by: Philippe Mathieu-Daudé Message-Id: <20200921174320.46062-7-th...@redhat.com> Message-Id: <20200925154027.12672-15-alex.ben...@linaro.org> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml index 510cfec03b..03ebfabb3f 100644 --- a/.gitlab-ci.d/crossbuilds.yml +++ b/.gitlab-ci.d/crossbuilds.yml @@ -2,6 +2,7 @@ .cross_system_build_job_template: &cross_system_build_job_definition stage: build image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest + timeout: 80m script: - mkdir build - cd build -- 2.20.1
[PULL 14/14] gitlab: split deprecated job into build/check stages
While the job is pretty fast for only a few targets we still want to catch breakage of the build. By splitting the test step we can allow_failures for that while still ensuring we don't miss the build breaking. Signed-off-by: Alex Bennée Reviewed-by: Thomas Huth Message-Id: <20201002091538.3017-1-alex.ben...@linaro.org> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 346f23acf7..a51c89554f 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -270,9 +270,24 @@ build-deprecated: variables: IMAGE: debian-all-test-cross CONFIGURE_ARGS: --disable-docs --disable-tools -MAKE_CHECK_ARGS: check-tcg +MAKE_CHECK_ARGS: build-tcg TARGETS: ppc64abi32-linux-user tilegx-linux-user lm32-softmmu unicore32-softmmu + artifacts: +expire_in: 2 days +paths: + - build + +# We split the check-tcg step as test failures are expected but we still +# want to catch the build breaking. +check-deprecated: + <<: *native_test_job_definition + needs: +- job: build-deprecated + artifacts: true + variables: +IMAGE: debian-all-test-cross +MAKE_CHECK_ARGS: check-tcg allow_failure: true build-oss-fuzz: -- 2.20.1
[PULL 07/14] gitlab-ci: Remove the Debian9-based containers and containers-layer3
From: Thomas Huth According to our support policy, Debian 9 is not supported by the QEMU project anymore. Since we now switched the MinGW cross-compiler builds to Fedora, we do not need these Debian9-based containers in the gitlab-CI anymore, and can now also get rid of the "layer3" container build stage this way. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Daniel P. Berrangé Message-Id: <20200921174320.46062-3-th...@redhat.com> Message-Id: <20200925154027.12672-11-alex.ben...@linaro.org> diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml index 15e7b564f9..6769eef0ff 100644 --- a/.gitlab-ci.d/containers.yml +++ b/.gitlab-ci.d/containers.yml @@ -214,20 +214,6 @@ tricore-debian-cross-container: variables: NAME: debian-tricore-cross -win32-debian-cross-container: - <<: *container_job_definition - stage: containers-layer3 - needs: ['amd64-debian9-mxe-container'] - variables: -NAME: debian-win32-cross - -win64-debian-cross-container: - <<: *container_job_definition - stage: containers-layer3 - needs: ['amd64-debian9-mxe-container'] - variables: -NAME: debian-win64-cross - xtensa-debian-cross-container: <<: *container_job_definition variables: diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index a18e18b57e..c265e7f8ab 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -4,7 +4,6 @@ stages: - containers - containers-layer2 - - containers-layer3 - build - test -- 2.20.1
[PULL 12/14] configure: Bump the minimum required Python version to 3.6
From: Thomas Huth All our supported build platforms have Python 3.6 or newer nowadays, and there are some useful features in Python 3.6 which are not available in 3.5 yet (e.g. the type hint annotations which will allow us to statically type the QAPI parser), so let's bump the minimum Python version to 3.6 now. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Message-Id: <20200923162908.95372-1-th...@redhat.com> Message-Id: <20200925154027.12672-16-alex.ben...@linaro.org> diff --git a/docs/conf.py b/docs/conf.py index 606f623211..00e1b750e2 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -36,9 +36,9 @@ from sphinx.errors import ConfigError # In newer versions of Sphinx this will display nicely; in older versions # Sphinx will also produce a Python backtrace but at least the information # gets printed... -if sys.version_info < (3,5): +if sys.version_info < (3,6): raise ConfigError( -"QEMU requires a Sphinx that uses Python 3.5 or better\n") +"QEMU requires a Sphinx that uses Python 3.6 or better\n") # The per-manual conf.py will set qemu_docdir for a single-manual build; # otherwise set it here if this is an entire-manual-set build. diff --git a/configure b/configure index ca9b458ea0..a5841241be 100755 --- a/configure +++ b/configure @@ -1964,8 +1964,8 @@ fi # Note that if the Python conditional here evaluates True we will exit # with status 1 which is a shell 'false' value. -if ! $python -c 'import sys; sys.exit(sys.version_info < (3,5))'; then - error_exit "Cannot use '$python', Python >= 3.5 is required." \ +if ! $python -c 'import sys; sys.exit(sys.version_info < (3,6))'; then + error_exit "Cannot use '$python', Python >= 3.6 is required." \ "Use --python=/path/to/python to specify a supported Python." 
fi diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py index 91e4a57126..f48460480a 100644 --- a/tests/qemu-iotests/iotests.py +++ b/tests/qemu-iotests/iotests.py @@ -40,8 +40,6 @@ sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..', 'python')) from qemu import qtest from qemu.qmp import QMPMessage -assert sys.version_info >= (3, 6) - # Use this logger for logging messages directly from the iotests module logger = logging.getLogger('qemu.iotests') logger.addHandler(logging.NullHandler()) -- 2.20.1
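The same tuple comparison drives both the docs/conf.py guard and the configure check above; a minimal sketch of the pattern (names are illustrative, not QEMU's):

```python
import sys

MINIMUM = (3, 6)

def python_version_ok(version_info, minimum=MINIMUM):
    # Tuples compare element-wise, so (3, 5, 9) < (3, 6) <= (3, 6, 0)
    # works out the way you'd hope.
    return version_info >= minimum

# configure runs the inverse test so that Python's exit status becomes
# a shell truth value:
#   if ! $python -c 'import sys; sys.exit(sys.version_info < (3,6))'; then ...
if not python_version_ok(sys.version_info):
    raise SystemExit("Python >= %d.%d is required." % MINIMUM)
```

Note that `sys.exit(True)` maps to exit status 1, which the shell treats as false, hence the `!` negation in configure.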
[PULL 05/14] travis.yml: Drop the Python 3.5 build
From: Thomas Huth We are soon going to remove the support for Python 3.5. So remove the CI job now. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Message-Id: <20200922070441.48844-1-th...@redhat.com> Message-Id: <20200925154027.12672-9-alex.ben...@linaro.org> diff --git a/.travis.yml b/.travis.yml index 990dd11e6f..c255c331a7 100644 --- a/.travis.yml +++ b/.travis.yml @@ -253,15 +253,6 @@ jobs: - TEST_CMD="" -# Python builds -- name: "GCC Python 3.5 (x86_64-softmmu)" - env: -- CONFIG="--target-list=x86_64-softmmu" -- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default" - language: python - python: 3.5 - - # Using newer GCC with sanitizers - name: "GCC9 with sanitizers (softmmu)" dist: bionic -- 2.20.1
[PULL 10/14] tests/docker: Remove old Debian 9 containers
From: Thomas Huth We do not support Debian 9 in QEMU anymore, and the Debian 9 containers are now no longer used in the gitlab-CI. Time to remove them. Signed-off-by: Thomas Huth Signed-off-by: Alex Bennée Reviewed-by: Daniel P. Berrangé Reviewed-by: Philippe Mathieu-Daudé Message-Id: <20200921174320.46062-6-th...@redhat.com> Message-Id: <20200925154027.12672-14-alex.ben...@linaro.org> diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml index 089cea7c14..11d079ea58 100644 --- a/.gitlab-ci.d/containers.yml +++ b/.gitlab-ci.d/containers.yml @@ -48,18 +48,6 @@ amd64-debian11-container: variables: NAME: debian11 -amd64-debian9-container: - <<: *container_job_definition - variables: -NAME: debian9 - -amd64-debian9-mxe-container: - <<: *container_job_definition - stage: containers-layer2 - needs: ['amd64-debian9-container'] - variables: -NAME: debian9-mxe - alpha-debian-cross-container: <<: *container_job_definition stage: containers-layer2 diff --git a/tests/docker/dockerfiles/debian-win32-cross.docker b/tests/docker/dockerfiles/debian-win32-cross.docker deleted file mode 100644 index b045e821b9..00 --- a/tests/docker/dockerfiles/debian-win32-cross.docker +++ /dev/null @@ -1,38 +0,0 @@ -# -# Docker mingw32 cross-compiler target -# -# This docker target builds on the debian Stretch MXE base image. 
-# -FROM qemu/debian9-mxe - -MAINTAINER Philippe Mathieu-Daudé - -ENV TARGET i686 - -ENV PATH $PATH:/usr/lib/mxe/usr/bin:/usr/lib/mxe/usr/$TARGET-w64-mingw32.shared/bin - -ENV PKG_CONFIG_PATH \ -$PKG_CONFIG_PATH:/usr/lib/mxe/usr/$TARGET-w64-mingw32.shared/lib/pkgconfig - -RUN apt-get update && \ -DEBIAN_FRONTEND=noninteractive eatmydata \ -apt-get install -y --no-install-recommends \ -mxe-$TARGET-w64-mingw32.shared-bzip2 \ -mxe-$TARGET-w64-mingw32.shared-curl \ -mxe-$TARGET-w64-mingw32.shared-glib \ -mxe-$TARGET-w64-mingw32.shared-libgcrypt \ -mxe-$TARGET-w64-mingw32.shared-libusb1 \ -mxe-$TARGET-w64-mingw32.shared-lzo \ -mxe-$TARGET-w64-mingw32.shared-nettle \ -mxe-$TARGET-w64-mingw32.shared-ncurses \ -mxe-$TARGET-w64-mingw32.shared-nsis \ -mxe-$TARGET-w64-mingw32.shared-pixman \ -mxe-$TARGET-w64-mingw32.shared-pkgconf \ -mxe-$TARGET-w64-mingw32.shared-pthreads \ -mxe-$TARGET-w64-mingw32.shared-sdl2 \ -mxe-$TARGET-w64-mingw32.shared-sdl2-mixer \ -mxe-$TARGET-w64-mingw32.shared-sdl2-gfx \ -mxe-$TARGET-w64-mingw32.shared-zlib - -# Specify the cross prefix for this image (see tests/docker/common.rc) -ENV QEMU_CONFIGURE_OPTS --cross-prefix=$TARGET-w64-mingw32.shared- diff --git a/tests/docker/dockerfiles/debian-win64-cross.docker b/tests/docker/dockerfiles/debian-win64-cross.docker deleted file mode 100644 index 4cc4a3f365..00 --- a/tests/docker/dockerfiles/debian-win64-cross.docker +++ /dev/null @@ -1,45 +0,0 @@ -# -# Docker mingw64 cross-compiler target -# -# This docker target builds on the debian Stretch MXE base image. 
-# -FROM qemu/debian9-mxe - -MAINTAINER Philippe Mathieu-Daudé - -ENV TARGET x86-64 - -ENV PATH $PATH:/usr/lib/mxe/usr/$TARGET-w64-mingw32.shared/bin - -ENV PKG_CONFIG_PATH \ -$PKG_CONFIG_PATH:/usr/lib/mxe/usr/$TARGET-w64-mingw32.shared/lib/pkgconfig - -RUN apt-get update && \ -DEBIAN_FRONTEND=noninteractive eatmydata \ -apt-get install -y --no-install-recommends \ -mxe-$TARGET-w64-mingw32.shared-bzip2 \ -mxe-$TARGET-w64-mingw32.shared-curl \ -mxe-$TARGET-w64-mingw32.shared-glib \ -mxe-$TARGET-w64-mingw32.shared-libgcrypt \ -mxe-$TARGET-w64-mingw32.shared-libusb1 \ -mxe-$TARGET-w64-mingw32.shared-lzo \ -mxe-$TARGET-w64-mingw32.shared-nettle \ -mxe-$TARGET-w64-mingw32.shared-ncurses \ -mxe-$TARGET-w64-mingw32.shared-nsis \ -mxe-$TARGET-w64-mingw32.shared-pixman \ -mxe-$TARGET-w64-mingw32.shared-pkgconf \ -mxe-$TARGET-w64-mingw32.shared-pthreads \ -mxe-$TARGET-w64-mingw32.shared-sdl2 \ -mxe-$TARGET-w64-mingw32.shared-sdl2-mixer \ -mxe-$TARGET-w64-mingw32.shared-sdl2-gfx \ -mxe-$TARGET-w64-mingw32.shared-zlib \ -curl && \ -curl -s -S -o /usr/lib/mxe/usr/x86_64-w64-mingw32.shared/include/WinHvEmulation.h \ - "https://sourceforge.net/p/mingw-w64/mingw-w64/ci/master/tree/mingw-w64-headers/include/winhvemulation.h?format=raw"; && \ -curl -s -S -o /usr/lib/mxe/usr/x86_64-w64-mingw32.shared/include/WinHvPlatform.h \ - "https://sourceforge.net/p/mingw-w64/mingw-w64/ci/master/tree/mingw-w64-headers/include/winhvplatform.h?format=raw"; && \ -curl -s -S -o /usr/lib/mxe/usr/x86_64-w64-mingw32.shared/include/winhvplatformdefs.h \ - "https://sourceforge.net/p/mingw-w64/mingw-w64/ci/master/tree/mingw-w64-headers/include/winhvplatformdefs.h?format=raw"; - -# Specify the cross prefix for this image (see tests/docker/comm
Re: [PATCH v1 1/1] sheepdog driver patch: fixs the problem of qemu process become crashed when the sheepdog gateway break the IO and then recover
Am 01.10.2020 um 04:21 hat mingwei geschrieben:
> this patch fixs the problem of qemu process become crashed when the
> sheepdog gateway break the IO for a few seconds and then recover.
>
> problem reproduce:
> 1.start a fio process in qemu to produce IOs to sheepdog gateway,
>   whatever IO type you like.
> 2.kill the sheepdog gateway.
> 3.wait for a few seconds.
> 4.restart the sheepdog gateway.
> 5.qemu process crashed with segfault error 6.

Can you post a stack trace? Signal 6 is not a segfault, but SIGABRT.

> problem cause:
> the last io coroutine will be destroyed after reconnect to sheepdog
> gateway, but the coroutine still be scheduled and the s->co_recv is
> still the last io coroutine pointer which had been destroyed, so when
> this coroutine go to coroutine context switch, it will make qemu
> process crashed.
>
> problem fix:
> just make s->co_recv = NULL when the last io coroutine reconnect to
> sheepdog gateway.
>
> Signed-off-by: mingwei
> ---
>  block/sheepdog.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index 2f5c0eb376..3a00f0c1e1 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -727,6 +727,7 @@ static coroutine_fn void reconnect_to_sdog(void *opaque)
>                              NULL, NULL, NULL);
>      close(s->fd);
>      s->fd = -1;
> +    s->co_recv = NULL;

If s->co_recv != NULL before this, there is still a coroutine running
that hasn't terminated yet. Don't we need to make sure that the
coroutine actually terminates instead of just overwriting the pointer
to it? Otherwise, we either leak the coroutine and the memory used for
its stack, or the coroutine continues to run at some point and might
interfere with the operation of the new instance.

>      /* Wait for outstanding write requests to be completed. */
>      while (s->co_send != NULL) {
>          co_write_request(opaque);
>      }

This existing code after your change is wrong, too, by the way. It
potentially calls aio_co_wake() multiple times in a row, which will
crash if it ends up only scheduling the coroutine instead of directly
entering it.

Kevin
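QEMU coroutines are not Python generators, but the hazard described in this review — clearing the last *visible* pointer while the scheduler still holds a reference to a live coroutine — can be sketched with one (purely illustrative, not sheepdog code):

```python
def recv_coroutine(log):
    # Stands in for the receive coroutine: it suspends (as if waiting
    # on the socket) and does more work when somebody re-enters it.
    log.append("started")
    yield
    log.append("resumed")

log = []
co_recv = recv_coroutine(log)
next(co_recv)              # run until the suspension point

scheduled = co_recv        # the event loop still holds a reference...
co_recv = None             # ...so the "s->co_recv = NULL" equivalent
                           # only hides the coroutine, it does not
                           # terminate it

try:
    scheduled.send(None)   # the stale coroutine is entered anyway
except StopIteration:
    pass

assert log == ["started", "resumed"]

# Actually terminating it, as the review suggests, looks like this:
stale = recv_coroutine([])
next(stale)
stale.close()              # delivers GeneratorExit at the yield
```

The point of the sketch is the assertion: the "resumed" step still runs after the pointer was nulled, which in the real C code means entering freed memory.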
Re: [PATCH] meson.build: Don't look for libudev for static builds
On 02/10/20 12:52, Peter Maydell wrote:
> commit f01496a314d916 moved the logic for detecting libudev from
> configure to meson.build, but in the process it dropped the condition
> that meant we only ask pkg-config about libudev for a non-static
> build.
>
> This breaks static builds of the system emulators on at least Ubuntu
> 18.04.4, because on that host there is no static libudev but
> pkg-config still claims it exists.
>
> Reinstate the logic that we had in the configure check.
>
> Signed-off-by: Peter Maydell

I don't think this is a good idea. Having a completely new build system
also lets us notice all these weird one-off tests, decide whether they
are worth being kept (like the SDL -Wno-undef workaround) or not, and
possibly come up with a policy that avoids having to make one-off
decisions.

If "../configure --static" worked and now doesn't then it would be a
clear regression, but a one-off check should have a bigger
justification than "39 --disable switches have worked so far and
39 < 40".

Here are three alternatives to the patch:

- the workaround: just leave things as they are and add
  --disable-libudev to your script. We have just added a month ago a
  completely new dependency that would have the same issue (libu2f); we
  didn't add "test $static" just because you don't have libu2f
  installed and therefore you didn't notice. The day you updated your
  system and ended up with libu2f-dev installed, you would probably
  just add --disable-libu2f instead of adding a test for $static. So
  why should libudev be treated differently?

- the cheap fix: deprecate static building of emulators in the
  configure script (in the meanwhile you add --disable-libudev); if
  people complain, we figure out the dependencies that they care about
  and we try to understand if it's actually possible to build a
  statically linked emulator for their usecase.

- the complicated fix: check which statically linked libraries are
  available in e.g. Debian and disable _all_ other dependencies with
  --static (most of them will be the ones that you already have to
  disable in your script, and most of them will be in configure). Based
  on the outcome, decide whether it's actually possible to build a
  statically linked emulator that makes sense.

I wouldn't be surprised if people trying to statically build emulators
are getting worse performance just due to a missing static libaio or
libio_uring, for example, and these people might even be building a
statically-linked QEMU for use with KVM (like Firecracker), i.e. they
would care about performance.

Finally, no matter how we proceed, static building of system emulators
is not covered by either CI or tests/docker (only user-mode emulators
are). Even if we deprecate it, we should make sure that your
configuration is reproduced in either Travis or GitLab, otherwise
people will keep on breaking it. That would also document that building
a statically-linked system emulator is messy and requires a few dozen
--disable options.

Thanks,

Paolo

> We could certainly do something cleverer here, but basic "convert
> from configure to meson" should in general not also be changing the
> detection logic IMHO. We can make the logic smarter as a follow-on
> patch if desired.
> ---
>  meson.build | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meson.build b/meson.build
> index 3161c1f037a..07da66e1d81 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -271,7 +271,7 @@ if 'CONFIG_CURL' in config_host
>                       link_args: config_host['CURL_LIBS'].split())
>  endif
>  libudev = not_found
> -if targetos == 'linux' and (have_system or have_tools)
> +if targetos == 'linux' and (have_system or have_tools) and not enable_static
>    libudev = dependency('libudev',
>                         required: get_option('mpath').enabled(),
>                         static: enable_static)
>
Re: [PATCH] meson.build: Don't look for libudev for static builds
On 02/10/20 13:15, 罗勇刚(Yonggang Luo) wrote:
> So the better way is pkg-config handling sttaic properly?

The problem is that you cannot handle it properly. Consider for example
libmultipath, which requires the program to define a couple functions
for static linking to work. A compile-time check that the library works
would fail, and therefore Meson (and configure) are just punting and
declaring it a user configuration issue. Meson will warn about it, but
it will proceed anyway.

Paolo
Re: [PATCH qemu 1/4] drive-mirror: add support for sync=bitmap mode=never
Fabian Grünbichler writes: > On October 2, 2020 9:06 am, Markus Armbruster wrote: >> Fabian Grünbichler writes: >> >>> From: John Snow >>> >>> This patch adds support for the "BITMAP" sync mode to drive-mirror and >>> blockdev-mirror. It adds support only for the BitmapSyncMode "never," >>> because it's the simplest mode. >>> >>> This mode simply uses a user-provided bitmap as an initial copy >>> manifest, and then does not clear any bits in the bitmap at the >>> conclusion of the operation. >>> >>> Any new writes dirtied during the operation are copied out, in contrast >>> to backup. Note that whether these writes are reflected in the bitmap >>> at the conclusion of the operation depends on whether that bitmap is >>> actually recording! >>> >>> This patch was originally based on one by Ma Haocong, but it has since >>> been modified pretty heavily. >>> >>> Suggested-by: Ma Haocong >>> Signed-off-by: Ma Haocong >>> Signed-off-by: John Snow >>> Signed-off-by: Fabian Grünbichler >>> --- >> [...] >>> diff --git a/qapi/block-core.json b/qapi/block-core.json >>> index 2d94873ca0..dac5497084 100644 >>> --- a/qapi/block-core.json >>> +++ b/qapi/block-core.json >>> @@ -1961,10 +1961,19 @@ >>> #(all the disk, only the sectors allocated in the topmost image, or >>> #only new I/O). >>> # >>> +# @bitmap: The name of a bitmap to use for sync=bitmap mode. This argument >>> must >>> +# be present for bitmap mode and absent otherwise. The bitmap's >> >> What is "bitmap mode"? Do you mean "present when @bitmap-mode is, else >> absent"? > > bitmap mode is sync=bitmap , as in the first sentence. if you set > sync=bitmap, you must specify a bitmap and a bitmap-mode. if you use > another sync mode, you must not specify a bitmap or a bitmap-mode. Got it. > there is also a 'sugar' sync mode 'incremental' that desugars to > sync=bitmap with bitmap-mode=on-success. 
I guess that should also be > mentioned somewhere in QAPI, it's mainly there since MirrorSyncMode has > it as possible value, it's semantics are straight-forward to map onto > this combination, and it's how the sync modes are known from backup > jobs. > > maybe the following is easier to understand and more aligned with > bitmap-mode: > > The name of the bitmap to use for sync=bitmap/sync=incremental mode. > Must be present if sync is "bitmap" or "incremental". Must NOT be > present otherwise. Better. >>> +# granularity is used instead of @granularity (since 5.2). >>> +# >>> +# @bitmap-mode: Specifies the type of data the bitmap should contain after >>> +# the operation concludes. Must be present if sync is >>> "bitmap". >>> +# Must NOT be present otherwise. (Since 5.2) > > Specifies the type of data the bitmap should contain after the operation > concludes. Must be present if sync is "bitmap". Must be "on-success" or > absent if sync is "incremental". Must NOT be present otherwise. Thanks for closing the gaps. >>> +# >>> # @granularity: granularity of the dirty bitmap, default is 64K >>> # if the image format doesn't have clusters, 4K if the >>> clusters >>> # are smaller than that, else the cluster size. Must be a >>> -# power of 2 between 512 and 64M (since 1.4). >>> +# power of 2 between 512 and 64M. Must not be specified if >>> +# @bitmap is present (since 1.4). >>> # >> >> Is @granularity forbidden with @bitmap because it makes no sense? > > yes. > >> >> If yes, then it doesn't actually default to anything then, does it? > > we must use the same granularity as the sync bitmap passed in via > 'bitmap', so the caller can't set a different one. Contradicts the doc comment :) Shouldn't be hard to fix. >> Looks like we have >> >> sync'bitmap'anything else >> - >> bitmap requiredforbidden >> bitmap-mode requiredforbidden >> granularity forbidden optional >> >> Correct? > > yes. with the addition of sync=incremental as subset of sync=bitmap, as > described above. 
When you have members that make sense only for certain values of another member, you should consider (flat) unions. I'm not sure a flat union makes sense here, but I wanted to mention it. >>> # @buf-size: maximum amount of data in flight from source to >>> #target (since 1.4). >>> @@ -2002,7 +2011,9 @@ >>> { 'struct': 'DriveMirror', >>>'data': { '*job-id': 'str', 'device': 'str', 'target': 'str', >>> '*format': 'str', '*node-name': 'str', '*replaces': 'str', >>> -'sync': 'MirrorSyncMode', '*mode': 'NewImageMode', >>> +'sync': 'MirrorSyncMode', '*bitmap': 'str', >>> +'*bitmap-mode': 'BitmapSyncMode', >>> +'*mode': 'NewImageMode', >>> '*speed': 'int', '*granularity': 'uint32', >>> '*buf-si
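The argument matrix worked out in this thread can be sketched as a small validator (a hypothetical helper, not QEMU code; names follow the QAPI fields):

```python
def check_mirror_args(sync, bitmap=None, bitmap_mode=None, granularity=None):
    """Validate the combinations from the review thread.

    sync          'bitmap'    'incremental'            anything else
    -----------------------------------------------------------------
    bitmap        required    required                 forbidden
    bitmap-mode   required    'on-success' or absent   forbidden
    granularity   forbidden   forbidden                optional
    """
    if sync == 'bitmap':
        if bitmap is None or bitmap_mode is None:
            raise ValueError("sync=bitmap requires @bitmap and @bitmap-mode")
    elif sync == 'incremental':
        # sugar for sync=bitmap with bitmap-mode=on-success
        if bitmap is None:
            raise ValueError("sync=incremental requires @bitmap")
        if bitmap_mode not in (None, 'on-success'):
            raise ValueError("sync=incremental implies bitmap-mode=on-success")
    else:
        if bitmap is not None or bitmap_mode is not None:
            raise ValueError("@bitmap/@bitmap-mode require sync=bitmap")
    if bitmap is not None and granularity is not None:
        raise ValueError("@granularity is taken from the bitmap itself")
```

This is one way to see why a flat union is tempting: the validity of three optional members hinges entirely on the value of `sync`.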
[PULL 01/19] hw/s390x/css: Remove double initialization
From: Philippe Mathieu-Daudé

Fix eventual copy/paste mistake introduced in commit bc994b74ea
("s390x/css: Use static initialization for channel_subsys fields").

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Thomas Huth
Message-Id: <20200907024020.854465-1-phi...@redhat.com>
Signed-off-by: Cornelia Huck
---
 hw/s390x/css.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 519dc91316d7..9961cfe7bf67 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -353,7 +353,6 @@ static ChannelSubSys channel_subsys = {
     .pending_crws = QTAILQ_HEAD_INITIALIZER(channel_subsys.pending_crws),
     .do_crw_mchk = true,
     .sei_pending = false,
-    .do_crw_mchk = true,
     .crws_lost = false,
     .chnmon_active = false,
     .indicator_addresses =
--
2.25.4
[PULL 00/19] s390x changes
The following changes since commit b5ce42f5d138d7546f9faa2decbd6ee8702243a3: Merge remote-tracking branch 'remotes/jsnow-gitlab/tags/ide-pull-request' into staging (2020-10-01 19:55:10 +0100) are available in the Git repository at: https://github.com/cohuck/qemu tags/s390x-20201002 for you to fetch changes up to be2b567018d987591647935a7c9648e9c45e05e8: s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA) (2020-10-02 13:52:49 +0200) s390x update - support extended sccb and diagnose 0x318 - implement additional instructions in tcg - bug fixes Collin L. Walling (7): s390/sclp: get machine once during read scp/cpu info s390/sclp: rework sclp boundary checks s390/sclp: read sccb from mem based on provided length s390/sclp: check sccb len before filling in data s390/sclp: use cpu offset to locate cpu entries s390/sclp: add extended-length sccb support for kvm guest s390: guest support for diagnose 0x318 Cornelia Huck (1): vfio-ccw: plug memory leak while getting region info David Hildenbrand (10): s390x/tcg: Implement MONITOR CALL s390x/cpumodel: S390_FEAT_MISC_INSTRUCTION_EXT -> S390_FEAT_MISC_INSTRUCTION_EXT2 s390x/tcg: Implement ADD HALFWORD (AGH) s390x/tcg: Implement SUBTRACT HALFWORD (SGH) s390x/tcg: Implement MULTIPLY (MG, MGRK) s390x/tcg: Implement MULTIPLY HALFWORD (MGH) s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC) s390x/tcg: Implement MULTIPLY SINGLE (MSC, MSGC, MSGRKC, MSRKC) s390x/tcg: We support Miscellaneous-Instruction-Extensions Facility 2 s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA) Philippe Mathieu-Daudé (1): hw/s390x/css: Remove double initialization hw/s390x/css.c | 1 - hw/s390x/event-facility.c | 2 +- hw/s390x/sclp.c | 142 hw/vfio/ccw.c | 5 +- include/hw/s390x/sclp.h | 11 ++- target/s390x/cc_helper.c| 32 +++ target/s390x/cpu.h | 2 + target/s390x/cpu_features.h | 1 + target/s390x/cpu_features_def.h.inc | 6 +- target/s390x/cpu_models.c | 1 + target/s390x/excp_helper.c | 23 + target/s390x/gen-features.c | 6 +- 
target/s390x/helper.c | 2 + target/s390x/helper.h | 1 + target/s390x/insn-data.def | 15 +++ target/s390x/internal.h | 2 + target/s390x/kvm.c | 47 + target/s390x/machine.c | 17 target/s390x/translate.c| 69 ++ 19 files changed, 340 insertions(+), 45 deletions(-) -- 2.25.4
[PULL 03/19] s390/sclp: rework sclp boundary checks
From: Collin Walling Rework the SCLP boundary check to account for different SCLP commands (eventually) allowing different boundary sizes. Signed-off-by: Collin Walling Reviewed-by: Cornelia Huck Reviewed-by: Thomas Huth Acked-by: Janosch Frank Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-3-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 19 ++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index 28b973de8fd2..a37cfbf534cd 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -49,6 +49,18 @@ static inline bool sclp_command_code_valid(uint32_t code) return false; } +static bool sccb_verify_boundary(uint64_t sccb_addr, uint16_t sccb_len) +{ +uint64_t sccb_max_addr = sccb_addr + sccb_len - 1; +uint64_t sccb_boundary = (sccb_addr & PAGE_MASK) + PAGE_SIZE; + +if (sccb_max_addr < sccb_boundary) { +return true; +} + +return false; +} + static void prepare_cpu_entries(MachineState *ms, CPUEntry *entry, int *count) { uint8_t features[SCCB_CPU_FEATURE_LEN] = { 0 }; @@ -229,6 +241,11 @@ int sclp_service_call_protected(CPUS390XState *env, uint64_t sccb, goto out_write; } +if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb.h.length))) { +work_sccb.h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); +goto out_write; +} + sclp_c->execute(sclp, &work_sccb, code); out_write: s390_cpu_pv_mem_write(env_archcpu(env), 0, &work_sccb, @@ -274,7 +291,7 @@ int sclp_service_call(CPUS390XState *env, uint64_t sccb, uint32_t code) goto out_write; } -if ((sccb + be16_to_cpu(work_sccb.h.length)) > ((sccb & PAGE_MASK) + PAGE_SIZE)) { +if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb.h.length))) { work_sccb.h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); goto out_write; } -- 2.25.4
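The new helper's arithmetic can be checked in isolation; a sketch assuming the usual 4 KiB page size:

```python
PAGE_SIZE = 4096
PAGE_MASK = ~(PAGE_SIZE - 1)

def sccb_verify_boundary(sccb_addr, sccb_len):
    """Python rendering of the new C helper: the SCCB, starting at
    sccb_addr and sccb_len bytes long, must not cross the boundary of
    the page it starts in."""
    sccb_max_addr = sccb_addr + sccb_len - 1
    sccb_boundary = (sccb_addr & PAGE_MASK) + PAGE_SIZE
    return sccb_max_addr < sccb_boundary
```

A page-aligned, page-sized SCCB just fits; the same length starting mid-page spills over and is rejected, which is exactly the case the old inline check in sclp_service_call() caught.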
[PULL 08/19] s390: guest support for diagnose 0x318
From: Collin Walling DIAGNOSE 0x318 (diag318) is an s390 instruction that allows the storage of diagnostic information that is collected by the firmware in the case of hardware/firmware service events. QEMU handles the instruction by storing the info in the CPU state. A subsequent register sync will communicate the data to the hypervisor. QEMU handles the migration via a VM State Description. This feature depends on the Extended-Length SCCB (els) feature. If els is not present, then a warning will be printed and the SCLP bit that allows the Linux kernel to execute the instruction will not be set. Availability of this instruction is determined by byte 134 (aka fac134) bit 0 of the SCLP Read Info block. This coincidentally expands into the space used for CPU entries, which means VMs running with the diag318 capability may not be able to read information regarding all CPUs unless the guest kernel supports an extended-length SCCB. This feature is not supported in protected virtualization mode. 
Signed-off-by: Collin Walling Acked-by: Janosch Frank Acked-by: Thomas Huth Acked-by: David Hildenbrand Acked-by: Claudio Imbrenda Message-Id: <20200915194416.107460-9-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 5 include/hw/s390x/sclp.h | 8 ++ target/s390x/cpu.h | 2 ++ target/s390x/cpu_features.h | 1 + target/s390x/cpu_features_def.h.inc | 3 +++ target/s390x/cpu_models.c | 1 + target/s390x/gen-features.c | 1 + target/s390x/kvm.c | 39 + target/s390x/machine.c | 17 + 9 files changed, 77 insertions(+) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index caf40f41b69e..00f1e4648db2 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -139,6 +139,11 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) s390_get_feat_block(S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT, read_info->conf_char_ext); +if (s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB)) { +s390_get_feat_block(S390_FEAT_TYPE_SCLP_FAC134, +&read_info->fac134); +} + read_info->facilities = cpu_to_be64(SCLP_HAS_CPU_INFO | SCLP_HAS_IOA_RECONFIG); diff --git a/include/hw/s390x/sclp.h b/include/hw/s390x/sclp.h index 88fb65aef44b..d3ade40a5a8d 100644 --- a/include/hw/s390x/sclp.h +++ b/include/hw/s390x/sclp.h @@ -134,7 +134,15 @@ typedef struct ReadInfo { uint16_t highest_cpu; uint8_t _reserved5[124 - 122]; /* 122-123 */ uint32_t hmfai; +uint8_t _reserved7[134 - 128]; /* 128-133 */ +uint8_t fac134; +uint8_t _reserved8[144 - 135]; /* 135-143 */ struct CPUEntry entries[]; +/* + * When the Extended-Length SCCB (ELS) feature is enabled the + * start of the entries field begins at an offset denoted by the + * offset_cpu field, otherwise it's at an offset of 128. 
+ */ } QEMU_PACKED ReadInfo; typedef struct ReadCpuInfo { diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h index 035427521cec..f875ebf0f491 100644 --- a/target/s390x/cpu.h +++ b/target/s390x/cpu.h @@ -112,6 +112,8 @@ struct CPUS390XState { uint16_t external_call_addr; DECLARE_BITMAP(emergency_signals, S390_MAX_CPUS); +uint64_t diag318_info; + /* Fields up to this point are cleared by a CPU reset */ struct {} end_reset_fields; diff --git a/target/s390x/cpu_features.h b/target/s390x/cpu_features.h index 2a2947549387..ef52ffce83ec 100644 --- a/target/s390x/cpu_features.h +++ b/target/s390x/cpu_features.h @@ -23,6 +23,7 @@ typedef enum { S390_FEAT_TYPE_STFL, S390_FEAT_TYPE_SCLP_CONF_CHAR, S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT, +S390_FEAT_TYPE_SCLP_FAC134, S390_FEAT_TYPE_SCLP_CPU, S390_FEAT_TYPE_MISC, S390_FEAT_TYPE_PLO, diff --git a/target/s390x/cpu_features_def.h.inc b/target/s390x/cpu_features_def.h.inc index 1c04cc18f40f..f82b4b5ec16a 100644 --- a/target/s390x/cpu_features_def.h.inc +++ b/target/s390x/cpu_features_def.h.inc @@ -122,6 +122,9 @@ DEF_FEAT(SIE_CMMA, "cmma", SCLP_CONF_CHAR_EXT, 1, "SIE: Collaborative-memory-man DEF_FEAT(SIE_PFMFI, "pfmfi", SCLP_CONF_CHAR_EXT, 9, "SIE: PFMF interpretation facility") DEF_FEAT(SIE_IBS, "ibs", SCLP_CONF_CHAR_EXT, 10, "SIE: Interlock-and-broadcast-suppression facility") +/* Features exposed via SCLP SCCB Facilities byte 134 (bit numbers relative to byte-134) */ +DEF_FEAT(DIAG_318, "diag318", SCLP_FAC134, 0, "Control program name and version codes") + /* Features exposed via SCLP CPU info. */ DEF_FEAT(SIE_F2, "sief2", SCLP_CPU, 4, "SIE: interception format 2 (Virtual SIE)") DEF_FEAT(SIE_SKEY, "skey", SCLP_CPU, 5, "SIE: Storage-key facility") diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c index b97e9596ab03..ca484bfda7be 100644 --- a/target/s390x/cpu_models.c +++ b/target/s390x/cpu_models.c
[PULL 02/19] s390/sclp: get machine once during read scp/cpu info
From: Collin Walling Functions within read scp/cpu info will need access to the machine state. Let's make a call to retrieve the machine state once and pass the appropriate data to the respective functions. Signed-off-by: Collin Walling Reviewed-by: David Hildenbrand Reviewed-by: Thomas Huth Reviewed-by: Janosch Frank Reviewed-by: Cornelia Huck Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-2-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index a0ce444b4bf2..28b973de8fd2 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -49,9 +49,8 @@ static inline bool sclp_command_code_valid(uint32_t code) return false; } -static void prepare_cpu_entries(SCLPDevice *sclp, CPUEntry *entry, int *count) +static void prepare_cpu_entries(MachineState *ms, CPUEntry *entry, int *count) { -MachineState *ms = MACHINE(qdev_get_machine()); uint8_t features[SCCB_CPU_FEATURE_LEN] = { 0 }; int i; @@ -77,7 +76,7 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) IplParameterBlock *ipib = s390_ipl_get_iplb(); /* CPU information */ -prepare_cpu_entries(sclp, read_info->entries, &cpu_count); +prepare_cpu_entries(machine, read_info->entries, &cpu_count); read_info->entries_cpu = cpu_to_be16(cpu_count); read_info->offset_cpu = cpu_to_be16(offsetof(ReadInfo, entries)); read_info->highest_cpu = cpu_to_be16(machine->smp.max_cpus - 1); @@ -132,10 +131,11 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) /* Provide information about the CPU */ static void sclp_read_cpu_info(SCLPDevice *sclp, SCCB *sccb) { +MachineState *machine = MACHINE(qdev_get_machine()); ReadCpuInfo *cpu_info = (ReadCpuInfo *) sccb; int cpu_count; -prepare_cpu_entries(sclp, cpu_info->entries, &cpu_count); +prepare_cpu_entries(machine, cpu_info->entries, &cpu_count); cpu_info->nr_configured = cpu_to_be16(cpu_count); cpu_info->offset_configured = 
cpu_to_be16(offsetof(ReadCpuInfo, entries)); cpu_info->nr_standby = cpu_to_be16(0); -- 2.25.4
[PULL 12/19] s390x/tcg: Implement ADD HALFWORD (AGH)
From: David Hildenbrand Easy, just like ADD HALFWORD IMMEDIATE (AGHI). Signed-off-by: David Hildenbrand Reviewed-by: Thomas Huth Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-3-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/insn-data.def | 1 + target/s390x/translate.c | 1 + 2 files changed, 2 insertions(+) diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index e14cbd63fa0a..dbcdb4c387fa 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -52,6 +52,7 @@ /* ADD HALFWORD */ C(0x4a00, AH, RX_a, Z, r1, m2_16s, new, r1_32, add, adds32) C(0xe37a, AHY, RXY_a, LD, r1, m2_16s, new, r1_32, add, adds32) +C(0xe338, AGH, RXY_a, MIE2,r1, m2_16s, r1, 0, add, adds64) /* ADD HALFWORD IMMEDIATE */ C(0xa70a, AHI, RI_a, Z, r1, i2, new, r1_32, add, adds32) C(0xa70b, AGHI,RI_a, Z, r1, i2, r1, 0, add, adds64) diff --git a/target/s390x/translate.c b/target/s390x/translate.c index 90dc1740e7ab..7ea666b9a7fb 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -6119,6 +6119,7 @@ enum DisasInsnEnum { #define FAC_AIS S390_FEAT_ADAPTER_INT_SUPPRESSION #define FAC_V S390_FEAT_VECTOR /* vector facility */ #define FAC_VE S390_FEAT_VECTOR_ENH /* vector enhancements facility 1 */ +#define FAC_MIE2S390_FEAT_MISC_INSTRUCTION_EXT2 /* miscellaneous-instruction-extensions facility 2 */ static const DisasInsn insn_info[] = { #include "insn-data.def" -- 2.25.4
[PULL 05/19] s390/sclp: check sccb len before filling in data
From: Collin Walling The SCCB must be checked for a sufficient length before it is filled with any data. If the length is insufficient, then the SCLP command is suppressed and the proper response code is set in the SCCB header. While we're at it, let's cleanup the length check by placing the calculation inside a macro. Fixes: 832be0d8a3bb ("s390x: sclp: Report insufficient SCCB length") Signed-off-by: Collin Walling Reviewed-by: Janosch Frank Reviewed-by: David Hildenbrand Reviewed-by: Cornelia Huck Reviewed-by: Thomas Huth Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-5-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 26 ++ 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index 4ae6fb400b40..0d54075309d5 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -78,6 +78,8 @@ static void prepare_cpu_entries(MachineState *ms, CPUEntry *entry, int *count) } } +#define SCCB_REQ_LEN(s, max_cpus) (sizeof(s) + max_cpus * sizeof(CPUEntry)) + /* Provide information about the configuration, CPUs and storage */ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) { @@ -86,6 +88,12 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) int cpu_count; int rnsize, rnmax; IplParameterBlock *ipib = s390_ipl_get_iplb(); +int required_len = SCCB_REQ_LEN(ReadInfo, machine->possible_cpus->len); + +if (be16_to_cpu(sccb->h.length) < required_len) { +sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); +return; +} /* CPU information */ prepare_cpu_entries(machine, read_info->entries, &cpu_count); @@ -95,12 +103,6 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) read_info->ibc_val = cpu_to_be32(s390_get_ibc_val()); -if (be16_to_cpu(sccb->h.length) < -(sizeof(ReadInfo) + cpu_count * sizeof(CPUEntry))) { -sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); -return; -} - /* Configuration Characteristic (Extension) */ 
s390_get_feat_block(S390_FEAT_TYPE_SCLP_CONF_CHAR, read_info->conf_char); @@ -146,18 +148,18 @@ static void sclp_read_cpu_info(SCLPDevice *sclp, SCCB *sccb) MachineState *machine = MACHINE(qdev_get_machine()); ReadCpuInfo *cpu_info = (ReadCpuInfo *) sccb; int cpu_count; +int required_len = SCCB_REQ_LEN(ReadCpuInfo, machine->possible_cpus->len); + +if (be16_to_cpu(sccb->h.length) < required_len) { +sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); +return; +} prepare_cpu_entries(machine, cpu_info->entries, &cpu_count); cpu_info->nr_configured = cpu_to_be16(cpu_count); cpu_info->offset_configured = cpu_to_be16(offsetof(ReadCpuInfo, entries)); cpu_info->nr_standby = cpu_to_be16(0); -if (be16_to_cpu(sccb->h.length) < -(sizeof(ReadCpuInfo) + cpu_count * sizeof(CPUEntry))) { -sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); -return; -} - /* The standby offset is 16-byte for each CPU */ cpu_info->offset_standby = cpu_to_be16(cpu_info->offset_configured + cpu_info->nr_configured*sizeof(CPUEntry)); -- 2.25.4
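The up-front length check can be modelled in isolation. The struct layout, entry size, and response codes below are simplified stand-ins (the real ReadInfo layout and SCLP_RC_* values differ); the pattern is what matters: compute the required length from the fixed part plus one CPU entry per possible CPU, and reject an undersized SCCB before writing any data into it.

```c
#include <stdint.h>

/* Simplified stand-ins for the real SCLP structures and response codes. */
typedef struct {
    uint16_t length;
    uint16_t response_code;
} FakeHeader;

typedef struct {
    FakeHeader h;
    uint8_t fixed_part[112];    /* fixed fields before the CPU entries */
} FakeReadInfo;

#define FAKE_CPU_ENTRY_SIZE 16u
#define SCCB_REQ_LEN(s, max_cpus) (sizeof(s) + (max_cpus) * FAKE_CPU_ENTRY_SIZE)

enum { RC_NORMAL = 0x0010, RC_INSUFFICIENT_SCCB_LENGTH = 0x0300 };

/* Returns the response code: reject before filling in any data. */
static uint16_t read_scp_check(uint16_t sccb_len, unsigned max_cpus)
{
    if (sccb_len < SCCB_REQ_LEN(FakeReadInfo, max_cpus)) {
        return RC_INSUFFICIENT_SCCB_LENGTH;
    }
    return RC_NORMAL;
}
```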
[PULL 04/19] s390/sclp: read sccb from mem based on provided length
From: Collin Walling The header contained within the SCCB passed to the SCLP service call contains the actual length of the SCCB. Instead of allocating a static 4K size for the work sccb, let's allow for a variable size determined by the value in the header. The proper checks are already in place to ensure the SCCB length is sufficent to store a full response and that the length does not cross any explicitly-set boundaries. Signed-off-by: Collin Walling Reviewed-by: Thomas Huth Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-4-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/event-facility.c | 2 +- hw/s390x/sclp.c | 55 ++- include/hw/s390x/sclp.h | 2 +- 3 files changed, 33 insertions(+), 26 deletions(-) diff --git a/hw/s390x/event-facility.c b/hw/s390x/event-facility.c index 645b4080c5b9..ed92ce510d9e 100644 --- a/hw/s390x/event-facility.c +++ b/hw/s390x/event-facility.c @@ -213,7 +213,7 @@ static uint16_t handle_sccb_read_events(SCLPEventFacility *ef, SCCB *sccb, event_buf = &red->ebh; event_buf->length = 0; -slen = sizeof(sccb->data); +slen = sccb_data_len(sccb); rc = SCLP_RC_NO_EVENT_BUFFERS_STORED; diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index a37cfbf534cd..4ae6fb400b40 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -231,25 +231,29 @@ int sclp_service_call_protected(CPUS390XState *env, uint64_t sccb, { SCLPDevice *sclp = get_sclp_device(); SCLPDeviceClass *sclp_c = SCLP_GET_CLASS(sclp); -SCCB work_sccb; -hwaddr sccb_len = sizeof(SCCB); +SCCBHeader header; +g_autofree SCCB *work_sccb = NULL; -s390_cpu_pv_mem_read(env_archcpu(env), 0, &work_sccb, sccb_len); +s390_cpu_pv_mem_read(env_archcpu(env), 0, &header, sizeof(SCCBHeader)); + +work_sccb = g_malloc0(be16_to_cpu(header.length)); +s390_cpu_pv_mem_read(env_archcpu(env), 0, work_sccb, + be16_to_cpu(header.length)); if (!sclp_command_code_valid(code)) { -work_sccb.h.response_code = cpu_to_be16(SCLP_RC_INVALID_SCLP_COMMAND); +work_sccb->h.response_code = 
cpu_to_be16(SCLP_RC_INVALID_SCLP_COMMAND); goto out_write; } -if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb.h.length))) { -work_sccb.h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); +if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb->h.length))) { +work_sccb->h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); goto out_write; } -sclp_c->execute(sclp, &work_sccb, code); +sclp_c->execute(sclp, work_sccb, code); out_write: -s390_cpu_pv_mem_write(env_archcpu(env), 0, &work_sccb, - be16_to_cpu(work_sccb.h.length)); +s390_cpu_pv_mem_write(env_archcpu(env), 0, work_sccb, + be16_to_cpu(work_sccb->h.length)); sclp_c->service_interrupt(sclp, SCLP_PV_DUMMY_ADDR); return 0; } @@ -258,9 +262,8 @@ int sclp_service_call(CPUS390XState *env, uint64_t sccb, uint32_t code) { SCLPDevice *sclp = get_sclp_device(); SCLPDeviceClass *sclp_c = SCLP_GET_CLASS(sclp); -SCCB work_sccb; - -hwaddr sccb_len = sizeof(SCCB); +SCCBHeader header; +g_autofree SCCB *work_sccb = NULL; /* first some basic checks on program checks */ if (env->psw.mask & PSW_MASK_PSTATE) { @@ -274,32 +277,36 @@ int sclp_service_call(CPUS390XState *env, uint64_t sccb, uint32_t code) return -PGM_SPECIFICATION; } +/* the header contains the actual length of the sccb */ +cpu_physical_memory_read(sccb, &header, sizeof(SCCBHeader)); + +/* Valid sccb sizes */ +if (be16_to_cpu(header.length) < sizeof(SCCBHeader)) { +return -PGM_SPECIFICATION; +} + /* * we want to work on a private copy of the sccb, to prevent guests * from playing dirty tricks by modifying the memory content after * the host has checked the values */ -cpu_physical_memory_read(sccb, &work_sccb, sccb_len); - -/* Valid sccb sizes */ -if (be16_to_cpu(work_sccb.h.length) < sizeof(SCCBHeader)) { -return -PGM_SPECIFICATION; -} +work_sccb = g_malloc0(be16_to_cpu(header.length)); +cpu_physical_memory_read(sccb, work_sccb, be16_to_cpu(header.length)); if (!sclp_command_code_valid(code)) { -work_sccb.h.response_code = 
cpu_to_be16(SCLP_RC_INVALID_SCLP_COMMAND); +work_sccb->h.response_code = cpu_to_be16(SCLP_RC_INVALID_SCLP_COMMAND); goto out_write; } -if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb.h.length))) { -work_sccb.h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); +if (!sccb_verify_boundary(sccb, be16_to_cpu(work_sccb->h.length))) { +work_sccb->h.response_code = cpu_to_be16(SCLP_RC_SCCB_BOUNDARY_VIOLATION); goto out_write; } -sclp_c->execute(sclp, &work_sccb, code
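The two-step copy that replaces the fixed 4K `work_sccb` can be sketched as follows. `guest_read()` is a stand-in for `cpu_physical_memory_read()`, the header is treated as host-endian for brevity (the real code uses `be16_to_cpu()`), and the length floor is the header size, mirroring the `-PGM_SPECIFICATION` check.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in header; the real SCCBHeader stores length big-endian. */
typedef struct {
    uint16_t length;    /* actual SCCB length, set by the guest */
    uint8_t  rest[6];
} FakeHeader;

/* Stand-in for cpu_physical_memory_read(). */
static void guest_read(const uint8_t *guest_mem, void *dst, size_t len)
{
    memcpy(dst, guest_mem, len);
}

/*
 * Read the header first to learn the guest-declared length, validate it,
 * then allocate a private copy of exactly that size.  Returns NULL for an
 * invalid length (-PGM_SPECIFICATION in the real code).
 */
static uint8_t *copy_sccb(const uint8_t *guest_mem)
{
    FakeHeader h;
    uint8_t *work;

    guest_read(guest_mem, &h, sizeof(h));
    if (h.length < sizeof(FakeHeader)) {
        return NULL;
    }
    work = calloc(1, h.length);
    if (!work) {
        return NULL;
    }
    guest_read(guest_mem, work, h.length);
    return work;
}
```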
[PULL 07/19] s390/sclp: add extended-length sccb support for kvm guest
From: Collin Walling

As more features and facilities are added to the Read SCP Info (RSCPI) response, more space is required to store them. The space used to store these new features intrudes on the space originally used to store CPU entries. This means as more features and facilities are added to the RSCPI response, less space can be used to store CPU entries.

With the Extended-Length SCCB (ELS) facility, a KVM guest can execute the RSCPI command and determine if the SCCB is large enough to store a complete response. If it is not large enough, then the required length will be set in the SCCB header. The caller of the SCLP command is responsible for creating a large-enough SCCB to store a complete response. Proper checking should be in place, and the caller should execute the command once more with the large-enough SCCB.

This facility also enables an extended SCCB for the Read CPU Info (RCPUI) command. When this facility is enabled, the boundary violation response cannot be a result from the RSCPI, RSCPI Forced, or RCPUI commands.

In order to tolerate kernels that do not yet have full support for this feature, a "fixed" offset to the start of the CPU Entries within the Read SCP Info struct is set to allow for the original 248 max entries when this feature is disabled.

Additionally, this is introduced as a CPU feature to protect the guest from migrating to a machine that does not support storing an extended SCCB. This could otherwise hinder the VM from being able to read all available CPU entries after migration (such as during re-ipl).
Signed-off-by: Collin Walling Reviewed-by: Thomas Huth Acked-by: Cornelia Huck Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-7-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 43 + include/hw/s390x/sclp.h | 1 + target/s390x/cpu_features_def.h.inc | 1 + target/s390x/gen-features.c | 1 + target/s390x/kvm.c | 8 ++ 5 files changed, 48 insertions(+), 6 deletions(-) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index 1df67c99bfb9..caf40f41b69e 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -49,13 +49,30 @@ static inline bool sclp_command_code_valid(uint32_t code) return false; } -static bool sccb_verify_boundary(uint64_t sccb_addr, uint16_t sccb_len) +static bool sccb_verify_boundary(uint64_t sccb_addr, uint16_t sccb_len, + uint32_t code) { uint64_t sccb_max_addr = sccb_addr + sccb_len - 1; uint64_t sccb_boundary = (sccb_addr & PAGE_MASK) + PAGE_SIZE; -if (sccb_max_addr < sccb_boundary) { -return true; +switch (code & SCLP_CMD_CODE_MASK) { +case SCLP_CMDW_READ_SCP_INFO: +case SCLP_CMDW_READ_SCP_INFO_FORCED: +case SCLP_CMDW_READ_CPU_INFO: +/* + * An extended-length SCCB is only allowed for Read SCP/CPU Info and + * is allowed to exceed the 4k boundary. The respective commands will + * set the length field to the required length if an insufficient + * SCCB length is provided. 
+ */ +if (s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB)) { +return true; +} +/* fallthrough */ +default: +if (sccb_max_addr < sccb_boundary) { +return true; +} } return false; @@ -80,6 +97,12 @@ static void prepare_cpu_entries(MachineState *ms, CPUEntry *entry, int *count) #define SCCB_REQ_LEN(s, max_cpus) (sizeof(s) + max_cpus * sizeof(CPUEntry)) +static inline bool ext_len_sccb_supported(SCCBHeader header) +{ +return s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) && + header.control_mask[2] & SCLP_VARIABLE_LENGTH_RESPONSE; +} + /* Provide information about the configuration, CPUs and storage */ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) { @@ -89,10 +112,15 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) int rnsize, rnmax; IplParameterBlock *ipib = s390_ipl_get_iplb(); int required_len = SCCB_REQ_LEN(ReadInfo, machine->possible_cpus->len); -int offset_cpu = offsetof(ReadInfo, entries); +int offset_cpu = s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) ? + offsetof(ReadInfo, entries) : + SCLP_READ_SCP_INFO_FIXED_CPU_OFFSET; CPUEntry *entries_start = (void *)sccb + offset_cpu; if (be16_to_cpu(sccb->h.length) < required_len) { +if (ext_len_sccb_supported(sccb->h)) { +sccb->h.length = cpu_to_be16(required_len); +} sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); return; } @@ -153,6 +181,9 @@ static void sclp_read_cpu_info(SCLPDevice *sclp, SCCB *sccb) int required_len = SCCB_REQ_LEN(ReadCpuInfo, machine->possible_cpus->len); if (be16_to_cpu(sccb->h.length) < required_len) { +if (ext_len_sccb_supported(sccb->h)) { +sccb->h.length
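The reworked boundary check reduces to the following logic. The command codes are illustrative stand-ins (not the real SCLP_CMDW_* values) and the feature check is passed in as a flag (`els_active`) instead of calling `s390_has_feat()`; the behaviour matches the patch: Read SCP/CPU Info may cross the 4 KiB page boundary when the extended-length-SCCB feature is active, everything else must stay within one page.

```c
#include <stdbool.h>
#include <stdint.h>

#define FAKE_PAGE_SIZE 4096ull
#define FAKE_PAGE_MASK (~(FAKE_PAGE_SIZE - 1))

/* Illustrative command codes, not the real SCLP_CMDW_* values. */
enum { CMD_READ_SCP_INFO = 1, CMD_READ_CPU_INFO = 2, CMD_OTHER = 99 };

static bool verify_boundary(uint64_t sccb_addr, uint16_t sccb_len, int cmd,
                            bool els_active)
{
    uint64_t sccb_max_addr = sccb_addr + sccb_len - 1;
    uint64_t sccb_boundary = (sccb_addr & FAKE_PAGE_MASK) + FAKE_PAGE_SIZE;

    switch (cmd) {
    case CMD_READ_SCP_INFO:
    case CMD_READ_CPU_INFO:
        if (els_active) {
            return true;            /* may exceed the 4k boundary */
        }
        /* fallthrough */
    default:
        return sccb_max_addr < sccb_boundary;
    }
}
```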
[PULL 11/19] s390x/cpumodel: S390_FEAT_MISC_INSTRUCTION_EXT -> S390_FEAT_MISC_INSTRUCTION_EXT2
From: David Hildenbrand Let's avoid confusion with the "Miscellaneous-Instruction-Extensions Facility 1" Suggested-by: Thomas Huth Signed-off-by: David Hildenbrand Cc: Christian Borntraeger Message-Id: <20200928122717.30586-2-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/cpu_features_def.h.inc | 2 +- target/s390x/gen-features.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/target/s390x/cpu_features_def.h.inc b/target/s390x/cpu_features_def.h.inc index f82b4b5ec16a..7db3449e0434 100644 --- a/target/s390x/cpu_features_def.h.inc +++ b/target/s390x/cpu_features_def.h.inc @@ -72,7 +72,7 @@ DEF_FEAT(INTERLOCKED_ACCESS_2, "iacc2", STFL, 52, "Interlocked-access facility 2 DEF_FEAT(STFLE_53, "stfle53", STFL, 53, "Various facilities introduced with z13") DEF_FEAT(ENTROPY_ENC_COMP, "eec", STFL, 54, "Entropy encoding compression facility") DEF_FEAT(MSA_EXT_5, "msa5-base", STFL, 57, "Message-security-assist-extension-5 facility (excluding subfunctions)") -DEF_FEAT(MISC_INSTRUCTION_EXT, "minste2", STFL, 58, "Miscellaneous-instruction-extensions facility 2") +DEF_FEAT(MISC_INSTRUCTION_EXT2, "minste2", STFL, 58, "Miscellaneous-instruction-extensions facility 2") DEF_FEAT(SEMAPHORE_ASSIST, "sema", STFL, 59, "Semaphore-assist facility") DEF_FEAT(TIME_SLICE_INSTRUMENTATION, "tsi", STFL, 60, "Time-slice Instrumentation facility") DEF_FEAT(MISC_INSTRUCTION_EXT3, "minste3", STFL, 61, "Miscellaneous-Instruction-Extensions Facility 3") diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c index a1f0a6f3c6fc..e3fd0c0a2ef3 100644 --- a/target/s390x/gen-features.c +++ b/target/s390x/gen-features.c @@ -412,7 +412,7 @@ static uint16_t base_GEN13_GA1[] = { static uint16_t base_GEN14_GA1[] = { S390_FEAT_ENTROPY_ENC_COMP, -S390_FEAT_MISC_INSTRUCTION_EXT, +S390_FEAT_MISC_INSTRUCTION_EXT2, S390_FEAT_SEMAPHORE_ASSIST, S390_FEAT_TIME_SLICE_INSTRUMENTATION, S390_FEAT_ORDER_PRESERVING_COMPRESSION, -- 2.25.4
[PULL 06/19] s390/sclp: use cpu offset to locate cpu entries
From: Collin Walling The start of the CPU entry region in the Read SCP Info response data is denoted by the offset_cpu field. As such, QEMU needs to begin creating entries at this address. This is in preparation for when Read SCP Info inevitably introduces new bytes that push the start of the CPUEntry field further away. Read CPU Info is unlikely to ever change, so let's not bother accounting for the offset there. Signed-off-by: Collin Walling Reviewed-by: Thomas Huth Reviewed-by: Cornelia Huck Reviewed-by: Claudio Imbrenda Message-Id: <20200915194416.107460-6-wall...@linux.ibm.com> Signed-off-by: Cornelia Huck --- hw/s390x/sclp.c | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c index 0d54075309d5..1df67c99bfb9 100644 --- a/hw/s390x/sclp.c +++ b/hw/s390x/sclp.c @@ -89,6 +89,8 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) int rnsize, rnmax; IplParameterBlock *ipib = s390_ipl_get_iplb(); int required_len = SCCB_REQ_LEN(ReadInfo, machine->possible_cpus->len); +int offset_cpu = offsetof(ReadInfo, entries); +CPUEntry *entries_start = (void *)sccb + offset_cpu; if (be16_to_cpu(sccb->h.length) < required_len) { sccb->h.response_code = cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH); @@ -96,9 +98,9 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb) } /* CPU information */ -prepare_cpu_entries(machine, read_info->entries, &cpu_count); +prepare_cpu_entries(machine, entries_start, &cpu_count); read_info->entries_cpu = cpu_to_be16(cpu_count); -read_info->offset_cpu = cpu_to_be16(offsetof(ReadInfo, entries)); +read_info->offset_cpu = cpu_to_be16(offset_cpu); read_info->highest_cpu = cpu_to_be16(machine->smp.max_cpus - 1); read_info->ibc_val = cpu_to_be32(s390_get_ibc_val()); -- 2.25.4
[PULL 13/19] s390x/tcg: Implement SUBTRACT HALFWORD (SGH)
From: David Hildenbrand Easy to wire up. Signed-off-by: David Hildenbrand Reviewed-by: Thomas Huth Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-4-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/insn-data.def | 1 + 1 file changed, 1 insertion(+) diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index dbcdb4c387fa..e994d32d96bd 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -886,6 +886,7 @@ /* SUBTRACT HALFWORD */ C(0x4b00, SH, RX_a, Z, r1, m2_16s, new, r1_32, sub, subs32) C(0xe37b, SHY, RXY_a, LD, r1, m2_16s, new, r1_32, sub, subs32) +C(0xe339, SGH, RXY_a, MIE2,r1, m2_16s, r1, 0, sub, subs64) /* SUBTRACT HIGH */ C(0xb9c9, SHHHR, RRF_a, HW, r2_sr32, r3_sr32, new, r1_32h, sub, subs32) C(0xb9d9, SHHLR, RRF_a, HW, r2_sr32, r3, new, r1_32h, sub, subs32) -- 2.25.4
[PULL 14/19] s390x/tcg: Implement MULTIPLY (MG, MGRK)
From: David Hildenbrand Multiply two signed 64bit values and store the 128bit result in r1 (0-63) and r1 + 1 (64-127). Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-5-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/insn-data.def | 2 ++ target/s390x/translate.c | 13 + 2 files changed, 15 insertions(+) diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index e994d32d96bd..13c4ffdaf5b3 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -653,8 +653,10 @@ /* MULTIPLY */ C(0x1c00, MR, RR_a, Z, r1p1_32s, r2_32s, new, r1_D32, mul, 0) +C(0xb9ec, MGRK,RRF_a, MIE2,r3_o, r2_o, r1_P, 0, muls128, 0) C(0x5c00, M, RX_a, Z, r1p1_32s, m2_32s, new, r1_D32, mul, 0) C(0xe35c, MFY, RXY_a, GIE, r1p1_32s, m2_32s, new, r1_D32, mul, 0) +C(0xe384, MG, RXY_a, MIE2,r1p1_o, m2_64, r1_P, 0, muls128, 0) F(0xb317, MEEBR, RRE, Z, e1, e2, new, e1, meeb, 0, IF_BFP) F(0xb31c, MDBR,RRE, Z, f1, f2, new, f1, mdb, 0, IF_BFP) F(0xb34c, MXBR,RRE, Z, x2h, x2l, x1, x1, mxb, 0, IF_BFP) diff --git a/target/s390x/translate.c b/target/s390x/translate.c index 7ea666b9a7fb..66a3693d128c 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -3539,6 +3539,12 @@ static DisasJumpType op_mul128(DisasContext *s, DisasOps *o) return DISAS_NEXT; } +static DisasJumpType op_muls128(DisasContext *s, DisasOps *o) +{ +tcg_gen_muls2_i64(o->out2, o->out, o->in1, o->in2); +return DISAS_NEXT; +} + static DisasJumpType op_meeb(DisasContext *s, DisasOps *o) { gen_helper_meeb(o->out, cpu_env, o->in1, o->in2); @@ -5563,6 +5569,13 @@ static void in1_r1p1(DisasContext *s, DisasOps *o) } #define SPEC_in1_r1p1 SPEC_r1_even +static void in1_r1p1_o(DisasContext *s, DisasOps *o) +{ +o->in1 = regs[get_field(s, r1) + 1]; +o->g_in1 = true; +} +#define SPEC_in1_r1p1_o SPEC_r1_even + static void in1_r1p1_32s(DisasContext *s, DisasOps *o) { o->in1 = tcg_temp_new_i64(); -- 2.25.4
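What MG/MGRK compute can be checked on the host with GCC/Clang's `__int128` (a sketch of the instruction semantics, not of the TCG implementation): the full 128-bit signed product, high half to r1 and low half to r1 + 1, which is what `tcg_gen_muls2_i64` produces.

```c
#include <stdint.h>

/* Full 128-bit signed product of two 64-bit values, split into halves
 * (requires a compiler with __int128, e.g. GCC or Clang on 64-bit hosts). */
static void muls128(int64_t a, int64_t b, uint64_t *hi, uint64_t *lo)
{
    __int128 p = (__int128)a * b;

    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}
```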
[PULL 09/19] s390x/tcg: Implement MONITOR CALL
From: David Hildenbrand Recent upstream Linux uses the MONITOR CALL instruction for things like BUG_ON() and WARN_ON(). We currently inject an operation exception when we hit a MONITOR CALL instruction - which is wrong, as the instruction is not glued to specific CPU features. Doing a simple WARN_ON_ONCE() currently results in a panic: [ 18.162801] illegal operation: 0001 ilc:2 [#1] SMP [ 18.162889] Modules linked in: [...] [ 18.165476] Kernel panic - not syncing: Fatal exception: panic_on_oops With a proper implementation, we now get: [ 18.242754] [ cut here ] [ 18.242855] WARNING: CPU: 7 PID: 1 at init/main.c:1534 [...] [ 18.242919] Modules linked in: [...] [ 18.246262] ---[ end trace a420477d71dc97b4 ]--- [ 18.259014] Freeing unused kernel memory: 4220K Reported-by: Christian Borntraeger Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200918085122.26132-1-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/excp_helper.c | 23 +++ target/s390x/helper.h | 1 + target/s390x/insn-data.def | 3 +++ target/s390x/translate.c | 21 + 4 files changed, 48 insertions(+) diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c index 3b58d10df3ca..0adfbbda2708 100644 --- a/target/s390x/excp_helper.c +++ b/target/s390x/excp_helper.c @@ -610,4 +610,27 @@ void s390x_cpu_do_unaligned_access(CPUState *cs, vaddr addr, tcg_s390_program_interrupt(env, PGM_SPECIFICATION, retaddr); } +static void QEMU_NORETURN monitor_event(CPUS390XState *env, +uint64_t monitor_code, +uint8_t monitor_class, uintptr_t ra) +{ +/* Store the Monitor Code and the Monitor Class Number into the lowcore */ +stq_phys(env_cpu(env)->as, + env->psa + offsetof(LowCore, monitor_code), monitor_code); +stw_phys(env_cpu(env)->as, + env->psa + offsetof(LowCore, mon_class_num), monitor_class); + +tcg_s390_program_interrupt(env, PGM_MONITOR, ra); +} + +void HELPER(monitor_call)(CPUS390XState *env, uint64_t monitor_code, + uint32_t monitor_class) +{ 
+g_assert(monitor_class <= 0xff); + +if (env->cregs[8] & (0x8000 >> monitor_class)) { +monitor_event(env, monitor_code, monitor_class, GETPC()); +} +} + #endif /* CONFIG_USER_ONLY */ diff --git a/target/s390x/helper.h b/target/s390x/helper.h index b7887b552bbb..55bd1551e604 100644 --- a/target/s390x/helper.h +++ b/target/s390x/helper.h @@ -349,4 +349,5 @@ DEF_HELPER_3(sic, void, env, i64, i64) DEF_HELPER_3(rpcit, void, env, i32, i32) DEF_HELPER_5(pcistb, void, env, i32, i32, i64, i32) DEF_HELPER_4(mpcifc, void, env, i32, i64, i32) +DEF_HELPER_3(monitor_call, void, env, i64, i32) #endif diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index d79ae9e3f114..e14cbd63fa0a 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -617,6 +617,9 @@ C(0x9a00, LAM, RS_a, Z, 0, a2, 0, 0, lam, 0) C(0xeb9a, LAMY,RSY_a, LD, 0, a2, 0, 0, lam, 0) +/* MONITOR CALL */ +C(0xaf00, MC, SI,Z, la1, 0, 0, 0, mc, 0) + /* MOVE */ C(0xd200, MVC, SS_a, Z, la1, a2, 0, 0, mvc, 0) C(0xe544, MVHHI, SIL, GIE, la1, i2, 0, m1_16, mov2, 0) diff --git a/target/s390x/translate.c b/target/s390x/translate.c index a777343821bb..90dc1740e7ab 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -3302,6 +3302,27 @@ static DisasJumpType op_lcbb(DisasContext *s, DisasOps *o) return DISAS_NEXT; } +static DisasJumpType op_mc(DisasContext *s, DisasOps *o) +{ +#if !defined(CONFIG_USER_ONLY) +TCGv_i32 i2; +#endif +const uint16_t monitor_class = get_field(s, i2); + +if (monitor_class & 0xff00) { +gen_program_exception(s, PGM_SPECIFICATION); +return DISAS_NORETURN; +} + +#if !defined(CONFIG_USER_ONLY) +i2 = tcg_const_i32(monitor_class); +gen_helper_monitor_call(cpu_env, o->addr1, i2); +tcg_temp_free_i32(i2); +#endif +/* Defaults to a NOP. */ +return DISAS_NEXT; +} + static DisasJumpType op_mov2(DisasContext *s, DisasOps *o) { o->out = o->in2; -- 2.25.4
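The gating condition in the new helper — a monitor event fires only when the class's bit is set in control register 8 — can be isolated like this. The mask expression `0x8000 >> monitor_class` is from the patch; the helper name and the explicit `<= 15` guard are mine.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Monitor class n selects bit n (counted from the left) of the 16-bit
 * monitor-mask field in CR8.  Classes above 15 select no bit; the guard
 * also avoids an undefined shift for very large class values.
 */
static bool monitor_event_enabled(uint64_t cr8, uint32_t monitor_class)
{
    return monitor_class <= 15 && (cr8 & (0x8000u >> monitor_class)) != 0;
}
```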
[PULL 10/19] vfio-ccw: plug memory leak while getting region info
vfio_get_dev_region_info() unconditionally allocates memory for a passed-in vfio_region_info structure (and does not re-use an already allocated structure). Therefore, we have to free the structure we pass to that function in vfio_ccw_get_region() for every region we successfully obtained information for. Fixes: 8fadea24de4e ("vfio-ccw: support async command subregion") Fixes: 46ea3841edaf ("vfio-ccw: Add support for the schib region") Fixes: f030532f2ad6 ("vfio-ccw: Add support for the CRW region and IRQ") Reported-by: Alex Williamson Signed-off-by: Cornelia Huck Reviewed-by: Eric Farman Reviewed-by: Philippe Mathieu-Daudé Message-Id: <20200928101701.13540-1-coh...@redhat.com> --- hw/vfio/ccw.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/hw/vfio/ccw.c b/hw/vfio/ccw.c index ff7f36977994..d2755d7fc5ca 100644 --- a/hw/vfio/ccw.c +++ b/hw/vfio/ccw.c @@ -491,6 +491,7 @@ static void vfio_ccw_get_region(VFIOCCWDevice *vcdev, Error **errp) vcdev->io_region_offset = info->offset; vcdev->io_region = g_malloc0(info->size); +g_free(info); /* check for the optional async command region */ ret = vfio_get_dev_region_info(vdev, VFIO_REGION_TYPE_CCW, @@ -503,6 +504,7 @@ static void vfio_ccw_get_region(VFIOCCWDevice *vcdev, Error **errp) } vcdev->async_cmd_region_offset = info->offset; vcdev->async_cmd_region = g_malloc0(info->size); +g_free(info); } ret = vfio_get_dev_region_info(vdev, VFIO_REGION_TYPE_CCW, @@ -515,6 +517,7 @@ static void vfio_ccw_get_region(VFIOCCWDevice *vcdev, Error **errp) } vcdev->schib_region_offset = info->offset; vcdev->schib_region = g_malloc(info->size); +g_free(info); } ret = vfio_get_dev_region_info(vdev, VFIO_REGION_TYPE_CCW, @@ -528,9 +531,9 @@ static void vfio_ccw_get_region(VFIOCCWDevice *vcdev, Error **errp) } vcdev->crw_region_offset = info->offset; vcdev->crw_region = g_malloc(info->size); +g_free(info); } -g_free(info); return; out_err: -- 2.25.4
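The bug pattern being fixed is easy to model: an API that allocates a fresh result buffer on every successful call requires a matching free per call, not one free at the end. All names below are illustrative, and the allocation counter is only a test hook, not part of the pattern.

```c
#include <stdlib.h>

struct region_info { size_t size; };

static int live_allocs;   /* test hook: net outstanding allocations */

/* Like vfio_get_dev_region_info(): always allocates, never reuses. */
static int get_region_info(struct region_info **out)
{
    *out = calloc(1, sizeof(**out));
    if (!*out) {
        return -1;
    }
    live_allocs++;
    return 0;
}

static void put_region_info(struct region_info *info)
{
    free(info);
    live_allocs--;
}

/* Correct caller: free after every successful call, as the fix does. */
static void query_regions(int nregions)
{
    for (int n = 0; n < nregions; n++) {
        struct region_info *info;

        if (get_region_info(&info) == 0) {
            /* ... consume info->size here ... */
            put_region_info(info);
        }
    }
}
```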
[PULL 15/19] s390x/tcg: Implement MULTIPLY HALFWORD (MGH)
From: David Hildenbrand Just like MULTIPLY HALFWORD IMMEDIATE (MGHI), only the second operand (signed 16 bit) comes from memory. Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-6-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/insn-data.def | 1 + 1 file changed, 1 insertion(+) diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index 13c4ffdaf5b3..bf18d8aaf43e 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -669,6 +669,7 @@ /* MULTIPLY HALFWORD */ C(0x4c00, MH, RX_a, Z, r1_o, m2_16s, new, r1_32, mul, 0) C(0xe37c, MHY, RXY_a, GIE, r1_o, m2_16s, new, r1_32, mul, 0) +C(0xe33c, MGH, RXY_a, MIE2,r1_o, m2_16s, r1, 0, mul, 0) /* MULTIPLY HALFWORD IMMEDIATE */ C(0xa70c, MHI, RI_a, Z, r1_o, i2, new, r1_32, mul, 0) C(0xa70d, MGHI,RI_a, Z, r1_o, i2, r1, 0, mul, 0) -- 2.25.4
[PULL 16/19] s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC)
From: David Hildenbrand Just like BRANCH ON CONDITION - however the address is read from memory (always 8 bytes are read), we have to wrap the address manually. The address is read using current CPU DAT/address-space controls, just like ordinary data. Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-7-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/insn-data.def | 2 ++ target/s390x/translate.c | 8 2 files changed, 10 insertions(+) diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def index bf18d8aaf43e..3322e5f2a504 100644 --- a/target/s390x/insn-data.def +++ b/target/s390x/insn-data.def @@ -115,6 +115,8 @@ /* BRANCH RELATIVE AND SAVE */ C(0xa705, BRAS,RI_b, Z, 0, 0, r1, 0, basi, 0) C(0xc005, BRASL, RIL_b, Z, 0, 0, r1, 0, basi, 0) +/* BRANCH INDIRECT ON CONDITION */ +C(0xe347, BIC, RXY_b, MIE2,0, m2_64w, 0, 0, bc, 0) /* BRANCH ON CONDITION */ C(0x0700, BCR, RR_b, Z, 0, r2_nz, 0, 0, bc, 0) C(0x4700, BC, RX_b, Z, 0, a2, 0, 0, bc, 0) diff --git a/target/s390x/translate.c b/target/s390x/translate.c index 66a3693d128c..27fb7af8fb1c 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -5956,6 +5956,14 @@ static void in2_m2_64(DisasContext *s, DisasOps *o) } #define SPEC_in2_m2_64 0 +static void in2_m2_64w(DisasContext *s, DisasOps *o) +{ +in2_a2(s, o); +tcg_gen_qemu_ld64(o->in2, o->in2, get_mem_index(s)); +gen_addi_and_wrap_i64(s, o->in2, o->in2, 0); +} +#define SPEC_in2_m2_64w 0 + #ifndef CONFIG_USER_ONLY static void in2_m2_64a(DisasContext *s, DisasOps *o) { -- 2.25.4
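The manual wrap mentioned above — `gen_addi_and_wrap_i64(s, o->in2, o->in2, 0)` — truncates the loaded 64-bit value to the current addressing mode. A host-side sketch of that truncation, using the s390x addressing-mode widths (24, 31, 64):

```c
#include <stdint.h>

/* Truncate a loaded branch target to the current addressing mode. */
static uint64_t wrap_address(uint64_t addr, unsigned addr_bits)
{
    if (addr_bits >= 64) {
        return addr;
    }
    return addr & ((1ull << addr_bits) - 1);
}
```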
[PULL 18/19] s390x/tcg: We support Miscellaneous-Instruction-Extensions Facility 2
From: David Hildenbrand We implement all relevant instructions. Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-9-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/gen-features.c | 1 + 1 file changed, 1 insertion(+) diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c index e3fd0c0a2ef3..02ec0a673517 100644 --- a/target/s390x/gen-features.c +++ b/target/s390x/gen-features.c @@ -718,6 +718,7 @@ static uint16_t qemu_MAX[] = { S390_FEAT_MSA_EXT_5, /* features introduced after the z13 */ S390_FEAT_INSTRUCTION_EXEC_PROT, +S390_FEAT_MISC_INSTRUCTION_EXT2, }; /** END FEATURE DEFS **/ -- 2.25.4
[PULL 17/19] s390x/tcg: Implement MULTIPLY SINGLE (MSC, MSGC, MSGRKC, MSRKC)
From: David Hildenbrand We need new CC handling, determining the CC based on the intermediate result (64bit for MSC and MSRKC, 128bit for MSGC and MSGRKC). We want to store out2 ("low") after muls128 to r1, so add "wout_out2_r1". Signed-off-by: David Hildenbrand Reviewed-by: Richard Henderson Message-Id: <20200928122717.30586-8-da...@redhat.com> Signed-off-by: Cornelia Huck --- target/s390x/cc_helper.c | 32 target/s390x/helper.c | 2 ++ target/s390x/insn-data.def | 4 target/s390x/internal.h| 2 ++ target/s390x/translate.c | 19 +++ 5 files changed, 59 insertions(+) diff --git a/target/s390x/cc_helper.c b/target/s390x/cc_helper.c index 44731e4a85c3..5432aeeed46c 100644 --- a/target/s390x/cc_helper.c +++ b/target/s390x/cc_helper.c @@ -417,6 +417,32 @@ static uint32_t cc_calc_vc(uint64_t low, uint64_t high) } } +static uint32_t cc_calc_muls_32(int64_t res) +{ +const int64_t tmp = res >> 31; + +if (!res) { +return 0; +} else if (tmp && tmp != -1) { +return 3; +} else if (res < 0) { +return 1; +} +return 2; +} + +static uint64_t cc_calc_muls_64(int64_t res_high, uint64_t res_low) +{ +if (!res_high && !res_low) { +return 0; +} else if (res_high + (res_low >> 63) != 0) { +return 3; +} else if (res_high < 0) { +return 1; +} +return 2; +} + static uint32_t do_calc_cc(CPUS390XState *env, uint32_t cc_op, uint64_t src, uint64_t dst, uint64_t vr) { @@ -484,6 +510,9 @@ static uint32_t do_calc_cc(CPUS390XState *env, uint32_t cc_op, case CC_OP_COMP_64: r = cc_calc_comp_64(dst); break; +case CC_OP_MULS_64: +r = cc_calc_muls_64(src, dst); +break; case CC_OP_ADD_32: r = cc_calc_add_32(src, dst, vr); @@ -512,6 +541,9 @@ static uint32_t do_calc_cc(CPUS390XState *env, uint32_t cc_op, case CC_OP_COMP_32: r = cc_calc_comp_32(dst); break; +case CC_OP_MULS_32: +r = cc_calc_muls_32(dst); +break; case CC_OP_ICM: r = cc_calc_icm(src, dst); diff --git a/target/s390x/helper.c b/target/s390x/helper.c index 9257d388baed..b877690845aa 100644 --- a/target/s390x/helper.c +++ b/target/s390x/helper.c @@ 
@@ -430,6 +430,8 @@ const char *cc_name(enum cc_op cc_op)
         [CC_OP_FLOGR]    = "CC_OP_FLOGR",
         [CC_OP_LCBB]     = "CC_OP_LCBB",
         [CC_OP_VC]       = "CC_OP_VC",
+        [CC_OP_MULS_32]  = "CC_OP_MULS_32",
+        [CC_OP_MULS_64]  = "CC_OP_MULS_64",
     };
 
     return cc_names[cc_op];
diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def
index 3322e5f2a504..fc83a6ec32a4 100644
--- a/target/s390x/insn-data.def
+++ b/target/s390x/insn-data.def
@@ -682,11 +682,15 @@
     C(0xe386, MLG,     RXY_a, Z,   r1p1, m2_64, r1_P, 0, mul128, 0)
 /* MULTIPLY SINGLE */
     C(0xb252, MSR,     RRE,   Z,    r1_o, r2_o, new, r1_32, mul, 0)
+    C(0xb9fd, MSRKC,   RRF_a, MIE2, r3_32s, r2_32s, new, r1_32, mul, muls32)
     C(0x7100, MS,      RX_a,  Z,    r1_o, m2_32s, new, r1_32, mul, 0)
     C(0xe351, MSY,     RXY_a, LD,   r1_o, m2_32s, new, r1_32, mul, 0)
+    C(0xe353, MSC,     RXY_a, MIE2, r1_32s, m2_32s, new, r1_32, mul, muls32)
     C(0xb90c, MSGR,    RRE,   Z,    r1_o, r2_o, r1, 0, mul, 0)
+    C(0xb9ed, MSGRKC,  RRF_a, MIE2, r3_o, r2_o, new_P, out2_r1, muls128, muls64)
     C(0xb91c, MSGFR,   RRE,   Z,    r1_o, r2_32s, r1, 0, mul, 0)
     C(0xe30c, MSG,     RXY_a, Z,    r1_o, m2_64, r1, 0, mul, 0)
+    C(0xe383, MSGC,    RXY_a, MIE2, r1_o, m2_64, new_P, out2_r1, muls128, muls64)
     C(0xe31c, MSGF,    RXY_a, Z,    r1_o, m2_32s, r1, 0, mul, 0)
 /* MULTIPLY SINGLE IMMEDIATE */
     C(0xc201, MSFI,    RIL_a, GIE, r1_o, i2, new, r1_32, mul, 0)
diff --git a/target/s390x/internal.h b/target/s390x/internal.h
index bac0d3c67b21..64602660ae17 100644
--- a/target/s390x/internal.h
+++ b/target/s390x/internal.h
@@ -175,6 +175,7 @@ enum cc_op {
     CC_OP_SUBB_64,              /* overflow on unsigned sub-borrow (64bit) */
     CC_OP_ABS_64,               /* sign eval on abs (64bit) */
     CC_OP_NABS_64,              /* sign eval on nabs (64bit) */
+    CC_OP_MULS_64,              /* overflow on signed multiply (64bit) */
 
     CC_OP_ADD_32,               /* overflow on add (32bit) */
     CC_OP_ADDU_32,              /* overflow on unsigned add (32bit) */
@@ -184,6 +185,7 @@ enum cc_op {
     CC_OP_SUBB_32,              /* overflow on unsigned sub-borrow (32bit) */
     CC_OP_ABS_32,               /* sign eval on abs (64bit) */
     CC_OP_NABS_32,              /* sign eval on nabs (64bit) */
+    CC_OP_MULS_32,              /* overflow on signed multiply (32bit) */
 
     CC_OP_COMP_32,              /* complement */
     CC_OP_COMP_64,              /* complement */
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index 27fb7af8fb1c..bcc65893e4ff 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
[PULL 19/19] s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA)
From: David Hildenbrand

As with the other crypto functions, we only implement subcode 0 (query)
and no actual encryption/decryption.

We now implement S390_FEAT_MSA_EXT_8.

Signed-off-by: David Hildenbrand
Reviewed-by: Richard Henderson
Message-Id: <20200928122717.30586-10-da...@redhat.com>
Signed-off-by: Cornelia Huck
---
 target/s390x/gen-features.c | 1 +
 target/s390x/insn-data.def  | 1 +
 target/s390x/translate.c    | 7 +++++++
 3 files changed, 9 insertions(+)

diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c
index 02ec0a673517..a6ec918e901e 100644
--- a/target/s390x/gen-features.c
+++ b/target/s390x/gen-features.c
@@ -719,6 +719,7 @@ static uint16_t qemu_MAX[] = {
     /* features introduced after the z13 */
     S390_FEAT_INSTRUCTION_EXEC_PROT,
     S390_FEAT_MISC_INSTRUCTION_EXT2,
+    S390_FEAT_MSA_EXT_8,
 };
 
 /** END FEATURE DEFS **/
diff --git a/target/s390x/insn-data.def b/target/s390x/insn-data.def
index fc83a6ec32a4..d3bcdfd67b3c 100644
--- a/target/s390x/insn-data.def
+++ b/target/s390x/insn-data.def
@@ -982,6 +982,7 @@
     D(0xb92d, KMCTR,   RRF_b, MSA4, 0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KMCTR)
     D(0xb92e, KM,      RRE,   MSA,  0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KM)
     D(0xb92f, KMC,     RRE,   MSA,  0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KMC)
+    D(0xb929, KMA,     RRF_b, MSA8, 0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KMA)
     D(0xb93c, PPNO,    RRE,   MSA5, 0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_PPNO)
     D(0xb93e, KIMD,    RRE,   MSA,  0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KIMD)
     D(0xb93f, KLMD,    RRE,   MSA,  0, 0, 0, 0, msa, 0, S390_FEAT_TYPE_KLMD)
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index bcc65893e4ff..ac10f42f1045 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -2710,6 +2710,12 @@ static DisasJumpType op_msa(DisasContext *s, DisasOps *o)
     TCGv_i32 t_r1, t_r2, t_r3, type;
 
     switch (s->insn->data) {
+    case S390_FEAT_TYPE_KMA:
+        if (r3 == r1 || r3 == r2) {
+            gen_program_exception(s, PGM_SPECIFICATION);
+            return DISAS_NORETURN;
+        }
+        /* FALL THROUGH */
     case S390_FEAT_TYPE_KMCTR:
         if (r3 & 1 || !r3) {
             gen_program_exception(s, PGM_SPECIFICATION);
@@ -6154,6 +6160,7 @@ enum DisasInsnEnum {
 #define FAC_MSA3        S390_FEAT_MSA_EXT_3 /* msa-extension-3 facility */
 #define FAC_MSA4        S390_FEAT_MSA_EXT_4 /* msa-extension-4 facility */
 #define FAC_MSA5        S390_FEAT_MSA_EXT_5 /* msa-extension-5 facility */
+#define FAC_MSA8        S390_FEAT_MSA_EXT_8 /* msa-extension-8 facility */
 #define FAC_ECT         S390_FEAT_EXTRACT_CPU_TIME
 #define FAC_PCI         S390_FEAT_ZPCI /* z/PCI facility */
 #define FAC_AIS         S390_FEAT_ADAPTER_INT_SUPPRESSION
-- 
2.25.4
Re: [PATCH v2 2/4] qom: Factor out helpers from user_creatable_print_help()
Kevin Wolf writes:

> This creates separate helper functions for printing a list of user
> creatable object types and for printing a list of properties of a given
> type. This allows using these parts without having a QemuOpts.

Does the last sentence allude to a future patch? If yes, I suggest to
phrase it as "This will allow ..."

> Signed-off-by: Kevin Wolf
> ---
>  qom/object_interfaces.c | 90 -----------------------------------------
>  1 file changed, 52 insertions(+), 38 deletions(-)
>
> diff --git a/qom/object_interfaces.c b/qom/object_interfaces.c
> index e8e1523960..3fd1da157e 100644
> --- a/qom/object_interfaces.c
> +++ b/qom/object_interfaces.c
> @@ -214,54 +214,68 @@ char *object_property_help(const char *name, const char *type,
>      return g_string_free(str, false);
>  }
>
> -bool user_creatable_print_help(const char *type, QemuOpts *opts)
> +static void user_creatable_print_types(void)
> +{
> +    GSList *l, *list;
> +
> +    printf("List of user creatable objects:\n");
> +    list = object_class_get_list_sorted(TYPE_USER_CREATABLE, false);
> +    for (l = list; l != NULL; l = l->next) {
> +        ObjectClass *oc = OBJECT_CLASS(l->data);
> +        printf("  %s\n", object_class_get_name(oc));
> +    }
> +    g_slist_free(list);
> +}
> +
> +static bool user_creatable_print_type_properites(const char *type)
>  {
>      ObjectClass *klass;
> +    ObjectPropertyIterator iter;
> +    ObjectProperty *prop;
> +    GPtrArray *array;
> +    int i;
>
> -    if (is_help_option(type)) {
> -        GSList *l, *list;
> +    klass = object_class_by_name(type);
> +    if (!klass) {
> +        return false;
> +    }
>
> -        printf("List of user creatable objects:\n");
> -        list = object_class_get_list_sorted(TYPE_USER_CREATABLE, false);
> -        for (l = list; l != NULL; l = l->next) {
> -            ObjectClass *oc = OBJECT_CLASS(l->data);
> -            printf("  %s\n", object_class_get_name(oc));
> +    array = g_ptr_array_new();
> +    object_class_property_iter_init(&iter, klass);
> +    while ((prop = object_property_iter_next(&iter))) {
> +        if (!prop->set) {
> +            continue;
>          }
> -        g_slist_free(list);
> -        return true;
> +
> +        g_ptr_array_add(array,
> +                        object_property_help(prop->name, prop->type,
> +                                             prop->defval,
> +                                             prop->description));
>      }
> +    g_ptr_array_sort(array, (GCompareFunc)qemu_pstrcmp0);
> +    if (array->len > 0) {
> +        printf("%s options:\n", type);
> +    } else {
> +        printf("There are no options for %s.\n", type);
> +    }
> +    for (i = 0; i < array->len; i++) {
> +        printf("%s\n", (char *)array->pdata[i]);
> +    }
> +    g_ptr_array_set_free_func(array, g_free);
> +    g_ptr_array_free(array, true);
> +    return true;
> +}
>
> -    klass = object_class_by_name(type);
> -    if (klass && qemu_opt_has_help_opt(opts)) {
> -        ObjectPropertyIterator iter;
> -        ObjectProperty *prop;
> -        GPtrArray *array = g_ptr_array_new();
> -        int i;
> -
> -        object_class_property_iter_init(&iter, klass);
> -        while ((prop = object_property_iter_next(&iter))) {
> -            if (!prop->set) {
> -                continue;
> -            }
> -
> -            g_ptr_array_add(array,
> -                            object_property_help(prop->name, prop->type,
> -                                                 prop->defval,
> -                                                 prop->description));
> -        }
> -        g_ptr_array_sort(array, (GCompareFunc)qemu_pstrcmp0);
> -        if (array->len > 0) {
> -            printf("%s options:\n", type);
> -        } else {
> -            printf("There are no options for %s.\n", type);
> -        }
> -        for (i = 0; i < array->len; i++) {
> -            printf("%s\n", (char *)array->pdata[i]);
> -        }
> -        g_ptr_array_set_free_func(array, g_free);
> -        g_ptr_array_free(array, true);
> +bool user_creatable_print_help(const char *type, QemuOpts *opts)
> +{
> +    if (is_help_option(type)) {
> +        user_creatable_print_types();
>          return true;
>      }
>
> +    if (qemu_opt_has_help_opt(opts)) {
> +        return user_creatable_print_type_properites(type);
> +    }
> +
>      return false;
>  }

I'd make user_creatable_print_types() return true for symmetry with
user_creatable_print_type_properites(), but that's a matter of taste.

Reviewed-by: Markus Armbruster
Re: [PATCH v2 3/4] qom: Add user_creatable_print_help_from_qdict()
Markus Armbruster writes:

> Kevin Wolf writes:
>
>> This adds a function that, given a QDict of non-help options, prints
>> help for user creatable objects.
>>
>> Signed-off-by: Kevin Wolf
>> ---
>>  include/qom/object_interfaces.h | 9 +++++++++
>>  qom/object_interfaces.c         | 9 +++++++++
>>  2 files changed, 18 insertions(+)
>>
>> diff --git a/include/qom/object_interfaces.h b/include/qom/object_interfaces.h
>> index f118fb516b..53b114b11a 100644
>> --- a/include/qom/object_interfaces.h
>> +++ b/include/qom/object_interfaces.h
>> @@ -161,6 +161,15 @@ int user_creatable_add_opts_foreach(void *opaque,
>>   */
>>  bool user_creatable_print_help(const char *type, QemuOpts *opts);
>>
>> +/**
>> + * user_creatable_print_help_from_qdict:
>> + * @args: options to create
>> + *
>> + * Prints help considering the other options given in @args (if "qom-type" is
>> + * given and valid, print properties for the type, otherwise print valid
>> + * types)
>> + */
>> +void user_creatable_print_help_from_qdict(QDict *args);
>> +
>>  /**
>>   * user_creatable_del:
>>   * @id: the unique ID for the object
>> diff --git a/qom/object_interfaces.c b/qom/object_interfaces.c
>> index 3fd1da157e..ed896fe764 100644
>> --- a/qom/object_interfaces.c
>> +++ b/qom/object_interfaces.c
>> @@ -279,6 +279,15 @@ bool user_creatable_print_help(const char *type, QemuOpts *opts)
>>      return false;
>>  }
>>
>> +void user_creatable_print_help_from_qdict(QDict *args)
>> +{
>> +    const char *type = qdict_get_try_str(args, "qom-type");
>> +
>> +    if (!type || !user_creatable_print_type_properites(type)) {
>> +        user_creatable_print_types();
>> +    }
>
> Existing user_creatable_print_help():
>
> 1. "qom-type=help,..." and its sugared forms, in particular "help"
>
>    List QOM types and succeed.
>
> 2. "qom-type=T,help,..."
>
>    2a. If T names a QOM type
>
>        List T's properties and succeed.
>
>    2b. If T does not name a QOM type
>
>        Fail. Callers typically interpret this as "no help requested",
>        proceed, then choke on invalid qom-type=T.

And of course

3. Else

   No help requested; fail.

> New user_creatable_print_help() treats case 2b like case 1.
>
> Intentional?

The next patch relies on this, so I figure the answer is yes.

The difference to user_creatable_print_help() is subtle. Perhaps the
contract should point it out explicitly. Up to you.

By the way, the contract of user_creatable_print_help() is inaccurate:

 * Prints help if requested in @opts.
 *
 * Returns: true if @opts contained a help option and help was printed, false
 * if no help option was found.

One, it prints help when either @type or @opts request it. Two, "if no
help option was found" is misleading: case 2b.

Not this patch's problem.

>
>> +}
>> +
>>  bool user_creatable_del(const char *id, Error **errp)
>>  {
>>      Object *container;

Reviewed-by: Markus Armbruster
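The old-versus-new dispatch that Markus enumerates here can be captured in a small decision table. The following Python sketch is purely illustrative — the function names and return tokens are invented, and it models only the control flow of the QEMU helpers, not their actual implementation:

```python
def print_help_old(qom_type, has_help_opt, known_types):
    """Model of the existing user_creatable_print_help() dispatch."""
    if qom_type == "help":              # case 1: list all types, succeed
        return "list-types"
    if has_help_opt:
        if qom_type in known_types:     # case 2a: list T's properties
            return "list-properties"
        return "no-help"                # case 2b: fail; caller proceeds
    return "no-help"                    # case 3: no help requested

def print_help_from_qdict(qom_type, known_types):
    """Model of the new user_creatable_print_help_from_qdict() dispatch:
    an unknown type (case 2b) now falls back to listing all types,
    the same as case 1."""
    if qom_type is not None and qom_type in known_types:
        return "list-properties"
    return "list-types"
```

The interesting difference is visible in the last branch: feeding a bogus type to the old helper reports "no help", while the new helper prints the type list instead.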
Re: [PATCH v2 4/4] qemu-storage-daemon: Remove QemuOpts from --object parser
Kevin Wolf writes:

> The command line parser for --object parses the input twice: Once into
> QemuOpts just for detecting help options, and then again into a QDict
> using the keyval parser for actually creating the object.
>
> Now that the keyval parser can also detect help options, we can simplify
> this and remove the QemuOpts part.
>
> Signed-off-by: Kevin Wolf
> ---
>  storage-daemon/qemu-storage-daemon.c | 15 ++++-----------
>  1 file changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/storage-daemon/qemu-storage-daemon.c b/storage-daemon/qemu-storage-daemon.c
> index bb9cb740f0..7cbdbf0b23 100644
> --- a/storage-daemon/qemu-storage-daemon.c
> +++ b/storage-daemon/qemu-storage-daemon.c
> @@ -264,21 +264,14 @@ static void process_options(int argc, char *argv[])
>          }
>          case OPTION_OBJECT:
>              {
> -                QemuOpts *opts;
> -                const char *type;
>                  QDict *args;
> +                bool help;
>
> -                /* FIXME The keyval parser rejects 'help' arguments, so we must
> -                 * unconditionall try QemuOpts first. */
> -                opts = qemu_opts_parse(&qemu_object_opts,
> -                                       optarg, true, &error_fatal);
> -                type = qemu_opt_get(opts, "qom-type");
> -                if (type && user_creatable_print_help(type, opts)) {
> +                args = keyval_parse(optarg, "qom-type", &help, &error_fatal);
> +                if (help) {
> +                    user_creatable_print_help_from_qdict(args);
>                      exit(EXIT_SUCCESS);
>                  }
> -                qemu_opts_del(opts);
> -
> -                args = keyval_parse(optarg, "qom-type", NULL, &error_fatal);
>                  user_creatable_add_dict(args, true, &error_fatal);
>                  qobject_unref(args);
>                  break;

Reviewed-by: Markus Armbruster
[RFC][POC PATCH] Supporting >255 guest CPUs without interrupt remapping
AFAICT there's not actually any good reason why guests can't use x2apic
and have more than 255 CPUs today, even without exposing interrupt
remapping to the guest.

The only issue is that guests can't direct external IOAPIC and MSI
interrupts at the higher APIC IDs. So what? A guest might have a
workload where it makes plenty of sense to use those extra CPUs and just
refrain from targeting external interrupts at them.

In fact, if you take a close look at the hyperv-iommu driver in the
Linux guest kernel, you'll note that it doesn't actually do any
remapping at all; all it does is return -EINVAL if asked to set affinity
to a CPU which can't be targeted. For Linux at least, it should be
fairly simple to have a per-IRQ controller affinity limit, so it doesn't
attempt to target CPUs it can't reach.

But actually, it's really simple to extend the limit of reachable APICs
even without the complexity of adding a full vIOMMU. There are 8 bits of
extended destination ID in the IOAPIC RTE, which maps to bits 11-4 of
the MSI address. This was historically not used in bare metal, but IRQ
remapping now uses the lowest bit to indicate a remappable format
interrupt. A VMM can use the other 7 bits to allow guests to target 15
bits of APIC ID, which gives support for 32Ki vCPUs without needing to
expose IRQ remapping to the guest.

Here's a proof-of-concept hack, which I've tested with a Linux guest
that knows where to put the additional 7 bits in the IOAPIC RTE and MSI
message. At least IOAPIC and emulated AHCI (MSI) are working; I haven't
tested assigned PCI devices yet.
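As a sanity check of the bit arithmetic described above — the usual 8-bit destination ID in MSI address bits 19:12, seven extended bits in address bits 11:5, and bit 4 left clear as the remappable-format flag — here is a small standalone sketch. It is illustrative only (the function names are invented, and this is not the code from the patch below):

```python
def encode_msi_dest(apic_id):
    """Pack a 15-bit APIC ID into a 32-bit MSI address, per the extended
    destination ID scheme: bits 19:12 carry APIC ID bits 7:0 as on bare
    metal, bits 11:5 carry APIC ID bits 14:8, and bit 4 stays 0 because
    a set bit 4 would mark the interrupt as remappable format."""
    assert 0 <= apic_id < (1 << 15), "only 15 bits of APIC ID are encodable"
    addr = 0xfee00000                   # fixed MSI address window
    addr |= (apic_id & 0xff) << 12      # low 8 bits -> address bits 19:12
    addr |= (apic_id >> 8) << 5         # high 7 bits -> address bits 11:5
    return addr

def decode_msi_dest(addr):
    """Inverse operation: recover the 15-bit APIC ID, mirroring what a
    conversion helper like apic_convert_ext_dest_id() has to undo."""
    assert not (addr & (1 << 4)), "remappable-format MSIs handled elsewhere"
    return ((addr >> 12) & 0xff) | (((addr >> 5) & 0x7f) << 8)
```

With 7 extra bits on top of the classic 8, the highest encodable ID is 0x7fff, i.e. the 32Ki vCPU limit mentioned above.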
diff --git a/hw/i386/kvm/apic.c b/hw/i386/kvm/apic.c
index 4eb2d77b87..b0f4b1a630 100644
--- a/hw/i386/kvm/apic.c
+++ b/hw/i386/kvm/apic.c
@@ -14,6 +14,7 @@
 #include "qemu/module.h"
 #include "cpu.h"
 #include "hw/i386/apic_internal.h"
+#include "hw/i386/apic-msidef.h"
 #include "hw/pci/msi.h"
 #include "sysemu/hw_accel.h"
 #include "sysemu/kvm.h"
@@ -183,6 +184,13 @@ static void kvm_send_msi(MSIMessage *msg)
 {
     int ret;
 
+    /*
+     * The message has already passed through interrupt remapping if enabled,
+     * but the legacy extended destination ID in low bits still needs to be
+     * handled.
+     */
+    msg->address = apic_convert_ext_dest_id(msg->address);
+
     ret = kvm_irqchip_send_msi(kvm_state, *msg);
     if (ret < 0) {
         fprintf(stderr, "KVM: injection failed, MSI lost (%s)\n",
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index e87be5d29a..eb4901d6b7 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -807,7 +807,7 @@ void pc_machine_done(Notifier *notifier, void *data)
         fw_cfg_modify_i16(x86ms->fw_cfg, FW_CFG_NB_CPUS, x86ms->boot_cpus);
     }
 
-    if (x86ms->apic_id_limit > 255 && !xen_enabled()) {
+    if (0 && x86ms->apic_id_limit > 255 && !xen_enabled()) {
         IntelIOMMUState *iommu = INTEL_IOMMU_DEVICE(x86_iommu_get_default());
 
         if (!iommu || !x86_iommu_ir_supported(X86_IOMMU_DEVICE(iommu)) ||
diff --git a/include/hw/i386/apic-msidef.h b/include/hw/i386/apic-msidef.h
index 420b41167d..b3e0da64a5 100644
--- a/include/hw/i386/apic-msidef.h
+++ b/include/hw/i386/apic-msidef.h
@@ -28,4 +28,20 @@
 #define MSI_ADDR_DEST_IDX_SHIFT         4
 #define MSI_ADDR_DEST_ID_MASK           0x000ff000
 
+static inline uint64_t apic_convert_ext_dest_id(uint64_t address)
+{
+    uint64_t ext_id = address & (0xff << MSI_ADDR_DEST_IDX_SHIFT);
+
+    /*
+     * If the remappable format bit is set, or the upper bits are
+     * already set in address_hi, or the low extended bits aren't
+     * there anyway, do nothing.
+     */
+    if (!ext_id || (ext_id & (1 << MSI_ADDR_DEST_IDX_SHIFT)) ||
+        (address >> 32))
+        return address;
+
+    address &= ~ext_id;
+    address |= ext_id << 35;
+    return address;
+}
 #endif /* HW_APIC_MSIDEF_H */
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index f6dae4cfb6..547a2faf72 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -4589,13 +4589,11 @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
     X86IOMMUState *iommu = x86_iommu_get_default();
 
     if (iommu) {
-        int ret;
-        MSIMessage src, dst;
         X86IOMMUClass *class = X86_IOMMU_DEVICE_GET_CLASS(iommu);
 
-        if (!class->int_remap) {
-            return 0;
-        }
+        if (class->int_remap) {
+            int ret;
+            MSIMessage src, dst;
 
             src.address = route->u.msi.address_hi;
             src.address <<= VTD_MSI_ADDR_HI_SHIFT;
@@ -4610,11 +4608,21 @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
                 return 1;
             }
 
+        /*
+         * Handle untranslated compatibility format interrupt with
+         * extended destination ID in the low bits 11-5.
+         */
+        dst.address = apic_convert_ext_dest_id(dst.address);
+
         route->u.msi.address_hi = dst.address >> VTD_MSI_ADDR_HI_SHIFT;
         route->u.msi.a
Re: [PATCH v2 3/4] qom: Add user_creatable_print_help_from_qdict()
Kevin Wolf writes:

> This adds a function that, given a QDict of non-help options, prints
> help for user creatable objects.
>
> Signed-off-by: Kevin Wolf
> ---
>  include/qom/object_interfaces.h | 9 +++++++++
>  qom/object_interfaces.c         | 9 +++++++++
>  2 files changed, 18 insertions(+)
>
> diff --git a/include/qom/object_interfaces.h b/include/qom/object_interfaces.h
> index f118fb516b..53b114b11a 100644
> --- a/include/qom/object_interfaces.h
> +++ b/include/qom/object_interfaces.h
> @@ -161,6 +161,15 @@ int user_creatable_add_opts_foreach(void *opaque,
>   */
>  bool user_creatable_print_help(const char *type, QemuOpts *opts);
>
> +/**
> + * user_creatable_print_help_from_qdict:
> + * @args: options to create
> + *
> + * Prints help considering the other options given in @args (if "qom-type" is
> + * given and valid, print properties for the type, otherwise print valid
> + * types)
> + */
> +void user_creatable_print_help_from_qdict(QDict *args);
> +
>  /**
>   * user_creatable_del:
>   * @id: the unique ID for the object
> diff --git a/qom/object_interfaces.c b/qom/object_interfaces.c
> index 3fd1da157e..ed896fe764 100644
> --- a/qom/object_interfaces.c
> +++ b/qom/object_interfaces.c
> @@ -279,6 +279,15 @@ bool user_creatable_print_help(const char *type, QemuOpts *opts)
>      return false;
>  }
>
> +void user_creatable_print_help_from_qdict(QDict *args)
> +{
> +    const char *type = qdict_get_try_str(args, "qom-type");
> +
> +    if (!type || !user_creatable_print_type_properites(type)) {
> +        user_creatable_print_types();
> +    }

Existing user_creatable_print_help():

1. "qom-type=help,..." and its sugared forms, in particular "help"

   List QOM types and succeed.

2. "qom-type=T,help,..."

   2a. If T names a QOM type

       List T's properties and succeed.

   2b. If T does not name a QOM type

       Fail. Callers typically interpret this as "no help requested",
       proceed, then choke on invalid qom-type=T.

New user_creatable_print_help() treats case 2b like case 1.

Intentional?

> +}
> +
>  bool user_creatable_del(const char *id, Error **errp)
>  {
>      Object *container;
Re: [PATCH] meson.build: Don't look for libudev for static builds
On Fri, 2 Oct 2020 at 13:02, Paolo Bonzini wrote:
>
> On 02/10/20 12:52, Peter Maydell wrote:
> > commit f01496a314d916 moved the logic for detecting libudev from
> > configure to meson.build, but in the process it dropped the condition
> > that meant we only ask pkg-config about libudev for a non-static
> > build.
> >
> > This breaks static builds of the system emulators on at least Ubuntu
> > 18.04.4, because on that host there is no static libudev but
> > pkg-config still claims it exists.
> >
> > Reinstate the logic that we had in the configure check.
> >
> > Signed-off-by: Peter Maydell
>
> I don't think this is a good idea. Having a completely new build system
> also lets us notice all these weird one-off tests, decide whether they
> are worth being kept (like the SDL -Wno-undef workaround) or not, and
> possibly come up with a policy that avoids having to make one-off
> decisions.
>
> If "../configure --static" worked and now doesn't then it would be a
> clear regression, but a one off check should have a bigger justification
> than "39 --disable switches have worked so far and 39 < 40".

My configure command line used to work and now it doesn't. There is no
workaround that I'm aware of that I can use by tweaking the configure
options. That's a clear regression.

> Here are three alternatives to the patch:
>
> - the workaround: just leave things as they are and add
>   --disable-libudev to your script.

There is no --disable-udev !
e104462:bionic:static-sys$ '../../configure' '--target-list=arm-softmmu' '--enable-debug' '--static' '--disable-tools' '--disable-sdl' '--disable-gtk' '--disable-vnc' '--disable-virtfs' '--disable-attr' '--disable-libiscsi' '--disable-libnfs' '--disable-libusb' '--disable-opengl' '--disable-numa' '--disable-usb-redir' '--disable-bzip2' '--audio-drv-list=' '--disable-guest-agent' '--disable-vte' --disable-udev
ERROR: unknown option --disable-udev
Try '../../configure --help' for more information

> We have just added a month ago a completely new dependency that would
> have the same issue (libu2f); we didn't add "test $static" just because
> you don't have libu2f installed and therefore you didn't notice. The
> day you updated your system ended up with libu2f-dev installed, you
> would probably just add --disable-libu2f instead of adding a test for
> $static. So why should libudev be treated differently?

The last time this came up it was in an all-in-configure test, so I took
the approach of making the test a bit smarter to sanity check that what
it was getting from pkg-config really worked:
https://patchew.org/QEMU/20200928160402.7961-1-peter.mayd...@linaro.org/

I don't know enough meson to do that in meson, so this patch is the
simple change that fixes the regression by reinstating the logic
configure had.

> - the complicated fix: check which statically linked libraries are
>   available in e.g. Debian and disable _all_ other dependencies with
>   --static (most of them will be the ones that you already have to
>   disable in your script, and most of them will be in configure).
>   Based on the outcome, decide whether it's actually possible to build
>   a statically linked emulator that makes sense.

I don't think we want to try to say "look at what statically linked
libraries are in Debian". The point of configure style logic is to look
at what is present when we try to compile and, when something is an
optional feature, only compile it in if it's going to work.

It would be better to do the "see if a static library is present" test.
This isn't too hard to do in configure (compare that six line fix to the
detection of libgio). Hopefully it is not too hard to do in meson?

> Finally, no matter how we proceed, static building of system emulators
> is not covered by either CI or tests/docker (only user-mode emulators
> are). Even if we deprecate it, we should make sure that your
> configuration is reproduced in either Travis or GitLab, otherwise
> people will keep on breaking it. That would also document that building
> a statically-linked system emulator is messy and requires a few dozen
> --disable options.

I agree that we don't really very solidly support this use case, and
it's pretty much "if it doesn't work we accept point-fixes to the build
machinery and/or recommend more use of --disable-foo". But it is still
useful sometimes to have.

thanks
-- PMM
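The "see if a static library is present" probe Peter describes boils down to attempting a static link of a trivial program and believing the result rather than pkg-config. A hypothetical Python model of that configure-style check (not the actual configure or meson code; names invented for illustration):

```python
import os
import shutil
import subprocess
import tempfile

def have_static_lib(libname, cc="cc"):
    """Configure-style probe (sketch): compile and statically link a
    trivial program against -l<libname>; report the library as usable
    for a static build only if the link actually succeeds."""
    if shutil.which(cc) is None:
        return False  # no compiler available: feature disabled
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "conftest.c")
        exe = os.path.join(d, "conftest")
        with open(src, "w") as f:
            f.write("int main(void) { return 0; }\n")
        r = subprocess.run([cc, "-static", src, "-l" + libname, "-o", exe],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
        return r.returncode == 0
```

This is exactly the behaviour the bug report asks for: on a host where pkg-config claims libudev exists but no `libudev.a` is installed, the link step fails and the feature is silently disabled for `--static` builds instead of breaking the final link.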
[PATCH] virtiofsd: Call qemu_init_exec_dir
From: "Dr. David Alan Gilbert"

Since fcb4f59c879 qemu_get_local_state_pathname relies on the
init_exec_dir, and virtiofsd asserts because we never set it.

Set it.

Reported-by: Alex Bennée
Signed-off-by: Dr. David Alan Gilbert
---
 tools/virtiofsd/passthrough_ll.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 0b229ebd57..cce983ca63 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -2839,6 +2839,8 @@ int main(int argc, char *argv[])
     /* Don't mask creation mode, kernel already did that */
     umask(0);
 
+    qemu_init_exec_dir(argv[0]);
+
     pthread_mutex_init(&lo.mutex, NULL);
     lo.inodes = g_hash_table_new(lo_key_hash, lo_key_equal);
     lo.root.fd = -1;
-- 
2.26.2
Re: [PATCH v9 0/4] Introduce Xilinx ZynqMP CAN controller
On Thu, 1 Oct 2020 at 18:57, Vikram Garhwal wrote:
>
> Hi Peter,
> I sent rebased V10 series three weeks back as there were some issues
> with meson build on v9. Would it possible for you to apply the patch
> series?

Oops, sorry -- I didn't notice the resent series. I've put it on my
list to handle.

thanks
-- PMM
Re: [PATCH] meson.build: Don't look for libudev for static builds
On Fri, 2 Oct 2020 at 13:35, Peter Maydell wrote:
>
> On Fri, 2 Oct 2020 at 13:02, Paolo Bonzini wrote:
> > - the workaround: just leave things as they are and add
> >   --disable-libudev to your script.
>
> There is no --disable-udev !

...and there's no --disable-libudev either :-)

-- PMM
Re: [PATCH v2] gitlab: move linux-user plugins test across to gitlab
On 02/10/2020 13.16, Fam Zheng wrote:
> On Fri, 2020-10-02 at 11:32 +0100, Alex Bennée wrote:
>> Even with the recent split moving beefier plugins into contrib and
>> dropping them from the check-tcg tests we are still hitting time
>> limits. This possibly points to a slow down of --debug-tcg but seeing
>> as we are migrating stuff to gitlab we might as well move there and
>> bump the timeout.
>>
>> Signed-off-by: Alex Bennée
>
> Hi Alex,
>
> Unrelated to the patch: do we have custom runners on gitlab? I'm
> exploring ideas to run vm based tests there.

Not yet, but Cleber is working on that topic...

 Thomas