Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Dario Faggioli
On Thu, 2022-05-26 at 20:18 +0200, Claudio Fontana wrote:
> Forget about this aspect, I think it is a separate problem.
> 
> valgrind of qos-test when run restricted to those specific paths (-p
> /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-
> net/virtio-net-tests/vhost-user/reconnect for example)
> shows all clear,
> 
> and still the test fails when run in a while loop after a few
> attempts:
> 
Yes, this kind of matches what I've also seen and reported in
<5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If I
enable/run just one of:
- reconnect
- flags_mismatch
- connect_fail

I see no issues.

As soon as two of those are run, one after the other, the problem
starts to appear.

However, Claudio, AFAIUI, you're seeing this with an older GCC and
without LTO, right?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Claudio Fontana
On 5/27/22 9:26 AM, Dario Faggioli wrote:
> On Thu, 2022-05-26 at 20:18 +0200, Claudio Fontana wrote:
>> Forget about this aspect, I think it is a separate problem.
>>
>> valgrind of qos-test when run restricted to those specific paths (-p
>> /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-
>> net/virtio-net-tests/vhost-user/reconnect for example)
>> shows all clear,
>>
>> and still the test fails when run in a while loop after a few
>> attempts:
>>
> Yes, this kind of matches what I've also seen and reported in
> <5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If I
> enable/run just one of:
> - reconnect
> - flags_mismatch
> - connect_fail
> 
> I see no issues.

On the contrary, for me just running a single one of those can fail.

To reproduce this I run in a loop using, as quoted above,

-p /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect

for example.

After a few successful runs I hit the error.


> 
> As soon as two of those are run, one after the other, the problem
> starts to appear.

Not for me: one is enough.

> 
> However, Claudio, AFAIUI, you're seeing this with an older GCC and
> without LTO, right?

Yes; to provide a different angle I tried on the veteran openSUSE Leap 15.2,
where gcc is based on 7.5.0.

I don't think LTO is being used in any way.

> 
> Regards
> 

Ciao,

Claudio



Re: [RFC PATCH v4 22/36] i386/tdx: Track RAM entries for TDX VM

2022-05-27 Thread Xiaoyao Li

On 5/26/2022 3:33 PM, Xiaoyao Li wrote:

On 5/24/2022 3:37 PM, Gerd Hoffmann wrote:



+    if (e->address == address && e->length == length) {
+        e->type = TDX_RAM_ADDED;
+    } else if (e->address == address) {
+        e->address += length;
+        e->length -= length;
+        tdx_add_ram_entry(address, length, TDX_RAM_ADDED);
+    } else if (e->address + e->length == address + length) {
+        e->length -= length;
+        tdx_add_ram_entry(address, length, TDX_RAM_ADDED);
+    } else {
+        TdxRamEntry tmp = {
+            .address = e->address,
+            .length = e->length,
+        };
+        e->length = address - tmp.address;
+
+        tdx_add_ram_entry(address, length, TDX_RAM_ADDED);
+        tdx_add_ram_entry(address + length,
+                          tmp.address + tmp.length - (address + length),
+                          TDX_RAM_UNACCEPTED);
+    }


I think all this can be simplified by:
   (1) Change the existing entry to cover the accepted ram range.
   (2) If there is room before the accepted ram range, add a
       TDX_RAM_UNACCEPTED entry for that.
   (3) If there is room after the accepted ram range, add a
       TDX_RAM_UNACCEPTED entry for that.


I implemented it as below. Please help review.

+static int tdx_accept_ram_range(uint64_t address, uint64_t length)
+{
+    uint64_t head_start, tail_start, head_length, tail_length;
+    uint64_t tmp_address, tmp_length;
+    TdxRamEntry *e;
+    int i;
+
+    for (i = 0; i < tdx_guest->nr_ram_entries; i++) {
+        e = &tdx_guest->ram_entries[i];
+
+        if (address + length < e->address ||
+            e->address + e->length < address) {
+            continue;
+        }
+
+        /*
+         * The to-be-accepted ram range must be fully contained by one
+         * RAM entry
+         */
+        if (e->address > address ||
+            e->address + e->length < address + length) {
+            return -EINVAL;
+        }
+
+        if (e->type == TDX_RAM_ADDED) {
+            return -EINVAL;
+        }
+
+        tmp_address = e->address;
+        tmp_length = e->length;
+
+        e->address = address;
+        e->length = length;
+        e->type = TDX_RAM_ADDED;
+
+        head_length = address - tmp_address;
+        if (head_length > 0) {
+            head_start = e->address;
+            tdx_add_ram_entry(head_start, head_length, TDX_RAM_UNACCEPTED);
+        }
+
+        tail_start = address + length;
+        if (tail_start < tmp_address + tmp_length) {
+            tail_length = e->address + e->length - tail_start;
+            tdx_add_ram_entry(tail_start, tail_length, TDX_RAM_UNACCEPTED);
+        }
+
+        return 0;
+    }
+
+    return -1;
+}


The above is incorrect. I implemented a fixed one:

+static int tdx_accept_ram_range(uint64_t address, uint64_t length)
+{
+    uint64_t head_start, tail_start, head_length, tail_length;
+    uint64_t tmp_address, tmp_length;
+    TdxRamEntry *e;
+    int i;
+
+    for (i = 0; i < tdx_guest->nr_ram_entries; i++) {
+        e = &tdx_guest->ram_entries[i];
+
+        if (address + length < e->address ||
+            e->address + e->length < address) {
+            continue;
+        }
+
+        /*
+         * The to-be-accepted ram range must be fully contained by one
+         * RAM entry
+         */
+        if (e->address > address ||
+            e->address + e->length < address + length) {
+            return -EINVAL;
+        }
+
+        if (e->type == TDX_RAM_ADDED) {
+            return -EINVAL;
+        }
+
+        tmp_address = e->address;
+        tmp_length = e->length;
+
+        e->address = address;
+        e->length = length;
+        e->type = TDX_RAM_ADDED;
+
+        head_length = address - tmp_address;
+        if (head_length > 0) {
+            head_start = tmp_address;
+            tdx_add_ram_entry(head_start, head_length, TDX_RAM_UNACCEPTED);
+        }
+
+        tail_start = address + length;
+        if (tail_start < tmp_address + tmp_length) {
+            tail_length = tmp_address + tmp_length - tail_start;
+            tdx_add_ram_entry(tail_start, tail_length, TDX_RAM_UNACCEPTED);
+        }
+
+        return 0;
+    }
+
+    return -1;
+}

take care,
   Gerd


Re: [RFC PATCH v4 22/36] i386/tdx: Track RAM entries for TDX VM

2022-05-27 Thread Xiaoyao Li

On 5/27/2022 2:48 AM, Isaku Yamahata wrote:

On Thu, May 26, 2022 at 03:33:10PM +0800,
Xiaoyao Li  wrote:


On 5/24/2022 3:37 PM, Gerd Hoffmann wrote:

I think all this can be simplified by:
(1) Change the existing entry to cover the accepted ram range.
(2) If there is room before the accepted ram range, add a
    TDX_RAM_UNACCEPTED entry for that.
(3) If there is room after the accepted ram range, add a
    TDX_RAM_UNACCEPTED entry for that.


I implemented it as below. Please help review.

+static int tdx_accept_ram_range(uint64_t address, uint64_t length)
+{
+    uint64_t head_start, tail_start, head_length, tail_length;
+    uint64_t tmp_address, tmp_length;
+    TdxRamEntry *e;
+    int i;
+
+    for (i = 0; i < tdx_guest->nr_ram_entries; i++) {
+        e = &tdx_guest->ram_entries[i];
+
+        if (address + length < e->address ||
+            e->address + e->length < address) {
+            continue;
+        }
+
+        /*
+         * The to-be-accepted ram range must be fully contained by one
+         * RAM entry
+         */
+        if (e->address > address ||
+            e->address + e->length < address + length) {
+            return -EINVAL;
+        }
+
+        if (e->type == TDX_RAM_ADDED) {
+            return -EINVAL;
+        }
+
+        tmp_address = e->address;
+        tmp_length = e->length;
+
+        e->address = address;
+        e->length = length;
+        e->type = TDX_RAM_ADDED;
+
+        head_length = address - tmp_address;
+        if (head_length > 0) {
+            head_start = e->address;
+            tdx_add_ram_entry(head_start, head_length, TDX_RAM_UNACCEPTED);


tdx_add_ram_entry() increments tdx_guest->nr_ram_entries. I think it's worth
a comment on why this is safe with regard to this for-loop.


The for-loop is there to find the valid existing RAM entry (from the E820
table). It will update that RAM entry and increment tdx_guest->nr_ram_entries
when the initial RAM entry needs to be split. However, once the entry is
found, the for-loop certainly stops, since the function returns unconditionally.





[PATCH 1/1] nbd: trace long NBD operations

2022-05-27 Thread Denis V. Lunev
At the moment there are 2 sources of lengthy operations, if configured:
* open connection, which could retry inside, and
* reconnect of an already opened connection
These operations could be quite lengthy and cumbersome to catch, thus it
would be quite natural to add trace points for them.

This patch is based on the original downstream work made by Vladimir.

Signed-off-by: Denis V. Lunev 
CC: Eric Blake 
CC: Vladimir Sementsov-Ogievskiy 
CC: Kevin Wolf 
CC: Hanna Reitz 
CC: Paolo Bonzini 
---
 block/nbd.c | 11 ---
 block/trace-events  |  2 ++
 nbd/client-connection.c |  2 ++
 nbd/trace-events|  3 +++
 4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6085ab1d2c..f1a473d36b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -371,6 +371,7 @@ static bool nbd_client_connecting(BDRVNBDState *s)
 /* Called with s->requests_lock taken.  */
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
+int ret;
 bool blocking = s->state == NBD_CLIENT_CONNECTING_WAIT;
 
 /*
@@ -380,6 +381,8 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
 assert(nbd_client_connecting(s));
 assert(s->in_flight == 1);
 
+trace_nbd_reconnect_attempt(s->bs->in_flight);
+
 if (blocking && !s->reconnect_delay_timer) {
 /*
  * It's the first reconnect attempt after switching to
@@ -401,7 +404,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
 }
 
 qemu_mutex_unlock(&s->requests_lock);
-nbd_co_do_establish_connection(s->bs, blocking, NULL);
+ret = nbd_co_do_establish_connection(s->bs, blocking, NULL);
 qemu_mutex_lock(&s->requests_lock);
 
 /*
@@ -410,6 +413,8 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
  * this I/O request (so draining removes all timers).
  */
 reconnect_delay_timer_del(s);
+
+trace_nbd_reconnect_attempt_result(ret, s->bs->in_flight);
 }
 
 static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
@@ -1856,8 +1861,8 @@ static int nbd_process_options(BlockDriverState *bs, 
QDict *options,
 goto error;
 }
 
-s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 0);
-s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 0);
+s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 300);
+s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 300);
 
 ret = 0;
 
diff --git a/block/trace-events b/block/trace-events
index 549090d453..caab699c22 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -172,6 +172,8 @@ nbd_read_reply_entry_fail(int ret, const char *err) "ret = 
%d, err: %s"
 nbd_co_request_fail(uint64_t from, uint32_t len, uint64_t handle, uint16_t 
flags, uint16_t type, const char *name, int ret, const char *err) "Request 
failed { .from = %" PRIu64", .len = %" PRIu32 ", .handle = %" PRIu64 ", .flags 
= 0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
 nbd_client_handshake(const char *export_name) "export '%s'"
 nbd_client_handshake_success(const char *export_name) "export '%s'"
+nbd_reconnect_attempt(int in_flight) "in_flight %d"
+nbd_reconnect_attempt_result(int ret, int in_flight) "ret %d in_flight %d"
 
 # ssh.c
 ssh_restart_coroutine(void *co) "co=%p"
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 2a632931c3..a5ee82e804 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -23,6 +23,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "trace.h"
 
 #include "block/nbd.h"
 
@@ -210,6 +211,7 @@ static void *connect_thread_func(void *opaque)
 object_unref(OBJECT(conn->sioc));
 conn->sioc = NULL;
 if (conn->do_retry && !conn->detached) {
+trace_nbd_connect_iteration(timeout);
 qemu_mutex_unlock(&conn->mutex);
 
 sleep(timeout);
diff --git a/nbd/trace-events b/nbd/trace-events
index c4919a2dd5..bdadfdc82d 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -73,3 +73,6 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t 
type, const char *n
 nbd_co_receive_request_payload_received(uint64_t handle, uint32_t len) 
"Payload received: handle = %" PRIu64 ", len = %" PRIu32
 nbd_co_receive_align_compliance(const char *op, uint64_t from, uint32_t len, 
uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" 
PRIx64 ", len=0x%" PRIx32 ", align=0x%" PRIx32
 nbd_trip(void) "Reading request"
+
+# client-connection.c
+nbd_connect_iteration(int in_flight) "timeout %d"
-- 
2.32.0




HELP: I can't get whpx working on ryzen / win11

2022-05-27 Thread 刘辉

CPU: AMD Ryzen 7 5800H

Windows Version: Microsoft Windows [Version 10.0.22621.1]

QEMU Versions:

D:\dev\qemu-toby>D:/dev/qemu/qemu-system-x86_64.exe -version
QEMU emulator version 7.0.0 (v7.0.0-11902-g1d935f4a02-dirty)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

D:\dev\qemu-toby>qemu-system-x86_64.exe --version
QEMU emulator version 7.0.50 (v7.0.0-1245-g58b53669e8)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

Tried command line (same error for both versions):

D:/dev/qemu-toby/qemu-system-x86_64.exe --name tQEMU --display vnc=:0
--rtc base=utc,clock=host --machine q35 --accel whpx
--boot order=dc,menu=off,strict=off --cpu max --m 8G --device virtio-gpu
--audiodev none,id=QEMUAudio --device intel-hda
--device hda-duplex,audiodev=QEMUAudio --device virtio-net,netdev=QEMUNet
--netdev user,id=QEMUNet,smb=D:/install/FPGA
--blockdev driver=qcow2,node-name=QEMUDisk0,file.driver=file,file.filename=E:/VM/tqemu/myzynq.qcow2
--device virtio-blk,drive=QEMUDisk0
--cdrom E:/iso/windows7/cn_windows_7_enterprise_with_sp1_x64_dvd_u_677685.iso
-drive if=pflash,format=raw,file=d:/dev/qemu/share/edk2-x86_64-code.fd
-smp 4 -usb -device usb-tablet

Output:

WHPX: setting APIC emulation mode in the hypervisor
Windows Hypervisor Platform accelerator is operational
whpx: injection failed, MSI (0, 0) delivery: 0, dest_mode: 0, trigger mode: 0, vector: 0, lost (c0350005)
qemu-system-x86_64.exe: WHPX: Failed to emulate MMIO access with EmulatorReturnStatus: 2
qemu-system-x86_64.exe: WHPX: Failed to exec a virtual processor



Re: [PATCH v3 2/9] replay: notify vCPU when BH is scheduled

2022-05-27 Thread Pavel Dovgalyuk

On 26.05.2022 15:10, Paolo Bonzini wrote:

On 5/26/22 11:51, Pavel Dovgalyuk wrote:


At least aio_bh_schedule_oneshot_full should have the same effect, so 
should this be done at a lower level, in aio_bh_enqueue() or even 
aio_notify()?


Not sure about aio_notify. It can operate with different contexts.
Can some of them be not related to the VM state?


All but the main AioContext one would have current_cpu == NULL.


aio_bh_enqueue is better. Moving this code to aio_notify breaks the tests.





Introduce akcipher service for virtio-crypto

2022-05-27 Thread zhenwei pi
v7 -> v8:
- The changes to QEMU crypto have been reviewed & merged by Daniel,
  remove this part from this series. Thanks to Daniel!
- virtio_crypto.h is updated by e4082063e47e
  ("linux-headers: Update to v5.18-rc6"), remove from this series.
- Minor fixes reviewed by Gonglei. Thanks to Gonglei!

v6 -> v7:
- Fix several build errors for some specific platforms/configurations.
- Use '%zu' instead of '%lu' for size_t parameters.
- AkCipher-gcrypt: avoid setting wrong error messages when parsing RSA
  keys.
- AkCipher-benchmark: process a constant amount of sign/verify operations
  instead of running sign/verify for a constant duration.

v5 -> v6:
- Fix build errors and code style.
- Add parameter 'Error **errp' for qcrypto_akcipher_rsakey_parse.
- Report more detailed errors.
- Fix buffer length checks and return values of akcipher-nettle, allowing
  callers to pass a buffer with a larger size than actually needed.

A million thanks to Daniel!

v4 -> v5:
- Move QCryptoAkCipher into akcipherpriv.h, and modify the related comments.
- Rename asn1_decoder.c to der.c.
- Code style fix: use 'cleanup' & 'error' labels.
- Allow autoptr type to auto-free.
- Add test cases for rsakey to handle DER errors.
- Other minor fixes.

v3 -> v4:
- Coding style fix: Akcipher -> AkCipher, struct XXX -> XXX, Rsa -> RSA,
  XXX-alg -> XXX-algo.
- Change version info in qapi/crypto.json, from 7.0 -> 7.1.
- Remove ecdsa from qapi/crypto.json; it would be introduced with the
  implementation later.
- Use QCryptoHashAlgorithm instead of QCryptoRSAHashAlgorithm (removed) in
  qapi/crypto.json.
- Rename arguments of qcrypto_akcipher_XXX to keep them aligned with
  qcrypto_cipher_XXX (dec/enc/sign/verify -> in/out/in2), and add
  qcrypto_akcipher_max_XXX APIs.
- Add new API: qcrypto_akcipher_supports.
- Change the return value of qcrypto_akcipher_enc/dec/sign; these functions
  return the actual length of the result.
- Separate the ASN.1 source code and test cases cleanly.
- Disable RSA raw encoding for akcipher-nettle.
- Separate the RSA key parser into rsakey.{hc}, and implement it with the
  builtin asn1-decoder and nettle respectively.
- Implement the RSA (pkcs1 and raw encoding) algorithm with gcrypt. This has
  higher priority than nettle.
- For some akcipher operations (e.g. decryption of pkcs1pad(rsa)), the length
  of the returned result may be less than the dst buffer size; return the
  actual length of the result instead of the buffer length to the guest side
  (in function virtio_crypto_akcipher_input_data_helper).
- Other minor changes.

Thanks to Daniel!

Eric pointed out this missing part of use case, send it here again.

In our plan, the feature is designed for the HTTPS offloading case and other
applications which use kernel RSA/ecdsa via the keyctl syscall. The full
picture is shown below:


         Nginx/openssl[1] ... Apps
Guest   ------------------------------
           virtio-crypto driver[2]
        ------------------------------
           virtio-crypto backend[3]
Host    ------------------------------
            /        |        \
     builtin[4]    vhost   keyctl[5] ...


[1] User applications can offload RSA calculation to the kernel via the keyctl
syscall. There is no keyctl engine in openssl currently; we developed an engine
and tried to contribute it to openssl upstream, but openssl 1.x does not accept
new features. Link:
   https://github.com/openssl/openssl/pull/16689

This branch is available and maintained by Lei:
   https://github.com/TousakaRin/openssl/tree/OpenSSL_1_1_1-kctl_engine

We tested nginx (config file change only) with the openssl keyctl engine; it
works fine.

[2] The virtio-crypto driver is used to communicate with the host side, sending
requests to the host side to do asymmetric calculation.
   https://lkml.org/lkml/2022/3/1/1425

[3] The virtio-crypto backend handles requests from the guest side, and
forwards requests to the crypto backend driver of QEMU.

[4] Currently RSA is supported only in the builtin driver. This driver is
supposed to test the full feature without other software (e.g. a vhost process)
or hardware dependencies. ecdsa is introduced into the qapi type without an
implementation; this may be implemented in Q3-2022 or later. If the ecdsa type
definition should be added together with the implementation, I'll remove this
in the next version.

[5] The keyctl backend is in development; we will post this feature in Q2-2022.
The keyctl backend can use hardware acceleration (e.g., Intel QAT).

With the full environment set up, tested with Intel QAT on the host side, the
QPS of HTTPS increases to ~200% in a guest.

Versus PCI passthrough: the most important benefit of this solution is that it
keeps the VM migratable.

v2 -> v3:
- Introduce akcipher types to qapi
- Add test/benchmark suite for akcipher class
- Separate 'virtio_crypto: Support virtio crypto asym operation' into:
 - crypto: Introduce akcipher crypto class
 - virtio-crypto: Introduce RSA algorithm

v1 -> v2:
- Update virtio_crypto.h from v2 version of related kernel patch.

v1:
- Support akcipher for virtio-crypto.
- Introd

[PATCH v8 1/1] crypto: Introduce RSA algorithm

2022-05-27 Thread zhenwei pi
There are two parts in this patch:
1, support akcipher service by cryptodev-builtin driver
2, virtio-crypto driver supports akcipher service

In principle, we should separate this into two patches; to avoid compile
errors, they are merged into one.

Then virtio-crypto gets request from guest side, and forwards the
request to builtin driver to handle it.

Test with a Linux guest:
1, The self-test framework of the crypto layer works fine in the guest kernel.
2, Tested with a Linux guest (with asym support); the following script is
   tested (note that pkey_XXX is supported only in a newer version of
   keyutils):
  - both public key & private key
  - create/close session
  - encrypt/decrypt/sign/verify basic driver operations
  - also tested with the kernel crypto layer (pkey add/query)

All the cases work fine.

Run script in guest:
rm -rf *.der *.pem *.pfx
modprobe pkcs8_key_parser # if CONFIG_PKCS8_PRIVATE_KEY_PARSER=m
rm -rf /tmp/data
dd if=/dev/random of=/tmp/data count=1 bs=20

openssl req -nodes -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -subj 
"/C=CN/ST=BJ/L=HD/O=qemu/OU=dev/CN=qemu/emailAddress=q...@qemu.org"
openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER -out key.der
openssl x509 -in cert.pem -inform PEM -outform DER -out cert.der

PRIV_KEY_ID=`cat key.der | keyctl padd asymmetric test_priv_key @s`
echo "priv key id = "$PRIV_KEY_ID
PUB_KEY_ID=`cat cert.der | keyctl padd asymmetric test_pub_key @s`
echo "pub key id = "$PUB_KEY_ID

keyctl pkey_query $PRIV_KEY_ID 0
keyctl pkey_query $PUB_KEY_ID 0

echo "Enc with priv key..."
keyctl pkey_encrypt $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.priv
echo "Dec with pub key..."
keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.priv enc=pkcs1 >/tmp/dec
cmp /tmp/data /tmp/dec

echo "Sign with priv key..."
keyctl pkey_sign $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 hash=sha1 > /tmp/sig
echo "Verify with pub key..."
keyctl pkey_verify $PRIV_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1

echo "Enc with pub key..."
keyctl pkey_encrypt $PUB_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.pub
echo "Dec with priv key..."
keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.pub enc=pkcs1 >/tmp/dec
cmp /tmp/data /tmp/dec

echo "Verify with pub key..."
keyctl pkey_verify $PUB_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1

Signed-off-by: zhenwei pi 
Signed-off-by: lei he 

 backend->conf.crypto_services =
  1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
  1u << VIRTIO_CRYPTO_SERVICE_HASH |
- 1u << VIRTIO_CRYPTO_SERVICE_MAC;
+ 1u << VIRTIO_CRYPTO_SERVICE_MAC |
+ 1u << VIRTIO_CRYPTO_SERVICE_AKCIPHER;
 backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
 backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
+backend->conf.akcipher_algo = 1u << VIRTIO_CRYPTO_AKCIPHER_RSA;
 /*
  * Set the Maximum length of crypto request.
  * Why this value? Just avoid to overflow when
  * memory allocation for each crypto request.
  */
-backend->conf.max_size = LONG_MAX - sizeof(CryptoDevBackendSymOpInfo);
+backend->conf.max_size = LONG_MAX - sizeof(CryptoDevBackendOpInfo);
 backend->conf.max_cipher_key_len = CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN;
 backend->conf.max_auth_key_len = CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN;
 
@@ -148,6 +152,53 @@ err:
return -1;
 }
 
+static int cryptodev_builtin_get_rsa_hash_algo(
+    int virtio_rsa_hash, Error **errp)
+{
+    switch (virtio_rsa_hash) {
+    case VIRTIO_CRYPTO_RSA_MD5:
+        return QCRYPTO_HASH_ALG_MD5;
+
+    case VIRTIO_CRYPTO_RSA_SHA1:
+        return QCRYPTO_HASH_ALG_SHA1;
+
+    case VIRTIO_CRYPTO_RSA_SHA256:
+        return QCRYPTO_HASH_ALG_SHA256;
+
+    case VIRTIO_CRYPTO_RSA_SHA512:
+        return QCRYPTO_HASH_ALG_SHA512;
+
+    default:
+        error_setg(errp, "Unsupported rsa hash algo: %d", virtio_rsa_hash);
+        return -1;
+    }
+}
+
+static int cryptodev_builtin_set_rsa_options(
+    int virtio_padding_algo,
+    int virtio_hash_algo,
+    QCryptoAkCipherOptionsRSA *opt,
+    Error **errp)
+{
+    if (virtio_padding_algo == VIRTIO_CRYPTO_RSA_PKCS1_PADDING) {
+        opt->padding_alg = QCRYPTO_RSA_PADDING_ALG_PKCS1;
+        opt->hash_alg =
+            cryptodev_builtin_get_rsa_hash_algo(virtio_hash_algo, errp);
+        if (opt->hash_alg < 0) {
+            return -1;
+        }
+        return 0;
+    }
+
+    if (virtio_padding_algo == VIRTIO_CRYPTO_RSA_RAW_PADDING) {
+        opt->padding_alg = QCRYPTO_RSA_PADDING_ALG_RAW;
+        return 0;
+    }
+
+    error_setg(errp, "Unsupported rsa padding algo: %d", virtio_padding_algo);
+    return -1;
+}
+
 static int cryptodev_builtin_create_cipher_session(
 CryptoDevBackendBuiltin *builtin,
 CryptoDevBackendSymSessionInfo *sess_info,
@@ -240,26 +291,89 @@ static int cryptodev_builtin_create_cipher_session(
 return index;
 }
 
-static int64_t cryptodev_

Re: [PATCH 1/1] nbd: trace long NBD operations

2022-05-27 Thread Vladimir Sementsov-Ogievskiy

On 5/27/22 11:43, Denis V. Lunev wrote:

At the moment there are 2 sources of lengthy operations if configured:
* open connection, which could retry inside and
* reconnect of already opened connection
These operations could be quite lengthy and cumbersome to catch thus
it would be quite natural to add trace points for them.

This patch is based on the original downstream work made by Vladimir.

Signed-off-by: Denis V. Lunev 
CC: Eric Blake 
CC: Vladimir Sementsov-Ogievskiy 
CC: Kevin Wolf 
CC: Hanna Reitz 
CC: Paolo Bonzini 
---
  block/nbd.c | 11 ---
  block/trace-events  |  2 ++
  nbd/client-connection.c |  2 ++
  nbd/trace-events|  3 +++
  4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6085ab1d2c..f1a473d36b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -371,6 +371,7 @@ static bool nbd_client_connecting(BDRVNBDState *s)
  /* Called with s->requests_lock taken.  */
  static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
  {
+int ret;
  bool blocking = s->state == NBD_CLIENT_CONNECTING_WAIT;
  
  /*

@@ -380,6 +381,8 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
  assert(nbd_client_connecting(s));
  assert(s->in_flight == 1);
  
+trace_nbd_reconnect_attempt(s->bs->in_flight);

+
  if (blocking && !s->reconnect_delay_timer) {
  /*
   * It's the first reconnect attempt after switching to
@@ -401,7 +404,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
  }
  
  qemu_mutex_unlock(&s->requests_lock);

-nbd_co_do_establish_connection(s->bs, blocking, NULL);
+ret = nbd_co_do_establish_connection(s->bs, blocking, NULL);
  qemu_mutex_lock(&s->requests_lock);
  
  /*

@@ -410,6 +413,8 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
   * this I/O request (so draining removes all timers).
   */
  reconnect_delay_timer_del(s);
+
+trace_nbd_reconnect_attempt_result(ret, s->bs->in_flight);


Maybe better to trace immediately after nbd_co_do_establish_connection().
Doesn't really matter, just simpler code.


  }
  
  static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)

@@ -1856,8 +1861,8 @@ static int nbd_process_options(BlockDriverState *bs, 
QDict *options,
  goto error;
  }
  
-s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 0);

-s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 0);
+s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 300);
+s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 300);


That's changing defaults. It should not be in this patch. And I don't think we
can simply change the upstream default of open-timeout, as it breaks the
habitual behavior.

  
  ret = 0;
  
diff --git a/block/trace-events b/block/trace-events

index 549090d453..caab699c22 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -172,6 +172,8 @@ nbd_read_reply_entry_fail(int ret, const char *err) "ret = %d, 
err: %s"
  nbd_co_request_fail(uint64_t from, uint32_t len, uint64_t handle, uint16_t flags, uint16_t type, const char *name, int ret, const char 
*err) "Request failed { .from = %" PRIu64", .len = %" PRIu32 ", .handle = %" PRIu64 ", .flags = 
0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
  nbd_client_handshake(const char *export_name) "export '%s'"
  nbd_client_handshake_success(const char *export_name) "export '%s'"
+nbd_reconnect_attempt(int in_flight) "in_flight %d"
+nbd_reconnect_attempt_result(int ret, int in_flight) "ret %d in_flight %d"


bs->in_flight is "unsigned int", so it would be a bit better to use
"unsigned int" and "%u" here.

  
  # ssh.c

  ssh_restart_coroutine(void *co) "co=%p"
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 2a632931c3..a5ee82e804 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -23,6 +23,7 @@
   */
  
  #include "qemu/osdep.h"

+#include "trace.h"
  
  #include "block/nbd.h"
  
@@ -210,6 +211,7 @@ static void *connect_thread_func(void *opaque)

  object_unref(OBJECT(conn->sioc));
  conn->sioc = NULL;
  if (conn->do_retry && !conn->detached) {
+trace_nbd_connect_iteration(timeout);


Here we are going to sleep a bit before the next reconnect attempt. I'd call the
trace-point "nbd_connect_thread_sleep" or something like this to be more intuitive.


  qemu_mutex_unlock(&conn->mutex);
  
  sleep(timeout);

diff --git a/nbd/trace-events b/nbd/trace-events
index c4919a2dd5..bdadfdc82d 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -73,3 +73,6 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t 
type, const char *n
  nbd_co_receive_request_payload_received(uint64_t handle, uint32_t len) "Payload received: 
handle = %" PRIu64 ", len = %" PRIu32
  nbd_co_receive_align_compliance(const char *op, uint64_t f

Re: [PATCH 1/1] nbd: trace long NBD operations

2022-05-27 Thread Vladimir Sementsov-Ogievskiy

On 5/27/22 11:43, Denis V. Lunev wrote:

+++ b/nbd/client-connection.c
@@ -23,6 +23,7 @@
   */
  
  #include "qemu/osdep.h"

+#include "trace.h"
  
  #include "block/nbd.h"
  
@@ -210,6 +211,7 @@ static void *connect_thread_func(void *opaque)

  object_unref(OBJECT(conn->sioc));
  conn->sioc = NULL;
  if (conn->do_retry && !conn->detached) {
+trace_nbd_connect_iteration(timeout);
  qemu_mutex_unlock(&conn->mutex);
  
  sleep(timeout);

diff --git a/nbd/trace-events b/nbd/trace-events
index c4919a2dd5..bdadfdc82d 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -73,3 +73,6 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t 
type, const char *n
  nbd_co_receive_request_payload_received(uint64_t handle, uint32_t len) "Payload received: 
handle = %" PRIu64 ", len = %" PRIu32
  nbd_co_receive_align_compliance(const char *op, uint64_t from, uint32_t len, uint32_t align) "client 
sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx32 ", 
align=0x%" PRIx32
  nbd_trip(void) "Reading request"
+
+# client-connection.c
+nbd_connect_iteration(int in_flight) "timeout %d"


timeout is uint64_t, so it should be "uint64_t timeout" here and "%" PRIu64.

--
Best regards,
Vladimir



[libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Daniel P . Berrangé
Libvirt provides QMP passthrough APIs for the QEMU driver and these are
exposed in virsh. It is not especially pleasant, however, using the raw
QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
exposes a human-friendly interactive shell. It is not possible to use
this with a libvirt-managed guest, however, since only one client can
attach to the QMP socket at any point in time.

The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
socket and listens for incoming client connections, speaking QMP on
the connected socket. It will forward any QMP commands received onto
the running libvirt QEMU guest, and forward any replies back to the
QMP client.

  $ virsh start demo
  $ virt-qmp-proxy demo demo.qmp &
  $ qmp-shell demo.qmp
  Welcome to the QMP low-level shell!
  Connected to QEMU 6.2.0

  (QEMU) query-kvm
  {
  "return": {
  "enabled": true,
  "present": true
  }
  }

Note this tool of course has the same risks as the raw libvirt
QMP passthrough. It is safe to run query commands to fetch information,
but commands which change the QEMU state risk disrupting libvirt's
management of QEMU, potentially resulting in data loss/corruption in
the worst case.

Signed-off-by: Daniel P. Berrangé 
---

CC'ing QEMU since this is likely of interest to maintainers and users
who work with QEMU and libvirt

Note this impl is fairly crude in that it assumes it is receiving
the QMP commands linewise, one at a time. Nonetheless it is good
enough to work with qmp-shell already, so I figured it was worth
exposing to the world. It also lacks support for forwarding events
back to the QMP client.

 docs/manpages/meson.build|   1 +
 docs/manpages/virt-qmp-proxy.rst | 123 
 tools/meson.build|   5 ++
 tools/virt-qmp-proxy | 133 +++
 4 files changed, 262 insertions(+)
 create mode 100644 docs/manpages/virt-qmp-proxy.rst
 create mode 100755 tools/virt-qmp-proxy

diff --git a/docs/manpages/meson.build b/docs/manpages/meson.build
index ba673cf472..4162a9969a 100644
--- a/docs/manpages/meson.build
+++ b/docs/manpages/meson.build
@@ -18,6 +18,7 @@ docs_man_files = [
   { 'name': 'virt-pki-query-dn', 'section': '1', 'install': true },
   { 'name': 'virt-pki-validate', 'section': '1', 'install': true },
   { 'name': 'virt-qemu-run', 'section': '1', 'install': conf.has('WITH_QEMU') 
},
+  { 'name': 'virt-qmp-proxy', 'section': '1', 'install': conf.has('WITH_QEMU') 
},
   { 'name': 'virt-xml-validate', 'section': '1', 'install': true },
 
   { 'name': 'libvirt-guests', 'section': '8', 'install': 
conf.has('WITH_LIBVIRTD') },
diff --git a/docs/manpages/virt-qmp-proxy.rst b/docs/manpages/virt-qmp-proxy.rst
new file mode 100644
index 00..94679406ab
--- /dev/null
+++ b/docs/manpages/virt-qmp-proxy.rst
@@ -0,0 +1,123 @@
+==============
+virt-qmp-proxy
+==============
+
+--------------------------------------------------
+Expose a QMP proxy server for a libvirt QEMU guest
+--------------------------------------------------
+
+:Manual section: 1
+:Manual group: Virtualization Support
+
+.. contents::
+
+
+SYNOPSIS
+========
+
+``virt-qmp-proxy`` [*OPTION*]... *DOMAIN* *QMP-SOCKET-PATH*
+
+
+DESCRIPTION
+===========
+
+This tool provides a way to expose a QMP proxy server that communicates
+with a QEMU guest managed by libvirt. This enables standard QMP client
+tools to interact with libvirt managed guests.
+
+**NOTE: use of this tool will result in the running QEMU guest being
+marked as tainted.** It is strongly recommended that this tool *only be
+used to send commands which query information* about the running guest.
+If this tool is used to make changes to the state of the guest, this
+may have negative interactions with the QEMU driver, resulting in an
+inability to manage the guest operation thereafter, and in the worst
+case **potentially lead to data loss or corruption**.
+
+The ``virt-qmp-proxy`` program will listen on a UNIX socket for incoming
+client connections, and run the QMP protocol over the connection. Any
+commands received will be sent to the running libvirt guest, and replies
+sent back.
+
+The ``virt-qmp-proxy`` program may be interrupted (e.g. Ctrl-C) when it
+is no longer required. The libvirt QEMU guest will continue running.
+
+
+OPTIONS
+=======
+
+*DOMAIN*
+
+The ID or UUID or Name of the libvirt QEMU guest.
+
+*QMP-SOCKET-PATH*
+
+The filesystem path at which to run the QMP server, listening for
+incoming connections.
+
+``-c`` *CONNECTION-URI*
+``--connect``\ =\ *CONNECTION-URI*
+
+The URI for the connection to the libvirt QEMU driver. If omitted,
+a URI will be auto-detected.
+
+``-v``, ``--verbose``
+
+Run in verbose mode, printing all QMP commands and replies that
+are handled.
+
+``-h``, ``--help``
+
+Display the command line help.
+
+
+EXIT STATUS
+===========
+
+Upon successful shutdown, an exit status of 0 will be set. Upon
+failure a non-zero status will be set.
+

Re: [PATCH v3 01/10] block: Add a 'flags' param to bdrv_{pread,pwrite,pwrite_sync}()

2022-05-27 Thread Vladimir Sementsov-Ogievskiy

On 5/19/22 17:48, Alberto Faria wrote:

For consistency with other I/O functions, and in preparation to
implement them using generated_co_wrapper.

Callers were updated using this Coccinelle script:

 @@ expression child, offset, buf, bytes; @@
 - bdrv_pread(child, offset, buf, bytes)
 + bdrv_pread(child, offset, buf, bytes, 0)

 @@ expression child, offset, buf, bytes; @@
 - bdrv_pwrite(child, offset, buf, bytes)
 + bdrv_pwrite(child, offset, buf, bytes, 0)

 @@ expression child, offset, buf, bytes; @@
 - bdrv_pwrite_sync(child, offset, buf, bytes)
 + bdrv_pwrite_sync(child, offset, buf, bytes, 0)

Resulting overly-long lines were then fixed by hand.

Signed-off-by: Alberto Faria
Reviewed-by: Paolo Bonzini


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Claudio Fontana
On 5/27/22 10:18 AM, Claudio Fontana wrote:
> On 5/27/22 9:26 AM, Dario Faggioli wrote:
>> On Thu, 2022-05-26 at 20:18 +0200, Claudio Fontana wrote:
>>> Forget about his aspect, I think it is a separate problem.
>>>
>>> valgind of qos-test when run restricted to those specific paths (-p
>>> /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-
>>> net/virtio-net-tests/vhost-user/reconnect for example)
>>> shows all clear,
>>>
>>> and still the test fails when run in a while loop after a few
>>> attempts:
>>>
>> Yes, this kind of matches what I've also seen and reported about in
>> <5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If
>> enable/run just one of:
>> - reconnect
>> - flags_mismatch
>> - connect_fail
>>
>> I see no issues.
> 
> On the countrary, for me just running a single one of those can fail.
> 
> To reproduce this I run in a loop using, as quoted above,
> 
> -p 
> /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect
>  
> 
> for example.
> 
> After a few successful runs I hit the error.
> 
> 
>>
>> As soon as two of those are run, one after the other, the problem
>> starts to appear.
> 
> Not for me: one is enough.
> 
>>
>> However, Claudio, AFAIUI, you're seeing this with an older GCC and
>> without LTO, right?
> 
> Yes, to provide a different angle I tried on veteran OpenSUSE Leap 15.2, so 
> gcc is based on 7.5.0.
> 
> I don't think LTO is being used in any way.
> 
>>
>> Regards
>>
> 
> Ciao,
> 
> CLaudio
> 

Hi Alex, I noticed that the asserts in wait_for_rings_started and such are
triggered after the 5-second timeout has passed (end_time =
g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND).

I tried to increase the timeouts from 5 seconds to 30 seconds
(tests/qtest/vhost-user-test.c).

The test still times out.

Could there be a problem with the data_mutex or the signaling of the
condition variable?

Ciao,

Claudio







[PATCH] tests/tcg/s390x: Test overflow conditions

2022-05-27 Thread Gautam Agrawal
Add a test to check for overflow conditions on s390x.
This patch is based on the following patches:
* https://git.qemu.org/?p=qemu.git;a=commitdiff;h=5a2e67a691501
* https://git.qemu.org/?p=qemu.git;a=commitdiff;h=fc6e0d0f2db51
 
Signed-off-by: Gautam Agrawal 
---
 tests/tcg/s390x/Makefile.target |  1 +
 tests/tcg/s390x/overflow.c  | 58 +
 2 files changed, 59 insertions(+)
 create mode 100644 tests/tcg/s390x/overflow.c

diff --git a/tests/tcg/s390x/Makefile.target b/tests/tcg/s390x/Makefile.target
index 3124172736..7f86de85b9 100644
--- a/tests/tcg/s390x/Makefile.target
+++ b/tests/tcg/s390x/Makefile.target
@@ -16,6 +16,7 @@ TESTS+=shift
 TESTS+=trap
 TESTS+=signals-s390x
 TESTS+=branch-relative-long
+TESTS+=overflow
 
 VECTOR_TESTS=vxeh2_vs
 VECTOR_TESTS+=vxeh2_vcvt
diff --git a/tests/tcg/s390x/overflow.c b/tests/tcg/s390x/overflow.c
new file mode 100644
index 00..ea8a410b1a
--- /dev/null
+++ b/tests/tcg/s390x/overflow.c
@@ -0,0 +1,58 @@
+#include <stdio.h>
+
+int overflow_add_32(int x, int y)
+{
+    int sum;
+    return __builtin_add_overflow(x, y, &sum);
+}
+
+int overflow_add_64(long long x, long long y)
+{
+    long long sum;
+    return __builtin_add_overflow(x, y, &sum);
+}
+
+int overflow_sub_32(int x, int y)
+{
+    int sum;
+    return __builtin_sub_overflow(x, y, &sum);
+}
+
+int overflow_sub_64(long long x, long long y)
+{
+    long long sum;
+    return __builtin_sub_overflow(x, y, &sum);
+}
+
+int a1_add = -2147483648;
+int b1_add = -2147483648;
+long long a2_add = -9223372036854775808ULL;
+long long b2_add = -9223372036854775808ULL;
+
+int a1_sub;
+int b1_sub = -2147483648;
+long long a2_sub = 0L;
+long long b2_sub = -9223372036854775808ULL;
+
+int main()
+{
+    int ret = 0;
+
+    if (!overflow_add_32(a1_add, b1_add)) {
+        fprintf(stderr, "data overflow while adding 32 bits\n");
+        ret = 1;
+    }
+    if (!overflow_add_64(a2_add, b2_add)) {
+        fprintf(stderr, "data overflow while adding 64 bits\n");
+        ret = 1;
+    }
+    if (!overflow_sub_32(a1_sub, b1_sub)) {
+        fprintf(stderr, "data overflow while subtracting 32 bits\n");
+        ret = 1;
+    }
+    if (!overflow_sub_64(a2_sub, b2_sub)) {
+        fprintf(stderr, "data overflow while subtracting 64 bits\n");
+        ret = 1;
+    }
+    return ret;
+}
-- 
2.34.1




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Peter Krempa
On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> exposed in virsh. It is not especially pleasant, however, using the raw
> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> exposes a human friendly interactive shell. It is not possible to use
> this with libvirt managed guest, however, since only one client can
> attach to he QMP socket at any point in time.
> 
> The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
> socket and listens for incoming client connections, speaking QMP on
> the connected socket. It will forward any QMP commands received onto
> the running libvirt QEMU guest, and forward any replies back to the
> QMP client.
> 
>   $ virsh start demo
>   $ virt-qmp-proxy demo demo.qmp &
>   $ qmp-shell demo.qmp
>   Welcome to the QMP low-level shell!
>   Connected to QEMU 6.2.0
> 
>   (QEMU) query-kvm
>   {
>   "return": {
>   "enabled": true,
>   "present": true
>   }
>   }
> 
> Note this tool of course has the same risks as the raw libvirt
> QMP passthrough. It is safe to run query commands to fetch information
> but commands which change the QEMU state risk disrupting libvirt's
> management of QEMU, potentially resulting in data loss/corruption in
> the worst case.
> 
> Signed-off-by: Daniel P. Berrangé 
> ---
> 
> CC'ing QEMU since this is likely of interest to maintainers and users
> who work with QEMU and libvirt
> 
> Note this impl is fairly crude in that it assumes it is receiving
> the QMP commands linewise one at a time. None the less it is good
> enough to work with qmp-shell already, so I figured it was worth
> exposing to the world. It also lacks support for forwarding events
> back to the QMP client.

I originally wanted to teach the qemu tools to work with libvirt
directly, similarly to how 'scripts/render_block_graph.py' from the qemu
tree already does, but I guess this is also an option.

It is a bit more complex to set up, but on the other hand a bit more
universal.

I'll have a look at the code a bit later.




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Claudio Fontana
On 5/27/22 12:20 PM, Peter Krempa wrote:
> On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
>> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
>> exposed in virsh. It is not especially pleasant, however, using the raw
>> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
>> exposes a human friendly interactive shell. It is not possible to use
>> this with libvirt managed guest, however, since only one client can
>> attach to he QMP socket at any point in time.
>>
>> The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
>> socket and listens for incoming client connections, speaking QMP on
>> the connected socket. It will forward any QMP commands received onto
>> the running libvirt QEMU guest, and forward any replies back to the
>> QMP client.
>>
>>   $ virsh start demo
>>   $ virt-qmp-proxy demo demo.qmp &
>>   $ qmp-shell demo.qmp
>>   Welcome to the QMP low-level shell!
>>   Connected to QEMU 6.2.0
>>
>>   (QEMU) query-kvm
>>   {
>>   "return": {
>>   "enabled": true,
>>   "present": true
>>   }
>>   }
>>
>> Note this tool of course has the same risks as the raw libvirt
>> QMP passthrough. It is safe to run query commands to fetch information
>> but commands which change the QEMU state risk disrupting libvirt's
>> management of QEMU, potentially resulting in data loss/corruption in
>> the worst case.
>>
>> Signed-off-by: Daniel P. Berrangé 
>> ---
>>
>> CC'ing QEMU since this is likely of interest to maintainers and users
>> who work with QEMU and libvirt
>>
>> Note this impl is fairly crude in that it assumes it is receiving
>> the QMP commands linewise one at a time. None the less it is good
>> enough to work with qmp-shell already, so I figured it was worth
>> exposing to the world. It also lacks support for forwarding events
>> back to the QMP client.
> 
> I originally wanted to teach the qemu tools to work with libvirt
> directly similarly how 'scripts/render_block_graph.py' from the qemu
> tree already does but I guess this is also an option.
> 
> This is an option too albeit a bit more complex to set up, but on the
> other hand a bit more universal.
> 
> I'll have a look at the code a bit later.
> 

I would have found it useful; at the time I wrote the multifd save series I
ended up just scripting around virsh qemu-monitor-command from either bash or C.

One challenge I had to face was, when doing fd migration doing

"execute": "getfd", "arguments": {"fdname":"migrate"}

in that case we have to use the --pass-fds=N option to pass the FD.

Does the virt-qmp-proxy tool handle the passing of FDs?

Thanks,

Claudio



[PATCH v4 0/9] Record/replay refactoring and stuff

2022-05-27 Thread Pavel Dovgalyuk
The following series includes the following record/replay-related changes:
- simplified async event processing
- updated record/replay documentation, which was also converted to rst
- avocado tests for record/replay of Linux for x86_64 and Aarch64
- some bugfixes

v4 changes:
 - moved vCPU notification to aio_bh_enqueue (suggested by Paolo Bonzini)

v3 changes:
 - rebased to master

v2 changes:
 - rebased to master
 - fixed some issues found by Richard Henderson

---

Pavel Dovgalyuk (9):
  replay: fix event queue flush for qemu shutdown
  replay: notify vCPU when BH is scheduled
  replay: rewrite async event handling
  replay: simplify async event processing
  docs: convert docs/devel/replay page to rst
  docs: move replay docs to docs/system/replay.rst
  tests/avocado: update replay_linux test
  tests/avocado: add replay Linux tests for virtio machine
  tests/avocado: add replay Linux test for Aarch64 machines


 accel/tcg/tcg-accel-ops-icount.c |   5 +-
 docs/devel/index-tcg.rst |   1 +
 docs/devel/replay.rst| 306 +++
 docs/devel/replay.txt|  46 
 docs/replay.txt  | 410 ---
 docs/system/index.rst|   1 +
 docs/system/replay.rst   | 237 ++
 include/sysemu/cpu-timers.h  |   1 +
 include/sysemu/replay.h  |   9 +-
 replay/replay-events.c   |  56 ++---
 replay/replay-internal.h |  37 ++-
 replay/replay-snapshot.c |   2 -
 replay/replay.c  |  75 +++---
 softmmu/icount.c |  12 +-
 stubs/icount.c   |   4 +
 tests/avocado/replay_linux.py|  86 ++-
 util/async.c |   8 +
 17 files changed, 726 insertions(+), 570 deletions(-)
 create mode 100644 docs/devel/replay.rst
 delete mode 100644 docs/devel/replay.txt
 delete mode 100644 docs/replay.txt
 create mode 100644 docs/system/replay.rst

--
Pavel Dovgalyuk



[PATCH v4 5/9] docs: convert docs/devel/replay page to rst

2022-05-27 Thread Pavel Dovgalyuk
This patch converts prior .txt replay devel documentation to .rst.

Signed-off-by: Pavel Dovgalyuk 
Reviewed-by: Richard Henderson 
---
 docs/devel/index-tcg.rst |1 +
 docs/devel/replay.rst|   54 ++
 docs/devel/replay.txt|   46 ---
 3 files changed, 55 insertions(+), 46 deletions(-)
 create mode 100644 docs/devel/replay.rst
 delete mode 100644 docs/devel/replay.txt

diff --git a/docs/devel/index-tcg.rst b/docs/devel/index-tcg.rst
index 0b0ad12c22..7b9760b26f 100644
--- a/docs/devel/index-tcg.rst
+++ b/docs/devel/index-tcg.rst
@@ -13,3 +13,4 @@ are only implementing things for HW accelerated hypervisors.
multi-thread-tcg
tcg-icount
tcg-plugins
+   replay
diff --git a/docs/devel/replay.rst b/docs/devel/replay.rst
new file mode 100644
index 00..dd8bf3b195
--- /dev/null
+++ b/docs/devel/replay.rst
@@ -0,0 +1,54 @@
+..
+   Copyright (c) 2022, ISP RAS
+   Written by Pavel Dovgalyuk
+
+=======================
+Execution Record/Replay
+=======================
+
+Record/replay mechanism, that could be enabled through icount mode, expects
+the virtual devices to satisfy the following requirements.
+
+The main idea behind this document is that everything that affects
+the guest state during execution in icount mode should be deterministic.
+
+Timers
+--
+
+All virtual devices should use virtual clock for timers that change the guest
+state. Virtual clock is deterministic, therefore such timers are deterministic
+too.
+
+Virtual devices can also use realtime clock for the events that do not change
+the guest state directly. When the clock ticking should depend on VM execution
+speed, use virtual clock with EXTERNAL attribute. It is not deterministic,
+but its speed depends on the guest execution. This clock is used by
+the virtual devices (e.g., slirp routing device) that lie outside the
+replayed guest.
+
+Bottom halves
+-
+
+Bottom half callbacks, that affect the guest state, should be invoked through
+replay_bh_schedule_event or replay_bh_schedule_oneshot_event functions.
+Their invocations are saved in record mode and synchronized with the existing
+log in replay mode.
+
+Saving/restoring the VM state
+-
+
+All fields in the device state structure (including virtual timers)
+should be restored by loadvm to the same values they had before savevm.
+
+Avoid accessing other devices' state, because the order of saving/restoring
+is not defined. It means that you should not call functions like
+'update_irq' in post_load callback. Save everything explicitly to avoid
+the dependencies that may make restoring the VM state non-deterministic.
+
+Stopping the VM
+---
+
+Stopping the guest should not interfere with its state (with the exception
+of the network connections, that could be broken by the remote timeouts).
+VM can be stopped at any moment of replay by the user. Restarting the VM
+after that stop should not break the replay by the unneeded guest state change.
diff --git a/docs/devel/replay.txt b/docs/devel/replay.txt
deleted file mode 100644
index e641c35add..00
--- a/docs/devel/replay.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-Record/replay mechanism, that could be enabled through icount mode, expects
-the virtual devices to satisfy the following requirements.
-
-The main idea behind this document is that everything that affects
-the guest state during execution in icount mode should be deterministic.
-
-Timers
-======
-
-All virtual devices should use virtual clock for timers that change the guest
-state. Virtual clock is deterministic, therefore such timers are deterministic
-too.
-
-Virtual devices can also use realtime clock for the events that do not change
-the guest state directly. When the clock ticking should depend on VM execution
-speed, use virtual clock with EXTERNAL attribute. It is not deterministic,
-but its speed depends on the guest execution. This clock is used by
-the virtual devices (e.g., slirp routing device) that lie outside the
-replayed guest.
-
-Bottom halves
-=============
-
-Bottom half callbacks, that affect the guest state, should be invoked through
-replay_bh_schedule_event or replay_bh_schedule_oneshot_event functions.
-Their invocations are saved in record mode and synchronized with the existing
-log in replay mode.
-
-Saving/restoring the VM state
-=============================
-
-All fields in the device state structure (including virtual timers)
-should be restored by loadvm to the same values they had before savevm.
-
-Avoid accessing other devices' state, because the order of saving/restoring
-is not defined. It means that you should not call functions like
-'update_irq' in post_load callback. Save everything explicitly to avoid
-the dependencies that may make restoring the VM state non-deterministic.
-
-Stopping the VM
-===============
-
-Stopping the guest should not interfere with its state (with the excepti

Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Dario Faggioli
On Fri, 2022-05-27 at 10:18 +0200, Claudio Fontana wrote:
> On 5/27/22 9:26 AM, Dario Faggioli wrote:
> > > 
> > Yes, this kind of matches what I've also seen and reported about in
> > <5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If
> > enable/run just one of:
> > - reconnect
> > - flags_mismatch
> > - connect_fail
> > 
> > I see no issues.
> 
> On the countrary, for me just running a single one of those can fail.
> 
Well, but you said (or at least so I understood) that running the test
for the first time works.

Then, when you run it multiple times, things start to fail.

That was, in fact, my point... I was making the parallelism between the
fact running only one of those tests works for me and the fact that
running the test for the first time works for you too.

And between the fact that running two tests, one after the other, fails
for me and the fact that running the same tests multiple times fails
for you too.

:-)

> > However, Claudio, AFAIUI, you're seeing this with an older GCC and
> > without LTO, right?
> 
> Yes, to provide a different angle I tried on veteran OpenSUSE Leap
> 15.2, so gcc is based on 7.5.0.
> 
> I don't think LTO is being used in any way.
> 
Yep, agreed. Now I don't think it's related to LTO specifically either.

Although, it's at least a bit of a Heisenbug. I mean, we're seeing it
(with two different setups), but for others things work fine, I guess?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[PATCH v4 3/9] replay: rewrite async event handling

2022-05-27 Thread Pavel Dovgalyuk
This patch decouples checkpoints and async events.
This was a tricky part of the replay implementation; now it becomes
much simpler and easier to maintain.

Signed-off-by: Pavel Dovgalyuk 
Acked-by: Richard Henderson 
---
 accel/tcg/tcg-accel-ops-icount.c |5 +--
 docs/replay.txt  |   11 ++
 include/sysemu/replay.h  |9 -
 replay/replay-events.c   |   20 +++---
 replay/replay-internal.h |6 +--
 replay/replay-snapshot.c |1 -
 replay/replay.c  |   74 +++---
 softmmu/icount.c |4 ++
 8 files changed, 54 insertions(+), 76 deletions(-)

diff --git a/accel/tcg/tcg-accel-ops-icount.c b/accel/tcg/tcg-accel-ops-icount.c
index 24520ea112..8f1dda4344 100644
--- a/accel/tcg/tcg-accel-ops-icount.c
+++ b/accel/tcg/tcg-accel-ops-icount.c
@@ -84,8 +84,7 @@ void icount_handle_deadline(void)
  * Don't interrupt cpu thread, when these events are waiting
  * (i.e., there is no checkpoint)
  */
-if (deadline == 0
-&& (replay_mode != REPLAY_MODE_PLAY || replay_has_checkpoint())) {
+if (deadline == 0) {
 icount_notify_aio_contexts();
 }
 }
@@ -109,7 +108,7 @@ void icount_prepare_for_run(CPUState *cpu)
 
 replay_mutex_lock();
 
-if (cpu->icount_budget == 0 && replay_has_checkpoint()) {
+if (cpu->icount_budget == 0) {
 icount_notify_aio_contexts();
 }
 }
diff --git a/docs/replay.txt b/docs/replay.txt
index 5b008ca491..6c9fdff09d 100644
--- a/docs/replay.txt
+++ b/docs/replay.txt
@@ -366,11 +366,9 @@ Here is the list of events that are written into the log:
Argument: 4-byte number of executed instructions.
  - EVENT_INTERRUPT. Used to synchronize interrupt processing.
  - EVENT_EXCEPTION. Used to synchronize exception handling.
- - EVENT_ASYNC. This is a group of events. They are always processed
-   together with checkpoints. When such an event is generated, it is
-   stored in the queue and processed only when checkpoint occurs.
-   Every such event is followed by 1-byte checkpoint id and 1-byte
-   async event id from the following list:
+ - EVENT_ASYNC. This is a group of events. When such an event is generated,
+   it is stored in the queue and processed in icount_account_warp_timer().
+   Every such event has its own id from the following list:
  - REPLAY_ASYNC_EVENT_BH. Bottom-half callback. This event synchronizes
callbacks that affect virtual machine state, but normally called
asynchronously.
@@ -405,6 +403,5 @@ Here is the list of events that are written into the log:
  - EVENT_CLOCK + clock_id. Group of events for host clock read operations.
Argument: 8-byte clock value.
  - EVENT_CHECKPOINT + checkpoint_id. Checkpoint for synchronization of
-   CPU, internal threads, and asynchronous input events. May be followed
-   by one or more EVENT_ASYNC events.
+   CPU, internal threads, and asynchronous input events.
  - EVENT_END. Last event in the log.
diff --git a/include/sysemu/replay.h b/include/sysemu/replay.h
index 032256533b..9af0ac32f0 100644
--- a/include/sysemu/replay.h
+++ b/include/sysemu/replay.h
@@ -161,9 +161,14 @@ void replay_shutdown_request(ShutdownCause cause);
 Returns 0 in PLAY mode if checkpoint was not found.
 Returns 1 in all other cases. */
 bool replay_checkpoint(ReplayCheckpoint checkpoint);
-/*! Used to determine that checkpoint is pending.
+/*! Used to determine that checkpoint or async event is pending.
 Does not proceed to the next event in the log. */
-bool replay_has_checkpoint(void);
+bool replay_has_event(void);
+/*
+ * Processes the async events added to the queue (while recording)
+ * or reads the events from the file (while replaying).
+ */
+void replay_async_events(void);
 
 /* Asynchronous events queue */
 
diff --git a/replay/replay-events.c b/replay/replay-events.c
index ac47c89834..db1decf9dd 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -170,12 +170,11 @@ void replay_block_event(QEMUBH *bh, uint64_t id)
 }
 }
 
-static void replay_save_event(Event *event, int checkpoint)
+static void replay_save_event(Event *event)
 {
 if (replay_mode != REPLAY_MODE_PLAY) {
 /* put the event into the file */
 replay_put_event(EVENT_ASYNC);
-replay_put_byte(checkpoint);
 replay_put_byte(event->event_kind);
 
 /* save event-specific data */
@@ -206,34 +205,27 @@ static void replay_save_event(Event *event, int 
checkpoint)
 }
 
 /* Called with replay mutex locked */
-void replay_save_events(int checkpoint)
+void replay_save_events(void)
 {
 g_assert(replay_mutex_locked());
-g_assert(checkpoint != CHECKPOINT_CLOCK_WARP_START);
-g_assert(checkpoint != CHECKPOINT_CLOCK_VIRTUAL);
 while (!QTAILQ_EMPTY(&events_list)) {
 Event *event = QTAILQ_FIRST(&events_list);
-replay_save_event(event, checkpoint);
+replay_save_event(event);
 replay_run_event(event);
 

[PATCH v4 2/9] replay: notify vCPU when BH is scheduled

2022-05-27 Thread Pavel Dovgalyuk
vCPU execution should be suspended when a new BH is scheduled.
This is needed to avoid guest timeouts caused by long cycles
of execution. In replay mode execution may hang when the
vCPU sleeps and a block event comes to the queue.
This patch adds a notification which wakes up the vCPU or interrupts
execution of guest code.

Signed-off-by: Pavel Dovgalyuk 
Reviewed-by: Richard Henderson 

--

v2: changed first_cpu to current_cpu (suggested by Richard Henderson)
v4: moved vCPU notification to aio_bh_enqueue (suggested by Paolo Bonzini)
---
 include/sysemu/cpu-timers.h |1 +
 softmmu/icount.c|8 
 stubs/icount.c  |4 
 util/async.c|8 
 4 files changed, 21 insertions(+)

diff --git a/include/sysemu/cpu-timers.h b/include/sysemu/cpu-timers.h
index ed6ee5c46c..2e786fe7fb 100644
--- a/include/sysemu/cpu-timers.h
+++ b/include/sysemu/cpu-timers.h
@@ -59,6 +59,7 @@ int64_t icount_round(int64_t count);
 /* if the CPUs are idle, start accounting real time to virtual clock. */
 void icount_start_warp_timer(void);
 void icount_account_warp_timer(void);
+void icount_notify_exit(void);
 
 /*
  * CPU Ticks and Clock
diff --git a/softmmu/icount.c b/softmmu/icount.c
index 5ca271620d..1cafec5014 100644
--- a/softmmu/icount.c
+++ b/softmmu/icount.c
@@ -486,3 +486,11 @@ void icount_configure(QemuOpts *opts, Error **errp)
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
 }
+
+void icount_notify_exit(void)
+{
+if (icount_enabled() && current_cpu) {
+qemu_cpu_kick(current_cpu);
+qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
+}
+}
diff --git a/stubs/icount.c b/stubs/icount.c
index f13c43568b..6df8c2bf7d 100644
--- a/stubs/icount.c
+++ b/stubs/icount.c
@@ -43,3 +43,7 @@ void icount_account_warp_timer(void)
 {
 abort();
 }
+
+void icount_notify_exit(void)
+{
+}
diff --git a/util/async.c b/util/async.c
index 554ba70cca..63434ddae4 100644
--- a/util/async.c
+++ b/util/async.c
@@ -33,6 +33,7 @@
 #include "block/raw-aio.h"
 #include "qemu/coroutine_int.h"
 #include "qemu/coroutine-tls.h"
+#include "sysemu/cpu-timers.h"
 #include "trace.h"
 
 /***/
@@ -84,6 +85,13 @@ static void aio_bh_enqueue(QEMUBH *bh, unsigned new_flags)
 }
 
 aio_notify(ctx);
+/*
+ * Workaround for record/replay.
+ * vCPU execution should be suspended when new BH is set.
+ * This is needed to avoid guest timeouts caused
+ * by the long cycles of the execution.
+ */
+icount_notify_exit();
 }
 
 /* Only called from aio_bh_poll() and aio_ctx_finalize() */




Re: [PATCH] ppc: fix boot with sam460ex

2022-05-27 Thread BALATON Zoltan

Hello,

Some changes to commit message (patch is OK).

On Thu, 26 May 2022, Michael S. Tsirkin wrote:

Recent changes to pcie_host corrected size of its internal region to
match what it expects - only the low 28 bits are ever decoded. Previous
code just ignored bit 29 (if size was 1 << 29) in the address which does
not make much sense.  We are now asserting on size > 1 << 28 instead,
but it so happened that ppc actually allows guest to configure as large
a size as it wants to, and current firmware set it to 1 << 29.

With just qemu-system-ppc -M sam460ex this triggers an assert which
seems to happen when the guest (board firmware?) writes a value to
CFGMSK reg:

(gdb) bt


The backtrace is missing, but you could just drop this line and replace ':'
with '.' at the end of the previous line; we probably don't need the full
backtrace, and the commit message is already too long in my opinion.



This is done in the board firmware here:

https://git.qemu.org/?p=u-boot-sam460ex.git;a=blob;f=arch/powerpc/cpu/ppc4xx/4xx_pcie.c;h=13348be93dccc74c13ea043d6635a7f8ece4b5f0;hb=HEAD

when trying to map config space.

Note that what firmware does matches
https://www.hardware.com.br/comunidade/switch-cisco/1128380/


That's not it. It's different hardware and firmware; I just quoted it as
an example that this value seems to be common to that SoC even on
different hardware/OS/firmware (it probably comes from a reference
hardware/devel board?). The sam460ex is here:


https://www.acube-systems.biz/index.php?page=hardware&pid=5

the U-Boot in the above repo matches the firmware from the acube page, but
I had to fix some bugs in it to make it compile and work.


Otherwise this should be OK.

Regards,
BALATON Zoltan


So it's not clear what the proper fix should be.

However, allowing a guest to trigger an assert in qemu is not good practice
anyway.

For now let's just force the mask to 256MB on guest write; this way
anything outside the expected address range is ignored.

Fixes: commit 1f1a7b2269 ("include/hw/pci/pcie_host: Correct 
PCIE_MMCFG_SIZE_MAX")
Reviewed-by: BALATON Zoltan 
Tested-by: BALATON Zoltan 
Signed-off-by: Michael S. Tsirkin 
---

Affected system is orphan so I guess I will merge the patch unless
someone objects.

hw/ppc/ppc440_uc.c | 8 
1 file changed, 8 insertions(+)

diff --git a/hw/ppc/ppc440_uc.c b/hw/ppc/ppc440_uc.c
index 993e3ba955..a1ecf6dd1c 100644
--- a/hw/ppc/ppc440_uc.c
+++ b/hw/ppc/ppc440_uc.c
@@ -1180,6 +1180,14 @@ static void dcr_write_pcie(void *opaque, int dcrn, 
uint32_t val)
case PEGPL_CFGMSK:
s->cfg_mask = val;
size = ~(val & 0xfffe) + 1;
+/*
+ * Firmware sets this register to E001. Why we are not sure,
+ * but the current guess is anything above PCIE_MMCFG_SIZE_MAX is
+ * ignored.
+ */
+if (size > PCIE_MMCFG_SIZE_MAX) {
+size = PCIE_MMCFG_SIZE_MAX;
+}
pcie_host_mmcfg_update(PCIE_HOST_BRIDGE(s), val & 1, s->cfg_base, size);
break;
case PEGPL_MSGBAH:
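For reference, the mask-to-size computation and the clamp in the hunk above can be checked with a quick sketch. Assumptions here: the firmware-written value is 0xE0000001, the mask constant is 0xfffffffe, and PCIE_MMCFG_SIZE_MAX is 256MB (1 << 28).

```python
# Toy check of the PEGPL_CFGMSK decoding and the clamp the patch adds.
# Assumptions: firmware writes 0xE0000001, the mask constant is
# 0xfffffffe, and PCIE_MMCFG_SIZE_MAX is 256MB (1 << 28).
PCIE_MMCFG_SIZE_MAX = 1 << 28

def cfgmsk_to_size(val):
    # size = ~(val & 0xfffffffe) + 1, truncated to 32 bits as in C
    size = (~(val & 0xfffffffe) + 1) & 0xffffffff
    # the clamp added by the patch
    return min(size, PCIE_MMCFG_SIZE_MAX)

# 0xE0000001 decodes to 1 << 29, which used to trip the new assert ...
assert ((~(0xE0000001 & 0xfffffffe) + 1) & 0xffffffff) == (1 << 29)
# ... and is now clamped to the 256MB the PCIe host actually decodes.
assert cfgmsk_to_size(0xE0000001) == 1 << 28
```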





[PATCH v4 4/9] replay: simplify async event processing

2022-05-27 Thread Pavel Dovgalyuk
This patch joins the replay event ID and the async event ID into a single byte in the log.
This makes processing a bit faster and the log a bit smaller.
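The encode/decode arithmetic can be illustrated with a toy model (the EVENT_ASYNC value below is made up; only the packing into one byte matters):

```python
# Toy model of the combined event byte: the async event kind is folded
# into the EVENT_ASYNC range instead of being written as a second byte.
EVENT_ASYNC = 10        # hypothetical base id for async events
REPLAY_ASYNC_COUNT = 7  # number of ReplayAsyncEventKind values

def save_event_id(kind):
    """Record side: write one byte carrying both ids."""
    assert 0 <= kind < REPLAY_ASYNC_COUNT
    return EVENT_ASYNC + kind

def load_event_kind(data_kind):
    """Replay side: recover the async kind from the combined byte."""
    assert EVENT_ASYNC <= data_kind < EVENT_ASYNC + REPLAY_ASYNC_COUNT
    return data_kind - EVENT_ASYNC

# round-trip every kind through the single-byte encoding
for kind in range(REPLAY_ASYNC_COUNT):
    assert load_event_kind(save_event_id(kind)) == kind
```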

Signed-off-by: Pavel Dovgalyuk 
Reviewed-by: Richard Henderson 

--

v2: minor enum fixes (suggested by Richard Henderson)
---
 replay/replay-events.c   |   36 ++--
 replay/replay-internal.h |   31 ++-
 replay/replay-snapshot.c |1 -
 replay/replay.c  |5 +++--
 4 files changed, 31 insertions(+), 42 deletions(-)

diff --git a/replay/replay-events.c b/replay/replay-events.c
index db1decf9dd..af0721cc1a 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -174,8 +174,8 @@ static void replay_save_event(Event *event)
 {
 if (replay_mode != REPLAY_MODE_PLAY) {
 /* put the event into the file */
-replay_put_event(EVENT_ASYNC);
-replay_put_byte(event->event_kind);
+g_assert(event->event_kind < REPLAY_ASYNC_COUNT);
+replay_put_event(EVENT_ASYNC + event->event_kind);
 
 /* save event-specific data */
 switch (event->event_kind) {
@@ -220,14 +220,10 @@ void replay_save_events(void)
 static Event *replay_read_event(void)
 {
 Event *event;
-if (replay_state.read_event_kind == -1) {
-replay_state.read_event_kind = replay_get_byte();
-replay_state.read_event_id = -1;
-replay_check_error();
-}
+ReplayAsyncEventKind event_kind = replay_state.data_kind - EVENT_ASYNC;
 
 /* Events that has not to be in the queue */
-switch (replay_state.read_event_kind) {
+switch (event_kind) {
 case REPLAY_ASYNC_EVENT_BH:
 case REPLAY_ASYNC_EVENT_BH_ONESHOT:
 if (replay_state.read_event_id == -1) {
@@ -236,17 +232,17 @@ static Event *replay_read_event(void)
 break;
 case REPLAY_ASYNC_EVENT_INPUT:
 event = g_new0(Event, 1);
-event->event_kind = replay_state.read_event_kind;
+event->event_kind = event_kind;
 event->opaque = replay_read_input_event();
 return event;
 case REPLAY_ASYNC_EVENT_INPUT_SYNC:
 event = g_new0(Event, 1);
-event->event_kind = replay_state.read_event_kind;
+event->event_kind = event_kind;
 event->opaque = 0;
 return event;
 case REPLAY_ASYNC_EVENT_CHAR_READ:
 event = g_new0(Event, 1);
-event->event_kind = replay_state.read_event_kind;
+event->event_kind = event_kind;
 event->opaque = replay_event_char_read_load();
 return event;
 case REPLAY_ASYNC_EVENT_BLOCK:
@@ -256,18 +252,17 @@ static Event *replay_read_event(void)
 break;
 case REPLAY_ASYNC_EVENT_NET:
 event = g_new0(Event, 1);
-event->event_kind = replay_state.read_event_kind;
+event->event_kind = event_kind;
 event->opaque = replay_event_net_load();
 return event;
 default:
-error_report("Unknown ID %d of replay event",
-replay_state.read_event_kind);
+error_report("Unknown ID %d of replay event", event_kind);
 exit(1);
 break;
 }
 
 QTAILQ_FOREACH(event, &events_list, events) {
-if (event->event_kind == replay_state.read_event_kind
+if (event->event_kind == event_kind
 && (replay_state.read_event_id == -1
 || replay_state.read_event_id == event->id)) {
 break;
@@ -276,12 +271,8 @@ static Event *replay_read_event(void)
 
 if (event) {
 QTAILQ_REMOVE(&events_list, event, events);
-} else {
-return NULL;
 }
 
-/* Read event-specific data */
-
 return event;
 }
 
@@ -289,13 +280,14 @@ static Event *replay_read_event(void)
 void replay_read_events(void)
 {
 g_assert(replay_mutex_locked());
-while (replay_state.data_kind == EVENT_ASYNC) {
+while (replay_state.data_kind >= EVENT_ASYNC
+&& replay_state.data_kind <= EVENT_ASYNC_LAST) {
 Event *event = replay_read_event();
 if (!event) {
 break;
 }
 replay_finish_event();
-replay_state.read_event_kind = -1;
+replay_state.read_event_id = -1;
 replay_run_event(event);
 
 g_free(event);
@@ -304,7 +296,7 @@ void replay_read_events(void)
 
 void replay_init_events(void)
 {
-replay_state.read_event_kind = -1;
+replay_state.read_event_id = -1;
 }
 
 void replay_finish_events(void)
diff --git a/replay/replay-internal.h b/replay/replay-internal.h
index 59797c86cf..301131c1e6 100644
--- a/replay/replay-internal.h
+++ b/replay/replay-internal.h
@@ -12,6 +12,19 @@
  *
  */
 
+/* Asynchronous events IDs */
+
+typedef enum ReplayAsyncEventKind {
+REPLAY_ASYNC_EVENT_BH,
+REPLAY_ASYNC_EVENT_BH_ONESHOT,
+REPLAY_ASYNC_EVENT_INPUT,
+REPLAY_ASYNC_EVENT_INPUT_SYNC,
+REPLAY_ASYNC_EVENT_CHAR_READ,
+REPLAY_ASYNC_EVENT_BLOCK,
+REPLAY_ASYNC_EVENT_NET,
+REPLAY_ASYNC_COUNT
+} ReplayAsyncEventKind;
+
 /* Any changes to order/number

[PATCH v4 1/9] replay: fix event queue flush for qemu shutdown

2022-05-27 Thread Pavel Dovgalyuk
This patch fixes the event queue flush in the case of emulator
shutdown: replay_finish_events() should be called while replay_mode
is not yet cleared.

Signed-off-by: Pavel Dovgalyuk 
Reviewed-by: Richard Henderson 
---
 replay/replay.c |3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/replay/replay.c b/replay/replay.c
index 6df2abc18c..2d3607998a 100644
--- a/replay/replay.c
+++ b/replay/replay.c
@@ -387,9 +387,8 @@ void replay_finish(void)
 g_free(replay_snapshot);
 replay_snapshot = NULL;
 
-replay_mode = REPLAY_MODE_NONE;
-
 replay_finish_events();
+replay_mode = REPLAY_MODE_NONE;
 }
 
 void replay_add_blocker(Error *reason)




[PATCH v4 6/9] docs: move replay docs to docs/system/replay.rst

2022-05-27 Thread Pavel Dovgalyuk
This patch adds a replay description page, converting the prior
text from docs/replay.txt.
The text was also updated and some sections were moved
to the devel part of the docs.

Signed-off-by: Pavel Dovgalyuk 
Acked-by: Richard Henderson 
---
 docs/devel/replay.rst  |  264 ++-
 docs/replay.txt|  407 
 docs/system/index.rst  |1 
 docs/system/replay.rst |  237 
 4 files changed, 496 insertions(+), 413 deletions(-)
 delete mode 100644 docs/replay.txt
 create mode 100644 docs/system/replay.rst

diff --git a/docs/devel/replay.rst b/docs/devel/replay.rst
index dd8bf3b195..0244be8b9c 100644
--- a/docs/devel/replay.rst
+++ b/docs/devel/replay.rst
@@ -1,20 +1,149 @@
 ..
Copyright (c) 2022, ISP RAS
-   Written by Pavel Dovgalyuk
+   Written by Pavel Dovgalyuk and Alex Bennée
 
 ===
 Execution Record/Replay
 ===
 
-Record/replay mechanism, that could be enabled through icount mode, expects
-the virtual devices to satisfy the following requirements.
+Core concepts
+=
+
+Record/replay functions are used for the deterministic replay of qemu
+execution. Execution recording writes a log of non-deterministic events, which
+can later be used to replay the execution anywhere and an unlimited
+number of times. Execution replaying reads the log and replays all
+non-deterministic events including external input, hardware clocks,
+and interrupts.
+
+Several parts of QEMU include function calls to make event log recording
+and replaying.
+Device models that have non-deterministic input from external devices were
+changed to write every external event into the execution log immediately.
+E.g. network packets are written into the log when they arrive into the virtual
+network adapter.
+
+All non-deterministic events come from these devices. But to
+replay them we need to know at which moments they occur. We specify
+these moments by counting the number of instructions executed between
+every pair of consecutive events.
+
+Academic papers with description of deterministic replay implementation:
+
+* `Deterministic Replay of System's Execution with Multi-target QEMU Simulator 
for Dynamic Analysis and Reverse Debugging 
`_
+* `Don't panic: reverse debugging of kernel drivers 
`_
+
+Modifications of qemu include:
+
+ * wrappers for clock and time functions to save their return values in the log
+ * saving different asynchronous events (e.g. system shutdown) into the log
+ * synchronization of the bottom halves execution
+ * synchronization of the threads from thread pool
+ * recording/replaying user input (mouse, keyboard, and microphone)
+ * adding internal checkpoints for cpu and io synchronization
+ * network filter for recording and replaying the packets
+ * block driver for making block layer deterministic
+ * serial port input record and replay
+ * recording of random numbers obtained from the external sources
+
+Instruction counting
+
+
+QEMU should work in icount mode to use the record/replay feature. icount was
+designed to allow deterministic execution in the absence of external inputs
+to the virtual machine. We also use icount to control the occurrence of
+non-deterministic events. The number of instructions elapsed from the last event
+is written to the log while recording the execution. In replay mode we
+can predict when to inject that event using the instruction counter.
+
+Locking and thread synchronisation
+--
+
+Previously the synchronisation of the main thread and the vCPU thread
+was ensured by the holding of the BQL. However the trend has been to
+reduce the time the BQL was held across the system including under TCG
+system emulation. As it is important that batches of events are kept
+in sequence (e.g. expiring timers and checkpoints in the main thread
+while instruction checkpoints are written by the vCPU thread) we need
+another lock to keep things in lock-step. This role is now handled by
+the replay_mutex_lock. It used to be held only for each event being
+written but now it is held for a whole execution period. This results
+in a deterministic ping-pong between the two main threads.
+
+As the BQL is now a finer grained lock than the replay_lock it is almost
+certainly a bug, and a source of deadlocks, to take the
+replay_mutex_lock while the BQL is held. This is enforced by an assert.
+While the unlocks are usually in the reverse order, this is not
+necessary; you can drop the replay_lock while holding the BQL, without
+doing a more complicated unlock_iothread/replay_unlock/lock_iothread
+sequence.
+
+Checkpoints
+---
+
+Replaying the execution of virtual machine is bound by sources of
+non-determinism. These are inputs from clock and peripheral devices

[PATCH v4 8/9] tests/avocado: add replay Linux tests for virtio machine

2022-05-27 Thread Pavel Dovgalyuk
This patch adds two tests for replaying the Linux boot process
on the x86_64 virtio platform.

Signed-off-by: Pavel Dovgalyuk 
---
 tests/avocado/replay_linux.py |   26 ++
 1 file changed, 26 insertions(+)

diff --git a/tests/avocado/replay_linux.py b/tests/avocado/replay_linux.py
index 1099b5647f..3bb1bc8816 100644
--- a/tests/avocado/replay_linux.py
+++ b/tests/avocado/replay_linux.py
@@ -123,3 +123,29 @@ def test_pc_q35(self):
 :avocado: tags=machine:q35
 """
 self.run_rr(shift=3)
+
+@skipUnless(os.getenv('AVOCADO_TIMEOUT_EXPECTED'), 'Test might timeout')
+class ReplayLinuxX8664Virtio(ReplayLinux):
+"""
+:avocado: tags=arch:x86_64
+:avocado: tags=virtio
+:avocado: tags=accel:tcg
+"""
+
+hdd = 'virtio-blk-pci'
+cd = 'virtio-blk-pci'
+bus = None
+
+chksum = 'e3c1b309d9203604922d6e255c2c5d098a309c2d46215d8fc026954f3c5c27a0'
+
+def test_pc_i440fx(self):
+"""
+:avocado: tags=machine:pc
+"""
+self.run_rr(shift=1)
+
+def test_pc_q35(self):
+"""
+:avocado: tags=machine:q35
+"""
+self.run_rr(shift=3)




[PATCH v4 7/9] tests/avocado: update replay_linux test

2022-05-27 Thread Pavel Dovgalyuk
This patch updates the replay_linux test to make it compatible with
the new LinuxTest class.

Signed-off-by: Pavel Dovgalyuk 
---
 tests/avocado/replay_linux.py |   19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tests/avocado/replay_linux.py b/tests/avocado/replay_linux.py
index 15953f9e49..1099b5647f 100644
--- a/tests/avocado/replay_linux.py
+++ b/tests/avocado/replay_linux.py
@@ -32,9 +32,16 @@ class ReplayLinux(LinuxTest):
 bus = 'ide'
 
 def setUp(self):
-super(ReplayLinux, self).setUp()
+# LinuxTest does many replay-incompatible things, but includes
+# useful methods. Do not setup LinuxTest here and just
+# call some functions.
+super(LinuxTest, self).setUp()
+self._set_distro()
 self.boot_path = self.download_boot()
-self.cloudinit_path = self.prepare_cloudinit()
+self.phone_server = cloudinit.PhoneHomeServer(('0.0.0.0', 0),
+  self.name)
+ssh_pubkey, self.ssh_key = self.set_up_existing_ssh_keys()
+self.cloudinit_path = self.prepare_cloudinit(ssh_pubkey)
 
 def vm_add_disk(self, vm, path, id, device):
 bus_string = ''
@@ -50,7 +57,9 @@ def launch_and_wait(self, record, args, shift):
 vm = self.get_vm()
 vm.add_args('-smp', '1')
 vm.add_args('-m', '1024')
-vm.add_args('-object', 'filter-replay,id=replay,netdev=hub0port0')
+vm.add_args('-netdev', 'user,id=vnet,hostfwd=:127.0.0.1:0-:22',
+'-device', 'virtio-net,netdev=vnet')
+vm.add_args('-object', 'filter-replay,id=replay,netdev=vnet')
 if args:
 vm.add_args(*args)
 self.vm_add_disk(vm, self.boot_path, 0, self.hdd)
@@ -75,8 +84,8 @@ def launch_and_wait(self, record, args, shift):
 stop_check=(lambda : not vm.is_running()))
 console_drainer.start()
 if record:
-cloudinit.wait_for_phone_home(('0.0.0.0', self.phone_home_port),
-  self.name)
+while not self.phone_server.instance_phoned_back:
+self.phone_server.handle_request()
 vm.shutdown()
 logger.info('finished the recording with log size %s bytes'
 % os.path.getsize(replay_path))




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Peter Krempa
On Fri, May 27, 2022 at 12:35:45 +0200, Claudio Fontana wrote:
> On 5/27/22 12:20 PM, Peter Krempa wrote:
> > On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
> >> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> >> exposed in virsh. It is not especially pleasant, however, using the raw
> >> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> >> exposes a human friendly interactive shell. It is not possible to use
> >> this with a libvirt managed guest, however, since only one client can
> >> attach to the QMP socket at any point in time.
> >>
> >> The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
> >> socket and listens for incoming client connections, speaking QMP on
> >> the connected socket. It will forward any QMP commands received onto
> >> the running libvirt QEMU guest, and forward any replies back to the
> >> QMP client.
> >>
> >>   $ virsh start demo
> >>   $ virt-qmp-proxy demo demo.qmp &
> >>   $ qmp-shell demo.qmp
> >>   Welcome to the QMP low-level shell!
> >>   Connected to QEMU 6.2.0
> >>
> >>   (QEMU) query-kvm
> >>   {
> >>   "return": {
> >>   "enabled": true,
> >>   "present": true
> >>   }
> >>   }
> >>
> >> Note this tool of course has the same risks as the raw libvirt
> >> QMP passthrough. It is safe to run query commands to fetch information
> >> but commands which change the QEMU state risk disrupting libvirt's
> >> management of QEMU, potentially resulting in data loss/corruption in
> >> the worst case.
> >>
> >> Signed-off-by: Daniel P. Berrangé 
> >> ---
> >>
> >> CC'ing QEMU since this is likely of interest to maintainers and users
> >> who work with QEMU and libvirt
> >>
> >> Note this impl is fairly crude in that it assumes it is receiving
> >> the QMP commands linewise one at a time. None the less it is good
> >> enough to work with qmp-shell already, so I figured it was worth
> >> exposing to the world. It also lacks support for forwarding events
> >> back to the QMP client.
> > 
> > I originally wanted to teach the qemu tools to work with libvirt
> > directly similarly how 'scripts/render_block_graph.py' from the qemu
> > tree already does but I guess this is also an option.
> > 
> > This is an option too albeit a bit more complex to set up, but on the
> > other hand a bit more universal.
> > 
> > I'll have a look at the code a bit later.
> > 
> 
> Would have found it useful, at the time I wrote the multifd save series I 
> ended up just scripting around virsh qemu-monitor-command from either bash or 
> C.

I'd consider this to be just something to achieve compatibility with
tools that already exist and expect a monitor socket.

If you are writing new software with libvirt connection in mind you can
use 'virDomainQemuMonitorCommand()' or
'virDomainQemuMonitorCommandWithFiles()' directly or via the language
bindings for your language.

Obviously not for bash though.
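For illustration, the line-wise QMP forwarding such a proxy does can be sketched as below. The `handler` callback stands in for the libvirt passthrough call (virDomainQemuMonitorCommand); every name here is illustrative, not the actual virt-qmp-proxy code.

```python
# Sketch of a line-wise QMP proxy: greet the client, then forward one
# JSON command per line to `handler` and write its reply back.
import json
import socket

def serve_connection(conn, handler):
    """Speak line-wise QMP on `conn`, forwarding each command to `handler`."""
    fin = conn.makefile("r")
    fout = conn.makefile("w")
    # Greet the client like a real QMP server would.
    fout.write(json.dumps({"QMP": {"version": {}, "capabilities": []}}) + "\n")
    fout.flush()
    for line in fin:                      # one JSON command per line
        cmd = json.loads(line)
        if cmd.get("execute") == "quit":  # local stop condition for the sketch
            break
        fout.write(json.dumps(handler(cmd)) + "\n")
        fout.flush()

def serve(path, handler):
    """Accept one client on a UNIX socket at `path` and proxy for it."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        serve_connection(conn, handler)
    finally:
        conn.close()
        srv.close()
```

In the real tool the handler would forward the command to the domain's monitor and return the reply verbatim; here it is deliberately pluggable so the framing logic can be exercised on its own.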




[PATCH v4 9/9] tests/avocado: add replay Linux test for Aarch64 machines

2022-05-27 Thread Pavel Dovgalyuk
This patch adds two tests for replaying the Linux boot process
on the AArch64 platform.

Signed-off-by: Pavel Dovgalyuk 
---
 tests/avocado/replay_linux.py |   41 +
 1 file changed, 41 insertions(+)

diff --git a/tests/avocado/replay_linux.py b/tests/avocado/replay_linux.py
index 3bb1bc8816..e1f9981a34 100644
--- a/tests/avocado/replay_linux.py
+++ b/tests/avocado/replay_linux.py
@@ -13,6 +13,7 @@
 import time
 
 from avocado import skipUnless
+from avocado_qemu import BUILD_DIR
 from avocado.utils import cloudinit
 from avocado.utils import network
 from avocado.utils import vmimage
@@ -149,3 +150,43 @@ def test_pc_q35(self):
 :avocado: tags=machine:q35
 """
 self.run_rr(shift=3)
+
+@skipUnless(os.getenv('AVOCADO_TIMEOUT_EXPECTED'), 'Test might timeout')
+class ReplayLinuxAarch64(ReplayLinux):
+"""
+:avocado: tags=accel:tcg
+:avocado: tags=arch:aarch64
+:avocado: tags=machine:virt
+:avocado: tags=cpu:max
+"""
+
+chksum = '1e18d9c0cf734940c4b5d5ec592facaed2af0ad0329383d5639c997fdf16fe49'
+
+hdd = 'virtio-blk-device'
+cd = 'virtio-blk-device'
+bus = None
+
+def get_common_args(self):
+return ('-bios',
+os.path.join(BUILD_DIR, 'pc-bios', 'edk2-aarch64-code.fd'),
+"-cpu", "max,lpa2=off",
+'-device', 'virtio-rng-pci,rng=rng0',
+'-object', 'rng-builtin,id=rng0')
+
+def test_virt_gicv2(self):
+"""
+:avocado: tags=machine:gic-version=2
+"""
+
+self.run_rr(shift=3,
+args=(*self.get_common_args(),
+  "-machine", "virt,gic-version=2"))
+
+def test_virt_gicv3(self):
+"""
+:avocado: tags=machine:gic-version=3
+"""
+
+self.run_rr(shift=3,
+args=(*self.get_common_args(),
+  "-machine", "virt,gic-version=3"))




Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Alex Bennée


Dario Faggioli  writes:

> [[PGP Signed Part:Undecided]]
> On Fri, 2022-05-27 at 10:18 +0200, Claudio Fontana wrote:
>> On 5/27/22 9:26 AM, Dario Faggioli wrote:
>> > > 
>> > Yes, this kind of matches what I've also seen and reported about in
>> > <5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If
>> > enable/run just one of:
>> > - reconnect
>> > - flags_mismatch
>> > - connect_fail
>> > 
>> > I see no issues.
>> 
>> On the countrary, for me just running a single one of those can fail.
>> 
> Well, but you said (or at least so I understood) that running the test
> for the first time works.
>
> Then, when you run it multiple times, things start to fail.
>
> That was, in fact, my point... I was making the parallelism between the
> fact running only one of those tests works for me and the fact that
> running the test for the first time works for you too.

Hmm so the qos-test is a bit weird as it:

 - forks itself to run a single subtest (g_test_trap_subprocess)
 - forks itself again to provide the dummy vhost-user daemon
 - as well as the fork/execve for qemu itself

while all the paths used for communication should be unique, I wouldn't
be surprised if there is a racy interaction or two in the whole thing.
We even see a bit of this in the fact that we don't cleanly tear stuff down,
so QEMU sees the vhost-user socket disappear under its feet.

>
> And between the fact that running two tests, one after the other, fails
> for me and the fact that running the same tests multiple times fails
> for you too.
>
> :-)
>
>> > However, Claudio, AFAIUI, you're seeing this with an older GCC and
>> > without LTO, right?
>> 
>> Yes, to provide a different angle I tried on veteran OpenSUSE Leap
>> 15.2, so gcc is based on 7.5.0.
>> 
>> I don't think LTO is being used in any way.
>> 
> Yep, agreed. Now I don't think it's related to LTO specifically either.
>
> Although, it's at least a bit of a Heisenbug. I mean, we're seeing it
> (with two different setups), but for others, things work fine, I guess?
>
> Regards


-- 
Alex Bennée



Re: [PATCH] ppc: fix boot with sam460ex

2022-05-27 Thread Michael S. Tsirkin
On Fri, May 27, 2022 at 12:46:57PM +0200, BALATON Zoltan wrote:
> Hello,
> 
> Some changes to commit message (patch is OK).

Want to write the commit message for me then?


> On Thu, 26 May 2022, Michael S. Tsirkin wrote:
> > Recent changes to pcie_host corrected size of its internal region to
> > match what it expects - only the low 28 bits are ever decoded. Previous
> > code just ignored bit 29 (if size was 1 << 29) in the address which does
> > not make much sense.  We are now asserting on size > 1 << 28 instead,
> > but it so happened that ppc actually allows guest to configure as large
> > a size as it wants to, and current firmware set it to 1 << 29.
> > 
> > With just qemu-system-ppc -M sam460ex this triggers an assert which
> > seems to happen when the guest (board firmware?) writes a value to
> > CFGMSK reg:
> > 
> > (gdb) bt
> 
> Backtrace is missing, but you could just drop this line and replace the ":"
> with a "." at the end of the previous line, as we probably don't need the full
> backtrace; the commit message is already too long in my opinion.
> 
> > This is done in the board firmware here:
> > 
> > https://git.qemu.org/?p=u-boot-sam460ex.git;a=blob;f=arch/powerpc/cpu/ppc4xx/4xx_pcie.c;h=13348be93dccc74c13ea043d6635a7f8ece4b5f0;hb=HEAD
> > 
> > when trying to map config space.
> > 
> > Note that what firmware does matches
> > https://www.hardware.com.br/comunidade/switch-cisco/1128380/
> 
> That's not it. It's a different hardware and firmware, just quoted it as an
> example that this value seems to be common to that SoC even on different
> hardware/OS/firmware (probably comes from reference hardware/devel board?).
> The sam460ex is here
> 
> https://www.acube-systems.biz/index.php?page=hardware&pid=5
> 
> the U-Boot in above repo is matching the firmware from the acube page but I
> had to fix some bugs in it to make it compile and work.
> 
> Otherwise this should be OK.
> 
> Regards,
> BALATON Zoltan
> 
> > So it's not clear what the proper fix should be.
> > 
> > However, allowing guest to trigger an assert in qemu is not good practice 
> > anyway.
> > 
> > For now let's just force the mask to 256MB on guest write, this way
> > anything outside the expected address range is ignored.
> > 
> > Fixes: commit 1f1a7b2269 ("include/hw/pci/pcie_host: Correct 
> > PCIE_MMCFG_SIZE_MAX")
> > Reviewed-by: BALATON Zoltan 
> > Tested-by: BALATON Zoltan 
> > Signed-off-by: Michael S. Tsirkin 
> > ---
> > 
> > Affected system is orphan so I guess I will merge the patch unless
> > someone objects.
> > 
> > hw/ppc/ppc440_uc.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> > 
> > diff --git a/hw/ppc/ppc440_uc.c b/hw/ppc/ppc440_uc.c
> > index 993e3ba955..a1ecf6dd1c 100644
> > --- a/hw/ppc/ppc440_uc.c
> > +++ b/hw/ppc/ppc440_uc.c
> > @@ -1180,6 +1180,14 @@ static void dcr_write_pcie(void *opaque, int dcrn, 
> > uint32_t val)
> > case PEGPL_CFGMSK:
> > s->cfg_mask = val;
> > size = ~(val & 0xfffffffe) + 1;
> > +/*
> > + * Firmware sets this register to 0xE0000001. Why, we are not sure,
> > + * but the current guess is anything above PCIE_MMCFG_SIZE_MAX is
> > + * ignored.
> > + */
> > +if (size > PCIE_MMCFG_SIZE_MAX) {
> > +size = PCIE_MMCFG_SIZE_MAX;
> > +}
> > pcie_host_mmcfg_update(PCIE_HOST_BRIDGE(s), val & 1, s->cfg_base, 
> > size);
> > break;
> > case PEGPL_MSGBAH:
> > 




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 12:20:39PM +0200, Peter Krempa wrote:
> On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
> > Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> > exposed in virsh. It is not especially pleasant, however, using the raw
> > QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> > exposes a human friendly interactive shell. It is not possible to use
> > this with a libvirt managed guest, however, since only one client can
> > attach to the QMP socket at any point in time.
> > 
> > The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
> > socket and listens for incoming client connections, speaking QMP on
> > the connected socket. It will forward any QMP commands received onto
> > the running libvirt QEMU guest, and forward any replies back to the
> > QMP client.
> > 
> >   $ virsh start demo
> >   $ virt-qmp-proxy demo demo.qmp &
> >   $ qmp-shell demo.qmp
> >   Welcome to the QMP low-level shell!
> >   Connected to QEMU 6.2.0
> > 
> >   (QEMU) query-kvm
> >   {
> >   "return": {
> >   "enabled": true,
> >   "present": true
> >   }
> >   }
> > 
> > Note this tool of course has the same risks as the raw libvirt
> > QMP passthrough. It is safe to run query commands to fetch information
> > but commands which change the QEMU state risk disrupting libvirt's
> > management of QEMU, potentially resulting in data loss/corruption in
> > the worst case.
> > 
> > Signed-off-by: Daniel P. Berrangé 
> > ---
> > 
> > CC'ing QEMU since this is likely of interest to maintainers and users
> > who work with QEMU and libvirt
> > 
> > Note this impl is fairly crude in that it assumes it is receiving
> > the QMP commands linewise one at a time. None the less it is good
> > enough to work with qmp-shell already, so I figured it was worth
> > exposing to the world. It also lacks support for forwarding events
> > back to the QMP client.
> 
> I originally wanted to teach the qemu tools to work with libvirt
> directly similarly how 'scripts/render_block_graph.py' from the qemu
> tree already does but I guess this is also an option.

Yes, I do wonder about whether with John's new QMP python APIs,
it would be possible to plug in a libvirt transport instead of
the socket transport. I've not spent enough time looking at the
Python QMP code to know if that's viable or not though.

> This is an option too albeit a bit more complex to set up, but on the
> other hand a bit more universal.

The two approaches aren't mutually exclusive either. There's no
reason we can't have both options.


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 12:35:45PM +0200, Claudio Fontana wrote:
> On 5/27/22 12:20 PM, Peter Krempa wrote:
> > On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
> >> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> >> exposed in virsh. It is not especially pleasant, however, using the raw
> >> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> >> exposes a human friendly interactive shell. It is not possible to use
> >> this with a libvirt managed guest, however, since only one client can
> >> attach to the QMP socket at any point in time.
> >>
> >> The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
> >> socket and listens for incoming client connections, speaking QMP on
> >> the connected socket. It will forward any QMP commands received onto
> >> the running libvirt QEMU guest, and forward any replies back to the
> >> QMP client.
> >>
> >>   $ virsh start demo
> >>   $ virt-qmp-proxy demo demo.qmp &
> >>   $ qmp-shell demo.qmp
> >>   Welcome to the QMP low-level shell!
> >>   Connected to QEMU 6.2.0
> >>
> >>   (QEMU) query-kvm
> >>   {
> >>   "return": {
> >>   "enabled": true,
> >>   "present": true
> >>   }
> >>   }
> >>
> >> Note this tool of course has the same risks as the raw libvirt
> >> QMP passthrough. It is safe to run query commands to fetch information
> >> but commands which change the QEMU state risk disrupting libvirt's
> >> management of QEMU, potentially resulting in data loss/corruption in
> >> the worst case.
> >>
> >> Signed-off-by: Daniel P. Berrangé 
> >> ---
> >>
> >> CC'ing QEMU since this is likely of interest to maintainers and users
> >> who work with QEMU and libvirt
> >>
> >> Note this impl is fairly crude in that it assumes it is receiving
> >> the QMP commands linewise one at a time. None the less it is good
> >> enough to work with qmp-shell already, so I figured it was worth
> >> exposing to the world. It also lacks support for forwarding events
> >> back to the QMP client.
> > 
> > I originally wanted to teach the qemu tools to work with libvirt
> > directly similarly how 'scripts/render_block_graph.py' from the qemu
> > tree already does but I guess this is also an option.
> > 
> > This is an option too albeit a bit more complex to set up, but on the
> > other hand a bit more universal.
> > 
> > I'll have a look at the code a bit later.
> > 
> 
> Would have found it useful, at the time I wrote the multifd save series I 
> ended up just scripting around virsh qemu-monitor-command from either bash or 
> C.
> 
> One challenge I had to face with fd migration was that when doing
> 
> "execute": "getfd", "arguments": {"fdname":"migrate"}
> 
> we have to use the --pass-fds=N option to pass the FD.
> 
> Does the virt-qmp-proxy tool consider the passing of FDs issue?

My impl here doesn't try to support FD passing, but it is conceptually
possible for us to support it given the new libvirt API Peter added
a few months back to allow monitor passthrough with FDs.

With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 1/1] nbd: trace long NBD operations

2022-05-27 Thread Denis V. Lunev

On 27.05.2022 11:33, Vladimir Sementsov-Ogievskiy wrote:

On 5/27/22 11:43, Denis V. Lunev wrote:

At the moment there are 2 sources of lengthy operations, if configured:
* open connection, which could retry inside, and
* reconnect of an already opened connection.
These operations could be quite lengthy and cumbersome to catch; thus
it would be quite natural to add trace points for them.

This patch is based on the original downstream work made by Vladimir.
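The shape of the tracing being added (a begin trace point carrying the in-flight count, and an end trace point carrying the result) can be modeled as follows; this is illustrative Python, not the QEMU trace backend:

```python
# Model of begin/end trace points around a potentially lengthy
# reconnect attempt, so its duration is visible in the trace log.
trace_log = []

def trace(event, **args):
    """Stand-in for a generated trace function; records the event."""
    trace_log.append((event, args))

def reconnect_attempt(do_connect, in_flight):
    trace("nbd_reconnect_attempt", in_flight=in_flight)
    ret = do_connect()  # the lengthy operation being bracketed
    trace("nbd_reconnect_attempt_result", ret=ret, in_flight=in_flight)
    return ret

# a successful attempt leaves a begin/end pair in the log
assert reconnect_attempt(lambda: 0, in_flight=1) == 0
assert [e for e, _ in trace_log] == ["nbd_reconnect_attempt",
                                     "nbd_reconnect_attempt_result"]
```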

Signed-off-by: Denis V. Lunev 
CC: Eric Blake 
CC: Vladimir Sementsov-Ogievskiy 
CC: Kevin Wolf 
CC: Hanna Reitz 
CC: Paolo Bonzini 
---
  block/nbd.c | 11 ---
  block/trace-events  |  2 ++
  nbd/client-connection.c |  2 ++
  nbd/trace-events    |  3 +++
  4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6085ab1d2c..f1a473d36b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -371,6 +371,7 @@ static bool nbd_client_connecting(BDRVNBDState *s)
  /* Called with s->requests_lock taken.  */
  static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
  {
+    int ret;
  bool blocking = s->state == NBD_CLIENT_CONNECTING_WAIT;
    /*
@@ -380,6 +381,8 @@ static coroutine_fn void 
nbd_reconnect_attempt(BDRVNBDState *s)

  assert(nbd_client_connecting(s));
  assert(s->in_flight == 1);
  +    trace_nbd_reconnect_attempt(s->bs->in_flight);
+
  if (blocking && !s->reconnect_delay_timer) {
  /*
   * It's the first reconnect attempt after switching to
@@ -401,7 +404,7 @@ static coroutine_fn void 
nbd_reconnect_attempt(BDRVNBDState *s)

  }
    qemu_mutex_unlock(&s->requests_lock);
-    nbd_co_do_establish_connection(s->bs, blocking, NULL);
+    ret = nbd_co_do_establish_connection(s->bs, blocking, NULL);
  qemu_mutex_lock(&s->requests_lock);
    /*
@@ -410,6 +413,8 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)

   * this I/O request (so draining removes all timers).
   */
  reconnect_delay_timer_del(s);
+
+    trace_nbd_reconnect_attempt_result(ret, s->bs->in_flight);


Maybe better to trace exactly after nbd_co_do_establish_connection().
Doesn't really matter, just simpler code.



ya, I'll change this



  }
  static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
@@ -1856,8 +1861,8 @@ static int nbd_process_options(BlockDriverState *bs, QDict *options,

  goto error;
  }
  -    s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 0);
-    s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 0);
+    s->reconnect_delay = qemu_opt_get_number(opts, "reconnect-delay", 300);
+    s->open_timeout = qemu_opt_get_number(opts, "open-timeout", 300);


That's changing defaults. It should not be in this patch. And I don't
think we can simply change the upstream default of open-timeout, as it
breaks established behavior.



whoops :( used for testing, cut it before sending, but somehow
it went out anyway.

Thanks for pointing this out.



    ret = 0;
  diff --git a/block/trace-events b/block/trace-events
index 549090d453..caab699c22 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -172,6 +172,8 @@ nbd_read_reply_entry_fail(int ret, const char *err) "ret = %d, err: %s"
  nbd_co_request_fail(uint64_t from, uint32_t len, uint64_t handle, uint16_t flags, uint16_t type, const char *name, int ret, const char *err) "Request failed { .from = %" PRIu64", .len = %" PRIu32 ", .handle = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
  nbd_client_handshake(const char *export_name) "export '%s'"
  nbd_client_handshake_success(const char *export_name) "export '%s'"
+nbd_reconnect_attempt(int in_flight) "in_flight %d"
+nbd_reconnect_attempt_result(int ret, int in_flight) "ret %d in_flight %d"


bs->in_flight is "unsigned int", so it would be a bit better to use
"unsigned int" and "%u" here



noted



    # ssh.c
  ssh_restart_coroutine(void *co) "co=%p"
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 2a632931c3..a5ee82e804 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -23,6 +23,7 @@
   */
    #include "qemu/osdep.h"
+#include "trace.h"
    #include "block/nbd.h"
  @@ -210,6 +211,7 @@ static void *connect_thread_func(void *opaque)
  object_unref(OBJECT(conn->sioc));
  conn->sioc = NULL;
  if (conn->do_retry && !conn->detached) {
+    trace_nbd_connect_iteration(timeout);


Here we are going to sleep a bit before the next reconnect attempt. I'd
call the trace point "trace_nbd_connect_thread_sleep" or something like
that to be more intuitive.



ok


qemu_mutex_unlock(&conn->mutex);
    sleep(timeout);
diff --git a/nbd/trace-events b/nbd/trace-events
index c4919a2dd5..bdadfdc82d 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -73,3 +73,6 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t type, const char *n

Re: [PATCH 1/1] nbd: trace long NBD operations

2022-05-27 Thread Denis V. Lunev

On 27.05.2022 11:36, Vladimir Sementsov-Ogievskiy wrote:

On 5/27/22 11:43, Denis V. Lunev wrote:

+++ b/nbd/client-connection.c
@@ -23,6 +23,7 @@
   */
    #include "qemu/osdep.h"
+#include "trace.h"
    #include "block/nbd.h"
  @@ -210,6 +211,7 @@ static void *connect_thread_func(void *opaque)
  object_unref(OBJECT(conn->sioc));
  conn->sioc = NULL;
  if (conn->do_retry && !conn->detached) {
+    trace_nbd_connect_iteration(timeout);
  qemu_mutex_unlock(&conn->mutex);
    sleep(timeout);
diff --git a/nbd/trace-events b/nbd/trace-events
index c4919a2dd5..bdadfdc82d 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -73,3 +73,6 @@ nbd_co_receive_request_decode_type(uint64_t handle, uint16_t type, const char *n
  nbd_co_receive_request_payload_received(uint64_t handle, uint32_t len) "Payload received: handle = %" PRIu64 ", len = %" PRIu32
  nbd_co_receive_align_compliance(const char *op, uint64_t from, uint32_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx32 ", align=0x%" PRIx32

  nbd_trip(void) "Reading request"
+
+# client-connection.c
+nbd_connect_iteration(int in_flight) "timeout %d"


timeout is uint64_t, so it should be "uint64_t timeout" here and "%" PRIu64



Thanks! will change



Re: [PATCH 1/4] qdev: add DEVICE_RUNTIME_ERROR event

2022-05-27 Thread Roman Kagan
On Wed, May 25, 2022 at 12:54:47PM +0200, Markus Armbruster wrote:
> Konstantin Khlebnikov  writes:
> 
> > This event represents device runtime errors, giving the time and
> > reason why a device is broken.
> 
> Can you give one or more examples of the "device runtime errors" you have
> in mind?

Initially we wanted to address a situation when a vhost device
discovered an inconsistency during virtqueue processing and silently
stopped the virtqueue.  This resulted in device stall (partial for
multiqueue devices) and we were the last to notice that.

The solution appeared to be to employ errfd and, upon receiving a
notification through it, to emit a QMP event which is actionable in the
management layer or further up the stack.

Then we observed that virtio (non-vhost) devices suffer from the same
issue: they only log the error but don't signal it to the management
layer.  The case was very similar so we thought it would make sense to
share the infrastructure and the QMP event between virtio and vhost.

Then Konstantin went a bit further and generalized the concept into
generic "device runtime error".  I'm personally not completely convinced
this generalization is appropriate here; we'd appreciate the opinions
from the community on the matter.

HTH,
Roman.



Re: [PATCH 0/5] gitlab: restrict running jobs in forks and upstream master

2022-05-27 Thread Alex Bennée


Daniel P. Berrangé  writes:

> Currently on upstream most jobs will run in both staging
> and master. This is quite wasteful of CI credits. The only
> need to run in master is for the jobs related to publishing
> the website
>
> In forks we run jobs on every push. With restricted CI
> allowance this is quickly going to cause problems.
>
> With this series jobs will no longer run on forks at all,
> without an opt-in with QEMU_CI=1 (pipeline with manual
> jobs) or QEMU_CI=2 (pipeline with immediate jobs)
>
> This is a rewrite of a previous proposal:
>
> https://lists.nongnu.org/archive/html/qemu-devel/2021-08/msg02104.html
>
> where I've kept it simpler and also split up the patches
> into more understandable chunks

Queued to testing/next, thanks.

I'll fix up the comment and move some stuff into the rst.

>
> Daniel P. Berrangé (5):
>   gitlab: introduce a common base job template
>   gitlab: convert Cirrus jobs to .base_job_template
>   gitlab: convert static checks to .base_job_template
>   gitlab: convert build/container jobs to .base_job_template
>   gitlab: don't run CI jobs in forks by default
>
>  .gitlab-ci.d/base.yml| 72 +++
>  .gitlab-ci.d/buildtest-template.yml  | 16 ++---
>  .gitlab-ci.d/buildtest.yml   | 28 -
>  .gitlab-ci.d/cirrus.yml  | 16 ++---
>  .gitlab-ci.d/container-cross.yml |  6 +-
>  .gitlab-ci.d/container-template.yml  |  1 +
>  .gitlab-ci.d/crossbuild-template.yml |  3 +
>  .gitlab-ci.d/qemu-project.yml|  1 +
>  .gitlab-ci.d/static_checks.yml   | 19 +++---
>  .gitlab-ci.d/windows.yml |  1 +
>  docs/devel/ci-jobs.rst.inc   | 88 +++-
>  11 files changed, 199 insertions(+), 52 deletions(-)
>  create mode 100644 .gitlab-ci.d/base.yml



-- 
Alex Bennée
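
For readers unfamiliar with how such gating is expressed, here is a rough, hypothetical sketch of a GitLab `rules:` block in the spirit of this series. The names `.base_job_template` and `QEMU_CI` come from the series itself, but the exact conditions below are illustrative, not the actual contents of base.yml:

```yaml
# Hypothetical sketch only -- not the real .gitlab-ci.d/base.yml.
.base_job_template:
  rules:
    # In forks, never create jobs unless the user opts in via QEMU_CI.
    - if: '$CI_PROJECT_NAMESPACE != "qemu-project" && $QEMU_CI != "1" && $QEMU_CI != "2"'
      when: never
    # QEMU_CI=1: create the pipeline, but leave jobs to be started manually.
    - if: '$CI_PROJECT_NAMESPACE != "qemu-project" && $QEMU_CI == "1"'
      when: manual
    # Otherwise (QEMU_CI=2 in forks, or upstream pipelines), run jobs
    # automatically.
    - when: on_success
```

Such variables can typically be supplied per push with GitLab push options, e.g. `git push -o ci.variable=QEMU_CI=1`, though the exact workflow documented for QEMU lives in docs/devel/ci-jobs.rst.inc.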



Re: [PATCH] tests/tcg/s390x: Test overflow conditions

2022-05-27 Thread Alex Bennée


Gautam Agrawal  writes:

> Add a test to check for overflow conditions in s390x.
> This patch is based on the following patches :
> * https://git.qemu.org/?p=qemu.git;a=commitdiff;h=5a2e67a691501
> * https://git.qemu.org/?p=qemu.git;a=commitdiff;h=fc6e0d0f2db51
>  
> Signed-off-by: Gautam Agrawal 

Acked-by: Alex Bennée 

-- 
Alex Bennée



[PULL 13/15] qga/wixl: simplify some pre-processing

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Sadly, wixl doesn't have 'elif'.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-14-marcandre.lur...@redhat.com>
---
 qga/installer/qemu-ga.wxs | 20 +++-
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 651db6e51c..e5b0958e18 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -1,21 +1,15 @@
 
 http://schemas.microsoft.com/wix/2006/wi";>
-  
-
-  
-
   
 
 
-  
-
-  
-
-
-  
-
-  
-
+  
+
+  
+  
+
+  
+
   
 
   

Re: [PATCH v2] gitlab-ci: Switch the container of the 'check-patch' & 'check-dco' jobs

2022-05-27 Thread Alex Bennée


Thomas Huth  writes:

> The 'check-patch' and 'check-dco' jobs only need Python and git for
> checking the patches, so it's not really necessary to use a container
> here that has all the other build dependencies installed. By using a
> lightweight Alpine container, we can improve the runtime here quite a
> bit, cutting it down from ca. 1:30 minutes to ca. 45 seconds.
>
> Suggested-by: Daniel P. Berrangé 
> Signed-off-by: Thomas Huth 

Queued to testing/next, thanks.

-- 
Alex Bennée



[PULL 00/15] Misc patches

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

The following changes since commit 2417cbd5916d043e0c56408221fbe9935d0bc8da:

  Merge tag 'ak-pull-request' of https://gitlab.com/berrange/qemu into staging 
(2022-05-26 07:00:04 -0700)

are available in the Git repository at:

  g...@gitlab.com:marcandre.lureau/qemu.git tags/misc-pull-request

for you to fetch changes up to 71a56d6afc28b4175fedb0892e088e67f1d603f1:

  test/qga: use g_auto wherever sensible (2022-05-27 15:40:20 +0200)


Misc cleanups

Mostly qemu-ga related cleanups.



Marc-André Lureau (15):
  include: move qemu_*_exec_dir() to cutils
  util/win32: simplify qemu_get_local_state_dir()
  tests: make libqmp buildable for win32
  qga: flatten safe_open_or_create()
  qga: add qga_open_cloexec() helper
  qga: use qga_open_cloexec() for safe_open_or_create()
  qga: throw an Error in ga_channel_open()
  qga: replace qemu_open_old() with qga_open_cloexec()
  qga: make build_fs_mount_list() return a bool
  test/qga: use G_TEST_DIR to locate os-release test file
  qga/wixl: prefer variables over environment
  qga/wixl: require Mingw_bin
  qga/wixl: simplify some pre-processing
  qga/wixl: replace QEMU_GA_MSI_MINGW_BIN_PATH with glib bindir
  test/qga: use g_auto wherever sensible

 configure|   9 +-
 meson.build  |   5 +-
 include/qemu/cutils.h|   7 ++
 include/qemu/osdep.h |   8 --
 qga/cutils.h |   8 ++
 tests/qtest/libqmp.h |   2 +
 qemu-io.c|   1 +
 qga/channel-posix.c  |  55 +-
 qga/commands-posix.c | 154 +--
 qga/cutils.c |  33 ++
 storage-daemon/qemu-storage-daemon.c |   1 +
 tests/qtest/fuzz/fuzz.c  |   1 +
 tests/qtest/libqmp.c |  34 +-
 tests/unit/test-qga.c| 130 --
 util/cutils.c| 108 +++
 util/oslib-posix.c   |  81 --
 util/oslib-win32.c   |  53 +
 qga/installer/qemu-ga.wxs|  83 +--
 qga/meson.build  |  12 +--
 19 files changed, 385 insertions(+), 400 deletions(-)
 create mode 100644 qga/cutils.h
 create mode 100644 qga/cutils.c

-- 
2.36.1




[PULL 02/15] util/win32: simplify qemu_get_local_state_dir()

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

SHGetFolderPath() is a deprecated API:
https://docs.microsoft.com/en-us/windows/win32/api/shlobj_core/nf-shlobj_core-shgetfolderpatha

It is a wrapper for SHGetKnownFolderPath() and CSIDL_COMMON_PATH is
mapped to FOLDERID_ProgramData:
https://docs.microsoft.com/en-us/windows/win32/shell/csidl

g_get_system_data_dirs() is a suitable replacement, as it will have
FOLDERID_ProgramData in the returned list. However, it follows the XDG
Base Directory Specification: if `XDG_DATA_DIRS` is defined, its
entries will be returned instead.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Stefan Weil 
Message-Id: <20220525144140.591926-3-marcandre.lur...@redhat.com>
---
 util/oslib-win32.c | 17 -
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/util/oslib-win32.c b/util/oslib-win32.c
index 6c818749d2..5723d3eb4c 100644
--- a/util/oslib-win32.c
+++ b/util/oslib-win32.c
@@ -40,9 +40,6 @@
 #include "qemu/error-report.h"
 #include 
 
-/* this must come after including "trace.h" */
-#include 
-
 static int get_allocation_granularity(void)
 {
 SYSTEM_INFO system_info;
@@ -237,17 +234,11 @@ int qemu_get_thread_id(void)
 char *
 qemu_get_local_state_dir(void)
 {
-HRESULT result;
-char base_path[MAX_PATH+1] = "";
+const char * const *data_dirs = g_get_system_data_dirs();
 
-result = SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA, NULL,
- /* SHGFP_TYPE_CURRENT */ 0, base_path);
-if (result != S_OK) {
-/* misconfigured environment */
-g_critical("CSIDL_COMMON_APPDATA unavailable: %ld", (long)result);
-abort();
-}
-return g_strdup(base_path);
+g_assert(data_dirs && data_dirs[0]);
+
+return g_strdup(data_dirs[0]);
 }
 
 void qemu_set_tty_echo(int fd, bool echo)
-- 
2.36.1




[PULL 05/15] qga: add qga_open_cloexec() helper

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

QGA calls qemu_open_old() in various places. Calling qemu_open() instead
isn't a great alternative, as it has special "/dev/fdset" handling and
depends on QEMU internal monitor data structures.

Instead, provide a simple helper for QGA needs, with Error* support. The
following patches will make use of it.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Markus Armbruster 
Message-Id: <20220525144140.591926-6-marcandre.lur...@redhat.com>
---
 qga/cutils.h|  8 
 qga/cutils.c| 33 +
 qga/meson.build |  1 +
 3 files changed, 42 insertions(+)
 create mode 100644 qga/cutils.h
 create mode 100644 qga/cutils.c

diff --git a/qga/cutils.h b/qga/cutils.h
new file mode 100644
index 00..f0f30a7d28
--- /dev/null
+++ b/qga/cutils.h
@@ -0,0 +1,8 @@
+#ifndef CUTILS_H_
+#define CUTILS_H_
+
+#include "qemu/osdep.h"
+
+int qga_open_cloexec(const char *name, int flags, mode_t mode);
+
+#endif /* CUTILS_H_ */
diff --git a/qga/cutils.c b/qga/cutils.c
new file mode 100644
index 00..b8e142ef64
--- /dev/null
+++ b/qga/cutils.c
@@ -0,0 +1,33 @@
+/*
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "cutils.h"
+
+#include "qapi/error.h"
+
+/**
+ * qga_open_cloexec:
+ * @name: the pathname to open
+ * @flags: as in open()
+ * @mode: as in open()
+ *
+ * A wrapper for open() function which sets O_CLOEXEC.
+ *
+ * On error, -1 is returned.
+ */
+int qga_open_cloexec(const char *name, int flags, mode_t mode)
+{
+int ret;
+
+#ifdef O_CLOEXEC
+ret = open(name, flags | O_CLOEXEC, mode);
+#else
+ret = open(name, flags, mode);
+if (ret >= 0) {
+qemu_set_cloexec(ret);
+}
+#endif
+
+return ret;
+}
diff --git a/qga/meson.build b/qga/meson.build
index 6d9f39bb32..35fe2229e9 100644
--- a/qga/meson.build
+++ b/qga/meson.build
@@ -65,6 +65,7 @@ qga_ss.add(files(
   'commands.c',
   'guest-agent-command-state.c',
   'main.c',
+  'cutils.c',
 ))
 qga_ss.add(when: 'CONFIG_POSIX', if_true: files(
   'channel-posix.c',
-- 
2.36.1




[PULL 07/15] qga: throw an Error in ga_channel_open()

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Allow for a single point of error reporting, and further refactoring.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Markus Armbruster 
Message-Id: <20220525144140.591926-8-marcandre.lur...@redhat.com>
---
 qga/channel-posix.c | 42 +-
 1 file changed, 17 insertions(+), 25 deletions(-)

diff --git a/qga/channel-posix.c b/qga/channel-posix.c
index a996858e24..25fcc5cb1a 100644
--- a/qga/channel-posix.c
+++ b/qga/channel-posix.c
@@ -119,7 +119,7 @@ static int ga_channel_client_add(GAChannel *c, int fd)
 }
 
 static gboolean ga_channel_open(GAChannel *c, const gchar *path,
-GAChannelMethod method, int fd)
+GAChannelMethod method, int fd, Error **errp)
 {
 int ret;
 c->method = method;
@@ -133,21 +133,20 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 #endif
);
 if (fd == -1) {
-g_critical("error opening channel: %s", strerror(errno));
+error_setg_errno(errp, errno, "error opening channel");
 return false;
 }
 #ifdef CONFIG_SOLARIS
 ret = ioctl(fd, I_SETSIG, S_OUTPUT | S_INPUT | S_HIPRI);
 if (ret == -1) {
-g_critical("error setting event mask for channel: %s",
-   strerror(errno));
+error_setg_errno(errp, errno, "error setting event mask for channel");
 close(fd);
 return false;
 }
 #endif
 ret = ga_channel_client_add(c, fd);
 if (ret) {
-g_critical("error adding channel to main loop");
+error_setg(errp, "error adding channel to main loop");
 close(fd);
 return false;
 }
@@ -159,7 +158,7 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 assert(fd < 0);
 fd = qemu_open_old(path, O_RDWR | O_NOCTTY | O_NONBLOCK);
 if (fd == -1) {
-g_critical("error opening channel: %s", strerror(errno));
+error_setg_errno(errp, errno, "error opening channel");
 return false;
 }
 tcgetattr(fd, &tio);
@@ -180,7 +179,7 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 tcsetattr(fd, TCSANOW, &tio);
 ret = ga_channel_client_add(c, fd);
 if (ret) {
-g_critical("error adding channel to main loop");
+error_setg(errp, "error adding channel to main loop");
 close(fd);
 return false;
 }
@@ -188,12 +187,8 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 }
 case GA_CHANNEL_UNIX_LISTEN: {
 if (fd < 0) {
-Error *local_err = NULL;
-
-fd = unix_listen(path, &local_err);
-if (local_err != NULL) {
-g_critical("%s", error_get_pretty(local_err));
-error_free(local_err);
+fd = unix_listen(path, errp);
+if (fd < 0) {
 return false;
 }
 }
@@ -202,24 +197,19 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 }
 case GA_CHANNEL_VSOCK_LISTEN: {
 if (fd < 0) {
-Error *local_err = NULL;
 SocketAddress *addr;
 char *addr_str;
 
 addr_str = g_strdup_printf("vsock:%s", path);
-addr = socket_parse(addr_str, &local_err);
+addr = socket_parse(addr_str, errp);
 g_free(addr_str);
-if (local_err != NULL) {
-g_critical("%s", error_get_pretty(local_err));
-error_free(local_err);
+if (!addr) {
 return false;
 }
 
-fd = socket_listen(addr, 1, &local_err);
+fd = socket_listen(addr, 1, errp);
 qapi_free_SocketAddress(addr);
-if (local_err != NULL) {
-g_critical("%s", error_get_pretty(local_err));
-error_free(local_err);
+if (fd < 0) {
 return false;
 }
 }
@@ -227,7 +217,7 @@ static gboolean ga_channel_open(GAChannel *c, const gchar *path,
 break;
 }
 default:
-g_critical("error binding/listening to specified socket");
+error_setg(errp, "error binding/listening to specified socket");
 return false;
 }
 
@@ -272,12 +262,14 @@ GIOStatus ga_channel_read(GAChannel *c, gchar *buf, gsize size, gsize *count)
 GAChannel *ga_channel_new(GAChannelMethod method, const gchar *path,
   int listen_fd, GAChannelCallback cb, gpointer opaque)
 {
+Error *err = NULL;
 GAChannel *c = g_new0(GAChannel, 1);
 c->event_cb = cb;
 c->user_data = opaque;
 
-if (!ga_channel_open(c, path, method, listen_fd)) {
-g_critical("error opening channel");
+if (!ga_channel_open(c, path, method, listen_fd, &err)) {
+g_critical(

[PULL 01/15] include: move qemu_*_exec_dir() to cutils

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

The function is required by get_relocated_path() (already in cutils),
is used by qemu-ga, and may be generally useful.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Markus Armbruster 
Message-Id: <20220525144140.591926-2-marcandre.lur...@redhat.com>
---
 include/qemu/cutils.h|   7 ++
 include/qemu/osdep.h |   8 --
 qemu-io.c|   1 +
 storage-daemon/qemu-storage-daemon.c |   1 +
 tests/qtest/fuzz/fuzz.c  |   1 +
 util/cutils.c| 108 +++
 util/oslib-posix.c   |  81 
 util/oslib-win32.c   |  36 -
 8 files changed, 118 insertions(+), 125 deletions(-)

diff --git a/include/qemu/cutils.h b/include/qemu/cutils.h
index 5c6572d444..40e10e19a7 100644
--- a/include/qemu/cutils.h
+++ b/include/qemu/cutils.h
@@ -193,6 +193,13 @@ int uleb128_decode_small(const uint8_t *in, uint32_t *n);
  */
 int qemu_pstrcmp0(const char **str1, const char **str2);
 
+/* Find program directory, and save it for later usage with
+ * qemu_get_exec_dir().
+ * Try OS specific API first, if not working, parse from argv0. */
+void qemu_init_exec_dir(const char *argv0);
+
+/* Get the saved exec dir.  */
+const char *qemu_get_exec_dir(void);
 
 /**
  * get_relocated_path:
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index a72e99db85..b1c161c035 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -557,14 +557,6 @@ void qemu_set_cloexec(int fd);
  */
 char *qemu_get_local_state_dir(void);
 
-/* Find program directory, and save it for later usage with
- * qemu_get_exec_dir().
- * Try OS specific API first, if not working, parse from argv0. */
-void qemu_init_exec_dir(const char *argv0);
-
-/* Get the saved exec dir.  */
-const char *qemu_get_exec_dir(void);
-
 /**
  * qemu_getauxval:
  * @type: the auxiliary vector key to lookup
diff --git a/qemu-io.c b/qemu-io.c
index d70d3dd4fd..2bd7bfb650 100644
--- a/qemu-io.c
+++ b/qemu-io.c
@@ -16,6 +16,7 @@
 #endif
 
 #include "qemu/help-texts.h"
+#include "qemu/cutils.h"
 #include "qapi/error.h"
 #include "qemu-io.h"
 #include "qemu/error-report.h"
diff --git a/storage-daemon/qemu-storage-daemon.c b/storage-daemon/qemu-storage-daemon.c
index 9b8b17f52e..c104817cdd 100644
--- a/storage-daemon/qemu-storage-daemon.c
+++ b/storage-daemon/qemu-storage-daemon.c
@@ -44,6 +44,7 @@
 
 #include "qemu/help-texts.h"
 #include "qemu-version.h"
+#include "qemu/cutils.h"
 #include "qemu/config-file.h"
 #include "qemu/error-report.h"
 #include "qemu/help_option.h"
diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c
index a7a5e14fa3..0ad4ba9e94 100644
--- a/tests/qtest/fuzz/fuzz.c
+++ b/tests/qtest/fuzz/fuzz.c
@@ -15,6 +15,7 @@
 
 #include 
 
+#include "qemu/cutils.h"
 #include "qemu/datadir.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/qtest.h"
diff --git a/util/cutils.c b/util/cutils.c
index b2777210e7..6cc7cc8cde 100644
--- a/util/cutils.c
+++ b/util/cutils.c
@@ -931,6 +931,114 @@ static inline const char *next_component(const char *dir, int *p_len)
 return dir;
 }
 
+static const char *exec_dir;
+
+void qemu_init_exec_dir(const char *argv0)
+{
+#ifdef G_OS_WIN32
+char *p;
+char buf[MAX_PATH];
+DWORD len;
+
+if (exec_dir) {
+return;
+}
+
+len = GetModuleFileName(NULL, buf, sizeof(buf) - 1);
+if (len == 0) {
+return;
+}
+
+buf[len] = 0;
+p = buf + len - 1;
+while (p != buf && *p != '\\') {
+p--;
+}
+*p = 0;
+if (access(buf, R_OK) == 0) {
+exec_dir = g_strdup(buf);
+} else {
+exec_dir = CONFIG_BINDIR;
+}
+#else
+char *p = NULL;
+char buf[PATH_MAX];
+
+if (exec_dir) {
+return;
+}
+
+#if defined(__linux__)
+{
+int len;
+len = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
+if (len > 0) {
+buf[len] = 0;
+p = buf;
+}
+}
+#elif defined(__FreeBSD__) \
+  || (defined(__NetBSD__) && defined(KERN_PROC_PATHNAME))
+{
+#if defined(__FreeBSD__)
+static int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PATHNAME, -1};
+#else
+static int mib[4] = {CTL_KERN, KERN_PROC_ARGS, -1, KERN_PROC_PATHNAME};
+#endif
+size_t len = sizeof(buf) - 1;
+
+*buf = '\0';
+if (!sysctl(mib, ARRAY_SIZE(mib), buf, &len, NULL, 0) &&
+*buf) {
+buf[sizeof(buf) - 1] = '\0';
+p = buf;
+}
+}
+#elif defined(__APPLE__)
+{
+char fpath[PATH_MAX];
+uint32_t len = sizeof(fpath);
+if (_NSGetExecutablePath(fpath, &len) == 0) {
+p = realpath(fpath, buf);
+if (!p) {
+return;
+}
+}
+}
+#elif defined(__HAIKU__)
+{
+image_info ii;
+int32_t c = 0;
+
+*buf = '\0';
+while (get_next_image_info(0, &c, &ii) == B_OK) {
+if (

[PULL 03/15] tests: make libqmp buildable for win32

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Signed-off-by: Marc-André Lureau 
Reviewed-by: Thomas Huth 
Message-Id: <20220525144140.591926-4-marcandre.lur...@redhat.com>
---
 tests/qtest/libqmp.h |  2 ++
 tests/qtest/libqmp.c | 34 +-
 2 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/tests/qtest/libqmp.h b/tests/qtest/libqmp.h
index 5cb7eeaa18..3445b753ff 100644
--- a/tests/qtest/libqmp.h
+++ b/tests/qtest/libqmp.h
@@ -21,8 +21,10 @@
 #include "qapi/qmp/qdict.h"
 
 QDict *qmp_fd_receive(int fd);
+#ifndef _WIN32
 void qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
   const char *fmt, va_list ap) G_GNUC_PRINTF(4, 0);
+#endif
 void qmp_fd_vsend(int fd, const char *fmt, va_list ap) G_GNUC_PRINTF(2, 0);
 void qmp_fd_send(int fd, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
 void qmp_fd_send_raw(int fd, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
diff --git a/tests/qtest/libqmp.c b/tests/qtest/libqmp.c
index 0358b8313d..ade26c15f0 100644
--- a/tests/qtest/libqmp.c
+++ b/tests/qtest/libqmp.c
@@ -18,6 +18,11 @@
 
 #include "libqmp.h"
 
+#ifndef _WIN32
+#include 
+#endif
+
+#include "qemu/cutils.h"
 #include "qapi/error.h"
 #include "qapi/qmp/json-parser.h"
 #include "qapi/qmp/qjson.h"
@@ -87,6 +92,7 @@ QDict *qmp_fd_receive(int fd)
 return qmp.response;
 }
 
+#ifndef _WIN32
 /* Sends a message and file descriptors to the socket.
  * It's needed for qmp-commands like getfd/add-fd */
 static void socket_send_fds(int socket_fd, int *fds, size_t fds_num,
@@ -120,17 +126,23 @@ static void socket_send_fds(int socket_fd, int *fds, size_t fds_num,
 } while (ret < 0 && errno == EINTR);
 g_assert_cmpint(ret, >, 0);
 }
+#endif
 
 /**
  * Allow users to send a message without waiting for the reply,
  * in the case that they choose to discard all replies up until
  * a particular EVENT is received.
  */
-void qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
-  const char *fmt, va_list ap)
+static void
+_qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
+  const char *fmt, va_list ap)
 {
 QObject *qobj;
 
+#ifdef _WIN32
+assert(fds_num == 0);
+#endif
+
 /* Going through qobject ensures we escape strings properly */
 qobj = qobject_from_vjsonf_nofail(fmt, ap);
 
@@ -148,10 +160,14 @@ void qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
 if (log) {
 fprintf(stderr, "%s", str->str);
 }
+
+#ifndef _WIN32
 /* Send QMP request */
 if (fds && fds_num > 0) {
 socket_send_fds(fd, fds, fds_num, str->str, str->len);
-} else {
+} else
+#endif
+{
 socket_send(fd, str->str, str->len);
 }
 
@@ -160,15 +176,23 @@ void qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
 }
 }
 
+#ifndef _WIN32
+void qmp_fd_vsend_fds(int fd, int *fds, size_t fds_num,
+  const char *fmt, va_list ap)
+{
+_qmp_fd_vsend_fds(fd, fds, fds_num, fmt, ap);
+}
+#endif
+
 void qmp_fd_vsend(int fd, const char *fmt, va_list ap)
 {
-qmp_fd_vsend_fds(fd, NULL, 0, fmt, ap);
+_qmp_fd_vsend_fds(fd, NULL, 0, fmt, ap);
 }
 
 
 QDict *qmp_fdv(int fd, const char *fmt, va_list ap)
 {
-qmp_fd_vsend_fds(fd, NULL, 0, fmt, ap);
+_qmp_fd_vsend_fds(fd, NULL, 0, fmt, ap);
 
 return qmp_fd_receive(fd);
 }
-- 
2.36.1




[PULL 12/15] qga/wixl: require Mingw_bin

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

No clear reason to make guesses here.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-13-marcandre.lur...@redhat.com>
---
 qga/installer/qemu-ga.wxs | 9 -
 1 file changed, 9 deletions(-)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 8a19aa1656..651db6e51c 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -4,15 +4,6 @@
 
   
 
-  
-
-  
-
-
-  
-
-  
-
   
 
 
-- 
2.36.1




[PULL 04/15] qga: flatten safe_open_or_create()

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

There is a bit too much nesting in the function; it can be simplified
a bit to improve readability.

This also helps with the following error handling changes.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Markus Armbruster 
Message-Id: <20220525144140.591926-5-marcandre.lur...@redhat.com>
---
 qga/commands-posix.c | 120 +--
 1 file changed, 60 insertions(+), 60 deletions(-)

diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 12b50b7124..3b2392398e 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -339,73 +339,73 @@ find_open_flag(const char *mode_str, Error **errp)
 static FILE *
 safe_open_or_create(const char *path, const char *mode, Error **errp)
 {
-Error *local_err = NULL;
 int oflag;
+int fd = -1;
+FILE *f = NULL;
+
+oflag = find_open_flag(mode, errp);
+if (oflag < 0) {
+goto end;
+}
+
+/* If the caller wants / allows creation of a new file, we implement it
+ * with a two step process: open() + (open() / fchmod()).
+ *
+ * First we insist on creating the file exclusively as a new file. If
+ * that succeeds, we're free to set any file-mode bits on it. (The
+ * motivation is that we want to set those file-mode bits independently
+ * of the current umask.)
+ *
+ * If the exclusive creation fails because the file already exists
+ * (EEXIST is not possible for any other reason), we just attempt to
+ * open the file, but in this case we won't be allowed to change the
+ * file-mode bits on the preexistent file.
+ *
+ * The pathname should never disappear between the two open()s in
+ * practice. If it happens, then someone very likely tried to race us.
+ * In this case just go ahead and report the ENOENT from the second
+ * open() to the caller.
+ *
+ * If the caller wants to open a preexistent file, then the first
+ * open() is decisive and its third argument is ignored, and the second
+ * open() and the fchmod() are never called.
+ */
+fd = open(path, oflag | ((oflag & O_CREAT) ? O_EXCL : 0), 0);
+if (fd == -1 && errno == EEXIST) {
+oflag &= ~(unsigned)O_CREAT;
+fd = open(path, oflag);
+}
+if (fd == -1) {
+error_setg_errno(errp, errno,
+ "failed to open file '%s' (mode: '%s')",
+ path, mode);
+goto end;
+}
 
-oflag = find_open_flag(mode, &local_err);
-if (local_err == NULL) {
-int fd;
-
-/* If the caller wants / allows creation of a new file, we implement it
- * with a two step process: open() + (open() / fchmod()).
- *
- * First we insist on creating the file exclusively as a new file. If
- * that succeeds, we're free to set any file-mode bits on it. (The
- * motivation is that we want to set those file-mode bits independently
- * of the current umask.)
- *
- * If the exclusive creation fails because the file already exists
- * (EEXIST is not possible for any other reason), we just attempt to
- * open the file, but in this case we won't be allowed to change the
- * file-mode bits on the preexistent file.
- *
- * The pathname should never disappear between the two open()s in
- * practice. If it happens, then someone very likely tried to race us.
- * In this case just go ahead and report the ENOENT from the second
- * open() to the caller.
- *
- * If the caller wants to open a preexistent file, then the first
- * open() is decisive and its third argument is ignored, and the second
- * open() and the fchmod() are never called.
- */
-fd = open(path, oflag | ((oflag & O_CREAT) ? O_EXCL : 0), 0);
-if (fd == -1 && errno == EEXIST) {
-oflag &= ~(unsigned)O_CREAT;
-fd = open(path, oflag);
-}
+qemu_set_cloexec(fd);
 
-if (fd == -1) {
-error_setg_errno(&local_err, errno, "failed to open file '%s' "
- "(mode: '%s')", path, mode);
-} else {
-qemu_set_cloexec(fd);
+if ((oflag & O_CREAT) && fchmod(fd, DEFAULT_NEW_FILE_MODE) == -1) {
+error_setg_errno(errp, errno, "failed to set permission "
+ "0%03o on new file '%s' (mode: '%s')",
+ (unsigned)DEFAULT_NEW_FILE_MODE, path, mode);
+goto end;
+}
 
-if ((oflag & O_CREAT) && fchmod(fd, DEFAULT_NEW_FILE_MODE) == -1) {
-error_setg_errno(&local_err, errno, "failed to set permission "
- "0%03o on new file '%s' (mode: '%s')",
- (unsigned)DEFAULT_NEW_FILE_MODE, path, mode);
-} else {
-FILE *f;
-
-f = fdopen(fd, mode);
-if (f == NULL) {
-

[PULL 08/15] qga: replace qemu_open_old() with qga_open_cloexec()

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

qemu_open_old() uses qemu_open_internal(), which handles the special
"/dev/fdset/" path for monitor fd sets, sets CLOEXEC, and uses Error
reporting (with some O_DIRECT special error casing).

The monitor fdset handling is unnecessary for qga, use
qga_open_cloexec() instead.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-9-marcandre.lur...@redhat.com>
---
 qga/channel-posix.c  | 13 +
 qga/commands-posix.c |  8 
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/qga/channel-posix.c b/qga/channel-posix.c
index 25fcc5cb1a..6796a02cff 100644
--- a/qga/channel-posix.c
+++ b/qga/channel-posix.c
@@ -1,8 +1,10 @@
 #include "qemu/osdep.h"
+#include "qemu/cutils.h"
 #include 
 #include "qapi/error.h"
 #include "qemu/sockets.h"
 #include "channel.h"
+#include "cutils.h"
 
 #ifdef CONFIG_SOLARIS
 #include 
@@ -127,11 +129,14 @@ static gboolean ga_channel_open(GAChannel *c, const gchar 
*path,
 switch (c->method) {
 case GA_CHANNEL_VIRTIO_SERIAL: {
 assert(fd < 0);
-fd = qemu_open_old(path, O_RDWR | O_NONBLOCK
+fd = qga_open_cloexec(
+path,
 #ifndef CONFIG_SOLARIS
-   | O_ASYNC
+O_ASYNC |
 #endif
-   );
+O_RDWR | O_NONBLOCK,
+0
+);
 if (fd == -1) {
 error_setg_errno(errp, errno, "error opening channel");
 return false;
@@ -156,7 +161,7 @@ static gboolean ga_channel_open(GAChannel *c, const gchar 
*path,
 struct termios tio;
 
 assert(fd < 0);
-fd = qemu_open_old(path, O_RDWR | O_NOCTTY | O_NONBLOCK);
+fd = qga_open_cloexec(path, O_RDWR | O_NOCTTY | O_NONBLOCK, 0);
 if (fd == -1) {
 error_setg_errno(errp, errno, "error opening channel");
 return false;
diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 2ecc43eca9..0047245273 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -1406,7 +1406,7 @@ static void get_nvme_smart(GuestDiskInfo *disk)
  | (((sizeof(log) >> 2) - 1) << 16)
 };
 
-fd = qemu_open_old(disk->name, O_RDONLY);
+fd = qga_open_cloexec(disk->name, O_RDONLY, 0);
 if (fd == -1) {
 g_debug("Failed to open device: %s: %s", disk->name, 
g_strerror(errno));
 return;
@@ -1739,7 +1739,7 @@ int64_t qmp_guest_fsfreeze_freeze_list(bool 
has_mountpoints,
 }
 }
 
-fd = qemu_open_old(mount->dirname, O_RDONLY);
+fd = qga_open_cloexec(mount->dirname, O_RDONLY, 0);
 if (fd == -1) {
 error_setg_errno(errp, errno, "failed to open %s", mount->dirname);
 goto error;
@@ -1806,7 +1806,7 @@ int64_t qmp_guest_fsfreeze_thaw(Error **errp)
 
 QTAILQ_FOREACH(mount, &mounts, next) {
 logged = false;
-fd = qemu_open_old(mount->dirname, O_RDONLY);
+fd = qga_open_cloexec(mount->dirname, O_RDONLY, 0);
 if (fd == -1) {
 continue;
 }
@@ -1892,7 +1892,7 @@ qmp_guest_fstrim(bool has_minimum, int64_t minimum, Error 
**errp)
 
 QAPI_LIST_PREPEND(response->paths, result);
 
-fd = qemu_open_old(mount->dirname, O_RDONLY);
+fd = qga_open_cloexec(mount->dirname, O_RDONLY, 0);
 if (fd == -1) {
 result->error = g_strdup_printf("failed to open: %s",
 strerror(errno));
-- 
2.36.1




[PULL 06/15] qga: use qga_open_cloexec() for safe_open_or_create()

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

The function takes care of setting CLOEXEC.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Markus Armbruster 
Message-Id: <20220525144140.591926-7-marcandre.lur...@redhat.com>
---
 qga/commands-posix.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 3b2392398e..2ecc43eca9 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -27,6 +27,7 @@
 #include "qemu/cutils.h"
 #include "commands-common.h"
 #include "block/nvme.h"
+#include "cutils.h"
 
 #ifdef HAVE_UTMPX
 #include 
@@ -370,10 +371,10 @@ safe_open_or_create(const char *path, const char *mode, 
Error **errp)
  * open() is decisive and its third argument is ignored, and the second
  * open() and the fchmod() are never called.
  */
-fd = open(path, oflag | ((oflag & O_CREAT) ? O_EXCL : 0), 0);
+fd = qga_open_cloexec(path, oflag | ((oflag & O_CREAT) ? O_EXCL : 0), 0);
 if (fd == -1 && errno == EEXIST) {
 oflag &= ~(unsigned)O_CREAT;
-fd = open(path, oflag);
+fd = qga_open_cloexec(path, oflag, 0);
 }
 if (fd == -1) {
 error_setg_errno(errp, errno,
@@ -382,8 +383,6 @@ safe_open_or_create(const char *path, const char *mode, 
Error **errp)
 goto end;
 }
 
-qemu_set_cloexec(fd);
-
 if ((oflag & O_CREAT) && fchmod(fd, DEFAULT_NEW_FILE_MODE) == -1) {
 error_setg_errno(errp, errno, "failed to set permission "
  "0%03o on new file '%s' (mode: '%s')",
-- 
2.36.1




[PULL 11/15] qga/wixl: prefer variables over environment

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

No need to set up an environment or to check manually whether the
variable is undefined.
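For context (some of the WiX markup in the diff below was mangled in this archive): wixl's preprocessor distinguishes environment references from preprocessor variables. An illustrative fragment, not the exact qemu-ga.wxs contents:

```
<!-- Before: value read from the process environment at build time -->
<Product Manufacturer="$(env.QEMU_GA_MANUFACTURER)" />

<!-- After: value supplied on the command line via
     `wixl -D QEMU_GA_MANUFACTURER=...`; wixl errors out if a
     $(var.X) reference is undefined, so no manual check is needed -->
<Product Manufacturer="$(var.QEMU_GA_MANUFACTURER)" />
```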

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-12-marcandre.lur...@redhat.com>
---
 qga/installer/qemu-ga.wxs | 30 +-
 qga/meson.build   |  9 -
 2 files changed, 13 insertions(+), 26 deletions(-)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 0950e8c6be..8a19aa1656 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -1,17 +1,5 @@
 
 http://schemas.microsoft.com/wix/2006/wi";>
-  
-
-  
-
-  
-
-  
-
-  
-
-  
-
   
 
   
@@ -43,20 +31,20 @@
 Name="QEMU guest agent"
 Id="*"
 UpgradeCode="{EB6B8302-C06E-4BEC-ADAC-932C68A3A98D}"
-Manufacturer="$(env.QEMU_GA_MANUFACTURER)"
-Version="$(env.QEMU_GA_VERSION)"
+Manufacturer="$(var.QEMU_GA_MANUFACTURER)"
+Version="$(var.QEMU_GA_VERSION)"
 Language="1033">
 
 NOT VersionNT64
 
 
-
+
 1
 
 
   
-
+
 
   
   
-
+
   
   
-
+
   
   
   
@@ -133,9 +121,9 @@
   
   
 
+ 
Key="Software\$(var.QEMU_GA_MANUFACTURER)\$(var.QEMU_GA_DISTRO)\Tools\QemuGA">
   
-  
+  
 
   
 
diff --git a/qga/meson.build b/qga/meson.build
index 35fe2229e9..31370405f9 100644
--- a/qga/meson.build
+++ b/qga/meson.build
@@ -122,15 +122,14 @@ if targetos == 'windows'
 output: 'qemu-ga-@0@.msi'.format(host_arch),
 depends: deps,
 command: [
-  find_program('env'),
-  'QEMU_GA_VERSION=' + 
config_host['QEMU_GA_VERSION'],
-  'QEMU_GA_MANUFACTURER=' + 
config_host['QEMU_GA_MANUFACTURER'],
-  'QEMU_GA_DISTRO=' + 
config_host['QEMU_GA_DISTRO'],
-  'BUILD_DIR=' + meson.build_root(),
   wixl, '-o', '@OUTPUT0@', '@INPUT0@',
   qemu_ga_msi_arch[cpu],
   qemu_ga_msi_vss,
+  '-D', 'BUILD_DIR=' + meson.build_root(),
   '-D', 'Mingw_bin=' + 
config_host['QEMU_GA_MSI_MINGW_BIN_PATH'],
+  '-D', 'QEMU_GA_VERSION=' + 
config_host['QEMU_GA_VERSION'],
+  '-D', 'QEMU_GA_MANUFACTURER=' + 
config_host['QEMU_GA_MANUFACTURER'],
+  '-D', 'QEMU_GA_DISTRO=' + 
config_host['QEMU_GA_DISTRO'],
 ])
 all_qga += [qga_msi]
 alias_target('msi', qga_msi)
-- 
2.36.1




[PULL 09/15] qga: make build_fs_mount_list() return a bool

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Change build_fs_mount_list() to return bool, in accordance with the
guidance under "= Rules =" in include/qapi/error.h.

Signed-off-by: Marc-André Lureau 
Suggested-by: Markus Armbruster 
Message-Id: <20220525144140.591926-10-marcandre.lur...@redhat.com>
---
 qga/commands-posix.c | 25 ++---
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 0047245273..0469dc409d 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -673,7 +673,7 @@ static int dev_major_minor(const char *devpath,
 /*
  * Walk the mount table and build a list of local file systems
  */
-static void build_fs_mount_list_from_mtab(FsMountList *mounts, Error **errp)
+static bool build_fs_mount_list_from_mtab(FsMountList *mounts, Error **errp)
 {
 struct mntent *ment;
 FsMount *mount;
@@ -684,7 +684,7 @@ static void build_fs_mount_list_from_mtab(FsMountList 
*mounts, Error **errp)
 fp = setmntent(mtab, "r");
 if (!fp) {
 error_setg(errp, "failed to open mtab file: '%s'", mtab);
-return;
+return false;
 }
 
 while ((ment = getmntent(fp))) {
@@ -714,6 +714,7 @@ static void build_fs_mount_list_from_mtab(FsMountList 
*mounts, Error **errp)
 }
 
 endmntent(fp);
+return true;
 }
 
 static void decode_mntname(char *name, int len)
@@ -738,7 +739,7 @@ static void decode_mntname(char *name, int len)
 }
 }
 
-static void build_fs_mount_list(FsMountList *mounts, Error **errp)
+static bool build_fs_mount_list(FsMountList *mounts, Error **errp)
 {
 FsMount *mount;
 char const *mountinfo = "/proc/self/mountinfo";
@@ -751,8 +752,7 @@ static void build_fs_mount_list(FsMountList *mounts, Error 
**errp)
 
 fp = fopen(mountinfo, "r");
 if (!fp) {
-build_fs_mount_list_from_mtab(mounts, errp);
-return;
+return build_fs_mount_list_from_mtab(mounts, errp);
 }
 
 while (getline(&line, &n, fp) != -1) {
@@ -794,6 +794,7 @@ static void build_fs_mount_list(FsMountList *mounts, Error 
**errp)
 free(line);
 
 fclose(fp);
+return true;
 }
 #endif
 
@@ -1592,8 +1593,7 @@ GuestFilesystemInfoList *qmp_guest_get_fsinfo(Error 
**errp)
 Error *local_err = NULL;
 
 QTAILQ_INIT(&mounts);
-build_fs_mount_list(&mounts, &local_err);
-if (local_err) {
+if (!build_fs_mount_list(&mounts, &local_err)) {
 error_propagate(errp, local_err);
 return NULL;
 }
@@ -1716,8 +1716,7 @@ int64_t qmp_guest_fsfreeze_freeze_list(bool 
has_mountpoints,
 }
 
 QTAILQ_INIT(&mounts);
-build_fs_mount_list(&mounts, &local_err);
-if (local_err) {
+if (!build_fs_mount_list(&mounts, &local_err)) {
 error_propagate(errp, local_err);
 return -1;
 }
@@ -1798,8 +1797,7 @@ int64_t qmp_guest_fsfreeze_thaw(Error **errp)
 Error *local_err = NULL;
 
 QTAILQ_INIT(&mounts);
-build_fs_mount_list(&mounts, &local_err);
-if (local_err) {
+if (!build_fs_mount_list(&mounts, &local_err)) {
 error_propagate(errp, local_err);
 return 0;
 }
@@ -1872,15 +1870,12 @@ qmp_guest_fstrim(bool has_minimum, int64_t minimum, 
Error **errp)
 FsMountList mounts;
 struct FsMount *mount;
 int fd;
-Error *local_err = NULL;
 struct fstrim_range r;
 
 slog("guest-fstrim called");
 
 QTAILQ_INIT(&mounts);
-build_fs_mount_list(&mounts, &local_err);
-if (local_err) {
-error_propagate(errp, local_err);
+if (!build_fs_mount_list(&mounts, errp)) {
 return NULL;
 }
 
-- 
2.36.1




[PULL 10/15] test/qga: use G_TEST_DIR to locate os-release test file

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

This is a more accurate way to look up the test data, and will allow
moving the test into a subproject.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-11-marcandre.lur...@redhat.com>
---
 tests/unit/test-qga.c | 11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tests/unit/test-qga.c b/tests/unit/test-qga.c
index d6df1ee92e..ab0b12a2dd 100644
--- a/tests/unit/test-qga.c
+++ b/tests/unit/test-qga.c
@@ -914,15 +914,14 @@ static void test_qga_guest_get_osinfo(gconstpointer data)
 {
 TestFixture fixture;
 const gchar *str;
-gchar *cwd, *env[2];
-QDict *ret, *val;
+QDict *ret = NULL;
+char *env[2];
+QDict *val;
 
-cwd = g_get_current_dir();
 env[0] = g_strdup_printf(
-"QGA_OS_RELEASE=%s%ctests%cdata%ctest-qga-os-release",
-cwd, G_DIR_SEPARATOR, G_DIR_SEPARATOR, G_DIR_SEPARATOR);
+"QGA_OS_RELEASE=%s%c..%cdata%ctest-qga-os-release",
+g_test_get_dir(G_TEST_DIST), G_DIR_SEPARATOR, G_DIR_SEPARATOR, 
G_DIR_SEPARATOR);
 env[1] = NULL;
-g_free(cwd);
 fixture_setup(&fixture, NULL, env);
 
 ret = qmp_fd(fixture.fd, "{'execute': 'guest-get-osinfo'}");
-- 
2.36.1




[PULL 14/15] qga/wixl: replace QEMU_GA_MSI_MINGW_BIN_PATH with glib bindir

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Use more conventional variables to set the location of pre-built
DLL/bin.

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-15-marcandre.lur...@redhat.com>
---
 configure |  9 ++---
 meson.build   |  5 -
 qga/installer/qemu-ga.wxs | 24 
 qga/meson.build   |  2 +-
 4 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/configure b/configure
index 180ee688dc..f2baf2f526 100755
--- a/configure
+++ b/configure
@@ -1495,6 +1495,11 @@ for i in $glib_modules; do
 fi
 done
 
+glib_bindir="$($pkg_config --variable=bindir glib-2.0)"
+if test -z "$glib_bindir" ; then
+   glib_bindir="$($pkg_config --variable=prefix glib-2.0)"/bin
+fi
+
 # This workaround is required due to a bug in pkg-config file for glib as it
 # doesn't define GLIB_STATIC_COMPILATION for pkg-config --static
 
@@ -1860,8 +1865,6 @@ if test "$QEMU_GA_VERSION" = ""; then
 QEMU_GA_VERSION=$(cat $source_path/VERSION)
 fi
 
-QEMU_GA_MSI_MINGW_BIN_PATH="$($pkg_config --variable=prefix glib-2.0)/bin"
-
 # Mac OS X ships with a broken assembler
 roms=
 if { test "$cpu" = "i386" || test "$cpu" = "x86_64"; } && \
@@ -1948,7 +1951,6 @@ if test "$debug_tcg" = "yes" ; then
 fi
 if test "$mingw32" = "yes" ; then
   echo "CONFIG_WIN32=y" >> $config_host_mak
-  echo "QEMU_GA_MSI_MINGW_BIN_PATH=${QEMU_GA_MSI_MINGW_BIN_PATH}" >> 
$config_host_mak
   echo "QEMU_GA_MANUFACTURER=${QEMU_GA_MANUFACTURER}" >> $config_host_mak
   echo "QEMU_GA_DISTRO=${QEMU_GA_DISTRO}" >> $config_host_mak
   echo "QEMU_GA_VERSION=${QEMU_GA_VERSION}" >> $config_host_mak
@@ -2020,6 +2022,7 @@ echo "QEMU_CXXFLAGS=$QEMU_CXXFLAGS" >> $config_host_mak
 echo "QEMU_OBJCFLAGS=$QEMU_OBJCFLAGS" >> $config_host_mak
 echo "GLIB_CFLAGS=$glib_cflags" >> $config_host_mak
 echo "GLIB_LIBS=$glib_libs" >> $config_host_mak
+echo "GLIB_BINDIR=$glib_bindir" >> $config_host_mak
 echo "GLIB_VERSION=$(pkg-config --modversion glib-2.0)" >> $config_host_mak
 echo "QEMU_LDFLAGS=$QEMU_LDFLAGS" >> $config_host_mak
 echo "LD_I386_EMULATION=$ld_i386_emulation" >> $config_host_mak
diff --git a/meson.build b/meson.build
index df7c34b076..bf318d9cbb 100644
--- a/meson.build
+++ b/meson.build
@@ -466,7 +466,10 @@ add_project_arguments(config_host['GLIB_CFLAGS'].split(),
   native: false, language: ['c', 'cpp', 'objc'])
 glib = declare_dependency(compile_args: config_host['GLIB_CFLAGS'].split(),
   link_args: config_host['GLIB_LIBS'].split(),
-  version: config_host['GLIB_VERSION'])
+  version: config_host['GLIB_VERSION'],
+  variables: {
+'bindir': config_host['GLIB_BINDIR'],
+  })
 # override glib dep with the configure results (for subprojects)
 meson.override_dependency('glib-2.0', glib)
 
diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index e5b0958e18..813d1c6ca6 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -58,7 +58,7 @@
   
   
   
-
+
   
   
 
@@ -69,40 +69,40 @@
   
   
   
-
+
   
   
-
+
   
   
   
   
-
+
   
   
-
+
   
   
   
-
+
   
   
-
+
   
   
-
+
   
   
-
+
   
   
-
+
   
   
-
+
   
   
-
+
   
   
 

[PULL 15/15] test/qga: use g_auto wherever sensible

2022-05-27 Thread marcandre . lureau
From: Marc-André Lureau 

Signed-off-by: Marc-André Lureau 
Reviewed-by: Konstantin Kostiuk 
Message-Id: <20220525144140.591926-16-marcandre.lur...@redhat.com>
---
 tests/unit/test-qga.c | 121 +++---
 1 file changed, 43 insertions(+), 78 deletions(-)

diff --git a/tests/unit/test-qga.c b/tests/unit/test-qga.c
index ab0b12a2dd..530317044b 100644
--- a/tests/unit/test-qga.c
+++ b/tests/unit/test-qga.c
@@ -52,7 +52,10 @@ fixture_setup(TestFixture *fixture, gconstpointer data, 
gchar **envp)
 {
 const gchar *extra_arg = data;
 GError *error = NULL;
-gchar *cwd, *path, *cmd, **argv = NULL;
+g_autofree char *cwd = NULL;
+g_autofree char *path = NULL;
+g_autofree char *cmd = NULL;
+g_auto(GStrv) argv = NULL;
 
 fixture->loop = g_main_loop_new(NULL, FALSE);
 
@@ -78,17 +81,12 @@ fixture_setup(TestFixture *fixture, gconstpointer data, 
gchar **envp)
 
 fixture->fd = connect_qga(path);
 g_assert_cmpint(fixture->fd, !=, -1);
-
-g_strfreev(argv);
-g_free(cmd);
-g_free(cwd);
-g_free(path);
 }
 
 static void
 fixture_tear_down(TestFixture *fixture, gconstpointer data)
 {
-gchar *tmp;
+g_autofree char *tmp = NULL;
 
 kill(fixture->pid, SIGTERM);
 
@@ -107,7 +105,6 @@ fixture_tear_down(TestFixture *fixture, gconstpointer data)
 
 tmp = g_build_filename(fixture->test_dir, "sock", NULL);
 g_unlink(tmp);
-g_free(tmp);
 
 g_rmdir(fixture->test_dir);
 g_free(fixture->test_dir);
@@ -122,7 +119,7 @@ static void qmp_assertion_message_error(const char 
*domain,
 QDict  *dict)
 {
 const char *class, *desc;
-char *s;
+g_autofree char *s = NULL;
 QDict *error;
 
 error = qdict_get_qdict(dict, "error");
@@ -131,7 +128,6 @@ static void qmp_assertion_message_error(const char 
*domain,
 
 s = g_strdup_printf("assertion failed %s: %s %s", expr, class, desc);
 g_assertion_message(domain, file, line, func, s);
-g_free(s);
 }
 
 #define qmp_assert_no_error(err) do {   \
@@ -146,7 +142,7 @@ static void test_qga_sync_delimited(gconstpointer fix)
 const TestFixture *fixture = fix;
 guint32 v, r = g_test_rand_int();
 unsigned char c;
-QDict *ret;
+g_autoptr(QDict) ret = NULL;
 
 qmp_fd_send_raw(fixture->fd, "\xff");
 qmp_fd_send(fixture->fd,
@@ -180,15 +176,13 @@ static void test_qga_sync_delimited(gconstpointer fix)
 
 v = qdict_get_int(ret, "return");
 g_assert_cmpint(r, ==, v);
-
-qobject_unref(ret);
 }
 
 static void test_qga_sync(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
 guint32 v, r = g_test_rand_int();
-QDict *ret;
+g_autoptr(QDict) ret = NULL;
 
 /*
  * TODO guest-sync is inherently limited: we cannot distinguish
@@ -210,33 +204,27 @@ static void test_qga_sync(gconstpointer fix)
 
 v = qdict_get_int(ret, "return");
 g_assert_cmpint(r, ==, v);
-
-qobject_unref(ret);
 }
 
 static void test_qga_ping(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
-QDict *ret;
+g_autoptr(QDict) ret = NULL;
 
 ret = qmp_fd(fixture->fd, "{'execute': 'guest-ping'}");
 g_assert_nonnull(ret);
 qmp_assert_no_error(ret);
-
-qobject_unref(ret);
 }
 
 static void test_qga_id(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
-QDict *ret;
+g_autoptr(QDict) ret = NULL;
 
 ret = qmp_fd(fixture->fd, "{'execute': 'guest-ping', 'id': 1}");
 g_assert_nonnull(ret);
 qmp_assert_no_error(ret);
 g_assert_cmpint(qdict_get_int(ret, "id"), ==, 1);
-
-qobject_unref(ret);
 }
 
 static void test_qga_invalid_oob(gconstpointer fix)
@@ -253,7 +241,8 @@ static void test_qga_invalid_oob(gconstpointer fix)
 static void test_qga_invalid_args(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
-QDict *ret, *error;
+g_autoptr(QDict) ret = NULL;
+QDict *error;
 const gchar *class, *desc;
 
 ret = qmp_fd(fixture->fd, "{'execute': 'guest-ping', "
@@ -266,14 +255,13 @@ static void test_qga_invalid_args(gconstpointer fix)
 
 g_assert_cmpstr(class, ==, "GenericError");
 g_assert_cmpstr(desc, ==, "Parameter 'foo' is unexpected");
-
-qobject_unref(ret);
 }
 
 static void test_qga_invalid_cmd(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
-QDict *ret, *error;
+g_autoptr(QDict) ret = NULL;
+QDict *error;
 const gchar *class, *desc;
 
 ret = qmp_fd(fixture->fd, "{'execute': 'guest-invalid-cmd'}");
@@ -285,14 +273,13 @@ static void test_qga_invalid_cmd(gconstpointer fix)
 
 g_assert_cmpstr(class, ==, "CommandNotFound");
 g_assert_cmpint(strlen(desc), >, 0);
-
-qobject_unref(ret);
 }
 
 static void test_qga_info(gconstpointer fix)
 {
 const TestFixture *fixture = fix;
-QDict *ret, *val;
+g_autoptr(QDict) ret = NULL;
+QDict *val;
 const gchar *version;
 
 ret = qmp_fd(fixture->fd, "{'execute': 

Re: [PATCH v3 06/10] block: Make 'bytes' param of bdrv_co_{pread,pwrite,preadv,pwritev}() an int64_t

2022-05-27 Thread Eric Blake
On Thu, May 26, 2022 at 12:05:55PM +0100, Alberto Faria wrote:
> On Thu, May 26, 2022 at 10:00 AM Stefan Hajnoczi  wrote:
> > Maybe let the existing bdrv_check_request32() call in bdrv_co_preadv()

in bdrv_co_preadv_part()

> > check this? It returns -EIO if bytes is too large.
> 
> I'd be okay with that. Does this warrant changing blk_co_pread() and
> blk_co_pwrite() as well?
> 
> Eric, what do you think?
>

Yes, reusing the existing function covers more cases with common error
messages.  All that matters is that we check for overflow before
trying to populate the qiov.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [libvirt PATCH] tools: add virt-qmp-proxy for proxying QMP clients to libvirt QEMU guests

2022-05-27 Thread Peter Krempa
On Fri, May 27, 2022 at 10:47:58 +0100, Daniel P. Berrangé wrote:
> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> exposed in virsh. It is not especially pleasant, however, using the raw
> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> exposes a human friendly interactive shell. It is not possible to use
> this with a libvirt managed guest, however, since only one client can
> attach to the QMP socket at any point in time.
> 
> The virt-qmp-proxy tool aims to solve this problem. It opens a UNIX
> socket and listens for incoming client connections, speaking QMP on
> the connected socket. It will forward any QMP commands received onto
> the running libvirt QEMU guest, and forward any replies back to the
> QMP client.
> 
>   $ virsh start demo
>   $ virt-qmp-proxy demo demo.qmp &
>   $ qmp-shell demo.qmp
>   Welcome to the QMP low-level shell!
>   Connected to QEMU 6.2.0
> 
>   (QEMU) query-kvm
>   {
>   "return": {
>   "enabled": true,
>   "present": true
>   }
>   }
> 
> Note this tool of course has the same risks as the raw libvirt
> QMP passthrough. It is safe to run query commands to fetch information
> but commands which change the QEMU state risk disrupting libvirt's
> management of QEMU, potentially resulting in data loss/corruption in
> the worst case.
> 
> Signed-off-by: Daniel P. Berrangé 
> ---
> 
> CC'ing QEMU since this is likely of interest to maintainers and users
> who work with QEMU and libvirt
> 
> Note this impl is fairly crude in that it assumes it is receiving
> the QMP commands linewise one at a time. None the less it is good
> enough to work with qmp-shell already, so I figured it was worth
> exposing to the world. It also lacks support for forwarding events
> back to the QMP client.
> 
>  docs/manpages/meson.build|   1 +
>  docs/manpages/virt-qmp-proxy.rst | 123 
>  tools/meson.build|   5 ++
>  tools/virt-qmp-proxy | 133 +++
>  4 files changed, 262 insertions(+)
>  create mode 100644 docs/manpages/virt-qmp-proxy.rst
>  create mode 100755 tools/virt-qmp-proxy

[...]

> diff --git a/docs/manpages/virt-qmp-proxy.rst 
> b/docs/manpages/virt-qmp-proxy.rst
> new file mode 100644
> index 00..94679406ab
> --- /dev/null
> +++ b/docs/manpages/virt-qmp-proxy.rst
> @@ -0,0 +1,123 @@
> +==============
> +virt-qmp-proxy
> +==============
> +
> +--------------------------------------------------
> +Expose a QMP proxy server for a libvirt QEMU guest
> +--------------------------------------------------
> +
> +:Manual section: 1
> +:Manual group: Virtualization Support
> +
> +.. contents::
> +
> +
> +SYNOPSIS
> +========
> +
> +``virt-qmp-proxy`` [*OPTION*]... *DOMAIN* *QMP-SOCKET-PATH*
> +
> +
> +DESCRIPTION
> +===========
> +
> +This tool provides a way to expose a QMP proxy server that communicates
> +with a QEMU guest managed by libvirt. This enables standard QMP client
> +tools to interact with libvirt managed guests.
> +
> +**NOTE: use of this tool will result in the running QEMU guest being
> +marked as tainted.** It is strongly recommended that this tool *only be
> +used to send commands which query information* about the running guest.
> +If this tool is used to make changes to the state of the guest, this
> +may have negative interactions with the QEMU driver, resulting in an
> +inability to manage the guest operation thereafter, and in the worst
> +case **potentially lead to data loss or corruption**.
> +
> +The ``virt-qmp-proxy`` program will listen on a UNIX socket for incoming
> +client connections, and run the QMP protocol over the connection. Any
> +commands received will be sent to the running libvirt guest, and replies
> +sent back.
> +
> +The ``virt-qmp-proxy`` program may be interrupted (e.g. Ctrl-C) when it
> +is no longer required. The libvirt QEMU guest will continue running.
> +
> +
> +OPTIONS
> +===
> +
> +*DOMAIN*
> +
> +The ID or UUID or Name of the libvirt QEMU guest.
> +
> +*QMP-SOCKET-PATH*
> +
> +The filesystem path at which to run the QMP server, listening for
> +incoming connections.
> +
> +``-c`` *CONNECTION-URI*
> +``--connect``\ =\ *CONNECTION-URI*
> +
> +The URI for the connection to the libvirt QEMU driver. If omitted,
> +a URI will be auto-detected.
> +
> +``-v``, ``--verbose``
> +
> +Run in verbose mode, printing all QMP commands and replies that
> +are handled.
> +
> +``-h``, ``--help``
> +
> +Display the command line help.
> +
> +
> +EXIT STATUS
> +===========
> +
> +Upon successful shutdown, an exit status of 0 will be set. Upon
> +failure a non-zero status will be set.
> +
> +
> +AUTHOR
> +======
> +
> +Daniel P. Berrangé
> +
> +
> +BUGS
> +====
> +
> +Please report all bugs you discover.  This should be done via either:
> +
> +#. the mailing list
> +
> +   `https://libvirt.org/contact.html `_
> +
> +#. the bug tracker
> +
> +   `https://libvirt.org/bug

Re: [PATCH v3 07/10] block: Implement bdrv_{pread,pwrite,pwrite_zeroes}() using generated_co_wrapper

2022-05-27 Thread Eric Blake
On Thu, May 26, 2022 at 08:23:02PM +0100, Alberto Faria wrote:
> On Thu, May 26, 2022 at 9:55 AM Stefan Hajnoczi  wrote:
> > The bdrv_pread()/bdrv_pwrite() errno for negative bytes changes from
> > EINVAL to EIO. Did you audit the code to see if it matters?
> 
> I don't believe I had, but I checked all calls now. There's ~140 of
> them, so the probability of me having overlooked something isn't
> exactly low, but it seems callers either cannot pass in negative
> values or don't care about the particular error code returned.
> 
> Another option is to make bdrv_co_pread() and bdrv_co_pwrite() (which
> have much fewer callers) fail with -EINVAL when bytes is negative, but
> perhaps just getting rid of this final inconsistency between
> bdrv_[co_]{pread,pwrite}[v]() now will be worth it in the long run.

Failing with -EINVAL for negative bytes makes more sense at
identifying a programming error (whereas EIO tends to mean hardware
failure), so making that sort of cleanup seems reasonable.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v6 8/8] linux-user: Add PowerPC ISA 3.1 and MMA to hwcap

2022-05-27 Thread Laurent Vivier

On 24/05/2022 at 16:05, Lucas Mateus Castro(alqotel) wrote:

From: Joel Stanley 

These are new hwcap bits added for power10.

Signed-off-by: Joel Stanley 
Signed-off-by: Lucas Mateus Castro (alqotel) 
Reviewed-by: Richard Henderson 
---
  linux-user/elfload.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 61063fd974..0908692e62 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -779,6 +779,8 @@ enum {
  QEMU_PPC_FEATURE2_DARN = 0x0020, /* darn random number insn */
  QEMU_PPC_FEATURE2_SCV = 0x0010, /* scv syscall */
  QEMU_PPC_FEATURE2_HTM_NO_SUSPEND = 0x0008, /* TM w/o suspended state 
*/
+QEMU_PPC_FEATURE2_ARCH_3_1 = 0x0004, /* ISA 3.1 */
+QEMU_PPC_FEATURE2_MMA = 0x0002, /* Matrix-Multiply Assist */
  };
  
  #define ELF_HWCAP get_elf_hwcap()

@@ -836,6 +838,8 @@ static uint32_t get_elf_hwcap2(void)
QEMU_PPC_FEATURE2_VEC_CRYPTO);
  GET_FEATURE2(PPC2_ISA300, QEMU_PPC_FEATURE2_ARCH_3_00 |
   QEMU_PPC_FEATURE2_DARN | QEMU_PPC_FEATURE2_HAS_IEEE128);
+GET_FEATURE2(PPC2_ISA310, QEMU_PPC_FEATURE2_ARCH_3_1 |
+ QEMU_PPC_FEATURE2_MMA);
  
  #undef GET_FEATURE

  #undef GET_FEATURE2


Reviewed-by: Laurent Vivier 



Re: [PATCH 0/9] tests, python: prepare to expand usage of test venv

2022-05-27 Thread John Snow
Paolo: I assume this falls under your jurisdiction...ish, unless Cleber
(avocado) or Alex (tests more broadly) have any specific inputs.

I'm fine with waiting for reviews, but don't know whose bucket this goes to.


On Wed, May 25, 2022, 8:09 PM John Snow  wrote:

> GitLab CI: https://gitlab.com/jsnow/qemu/-/pipelines/548326343
>
> This series collects some of the uncontroversial elements that serve as
> pre-requisites for a later series that seeks to generate a testing venv
> by default.
>
> This series makes the following material changes:
>
> - Install the 'qemu' package into the avocado testing venv
> - Use the avocado testing venv to run vm-tests
> - Use the avocado testing venv to run device-crash-test
>
> None of these changes impact 'make check'; these are all specialty
> tests that are not run by default. This series also doesn't change how
> iotests are run, doesn't add any new dependencies for SRPM builds, etc.
>
> NOTE: patch 8 isn't strictly required for this series, but including it
> here "early" helps the subsequent series. Since the debian docker files
> are layered, testing downstream pipelines can fail because the base
> image is pulled from the main QEMU repo instead of the downstream.
>
> In other words: I need this patch in origin/main in order to have the
> venv module available for later patches that will actually need it in
> our debian10 derivative images.
>
> (in other-other-words: the 'clang-user' test *will* need it later.)
>
> John Snow (9):
>   python: update for mypy 0.950
>   tests: add "TESTS_PYTHON" variable to Makefile
>   tests: use python3 as the python executable name
>   tests: silence pip upgrade warnings during venv creation
>   tests: add quiet-venv-pip macro
>   tests: install "qemu" namespace package into venv
>   tests: use tests/venv to run basevm.py-based scripts
>   tests: add python3-venv to debian10.docker
>   tests: run 'device-crash-test' from tests/venv
>
>  .gitlab-ci.d/buildtest.yml   |  8 +---
>  python/qemu/qmp/util.py  |  4 +++-
>  python/setup.cfg |  1 +
>  scripts/device-crash-test| 14 +++---
>  tests/Makefile.include   | 18 ++
>  tests/avocado/avocado_qemu/__init__.py   | 11 +--
>  tests/avocado/virtio_check_params.py |  1 -
>  tests/avocado/virtio_version.py  |  1 -
>  tests/docker/dockerfiles/debian10.docker |  1 +
>  tests/requirements.txt   |  1 +
>  tests/vm/Makefile.include| 13 +++--
>  tests/vm/basevm.py   |  6 +++---
>  12 files changed, 47 insertions(+), 32 deletions(-)
>
> --
> 2.34.1
>
>


Re: [PATCH v2] gitlab-ci: Switch the container of the 'check-patch' & 'check-dco' jobs

2022-05-27 Thread Alex Bennée


Alex Bennée  writes:

> Thomas Huth  writes:
>
>> The 'check-patch' and 'check-dco' jobs only need Python and git for
>> checking the patches, so it's not really necessary to use a container
>> here that has all the other build dependencies installed. By using a
>> lightweight Alpine container, we can improve the runtime here quite a
>> bit, cutting it down from ca. 1:30 minutes to ca. 45 seconds.
>>
>> Suggested-by: Daniel P. Berrangé 
>> Signed-off-by: Thomas Huth 
>
> Queued to testing/next, thanks.

And it didn't apply because it was already merged - sorry about that ;-)

-- 
Alex Bennée



Re: [PATCH v3 02/10] block: Change bdrv_{pread, pwrite, pwrite_sync}() param order

2022-05-27 Thread Vladimir Sementsov-Ogievskiy

On 5/19/22 17:48, Alberto Faria wrote:

Swap 'buf' and 'bytes' around for consistency with
bdrv_co_{pread,pwrite}(), and in preparation to implement these
functions using generated_co_wrapper.

Callers were updated using this Coccinelle script:

 @@ expression child, offset, buf, bytes, flags; @@
 - bdrv_pread(child, offset, buf, bytes, flags)
 + bdrv_pread(child, offset, bytes, buf, flags)

 @@ expression child, offset, buf, bytes, flags; @@
 - bdrv_pwrite(child, offset, buf, bytes, flags)
 + bdrv_pwrite(child, offset, bytes, buf, flags)

 @@ expression child, offset, buf, bytes, flags; @@
 - bdrv_pwrite_sync(child, offset, buf, bytes, flags)
 + bdrv_pwrite_sync(child, offset, bytes, buf, flags)

Resulting overly-long lines were then fixed by hand.

Signed-off-by: Alberto Faria
Reviewed-by: Paolo Bonzini


Reviewed-by: Vladimir Sementsov-Ogievskiy 

Checking also, that we covered all occurrences:

git grep '\(bdrv_pread\|bdrv_pwrite\|bdrv_pwrite_sync\)([^)]' | wc -l
174
git show --format= | grep  '^[ 
+].*\(bdrv_pread\|bdrv_pwrite\|bdrv_pwrite_sync\)([^)]' | wc -l
174

(last exclusion of ')' is to ignore things like "bdrv_pwrite()" in comments)

--
Best regards,
Vladimir



Re: [PULL 00/34] ppc queue

2022-05-27 Thread Richard Henderson

On 5/26/22 14:37, Daniel Henrique Barboza wrote:

The following changes since commit 2417cbd5916d043e0c56408221fbe9935d0bc8da:

   Merge tag 'ak-pull-request' of https://gitlab.com/berrange/qemu into staging 
(2022-05-26 07:00:04 -0700)

are available in the Git repository at:

   https://gitlab.com/danielhb/qemu.git tags/pull-ppc-20220526

for you to fetch changes up to 96c343cc774b52b010e464a219d13f8e55e1003f:

   linux-user: Add PowerPC ISA 3.1 and MMA to hwcap (2022-05-26 17:11:33 -0300)


ppc patch queue for 2022-05-26:

Most of the changes are enhancements/fixes made in TCG ppc emulation
code. Several bugs fixes were made across the board as well.

Changes include:

- tcg and target/ppc: VSX MMA implementation, fixes in helper
declarations to use call flags, memory ordering, tlbie and others
- pseries: fixed stdout-path setting with -machine graphics=off
- pseries: allow use of elf parser for kernel address
- other assorted fixes and improvements


Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/7.1 as 
appropriate.


r~





Alexey Kardashevskiy (2):
   spapr: Use address from elf parser for kernel address
   spapr/docs: Add a few words about x-vof

Bernhard Beschow (1):
   hw/ppc/e500: Remove unused BINARY_DEVICE_TREE_FILE

Frederic Barrat (1):
   pnv/xive2: Don't overwrite PC registers when writing TCTXT registers

Joel Stanley (1):
   linux-user: Add PowerPC ISA 3.1 and MMA to hwcap

Leandro Lupori (1):
   target/ppc: Fix tlbie

Lucas Mateus Castro (alqotel) (7):
   target/ppc: Implement xxm[tf]acc and xxsetaccz
   target/ppc: Implemented xvi*ger* instructions
   target/ppc: Implemented pmxvi*ger* instructions
   target/ppc: Implemented xvf*ger*
   target/ppc: Implemented xvf16ger*
   target/ppc: Implemented pmxvf*ger*
   target/ppc: Implemented [pm]xvbf16ger2*

Matheus Ferst (12):
   target/ppc: declare darn32/darn64 helpers with TCG_CALL_NO_RWG
   target/ppc: use TCG_CALL_NO_RWG in vector helpers without env
   target/ppc: use TCG_CALL_NO_RWG in BCD helpers
   target/ppc: use TCG_CALL_NO_RWG in VSX helpers without env
   target/ppc: Use TCG_CALL_NO_RWG_SE in fsel helper
   target/ppc: declare xscvspdpn helper with call flags
   target/ppc: declare xvxsigsp helper with call flags
   target/ppc: declare xxextractuw and xxinsertw helpers with call flags
   target/ppc: introduce do_va_helper
   target/ppc: declare vmsum[um]bm helpers with call flags
   target/ppc: declare vmsumuh[ms] helper with call flags
   target/ppc: declare vmsumsh[ms] helper with call flags

Murilo Opsfelder Araujo (1):
   mos6522: fix linking error when CONFIG_MOS6522 is not set

Nicholas Piggin (4):
   target/ppc: Fix eieio memory ordering semantics
   tcg/ppc: ST_ST memory ordering is not provided with eieio
   tcg/ppc: Optimize memory ordering generation with lwsync
   target/ppc: Implement lwsync with weaker memory ordering

Paolo Bonzini (1):
   pseries: allow setting stdout-path even on machines with a VGA

Víctor Colombo (3):
   target/ppc: Fix FPSCR.FI bit being cleared when it shouldn't
   target/ppc: Fix FPSCR.FI changing in float_overflow_excp()
   target/ppc: Rename sfprf to sfifprf where it's also used as set fi flag

  docs/system/ppc/pseries.rst |  29 ++
  hmp-commands-info.hx|   2 +-
  hw/intc/pnv_xive2.c |   3 -
  hw/ppc/e500.c   |   1 -
  hw/ppc/spapr.c  |  25 +-
  include/hw/ppc/spapr.h  |   2 +-
  linux-user/elfload.c|   4 +
  monitor/misc.c  |   3 +
  target/ppc/cpu.h|  19 +-
  target/ppc/cpu_init.c   |  13 +-
  target/ppc/fpu_helper.c | 571 
  target/ppc/helper.h | 259 +---
  target/ppc/helper_regs.c|   2 +-
  target/ppc/insn32.decode|  80 -
  target/ppc/insn64.decode|  79 +
  target/ppc/int_helper.c | 152 +-
  target/ppc/internal.h   |  15 +
  target/ppc/machine.c|   3 +-
  target/ppc/translate.c  |  35 ++-
  target/ppc/translate/fp-impl.c.inc  |  30 +-
  target/ppc/translate/fp-ops.c.inc   |   1 -
  target/ppc/translate/vmx-impl.c.inc |  54 ++--
  target/ppc/translate/vmx-ops.c.inc  |   4 -
  target/ppc/translate/vsx-impl.c.inc | 237 ---
  target/ppc/translate/vsx-ops.c.inc  |   4 -
  tcg/ppc/tcg-target.c.inc|  12 +-
  26 files changed, 1286 insertions(+), 353 deletions(-)





[PATCH] tests/Makefile.include: Fix 'make check-help' output

2022-05-27 Thread Dario Faggioli
Since commit 3d2f73ef75e ("build: use "meson test" as the test harness"),
check-report.tap is no more, and we have check-report.junit.xml.

Update the output of 'make check-help', which was still listing
'check-report.tap', accordingly.

Fixes: 3d2f73ef75e
Signed-off-by: Dario Faggioli 
---
Cc: Paolo Bonzini 
---
 tests/Makefile.include |   30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/tests/Makefile.include b/tests/Makefile.include
index ec84b2ebc0..5caa3836ad 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -3,28 +3,28 @@
 .PHONY: check-help
 check-help:
 	@echo "Regression testing targets:"
-	@echo " $(MAKE) check                Run block, qapi-schema, unit, softfloat, qtest and decodetree tests"
-	@echo " $(MAKE) bench                Run speed tests"
+	@echo " $(MAKE) check                    Run block, qapi-schema, unit, softfloat, qtest and decodetree tests"
+	@echo " $(MAKE) bench                    Run speed tests"
 	@echo
 	@echo "Individual test suites:"
-	@echo " $(MAKE) check-qtest-TARGET   Run qtest tests for given target"
-	@echo " $(MAKE) check-qtest          Run qtest tests"
-	@echo " $(MAKE) check-unit           Run qobject tests"
-	@echo " $(MAKE) check-qapi-schema    Run QAPI schema tests"
-	@echo " $(MAKE) check-block          Run block tests"
+	@echo " $(MAKE) check-qtest-TARGET       Run qtest tests for given target"
+	@echo " $(MAKE) check-qtest              Run qtest tests"
+	@echo " $(MAKE) check-unit               Run qobject tests"
+	@echo " $(MAKE) check-qapi-schema        Run QAPI schema tests"
+	@echo " $(MAKE) check-block              Run block tests"
 ifneq ($(filter $(all-check-targets), check-softfloat),)
-	@echo " $(MAKE) check-tcg            Run TCG tests"
-	@echo " $(MAKE) check-softfloat      Run FPU emulation tests"
+	@echo " $(MAKE) check-tcg                Run TCG tests"
+	@echo " $(MAKE) check-softfloat          Run FPU emulation tests"
 endif
-	@echo " $(MAKE) check-avocado        Run avocado (integration) tests for currently configured targets"
+	@echo " $(MAKE) check-avocado            Run avocado (integration) tests for currently configured targets"
 	@echo
-	@echo " $(MAKE) check-report.tap     Generates an aggregated TAP test report"
-	@echo " $(MAKE) check-venv           Creates a Python venv for tests"
-	@echo " $(MAKE) check-clean          Clean the tests and related data"
+	@echo " $(MAKE) check-report.junit.xml   Generates an aggregated TAP test report"
+	@echo " $(MAKE) check-venv               Creates a Python venv for tests"
+	@echo " $(MAKE) check-clean              Clean the tests and related data"
 	@echo
 	@echo "The following are useful for CI builds"
-	@echo " $(MAKE) check-build          Build most test binaries"
-	@echo " $(MAKE) get-vm-images        Downloads all images used by avocado tests, according to configured targets (~350 MB each, 1.5 GB max)"
+	@echo " $(MAKE) check-build              Build most test binaries"
+	@echo " $(MAKE) get-vm-images            Downloads all images used by avocado tests, according to configured targets (~350 MB each, 1.5 GB max)"
 	@echo
 	@echo
 	@echo "The variable SPEED can be set to control the gtester speed setting."





Re: [PATCH v6 6/8] vduse-blk: Implement vduse-blk export

2022-05-27 Thread Kevin Wolf
Am 23.05.2022 um 10:46 hat Xie Yongji geschrieben:
> This implements a VDUSE block backend based on
> the libvduse library. We can use it to export the BDSs
> for both VM and container (host) usage.
> 
> The new command-line syntax is:
> 
> $ qemu-storage-daemon \
> --blockdev file,node-name=drive0,filename=test.img \
> --export vduse-blk,node-name=drive0,id=vduse-export0,writable=on
> 
> After the qemu-storage-daemon has started, we need to use
> the "vdpa" command to attach the device to vDPA bus:
> 
> $ vdpa dev add name vduse-export0 mgmtdev vduse
> 
> Also the device must be removed via the "vdpa" command
> before we stop the qemu-storage-daemon.
> 
> Signed-off-by: Xie Yongji 
> Reviewed-by: Stefan Hajnoczi 
> ---
>  MAINTAINERS   |   4 +-
>  block/export/export.c |   6 +
>  block/export/meson.build  |   5 +
>  block/export/vduse-blk.c  | 307 ++
>  block/export/vduse-blk.h  |  20 +++
>  meson.build   |  13 ++
>  meson_options.txt |   2 +
>  qapi/block-export.json|  28 +++-
>  scripts/meson-buildoptions.sh |   4 +
>  9 files changed, 385 insertions(+), 4 deletions(-)
>  create mode 100644 block/export/vduse-blk.c
>  create mode 100644 block/export/vduse-blk.h

> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index 0685cb8b9a..e4bd4de363 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -177,6 +177,23 @@
>  '*allow-other': 'FuseExportAllowOther' },
>'if': 'CONFIG_FUSE' }
>  
> +##
> +# @BlockExportOptionsVduseBlk:
> +#
> +# A vduse-blk block export.
> +#
> +# @num-queues: the number of virtqueues. Defaults to 1.
> +# @queue-size: the size of virtqueue. Defaults to 256.
> +# @logical-block-size: Logical block size in bytes. Range [512, PAGE_SIZE]
> +#  and must be power of 2. Defaults to 512 bytes.
> +#
> +# Since: 7.1
> +##
> +{ 'struct': 'BlockExportOptionsVduseBlk',
> +  'data': { '*num-queues': 'uint16',
> +'*queue-size': 'uint16',
> +'*logical-block-size': 'size'} }
> +
>  ##
>  # @NbdServerAddOptions:
>  #
> @@ -280,6 +297,7 @@
>  # @nbd: NBD export
>  # @vhost-user-blk: vhost-user-blk export (since 5.2)
>  # @fuse: FUSE export (since: 6.0)
> +# @vduse-blk: vduse-blk export (since 7.1)
>  #
>  # Since: 4.2
>  ##
> @@ -287,7 +305,8 @@
>'data': [ 'nbd',
>  { 'name': 'vhost-user-blk',
>'if': 'CONFIG_VHOST_USER_BLK_SERVER' },
> -{ 'name': 'fuse', 'if': 'CONFIG_FUSE' } ] }
> +{ 'name': 'fuse', 'if': 'CONFIG_FUSE' },
> +{ 'name': 'vduse-blk', 'if': 'CONFIG_VDUSE_BLK_EXPORT' } ] }
>  
>  ##
>  # @BlockExportOptions:
> @@ -295,7 +314,8 @@
>  # Describes a block export, i.e. how single node should be exported on an
>  # external interface.
>  #
> -# @id: A unique identifier for the block export (across all export types)
> +# @id: A unique identifier for the block export (across the host for 
> vduse-blk
> +#  export type or across all export types for other types)

I find this sentence a bit confusing, but more importantly, it shows
that you are using one value for two different purposes: The ID to
identify the export within QEMU (must be distinct from any other exports
in the same QEMU process, but can overlap with names used by other
processes), and the VDUSE name to uniquely identify it on the host (must
be distinct from other VDUSE devices on the same host, but can overlap
with other export types like NBD in the same process).

We can fix this on top, but I would suggest having a separate option for
the VDUSE device name, like BlockExportOptionsNbdBase contains a 'name'
option for the export name that is different from the export ID in QEMU.
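
For illustration only, a sketch of how such a separate option could look in the schema (the '*name' member here is hypothetical, mirroring how BlockExportOptionsNbdBase carries an export name distinct from the QEMU-internal id; the existing members are taken from the patch above):

```
{ 'struct': 'BlockExportOptionsVduseBlk',
  'data': { '*name': 'str',
            '*num-queues': 'uint16',
            '*queue-size': 'uint16',
            '*logical-block-size': 'size'} }
```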

>  # @node-name: The node name of the block node to be exported (since: 5.2)
>  #
> @@ -331,7 +351,9 @@
>'vhost-user-blk': { 'type': 'BlockExportOptionsVhostUserBlk',
>'if': 'CONFIG_VHOST_USER_BLK_SERVER' },
>'fuse': { 'type': 'BlockExportOptionsFuse',
> -'if': 'CONFIG_FUSE' }
> +'if': 'CONFIG_FUSE' },
> +  'vduse-blk': { 'type': 'BlockExportOptionsVduseBlk',
> + 'if': 'CONFIG_VDUSE_BLK_EXPORT' }
> } }

Kevin




[PATCH v1 00/33] testing/next (gitlab, junit, lcitool, x-compile)

2022-05-27 Thread Alex Bennée
Hi,

After a delay caused by other priorities I've finally managed to
catch up with some of my maintainer duties. The result is the current
testing/next branch which contains:

  - some GitLab fixes from Thomas
  - exposing JUnit to gitlab from Marc-André
  - more lcitool docker conversions from me
  - sharing the testing cross compilers with rom builds from Paolo
  - disable testing on forks by default from Daniel

The last one is important given the upcoming rationing of CI minutes,
and it should also help avoid wasteful testing while developing. See
the doc tips about setting up aliases to make it easy to trigger a CI
build with a push.
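
For example, one such alias could look like the following (a sketch only; the alias name is made up, and the QEMU_CI push-option convention is the one described in the docs mentioned above):

```shell
# Define a hypothetical "push-ci" alias that passes a CI variable via a
# git push option, so a plain "git push" stays quiet and an explicit
# "git push-ci" asks GitLab to run the pipeline.
repo=$(mktemp -d)            # throwaway repo just for the demo
git -C "$repo" init -q
git -C "$repo" config alias.push-ci 'push -o ci.variable=QEMU_CI=1'
git -C "$repo" config alias.push-ci   # prints the alias expansion
```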

So far it all seems to be hanging together fairly well. I'll probably
look to cut a PR from this next week if the soak testing doesn't throw
up anything else.

My patches could do with someone casting an eye over them as they are
un-reviewed and written on a Friday afternoon ;-)

Alex Bennée (9):
  meson.build: fix summary display of test compilers
  tests/lcitool: fix up indentation to correct style
  tests/docker: update debian-armhf-cross with lcitool
  tests/docker: update debian-armel-cross with lcitool
  tests/docker: update debian-mipsel-cross with lcitool
  tests/docker: update debian-mips64el-cross with lcitool
  tests/docker: update debian-ppc64el-cross with lcitool
  tests/docker: update debian-amd64 with lcitool
  docs/devel: clean-up the CI links in the docs

Daniel P. Berrangé (5):
  gitlab: introduce a common base job template
  gitlab: convert Cirrus jobs to .base_job_template
  gitlab: convert static checks to .base_job_template
  gitlab: convert build/container jobs to .base_job_template
  gitlab: don't run CI jobs in forks by default

Marc-André Lureau (1):
  gitlab-ci: add meson JUnit test result into report

Paolo Bonzini (16):
  configure: do not define or use the CPP variable
  build: clean up ninja invocation
  build: add a more generic way to specify make->ninja dependencies
  build: do a full build before running TCG tests
  configure, meson: move symlinking of ROMs to meson
  tests/tcg: correct target CPU for sparc32
  tests/tcg: merge configure.sh back into main configure script
  configure: add missing cross compiler fallbacks
  configure: handle host compiler in probe_target_compiler
  configure: introduce --cross-prefix-*=
  configure: include more binutils in tests/tcg makefile
  configure: move symlink configuration earlier
  configure: enable cross-compilation of s390-ccw
  configure: enable cross-compilation of optionrom
  configure: enable cross compilation of vof
  configure: remove unused variables from config-host.mak

Thomas Huth (2):
  .gitlab-ci.d/container-cross: Fix RISC-V container dependencies /
stages
  .gitlab-ci.d/crossbuilds: Fix the dependency of the cross-i386-tci job

 docs/devel/ci-jobs.rst.inc| 116 +++-
 docs/devel/ci.rst |  11 +-
 docs/devel/submitting-a-patch.rst |  36 +-
 docs/devel/testing.rst|   2 +
 configure | 605 +++---
 Makefile  |   9 +-
 pc-bios/s390-ccw/netboot.mak  |   2 +-
 meson.build   |   8 +-
 .gitlab-ci.d/base.yml |  72 +++
 .gitlab-ci.d/buildtest-template.yml   |  18 +-
 .gitlab-ci.d/buildtest.yml|  28 +-
 .gitlab-ci.d/cirrus.yml   |  16 +-
 .gitlab-ci.d/container-cross.yml  |  24 +-
 .gitlab-ci.d/container-template.yml   |   1 +
 .gitlab-ci.d/containers.yml   |   3 +-
 .gitlab-ci.d/crossbuild-template.yml  |   3 +
 .gitlab-ci.d/crossbuilds.yml  |   2 +
 .gitlab-ci.d/qemu-project.yml |   1 +
 .gitlab-ci.d/static_checks.yml|  19 +-
 .gitlab-ci.d/windows.yml  |   1 +
 pc-bios/meson.build   |  17 +-
 pc-bios/optionrom/Makefile|   4 +-
 pc-bios/s390-ccw/Makefile |   9 +-
 pc-bios/vof/Makefile  |  17 +-
 scripts/mtest2make.py |   8 +-
 tests/Makefile.include|   4 +-
 tests/docker/Makefile.include |   5 -
 tests/docker/dockerfiles/debian-amd64.docker  | 194 --
 .../dockerfiles/debian-armel-cross.docker | 178 +-
 .../dockerfiles/debian-armhf-cross.docker | 184 +-
 .../dockerfiles/debian-mips64el-cross.docker  | 177 -
 .../dockerfiles/debian-mipsel-cross.docker| 179 +-
 .../dockerfiles/debian-ppc64el-cross.docker   | 178 +-
 tests/lcitool/refresh | 178 --
 tests/tcg/configure.sh| 376 ---
 35 files changed, 1884 insertions(+), 801 deletions(-)
 create mode 100644 .gitlab-ci.d/base.yml
 delete mode 100755 tests/tcg/configure.sh

-- 
2.30.2




[PATCH v1 02/33] .gitlab-ci.d/crossbuilds: Fix the dependency of the cross-i386-tci job

2022-05-27 Thread Alex Bennée
From: Thomas Huth 

The cross-i386-tci job uses the fedora-i386-cross image, so we should make sure
that the corresponding job that builds it (the i386-fedora-cross-container job)
has finished before we start the TCI job.

Signed-off-by: Thomas Huth 
Reviewed-by: Richard Henderson 
Message-Id: <20220524092600.89997-1-th...@redhat.com>
Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/crossbuilds.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 17d6cb3e45..4a5fb6ea2a 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -62,6 +62,8 @@ cross-i386-user:
 cross-i386-tci:
   extends: .cross_accel_build_job
   timeout: 60m
+  needs:
+job: i386-fedora-cross-container
   variables:
 IMAGE: fedora-i386-cross
 ACCEL: tcg-interpreter
-- 
2.30.2




[PATCH v1 01/33] .gitlab-ci.d/container-cross: Fix RISC-V container dependencies / stages

2022-05-27 Thread Alex Bennée
From: Thomas Huth 

The "riscv64-debian-cross-container" job does not depend on any other
container job from the first stage, so we can move it to the first
stage, too.

The "riscv64-debian-test-cross-container" job needs the debian11
container, so we should add a proper "needs:" statement here.

Signed-off-by: Thomas Huth 
Message-Id: <20220524093141.91012-1-th...@redhat.com>
Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index e622ac2d21..ac15fce9b6 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -125,7 +125,7 @@ ppc64el-debian-cross-container:
 
 riscv64-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
+  stage: containers
   # as we are currently based on 'sid/unstable' we may break so...
   allow_failure: true
   variables:
@@ -135,6 +135,7 @@ riscv64-debian-cross-container:
 riscv64-debian-test-cross-container:
   extends: .container_job_template
   stage: containers-layer2
+  needs: ['amd64-debian11-container']
   variables:
 NAME: debian-riscv64-test-cross
 
-- 
2.30.2




[PATCH v1 03/33] gitlab-ci: add meson JUnit test result into report

2022-05-27 Thread Alex Bennée
From: Marc-André Lureau 

Signed-off-by: Marc-André Lureau 
Message-Id: <20220525173411.612224-1-marcandre.lur...@redhat.com>
Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/buildtest-template.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitlab-ci.d/buildtest-template.yml 
b/.gitlab-ci.d/buildtest-template.yml
index dc6d67aacf..b381345dbc 100644
--- a/.gitlab-ci.d/buildtest-template.yml
+++ b/.gitlab-ci.d/buildtest-template.yml
@@ -44,6 +44,8 @@
 expire_in: 7 days
 paths:
   - build/meson-logs/testlog.txt
+reports:
+  junit: build/meson-logs/testlog.junit.xml
 
 .avocado_test_job_template:
   extends: .common_test_job_template
-- 
2.30.2




[PATCH v1 08/33] tests/docker: update debian-mipsel-cross with lcitool

2022-05-27 Thread Alex Bennée
Use lcitool to update debian-mipsel-cross to a Debian 11 based system.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml  |   3 +-
 tests/docker/Makefile.include |   1 -
 .../dockerfiles/debian-mipsel-cross.docker| 179 +++---
 tests/lcitool/refresh |   5 +
 4 files changed, 161 insertions(+), 27 deletions(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index caef7decf4..1a533e6fc0 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -102,8 +102,7 @@ mips-debian-cross-container:
 
 mipsel-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-mipsel-cross
 
diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include
index d9109bcc77..0ac5975419 100644
--- a/tests/docker/Makefile.include
+++ b/tests/docker/Makefile.include
@@ -94,7 +94,6 @@ docker-image-debian-m68k-cross: docker-image-debian10
 docker-image-debian-mips-cross: docker-image-debian10
 docker-image-debian-mips64-cross: docker-image-debian10
 docker-image-debian-mips64el-cross: docker-image-debian10
-docker-image-debian-mipsel-cross: docker-image-debian10
 docker-image-debian-ppc64el-cross: docker-image-debian10
 docker-image-debian-sh4-cross: docker-image-debian10
 docker-image-debian-sparc64-cross: docker-image-debian10
diff --git a/tests/docker/dockerfiles/debian-mipsel-cross.docker 
b/tests/docker/dockerfiles/debian-mipsel-cross.docker
index 0e5dd42d3c..b6d99ae324 100644
--- a/tests/docker/dockerfiles/debian-mipsel-cross.docker
+++ b/tests/docker/dockerfiles/debian-mipsel-cross.docker
@@ -1,31 +1,162 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker mipsel cross-compiler target
+#  $ lcitool dockerfile --layers all --cross mipsel debian-11 qemu
 #
-# This docker target builds on the debian Stretch base image.
-#
-FROM qemu/debian10
+# https://gitlab.com/libvirt/libvirt-ci
 
-MAINTAINER Philippe Mathieu-Daudé 
+FROM docker.io/library/debian:11-slim
 
-# Add the foreign architecture we want and install dependencies
-RUN dpkg --add-architecture mipsel
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-gcc-mipsel-linux-gnu
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libpcre2-dev \
+libspice-protocol-dev \
+llvm \
+locales \
+make \
+meson \
+ncat \
+ninja-build \
+openssh-client \
+perl-base \
+pkgconf \
+python3 \
+python3-numpy \
+python3-opencv \
+python3-pillow \
+python3-pip \
+python3-sphinx \
+python3-sphinx-rtd-theme \
+python3-venv \
+python3-yaml \
+rpm2cpio \
+sed \
+sparse \
+tar \
+tesseract-ocr \
+tesseract-ocr-eng \
+texinfo && \
+eatmydata apt-get autoremove -y && \
+eatmydata apt-get autoclean -y && \
+sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+dpkg-reconfigure locales
 
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy -a mipsel --arch-only qemu
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
 
-# Specify the cross prefix for this image (see tests/docker/common.rc)
-ENV QEMU_CONFIGURE_OPTS --cross-prefix=mipsel-linux-gnu-
+RUN export DEBIAN_FRONTEND=noninteractive && \
+dpkg --add-architecture mipsel && \
+eatmydata apt-get update && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+eatmydata apt-get install --no-install-recommends -y \
+g++-mipsel-linux-gnu \
+gcc-mipsel-linux-gnu \
+libaio-dev:mipsel \
+libasound2-dev:mipsel \
+libattr1-dev:mipsel \
+libbpf-dev:mipsel \
+libbrlapi-dev:mipsel \
+libbz2-dev:mipsel \
+libc6-dev:mipsel \
+libcacard-dev:mipsel \
+libcap-ng-dev:mipsel \
+libcapstone-d

[PATCH v1 06/33] tests/docker: update debian-armhf-cross with lcitool

2022-05-27 Thread Alex Bennée
Use lcitool to update debian-armhf-cross to a Debian 11 based system.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml  |   3 +-
 tests/docker/Makefile.include |   1 -
 .../dockerfiles/debian-armhf-cross.docker | 184 +++---
 tests/lcitool/refresh |   5 +
 4 files changed, 166 insertions(+), 27 deletions(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index ac15fce9b6..4d1830f3fc 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -34,8 +34,7 @@ armel-debian-cross-container:
 
 armhf-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-armhf-cross
 
diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include
index ca2157db46..d6e0710554 100644
--- a/tests/docker/Makefile.include
+++ b/tests/docker/Makefile.include
@@ -90,7 +90,6 @@ endif
 
 docker-image-debian-alpha-cross: docker-image-debian10
 docker-image-debian-armel-cross: docker-image-debian10
-docker-image-debian-armhf-cross: docker-image-debian10
 docker-image-debian-hppa-cross: docker-image-debian10
 docker-image-debian-m68k-cross: docker-image-debian10
 docker-image-debian-mips-cross: docker-image-debian10
diff --git a/tests/docker/dockerfiles/debian-armhf-cross.docker 
b/tests/docker/dockerfiles/debian-armhf-cross.docker
index 25d7618833..a2ebce96f8 100644
--- a/tests/docker/dockerfiles/debian-armhf-cross.docker
+++ b/tests/docker/dockerfiles/debian-armhf-cross.docker
@@ -1,29 +1,165 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker armhf cross-compiler target
+#  $ lcitool dockerfile --layers all --cross armv7l debian-11 qemu
 #
-# This docker target builds on the debian Stretch base image.
-#
-FROM qemu/debian10
+# https://gitlab.com/libvirt/libvirt-ci
 
-# Add the foreign architecture we want and install dependencies
-RUN dpkg --add-architecture armhf
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-crossbuild-essential-armhf
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy -a armhf --arch-only qemu
+FROM docker.io/library/debian:11-slim
 
-# Specify the cross prefix for this image (see tests/docker/common.rc)
-ENV QEMU_CONFIGURE_OPTS --cross-prefix=arm-linux-gnueabihf-
-ENV DEF_TARGET_LIST arm-softmmu,arm-linux-user,armeb-linux-user
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libpcre2-dev \
+libspice-protocol-dev \
+llvm \
+locales \
+make \
+meson \
+ncat \
+ninja-build \
+openssh-client \
+perl-base \
+pkgconf \
+python3 \
+python3-numpy \
+python3-opencv \
+python3-pillow \
+python3-pip \
+python3-sphinx \
+python3-sphinx-rtd-theme \
+python3-venv \
+python3-yaml \
+rpm2cpio \
+sed \
+sparse \
+tar \
+tesseract-ocr \
+tesseract-ocr-eng \
+texinfo && \
+eatmydata apt-get autoremove -y && \
+eatmydata apt-get autoclean -y && \
+sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+dpkg-reconfigure locales
+
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
 
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-libbz2-dev:armhf \
-liblzo2-dev:armhf \
-librdmacm-dev:armhf \
-libsnappy-dev:armhf \
-libxen-dev:armhf
+RUN export DEBIAN_FRONTEND=noninteractive && \
+dpkg --add-architecture armhf && \
+eatmydata apt-get update && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+eatmydata apt-get install --no-install-recommends -y \
+g++-arm-linux-gnueabihf \
+gcc-arm-linux-gnueabihf \
+libaio-dev:armhf \
+libasan5:armhf \
+libasound2-dev:armhf \
+libattr1-dev:armhf \
+libbpf-dev:

[PATCH v1 04/33] meson.build: fix summary display of test compilers

2022-05-27 Thread Alex Bennée
The recent refactoring of configure.sh dropped a number of variables
we relied on for printing out information. Make it simpler.

Fixes: eebf199c09 (tests/tcg: invoke Makefile.target directly from QEMU's 
makefile)
Signed-off-by: Alex Bennée 
---
 meson.build | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/meson.build b/meson.build
index df7c34b076..b622f37a34 100644
--- a/meson.build
+++ b/meson.build
@@ -3732,12 +3732,8 @@ foreach target: target_dirs
 config_cross_tcg = keyval.load(tcg_mak)
 target = config_cross_tcg['TARGET_NAME']
 compiler = ''
-if 'DOCKER_CROSS_CC_GUEST' in config_cross_tcg
-  summary_info += {target + ' tests': 
config_cross_tcg['DOCKER_CROSS_CC_GUEST'] +
-  ' via ' + 
config_cross_tcg['DOCKER_IMAGE']}
-elif 'CROSS_CC_GUEST' in config_cross_tcg
-  summary_info += {target + ' tests'
-: config_cross_tcg['CROSS_CC_GUEST'] }
+if 'CC' in config_cross_tcg
+  summary_info += {target + ' tests': config_cross_tcg['CC']}
 endif
endif
 endforeach
-- 
2.30.2




[PATCH v1 10/33] tests/docker: update debian-ppc64el-cross with lcitool

2022-05-27 Thread Alex Bennée
Use lcitool to update debian-ppc64el-cross to a Debian 11 based system.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml  |   3 +-
 tests/docker/Makefile.include |   1 -
 .../dockerfiles/debian-ppc64el-cross.docker   | 178 +++---
 tests/lcitool/refresh |   5 +
 4 files changed, 163 insertions(+), 24 deletions(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 411dc06bf8..147e667744 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -114,8 +114,7 @@ powerpc-test-cross-container:
 
 ppc64el-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-ppc64el-cross
 
diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include
index d9f37ae8fa..e68f91b853 100644
--- a/tests/docker/Makefile.include
+++ b/tests/docker/Makefile.include
@@ -93,7 +93,6 @@ docker-image-debian-hppa-cross: docker-image-debian10
 docker-image-debian-m68k-cross: docker-image-debian10
 docker-image-debian-mips-cross: docker-image-debian10
 docker-image-debian-mips64-cross: docker-image-debian10
-docker-image-debian-ppc64el-cross: docker-image-debian10
 docker-image-debian-sh4-cross: docker-image-debian10
 docker-image-debian-sparc64-cross: docker-image-debian10
 
diff --git a/tests/docker/dockerfiles/debian-ppc64el-cross.docker 
b/tests/docker/dockerfiles/debian-ppc64el-cross.docker
index 5de12b01cd..bcf04bc90b 100644
--- a/tests/docker/dockerfiles/debian-ppc64el-cross.docker
+++ b/tests/docker/dockerfiles/debian-ppc64el-cross.docker
@@ -1,28 +1,164 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker ppc64el cross-compiler target
+#  $ lcitool dockerfile --layers all --cross ppc64le debian-11 qemu
 #
-# This docker target builds on the debian Stretch base image.
-#
-FROM qemu/debian10
+# https://gitlab.com/libvirt/libvirt-ci
+
+FROM docker.io/library/debian:11-slim
 
-# Add the foreign architecture we want and install dependencies
-RUN dpkg --add-architecture ppc64el && \
-apt update && \
-apt install -yy crossbuild-essential-ppc64el
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libpcre2-dev \
+libspice-protocol-dev \
+llvm \
+locales \
+make \
+meson \
+ncat \
+ninja-build \
+openssh-client \
+perl-base \
+pkgconf \
+python3 \
+python3-numpy \
+python3-opencv \
+python3-pillow \
+python3-pip \
+python3-sphinx \
+python3-sphinx-rtd-theme \
+python3-venv \
+python3-yaml \
+rpm2cpio \
+sed \
+sparse \
+tar \
+tesseract-ocr \
+tesseract-ocr-eng \
+texinfo && \
+eatmydata apt-get autoremove -y && \
+eatmydata apt-get autoclean -y && \
+sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+dpkg-reconfigure locales
 
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy -a ppc64el --arch-only qemu
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
 
-# Specify the cross prefix for this image (see tests/docker/common.rc)
+RUN export DEBIAN_FRONTEND=noninteractive && \
+dpkg --add-architecture ppc64el && \
+eatmydata apt-get update && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+eatmydata apt-get install --no-install-recommends -y \
+g++-powerpc64le-linux-gnu \
+gcc-powerpc64le-linux-gnu \
+libaio-dev:ppc64el \
+libasan5:ppc64el \
+libasound2-dev:ppc64el \
+libattr1-dev:ppc64el \
+libbpf-dev:ppc64el \
+libbrlapi-dev:ppc64el \
+libbz2-dev:ppc64el \
+libc6-dev:ppc64el \
+libcacard-dev:ppc64el \
+libcap-ng-dev:ppc64el \
+libcapstone-dev:ppc64el \
+libcurl4-gnutls-dev:ppc64el \
+libdaxctl-dev:ppc64el \
+libdrm-dev:ppc64el \
+libepoxy-dev:ppc64

[PATCH v1 07/33] tests/docker: update debian-armel-cross with lcitool

2022-05-27 Thread Alex Bennée
Use lcitool to update debian-armel-cross to a Debian 11 based system.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml  |   3 +-
 tests/docker/Makefile.include |   1 -
 .../dockerfiles/debian-armel-cross.docker | 178 --
 tests/lcitool/refresh |   5 +
 4 files changed, 164 insertions(+), 23 deletions(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 4d1830f3fc..caef7decf4 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -27,8 +27,7 @@ arm64-debian-cross-container:
 
 armel-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-armel-cross
 
diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include
index d6e0710554..d9109bcc77 100644
--- a/tests/docker/Makefile.include
+++ b/tests/docker/Makefile.include
@@ -89,7 +89,6 @@ DOCKER_PARTIAL_IMAGES += fedora
 endif
 
 docker-image-debian-alpha-cross: docker-image-debian10
-docker-image-debian-armel-cross: docker-image-debian10
 docker-image-debian-hppa-cross: docker-image-debian10
 docker-image-debian-m68k-cross: docker-image-debian10
 docker-image-debian-mips-cross: docker-image-debian10
diff --git a/tests/docker/dockerfiles/debian-armel-cross.docker b/tests/docker/dockerfiles/debian-armel-cross.docker
index b7b1a3585f..a6153e5a83 100644
--- a/tests/docker/dockerfiles/debian-armel-cross.docker
+++ b/tests/docker/dockerfiles/debian-armel-cross.docker
@@ -1,26 +1,164 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker armel cross-compiler target
+#  $ lcitool dockerfile --layers all --cross armv6l debian-11 qemu
 #
-# This docker target builds on the debian Stretch base image.
-#
-FROM qemu/debian10
-MAINTAINER Philippe Mathieu-Daudé 
+# https://gitlab.com/libvirt/libvirt-ci
+
+FROM docker.io/library/debian:11-slim
+
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libpcre2-dev \
+libspice-protocol-dev \
+llvm \
+locales \
+make \
+meson \
+ncat \
+ninja-build \
+openssh-client \
+perl-base \
+pkgconf \
+python3 \
+python3-numpy \
+python3-opencv \
+python3-pillow \
+python3-pip \
+python3-sphinx \
+python3-sphinx-rtd-theme \
+python3-venv \
+python3-yaml \
+rpm2cpio \
+sed \
+sparse \
+tar \
+tesseract-ocr \
+tesseract-ocr-eng \
+texinfo && \
+eatmydata apt-get autoremove -y && \
+eatmydata apt-get autoclean -y && \
+sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+dpkg-reconfigure locales
 
-# Add the foreign architecture we want and install dependencies
-RUN dpkg --add-architecture armel && \
-apt update && \
-apt install -yy crossbuild-essential-armel && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy -a armel --arch-only qemu
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
 
-# Specify the cross prefix for this image (see tests/docker/common.rc)
+RUN export DEBIAN_FRONTEND=noninteractive && \
+dpkg --add-architecture armel && \
+eatmydata apt-get update && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+eatmydata apt-get install --no-install-recommends -y \
+g++-arm-linux-gnueabi \
+gcc-arm-linux-gnueabi \
+libaio-dev:armel \
+libasan5:armel \
+libasound2-dev:armel \
+libattr1-dev:armel \
+libbpf-dev:armel \
+libbrlapi-dev:armel \
+libbz2-dev:armel \
+libc6-dev:armel \
+libcacard-dev:armel \
+libcap-ng-dev:armel \
+libcapstone-dev:armel \
+libcurl4-gnutls-dev:armel \
+libdaxctl-dev:armel \
+libdrm-dev:armel \
+libepoxy-dev:armel \
+libfdt-dev:armel \
+libffi-dev:armel \
+libfuse3-dev:armel \
+libg

[PATCH v1 14/33] build: add a more generic way to specify make->ninja dependencies

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

Let any make target specify ninja goals that need to be built for it
(though selecting the goals is _not_ recursive on depending targets)
instead of having a custom mechanism only for "make check" and "make
bench".

Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-4-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 Makefile  | 3 +--
 scripts/mtest2make.py | 8 
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/Makefile b/Makefile
index fad312040f..3c0d89057e 100644
--- a/Makefile
+++ b/Makefile
@@ -145,8 +145,7 @@ NINJAFLAGS = $(if $V,-v) $(if $(MAKE.n), -n) $(if $(MAKE.k), -k0) \
 $(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS \
 -d keepdepfile
 ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
-ninja-cmd-goals += $(foreach t, $(.check.build-suites), $(.check-$t.deps))
-ninja-cmd-goals += $(foreach t, $(.bench.build-suites), $(.bench-$t.deps))
+ninja-cmd-goals += $(foreach g, $(MAKECMDGOALS), $(.ninja-goals.$g))
 
 makefile-targets := build.ninja ctags TAGS cscope dist clean uninstall
 # "ninja -t targets" also lists all prerequisites.  If build system
diff --git a/scripts/mtest2make.py b/scripts/mtest2make.py
index 304634b71e..0fe81efbbc 100644
--- a/scripts/mtest2make.py
+++ b/scripts/mtest2make.py
@@ -81,12 +81,12 @@ def emit_prolog(suites, prefix):
 
 def emit_suite_deps(name, suite, prefix):
 deps = ' '.join(suite.deps)
-targets = f'{prefix}-{name} {prefix}-report-{name}.junit.xml {prefix} {prefix}-report.junit.xml'
+targets = [f'{prefix}-{name}', f'{prefix}-report-{name}.junit.xml', f'{prefix}', f'{prefix}-report.junit.xml',
+   f'{prefix}-build']
 print()
 print(f'.{prefix}-{name}.deps = {deps}')
-print(f'ifneq ($(filter {prefix}-build {targets}, $(MAKECMDGOALS)),)')
-print(f'.{prefix}.build-suites += {name}')
-print(f'endif')
+for t in targets:
+print(f'.ninja-goals.{t} += $(.{prefix}-{name}.deps)')
 
 def emit_suite(name, suite, prefix):
 emit_suite_deps(name, suite, prefix)
-- 
2.30.2
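The emission scheme this patch switches mtest2make.py to can be sketched in isolation: print one ".ninja-goals.<make-target> +=" line per target, which the Makefile then collects with a single $(foreach ...) instead of per-suite ifneq blocks. The suite name and dependency below are made-up example values, not ones generated by the real script.

```shell
# Emit the per-goal dependency lines for one hypothetical test suite.
prefix=check
name=qtest
deps="tests/qtest/qos-test"

echo ".${prefix}-${name}.deps = ${deps}"
for t in "${prefix}-${name}" "${prefix}-report-${name}.junit.xml" \
         "${prefix}" "${prefix}-report.junit.xml" "${prefix}-build"; do
    # each make target accumulates the ninja goals it depends on
    echo ".ninja-goals.${t} += \$(.${prefix}-${name}.deps)"
done
```

Running this prints a `.ninja-goals.check-build += $(.check-qtest.deps)` line (among others), matching the shape of the lines the new emit_suite_deps produces.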




[PATCH v1 05/33] tests/lcitool: fix up indentation to correct style

2022-05-27 Thread Alex Bennée
3 space indentation snuck into the initial commit. Clean it up before
we let it get established. I've also:

  - removed unused os import
  - added double blank lines between functions
  - added some comments and grouped and sorted the generation stanzas

My lint tool is also recommending using f-strings but that requires
python 3.6.

Signed-off-by: Alex Bennée 
Cc: Daniel P. Berrangé 
---
 tests/lcitool/refresh | 134 --
 1 file changed, 76 insertions(+), 58 deletions(-)

diff --git a/tests/lcitool/refresh b/tests/lcitool/refresh
index fb49bbc441..dc1fc21ef9 100755
--- a/tests/lcitool/refresh
+++ b/tests/lcitool/refresh
@@ -13,14 +13,13 @@
 # the top-level directory.
 
 import sys
-import os
 import subprocess
 
 from pathlib import Path
 
 if len(sys.argv) != 1:
-   print("syntax: %s" % sys.argv[0], file=sys.stderr)
-   sys.exit(1)
+print("syntax: %s" % sys.argv[0], file=sys.stderr)
+sys.exit(1)
 
 self_dir = Path(__file__).parent
 src_dir = self_dir.parent.parent
@@ -30,76 +29,95 @@ lcitool_path = Path(self_dir, "libvirt-ci", "lcitool")
 
 lcitool_cmd = [lcitool_path, "--data-dir", self_dir]
 
+
 def atomic_write(filename, content):
-   tmp = filename.with_suffix(filename.suffix + ".tmp")
-   try:
-  with tmp.open("w") as fp:
- print(content, file=fp, end="")
- tmp.rename(filename)
-   except Exception as ex:
-  tmp.unlink()
-  raise
+tmp = filename.with_suffix(filename.suffix + ".tmp")
+try:
+with tmp.open("w") as fp:
+print(content, file=fp, end="")
+tmp.rename(filename)
+except Exception as ex:
+tmp.unlink()
+raise
+
 
 def generate(filename, cmd, trailer):
-   print("Generate %s" % filename)
-   lcitool=subprocess.run(cmd, capture_output=True)
+print("Generate %s" % filename)
+lcitool = subprocess.run(cmd, capture_output=True)
 
-   if lcitool.returncode != 0:
-  raise Exception("Failed to generate %s: %s" % (filename, lcitool.stderr))
+if lcitool.returncode != 0:
+raise Exception("Failed to generate %s: %s" % (filename, lcitool.stderr))
+
+content = lcitool.stdout.decode("utf8")
+if trailer is not None:
+content += trailer
+atomic_write(filename, content)
 
-   content = lcitool.stdout.decode("utf8")
-   if trailer is not None:
-  content += trailer
-   atomic_write(filename, content)
 
 def generate_dockerfile(host, target, cross=None, trailer=None):
-   filename = Path(src_dir, "tests", "docker", "dockerfiles", host + ".docker")
-   cmd = lcitool_cmd + ["dockerfile"]
-   if cross is not None:
-  cmd.extend(["--cross", cross])
-   cmd.extend([target, "qemu"])
-   generate(filename, cmd, trailer)
+filename = Path(src_dir, "tests", "docker", "dockerfiles", host + ".docker")
+cmd = lcitool_cmd + ["dockerfile"]
+if cross is not None:
+cmd.extend(["--cross", cross])
+cmd.extend([target, "qemu"])
+generate(filename, cmd, trailer)
+
 
 def generate_cirrus(target, trailer=None):
-   filename = Path(src_dir, ".gitlab-ci.d", "cirrus", target + ".vars")
-   cmd = lcitool_cmd + ["variables", target, "qemu"]
-   generate(filename, cmd, trailer)
+filename = Path(src_dir, ".gitlab-ci.d", "cirrus", target + ".vars")
+cmd = lcitool_cmd + ["variables", target, "qemu"]
+generate(filename, cmd, trailer)
+
 
 ubuntu2004_tsanhack = [
-   "# Apply patch https://reviews.llvm.org/D75820\n",
-   "# This is required for TSan in clang-10 to compile with QEMU.\n",
-   "RUN sed -i 's/^const/static const/g' /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h\n"
+"# Apply patch https://reviews.llvm.org/D75820\n",
+"# This is required for TSan in clang-10 to compile with QEMU.\n",
+"RUN sed -i 's/^const/static const/g' /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h\n"
 ]
 
+
 def debian_cross_build(prefix, targets):
-   conf = "ENV QEMU_CONFIGURE_OPTS --cross-prefix=%s\n" % (prefix)
-   targets = "ENV DEF_TARGET_LIST %s\n" % (targets)
-   return "".join([conf, targets])
+conf = "ENV QEMU_CONFIGURE_OPTS --cross-prefix=%s\n" % (prefix)
+targets = "ENV DEF_TARGET_LIST %s\n" % (targets)
+return "".join([conf, targets])
 
+#
+# Update all the various build configurations.
+# Please keep each group sorted alphabetically for easy reading.
+#
 
 try:
-   generate_dockerfile("centos8", "centos-stream-8")
-   generate_dockerfile("fedora", "fedora-35")
-   generate_dockerfile("ubuntu2004", "ubuntu-2004",
-   trailer="".join(ubuntu2004_tsanhack))
-   generate_dockerfile("opensuse-leap", "opensuse-leap-152")
-   generate_dockerfile("alpine", "alpine-edge")
-
-   generate_dockerfile("debian-arm64-cross", "debian-11",
-   cross="aarch64",
-   trailer=debian_cross_build("aarch64-linux-gnu-",
-  "aarch64-softmmu,aarch64-linux-user"))
-
-   generate_docker
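The atomic_write helper in the refresh script above uses the classic write-to-temp-then-rename pattern; rename within one filesystem is atomic on POSIX, so readers never observe a half-written file. The same idea in shell, with illustrative paths rather than the script's real ones:

```shell
# Write content to a temporary sibling file, then rename it over the
# destination; delete nothing on success, leave no .tmp behind.
workdir=$(mktemp -d)
target="$workdir/debian-armel-cross.docker"

atomic_write() {
    tmp="$1.tmp"
    printf '%s' "$2" > "$tmp" && mv "$tmp" "$1"   # rename only if the write succeeded
}

atomic_write "$target" '# THIS FILE WAS AUTO-GENERATED
FROM docker.io/library/debian:11-slim
'
cat "$target"
```

If the write fails, the destination file is untouched, which is exactly why the script routes all generated dockerfiles through this helper.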

[PATCH v1 18/33] tests/tcg: merge configure.sh back into main configure script

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

tests/tcg/configure.sh has a complicated story.

In the beginning its code ran as part of the creation of config-target.mak
files, and that is where it placed the information on the target compiler.
However, probing for the buildability of TCG tests required multiple
inclusions of config-target.mak in the _main_ Makefile (not in
Makefile.target, which took care of building the QEMU executables in
the pre-Meson era), which polluted the namespace.

Thus, it was moved to a separate directory.  It created small config-*.mak
files in $(BUILD_DIR)/tests/tcg.  Those were also included multiple
times, but at least they were small and manageable; this was also an
important step in disentangling the TCG tests from Makefile.target.

Since then, Meson has allowed the configure script to go on a diet.
A few compilation tests survive (mostly for sanitizers) but these days
it mostly takes care of command line parsing, looking for tools, and
setting up the environment for Meson to do its stuff.

It's time to extend configure with the capability to build for more
than just one target: not just tests, but also firmware.  As a first
step, integrate all the logic to find cross compilers in the configure
script, and move tests/tcg/configure.sh back there (though as a
separate loop, not integrated in the one that generates target
configurations for Meson).

tests/tcg is actually very close to being buildable as a standalone
project, so I actually expect the compiler tests to move back to
tests/tcg, as a "configure" script of sorts which would run at Make
time after the docker images are built.  The GCC tree has a similar idea
of doing only bare-bones tree-wide configuration and leaving the rest
for Make time.

Signed-off-by: Paolo Bonzini 
Acked-by: Richard Henderson 
Message-Id: <20220517092616.1272238-8-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure  | 398 +++--
 tests/Makefile.include |   1 -
 tests/tcg/configure.sh | 376 --
 3 files changed, 388 insertions(+), 387 deletions(-)
 delete mode 100755 tests/tcg/configure.sh

diff --git a/configure b/configure
index 2138f61e54..bb05e70bcc 100755
--- a/configure
+++ b/configure
@@ -109,6 +109,20 @@ error_exit() {
 }
 
 do_compiler() {
+  # Run the compiler, capturing its output to the log. First argument
+  # is compiler binary to execute.
+  local compiler="$1"
+  shift
+  if test -n "$BASH_VERSION"; then eval '
+  echo >>config.log "
+funcs: ${FUNCNAME[*]}
+lines: ${BASH_LINENO[*]}"
+  '; fi
+  echo $compiler "$@" >> config.log
+  $compiler "$@" >> config.log 2>&1 || return $?
+}
+
+do_compiler_werror() {
 # Run the compiler, capturing its output to the log. First argument
 # is compiler binary to execute.
 compiler="$1"
@@ -142,15 +156,15 @@ lines: ${BASH_LINENO[*]}"
 }
 
 do_cc() {
-do_compiler "$cc" $CPU_CFLAGS "$@"
+do_compiler_werror "$cc" $CPU_CFLAGS "$@"
 }
 
 do_cxx() {
-do_compiler "$cxx" $CPU_CFLAGS "$@"
+do_compiler_werror "$cxx" $CPU_CFLAGS "$@"
 }
 
 do_objc() {
-do_compiler "$objcc" $CPU_CFLAGS "$@"
+do_compiler_werror "$objcc" $CPU_CFLAGS "$@"
 }
 
 # Append $2 to the variable named $1, with space separation
@@ -345,11 +359,9 @@ for opt do
   ;;
  --cross-cc-cflags-*) cc_arch=${opt#--cross-cc-cflags-}; cc_arch=${cc_arch%%=*}
   eval "cross_cc_cflags_${cc_arch}=\$optarg"
-  cross_cc_vars="$cross_cc_vars cross_cc_cflags_${cc_arch}"
   ;;
   --cross-cc-*) cc_arch=${opt#--cross-cc-}; cc_arch=${cc_arch%%=*}
 eval "cross_cc_${cc_arch}=\$optarg"
-cross_cc_vars="$cross_cc_vars cross_cc_${cc_arch}"
   ;;
   esac
 done
@@ -944,7 +956,6 @@ esac
 
 if eval test -z "\${cross_cc_$cpu}"; then
 eval "cross_cc_${cpu}=\$cc"
-cross_cc_vars="$cross_cc_vars cross_cc_${cpu}"
 fi
 
 default_target_list=""
@@ -1795,6 +1806,248 @@ case "$slirp" in
 ;;
 esac
 
+##
+# functions to probe cross compilers
+
+container="no"
+if test $use_containers = "yes"; then
+if has "docker" || has "podman"; then
+container=$($python $source_path/tests/docker/docker.py probe)
+fi
+fi
+
+# cross compilers defaults, can be overridden with --cross-cc-ARCH
+: ${cross_cc_aarch64="aarch64-linux-gnu-gcc"}
+: ${cross_cc_aarch64_be="$cross_cc_aarch64"}
+: ${cross_cc_cflags_aarch64_be="-mbig-endian"}
+: ${cross_cc_alpha="alpha-linux-gnu-gcc"}
+: ${cross_cc_arm="arm-linux-gnueabihf-gcc"}
+: ${cross_cc_cflags_armeb="-mbig-endian"}
+: ${cross_cc_hexagon="hexagon-unknown-linux-musl-clang"}
+: ${cross_cc_cflags_hexagon="-mv67 -O2 -static"}
+: ${cross_cc_hppa="hppa-linux-gnu-gcc"}
+: ${cross_cc_i386="i686-linux-gnu-gcc"}
+: ${cross_cc_cflags_i386="-m32"}
+: ${cross_cc_m68k="m68k-linux-gnu-gcc"}
+: ${cross_cc_microblaze="microblaze-linux-musl-gcc"}
+: ${cross_cc_mips64el="mips64el-linux-gnuabi64-gcc"}
+: ${cross_cc_mips64="m
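The `: ${cross_cc_ARCH="..."}` lines above rely on POSIX default-assignment parameter expansion: the assignment only happens when the variable is unset, so a value supplied earlier via `--cross-cc-ARCH` wins, and `:` merely evaluates its arguments and discards them. A minimal sketch (the compiler names are illustrative, and nothing is executed):

```shell
# Default-assignment expansion: set a fallback only when no value exists.
cross_cc_i386="my-custom-i686-gcc"          # pretend --cross-cc-i386 was given
: ${cross_cc_i386="i686-linux-gnu-gcc"}     # no effect, variable already set
: ${cross_cc_alpha="alpha-linux-gnu-gcc"}   # unset, so the default is assigned

echo "$cross_cc_i386"    # prints my-custom-i686-gcc
echo "$cross_cc_alpha"   # prints alpha-linux-gnu-gcc
```

This is why the configure option parsing earlier in the script can simply `eval "cross_cc_${cc_arch}=\$optarg"` and trust the defaults block not to clobber it.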

Re: [PATCH v1 06/33] tests/docker: update debian-armhf-cross with lcitool

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 04:35:36PM +0100, Alex Bennée wrote:
> Use lcitool to update debian-armhf-cross to a Debian 11 based system.
> 
> Signed-off-by: Alex Bennée 
> ---
>  .gitlab-ci.d/container-cross.yml  |   3 +-
>  tests/docker/Makefile.include |   1 -
>  .../dockerfiles/debian-armhf-cross.docker | 184 +++---
>  tests/lcitool/refresh |   5 +
>  4 files changed, 166 insertions(+), 27 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH v1 09/33] tests/docker: update debian-mips64el-cross with lcitool

2022-05-27 Thread Alex Bennée
Use lcitool to update debian-mips64el-cross to a Debian 11 based system.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/container-cross.yml  |   3 +-
 tests/docker/Makefile.include |   1 -
 .../dockerfiles/debian-mips64el-cross.docker  | 177 +++---
 tests/lcitool/refresh |   5 +
 4 files changed, 159 insertions(+), 27 deletions(-)

diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 1a533e6fc0..411dc06bf8 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -88,8 +88,7 @@ mips64-debian-cross-container:
 
 mips64el-debian-cross-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-mips64el-cross
 
diff --git a/tests/docker/Makefile.include b/tests/docker/Makefile.include
index 0ac5975419..d9f37ae8fa 100644
--- a/tests/docker/Makefile.include
+++ b/tests/docker/Makefile.include
@@ -93,7 +93,6 @@ docker-image-debian-hppa-cross: docker-image-debian10
 docker-image-debian-m68k-cross: docker-image-debian10
 docker-image-debian-mips-cross: docker-image-debian10
 docker-image-debian-mips64-cross: docker-image-debian10
-docker-image-debian-mips64el-cross: docker-image-debian10
 docker-image-debian-ppc64el-cross: docker-image-debian10
 docker-image-debian-sh4-cross: docker-image-debian10
 docker-image-debian-sparc64-cross: docker-image-debian10
diff --git a/tests/docker/dockerfiles/debian-mips64el-cross.docker b/tests/docker/dockerfiles/debian-mips64el-cross.docker
index c990b683b7..b02dcb7fd9 100644
--- a/tests/docker/dockerfiles/debian-mips64el-cross.docker
+++ b/tests/docker/dockerfiles/debian-mips64el-cross.docker
@@ -1,33 +1,162 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker mips64el cross-compiler target
-#
-# This docker target builds on the debian Stretch base image.
+#  $ lcitool dockerfile --layers all --cross mips64el debian-11 qemu
 #
+# https://gitlab.com/libvirt/libvirt-ci
 
-FROM qemu/debian10
+FROM docker.io/library/debian:11-slim
 
-MAINTAINER Philippe Mathieu-Daudé 
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libpcre2-dev \
+libspice-protocol-dev \
+llvm \
+locales \
+make \
+meson \
+ncat \
+ninja-build \
+openssh-client \
+perl-base \
+pkgconf \
+python3 \
+python3-numpy \
+python3-opencv \
+python3-pillow \
+python3-pip \
+python3-sphinx \
+python3-sphinx-rtd-theme \
+python3-venv \
+python3-yaml \
+rpm2cpio \
+sed \
+sparse \
+tar \
+tesseract-ocr \
+tesseract-ocr-eng \
+texinfo && \
+eatmydata apt-get autoremove -y && \
+eatmydata apt-get autoclean -y && \
+sed -Ei 's,^# (en_US\.UTF-8 .*)$,\1,' /etc/locale.gen && \
+dpkg-reconfigure locales
 
-# Add the foreign architecture we want and install dependencies
-RUN dpkg --add-architecture mips64el && \
-apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-gcc-mips64el-linux-gnuabi64
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
 
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy -a mips64el --arch-only qemu
+RUN export DEBIAN_FRONTEND=noninteractive && \
+dpkg --add-architecture mips64el && \
+eatmydata apt-get update && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
+eatmydata apt-get install --no-install-recommends -y \
+g++-mips64el-linux-gnuabi64 \
+gcc-mips64el-linux-gnuabi64 \
+libaio-dev:mips64el \
+libasound2-dev:mips64el \
+libattr1-dev:mips64el \
+libbpf-dev:mips64el \
+libbrlapi-dev:mips64el \
+libbz2-dev:mips64el \
+libc6-dev:mips64el \
+libcacard-dev:mips64el \
+libcap-ng-dev:mips64el \
+libcapstone-dev:mips64el \
+libcurl4-gnutls-dev:mips64el \
+ 

Re: [PATCH v1 05/33] tests/lcitool: fix up indentation to correct style

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 04:35:35PM +0100, Alex Bennée wrote:
> 3 space indentation snuck into the initial commit. Clean it up before
> we let it get established. I've also:
> 
>   - removed unused os import
>   - added double lines between functions
>   - added some comments and grouped and sorted the generation stanzas
> 
> My lint tool is also recommending using f-strings but that requires
> python 3.6.
> 
> Signed-off-by: Alex Bennée 
> Cc: Daniel P. Berrangé 
> ---
>  tests/lcitool/refresh | 134 --
>  1 file changed, 76 insertions(+), 58 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH v1 12/33] configure: do not define or use the CPP variable

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

Just hardcode $(CC) -E, it should be enough.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Richard Henderson 
Message-Id: <20220517092616.1272238-2-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure  | 3 ---
 pc-bios/optionrom/Makefile | 2 +-
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/configure b/configure
index 180ee688dc..7a071c161a 100755
--- a/configure
+++ b/configure
@@ -376,7 +376,6 @@ fi
 ar="${AR-${cross_prefix}ar}"
 as="${AS-${cross_prefix}as}"
 ccas="${CCAS-$cc}"
-cpp="${CPP-$cc -E}"
 objcopy="${OBJCOPY-${cross_prefix}objcopy}"
 ld="${LD-${cross_prefix}ld}"
 ranlib="${RANLIB-${cross_prefix}ranlib}"
@@ -2012,7 +2011,6 @@ echo "CC=$cc" >> $config_host_mak
 echo "AR=$ar" >> $config_host_mak
 echo "AS=$as" >> $config_host_mak
 echo "CCAS=$ccas" >> $config_host_mak
-echo "CPP=$cpp" >> $config_host_mak
 echo "OBJCOPY=$objcopy" >> $config_host_mak
 echo "LD=$ld" >> $config_host_mak
 echo "QEMU_CFLAGS=$QEMU_CFLAGS" >> $config_host_mak
@@ -2254,7 +2252,6 @@ preserve_env() {
 preserve_env AR
 preserve_env AS
 preserve_env CC
-preserve_env CPP
 preserve_env CFLAGS
 preserve_env CXX
 preserve_env CXXFLAGS
diff --git a/pc-bios/optionrom/Makefile b/pc-bios/optionrom/Makefile
index 2494ad9c25..17ccc76241 100644
--- a/pc-bios/optionrom/Makefile
+++ b/pc-bios/optionrom/Makefile
@@ -50,7 +50,7 @@ override LDFLAGS = -m $(LD_I386_EMULATION) -T $(SRC_DIR)/flat.lds
 pvh.img: pvh.o pvh_main.o
 
 %.o: %.S
-   $(call quiet-command,$(CPP) $(CPPFLAGS) -c -o - $< | $(AS) $(ASFLAGS) -o $@,"AS","$@")
+   $(call quiet-command,$(CC) $(CPPFLAGS) -E -o - $< | $(AS) $(ASFLAGS) -o $@,"AS","$@")
 
 %.o: %.c
$(call quiet-command,$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@,"CC","$@")
-- 
2.30.2




Re: [PATCH v1 09/33] tests/docker: update debian-mips64el-cross with lcitool

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 04:35:39PM +0100, Alex Bennée wrote:
> Use lcitool to update debian-mips64el-cross to a Debian 11 based system.
> 
> Signed-off-by: Alex Bennée 
> ---
>  .gitlab-ci.d/container-cross.yml  |   3 +-
>  tests/docker/Makefile.include |   1 -
>  .../dockerfiles/debian-mips64el-cross.docker  | 177 +++---
>  tests/lcitool/refresh |   5 +
>  4 files changed, 159 insertions(+), 27 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH v1 23/33] configure: move symlink configuration earlier

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

Ensure that the pc-bios/optionrom and pc-bios/s390-ccw directory
exist at the time when we'll write out the compiler configuration
for them.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-13-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure | 49 -
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/configure b/configure
index b8c21e096c..82c2ddc79a 100755
--- a/configure
+++ b/configure
@@ -2187,6 +2187,30 @@ fi
 
 QEMU_GA_MSI_MINGW_BIN_PATH="$($pkg_config --variable=prefix glib-2.0)/bin"
 
+# Set up build tree symlinks that point back into the source tree
+# (these can be both files and directories).
+# Caution: avoid adding files or directories here using wildcards. This
+# will result in problems later if a new file matching the wildcard is
+# added to the source tree -- nothing will cause configure to be rerun
+# so the build tree will be missing the link back to the new file, and
+# tests might fail. Prefer to keep the relevant files in their own
+# directory and symlink the directory instead.
+LINKS="Makefile"
+LINKS="$LINKS tests/tcg/Makefile.target"
+LINKS="$LINKS pc-bios/optionrom/Makefile"
+LINKS="$LINKS pc-bios/s390-ccw/Makefile"
+LINKS="$LINKS .gdbinit scripts" # scripts needed by relative path in .gdbinit
+LINKS="$LINKS tests/avocado tests/data"
+LINKS="$LINKS tests/qemu-iotests/check"
+LINKS="$LINKS python"
+LINKS="$LINKS contrib/plugins/Makefile "
+for f in $LINKS ; do
+if [ -e "$source_path/$f" ]; then
+mkdir -p `dirname ./$f`
+symlink "$source_path/$f" "$f"
+fi
+done
+
 # Mac OS X ships with a broken assembler
 roms=
 if { test "$cpu" = "i386" || test "$cpu" = "x86_64"; } && \
@@ -2405,31 +2429,6 @@ if test "$safe_stack" = "yes"; then
   echo "CONFIG_SAFESTACK=y" >> $config_host_mak
 fi
 
-# If we're using a separate build tree, set it up now.
-# LINKS are things to symlink back into the source tree
-# (these can be both files and directories).
-# Caution: do not add files or directories here using wildcards. This
-# will result in problems later if a new file matching the wildcard is
-# added to the source tree -- nothing will cause configure to be rerun
-# so the build tree will be missing the link back to the new file, and
-# tests might fail. Prefer to keep the relevant files in their own
-# directory and symlink the directory instead.
-LINKS="Makefile"
-LINKS="$LINKS tests/tcg/Makefile.target"
-LINKS="$LINKS pc-bios/optionrom/Makefile"
-LINKS="$LINKS pc-bios/s390-ccw/Makefile"
-LINKS="$LINKS .gdbinit scripts" # scripts needed by relative path in .gdbinit
-LINKS="$LINKS tests/avocado tests/data"
-LINKS="$LINKS tests/qemu-iotests/check"
-LINKS="$LINKS python"
-LINKS="$LINKS contrib/plugins/Makefile "
-for f in $LINKS ; do
-if [ -e "$source_path/$f" ]; then
-mkdir -p `dirname ./$f`
-symlink "$source_path/$f" "$f"
-fi
-done
-
 # tests/tcg configuration
 (makefile=tests/tcg/Makefile.prereqs
 echo "# Automatically generated by configure - do not modify" > $makefile
-- 
2.30.2
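The LINKS loop that this patch moves earlier can be exercised standalone. The sketch below symlinks an explicit file list from a source tree into a separate build tree, skipping entries that do not exist; all paths are illustrative, and note the deliberate absence of wildcards, per the caution in the patch's comment:

```shell
# Symlink selected source-tree files into a fresh build tree.
source_path=$(mktemp -d)
build_dir=$(mktemp -d)
mkdir -p "$source_path/tests/tcg"
echo 'all:' > "$source_path/Makefile"
echo 'include config.mak' > "$source_path/tests/tcg/Makefile.target"

cd "$build_dir"
LINKS="Makefile"
LINKS="$LINKS tests/tcg/Makefile.target"
LINKS="$LINKS pc-bios/not-present/Makefile"   # absent, silently skipped
for f in $LINKS; do
    if [ -e "$source_path/$f" ]; then
        mkdir -p "$(dirname "./$f")"          # recreate the subdirectory layout
        ln -sf "$source_path/$f" "$f"
    fi
done
ls -l Makefile tests/tcg/Makefile.target
```

Because only listed names are linked, adding a new file to the source tree requires re-running configure, which is exactly the wildcard pitfall the comment warns about.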




[PATCH v1 17/33] tests/tcg: correct target CPU for sparc32

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

We do not want v8plus for pure sparc32, as the differences from the V8 ABI
are only meaningful on 64-bit CPUs such as ultrasparc; supersparc is the
best CPU to use for 32-bit.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Richard Henderson 
Message-Id: <20220517092616.1272238-7-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 tests/tcg/configure.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
index 691d90abac..59f2403d1a 100755
--- a/tests/tcg/configure.sh
+++ b/tests/tcg/configure.sh
@@ -70,7 +70,7 @@ fi
 : ${cross_cc_riscv64="riscv64-linux-gnu-gcc"}
 : ${cross_cc_s390x="s390x-linux-gnu-gcc"}
 : ${cross_cc_sh4="sh4-linux-gnu-gcc"}
-: ${cross_cc_cflags_sparc="-m32 -mv8plus -mcpu=ultrasparc"}
+: ${cross_cc_cflags_sparc="-m32 -mcpu=supersparc"}
 : ${cross_cc_sparc64="sparc64-linux-gnu-gcc"}
 : ${cross_cc_cflags_sparc64="-m64 -mcpu=ultrasparc"}
 : ${cross_cc_x86_64="x86_64-linux-gnu-gcc"}
-- 
2.30.2




[PATCH v1 26/33] configure: enable cross compilation of vof

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

While container-based cross compilers are not supported, this already
makes it possible to build vof on any machine that has an installation
of GCC and binutils for 32- or 64-bit PowerPC.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-16-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure| 10 ++
 pc-bios/vof/Makefile | 17 +
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/configure b/configure
index b974db3ebd..89a0470cc2 100755
--- a/configure
+++ b/configure
@@ -2209,6 +2209,7 @@ LINKS="Makefile"
 LINKS="$LINKS tests/tcg/Makefile.target"
 LINKS="$LINKS pc-bios/optionrom/Makefile"
 LINKS="$LINKS pc-bios/s390-ccw/Makefile"
+LINKS="$LINKS pc-bios/vof/Makefile"
 LINKS="$LINKS .gdbinit scripts" # scripts needed by relative path in .gdbinit
 LINKS="$LINKS tests/avocado tests/data"
 LINKS="$LINKS tests/qemu-iotests/check"
@@ -2246,6 +2247,15 @@ if test -n "$target_cc" &&
 fi
 fi
 
+probe_target_compilers ppc ppc64
+if test -n "$target_cc" && test "$softmmu" = yes; then
+roms="$roms vof"
+config_mak=pc-bios/vof/config.mak
+echo "# Automatically generated by configure - do not modify" > $config_mak
+echo "SRC_DIR=$source_path/pc-bios/vof" >> $config_mak
+write_target_makefile >> $config_mak
+fi
+
 # Only build s390-ccw bios if the compiler has -march=z900 or -march=z10
 # (which is the lowest architecture level that Clang supports)
 probe_target_compiler s390x
diff --git a/pc-bios/vof/Makefile b/pc-bios/vof/Makefile
index aa1678c4d8..391ac0d600 100644
--- a/pc-bios/vof/Makefile
+++ b/pc-bios/vof/Makefile
@@ -1,11 +1,10 @@
-all: build-all
+include config.mak
+VPATH=$(SRC_DIR)
+all: vof.bin
 
-build-all: vof.bin
-
-CROSS ?=
-CC = $(CROSS)gcc
-LD = $(CROSS)ld
-OBJCOPY = $(CROSS)objcopy
+CC ?= $(CROSS)gcc
+LD ?= $(CROSS)ld
+OBJCOPY ?= $(CROSS)objcopy
 
 %.o: %.S
$(CC) -m32 -mbig-endian -mcpu=power4 -c -o $@ $<
@@ -14,10 +13,12 @@ OBJCOPY = $(CROSS)objcopy
$(CC) -m32 -mbig-endian -mcpu=power4 -c -fno-stack-protector -o $@ $<
 
 vof.elf: entry.o main.o ci.o bootmem.o libc.o
-   $(LD) -nostdlib -e_start -Tvof.lds -EB -o $@ $^
+   $(LD) -nostdlib -e_start -T$(SRC_DIR)/vof.lds -EB -o $@ $^
 
 %.bin: %.elf
$(OBJCOPY) -O binary -j .text -j .data -j .toc -j .got2 $^ $@
 
 clean:
rm -f *.o vof.bin vof.elf *~
+
+.PHONY: all clean
-- 
2.30.2
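The `probe_target_compilers ppc ppc64` step this patch adds boils down to: look for a usable cross compiler and only enable the firmware build if one is found. A rough sketch under that assumption (QEMU's real probe does considerably more, including flag checks and container fallback; the candidate compiler names here are illustrative):

```shell
# Try each candidate compiler in turn; set target_cc to the first one found.
probe_target_compiler() {
    target_cc=
    for cand in "$@"; do
        if command -v "$cand" >/dev/null 2>&1; then
            target_cc=$cand
            return 0
        fi
    done
    return 1   # no usable compiler on this host
}

roms=
if probe_target_compiler powerpc64-linux-gnu-gcc powerpc-linux-gnu-gcc; then
    roms="$roms vof"    # add vof to the ROM build list only if buildable
fi
echo "roms:${roms:- none}"
```

On a host without a PowerPC cross toolchain the build list simply stays empty, which is why vof quietly drops out of `roms` rather than failing configure.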




Re: [PATCH v1 33/33] docs/devel: clean-up the CI links in the docs

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 04:36:03PM +0100, Alex Bennée wrote:
> There were some broken links so fix those up with proper references
> to the devel docs. I also did a little light copy-editing to reflect
> the current state and broke up a paragraph to reduce the "wall of
> text" effect.
> 
> Signed-off-by: Alex Bennée 
> ---
>  docs/devel/ci-jobs.rst.inc|  2 ++
>  docs/devel/ci.rst | 11 +-
>  docs/devel/submitting-a-patch.rst | 36 ---
>  docs/devel/testing.rst|  2 ++
>  4 files changed, 29 insertions(+), 22 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH v1 11/33] tests/docker: update debian-amd64 with lcitool

2022-05-27 Thread Alex Bennée
The one minor wrinkle we need to account for is that netmap support
still requires building from source. We also include cscope and GNU
global as they are used in one of the builds.

Signed-off-by: Alex Bennée 
Cc: Philippe Mathieu-Daudé 
Cc: Luigi Rizzo 
Cc: Giuseppe Lettieri 
Cc: Vincenzo Maffione 
---
 .gitlab-ci.d/containers.yml  |   3 +-
 tests/docker/dockerfiles/debian-amd64.docker | 194 ++-
 tests/lcitool/refresh|  19 ++
 3 files changed, 164 insertions(+), 52 deletions(-)

diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml
index e9df90bbdd..be34cbc7ba 100644
--- a/.gitlab-ci.d/containers.yml
+++ b/.gitlab-ci.d/containers.yml
@@ -14,8 +14,7 @@ amd64-debian11-container:
 
 amd64-debian-container:
   extends: .container_job_template
-  stage: containers-layer2
-  needs: ['amd64-debian10-container']
+  stage: containers
   variables:
 NAME: debian-amd64
 
diff --git a/tests/docker/dockerfiles/debian-amd64.docker b/tests/docker/dockerfiles/debian-amd64.docker
index ed546edcd6..503e282802 100644
--- a/tests/docker/dockerfiles/debian-amd64.docker
+++ b/tests/docker/dockerfiles/debian-amd64.docker
@@ -1,59 +1,153 @@
+# THIS FILE WAS AUTO-GENERATED
 #
-# Docker x86_64 target
+#  $ lcitool dockerfile --layers all debian-11 qemu
 #
-# This docker target builds on the Debian Buster base image. Further
-# libraries which are not widely available are installed by hand.
-#
-FROM qemu/debian10
-MAINTAINER Philippe Mathieu-Daudé 
-
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy qemu
+# https://gitlab.com/libvirt/libvirt-ci
 
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-cscope \
-genisoimage \
-exuberant-ctags \
-global \
-libbz2-dev \
-liblzo2-dev \
-libgcrypt20-dev \
-libfdt-dev \
-librdmacm-dev \
-libsasl2-dev \
-libsnappy-dev \
-libvte-dev \
-netcat-openbsd \
-openssh-client \
-python3-numpy \
-python3-opencv \
-python3-venv
+FROM docker.io/library/debian:11-slim
 
-# virgl
-RUN apt update && \
-DEBIAN_FRONTEND=noninteractive eatmydata \
-apt install -y --no-install-recommends \
-libegl1-mesa-dev \
-libepoxy-dev \
-libgbm-dev
-RUN git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git /usr/src/virglrenderer && \
-cd /usr/src/virglrenderer && git checkout virglrenderer-0.8.0
-RUN cd /usr/src/virglrenderer && ./autogen.sh && ./configure --disable-tests && make install
+RUN export DEBIAN_FRONTEND=noninteractive && \
+apt-get update && \
+apt-get install -y eatmydata && \
+eatmydata apt-get dist-upgrade -y && \
+eatmydata apt-get install --no-install-recommends -y \
+bash \
+bc \
+bsdextrautils \
+bzip2 \
+ca-certificates \
+ccache \
+clang \
+dbus \
+debianutils \
+diffutils \
+exuberant-ctags \
+findutils \
+g++ \
+gcc \
+gcovr \
+genisoimage \
+gettext \
+git \
+hostname \
+libaio-dev \
+libasan5 \
+libasound2-dev \
+libattr1-dev \
+libbpf-dev \
+libbrlapi-dev \
+libbz2-dev \
+libc6-dev \
+libcacard-dev \
+libcap-ng-dev \
+libcapstone-dev \
+libcurl4-gnutls-dev \
+libdaxctl-dev \
+libdrm-dev \
+libepoxy-dev \
+libfdt-dev \
+libffi-dev \
+libfuse3-dev \
+libgbm-dev \
+libgcrypt20-dev \
+libglib2.0-dev \
+libglusterfs-dev \
+libgnutls28-dev \
+libgtk-3-dev \
+libibumad-dev \
+libibverbs-dev \
+libiscsi-dev \
+libjemalloc-dev \
+libjpeg62-turbo-dev \
+liblttng-ust-dev \
+liblzo2-dev \
+libncursesw5-dev \
+libnfs-dev \
+libnuma-dev \
+libpam0g-dev \
+libpcre2-dev \
+libpixman-1-dev \
+libpmem-dev \
+libpng-dev \
+libpulse-dev \
+librbd-dev \
+librdmacm-dev \
+libsasl2-dev \
+libsdl2-dev \
+libsdl2-image-dev \
+libseccomp-dev \
+libselinux1-dev \
+libslirp-dev \
+libsnappy-dev \
+libspice-protocol-dev \
+libspice-server-dev \
+libssh-gcrypt-dev \
+libsystemd-dev \
+libtasn1-6-dev \
+libubsan1 \
+libudev-dev \
+liburing-dev \
+ 

[PATCH v1 20/33] configure: handle host compiler in probe_target_compiler

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

In preparation for handling more binaries than just cc, handle
the case of "probe_target_compiler $cpu" directly in the function,
setting the target_* variables based on the ones that are used to
build QEMU.  The clang check also needs to be moved after this
fallback.
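
The fallback this patch adds relies on the POSIX ": ${var:=default}" expansion, which assigns only when the variable is unset or empty. A minimal sketch of the idiom (tool names and values hypothetical, not QEMU's real probe):

```shell
# ":" is a no-op command; the ${var:=default} expansion inside it
# assigns the default only when the variable is unset or empty.
cc=gcc
target_cc=""                       # probe found no cross compiler
: ${target_cc:=$cc}                # falls back to the host compiler
first=$target_cc

target_cc=aarch64-linux-gnu-gcc    # probe found a cross compiler
: ${target_cc:=$cc}                # existing value is left untouched
second=$target_cc

echo "$first $second"
```

This is why the hunk can unconditionally run the fallback lines when "$1" is the build CPU: already-probed tools are never overwritten.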

Signed-off-by: Paolo Bonzini 
Reviewed-by: Richard Henderson 
Message-Id: <20220517092616.1272238-10-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure | 25 ++---
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/configure b/configure
index 31c1ab2579..addbb0fe44 100755
--- a/configure
+++ b/configure
@@ -954,10 +954,6 @@ case $git_submodules_action in
 ;;
 esac
 
-if eval test -z "\${cross_cc_$cpu}"; then
-eval "cross_cc_${cpu}=\$cc"
-fi
-
 default_target_list=""
 mak_wilds=""
 
@@ -2003,13 +1999,6 @@ probe_target_compiler() {
   if eval test -n "\"\${cross_cc_$1}\""; then
 if eval has "\"\${cross_cc_$1}\""; then
   eval "target_cc=\"\${cross_cc_$1}\""
-  case $1 in
-i386|x86_64)
-  if $target_cc --version | grep -qi "clang"; then
-unset target_cc
-  fi
-  ;;
-  esac
 fi
   fi
   if eval test -n "\"\${cross_as_$1}\""; then
@@ -2022,6 +2011,20 @@ probe_target_compiler() {
   eval "target_ld=\"\${cross_ld_$1}\""
 fi
   fi
+  if test "$1" = $cpu; then
+: ${target_cc:=$cc}
+: ${target_as:=$as}
+: ${target_ld:=$ld}
+  fi
+  if test -n "$target_cc"; then
+case $1 in
+  i386|x86_64)
+if $target_cc --version | grep -qi "clang"; then
+  unset target_cc
+fi
+;;
+esac
+  fi
 }
 
 write_target_makefile() {
-- 
2.30.2




[PATCH v1 25/33] configure: enable cross-compilation of optionrom

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

While container-based cross compilers are not supported, this already makes
it possible to build x86 optionroms on any machine that has an installation
of GCC and binutils for 32- or 64-bit x86.
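
The hunk below keeps the "ld -verbose | grep" probe for the ELF emulation name, just pointed at $target_ld. A self-contained sketch of that probe, run against simulated linker output since real "ld -verbose" text is host-dependent (the 'supported' string is an assumption for illustration):

```shell
# Simulated "$target_ld -verbose" emulation listing (hypothetical output).
supported='Supported emulations:
   elf_x86_64
   elf_i386
   i386pe'
ld_i386_emulation=
# Try each known emulation name and keep the first one the linker lists.
for emu in elf_i386 elf_i386_fbsd elf_i386_obsd i386pe; do
    if printf '%s\n' "$supported" | grep -q "^[[:space:]]*$emu[[:space:]]*$"; then
        ld_i386_emulation=$emu
        break
    fi
done
echo "$ld_i386_emulation"   # prints "elf_i386" for the sample output above
```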

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-15-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure  | 29 +
 pc-bios/optionrom/Makefile |  2 --
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/configure b/configure
index 99626df869..b974db3ebd 100755
--- a/configure
+++ b/configure
@@ -2077,6 +2077,13 @@ probe_target_compiler() {
   fi
 }
 
+probe_target_compilers() {
+  for i; do
+probe_target_compiler $i
+test -n "$target_cc" && return 0
+  done
+}
+
 write_target_makefile() {
   if test -n "$target_cc"; then
 echo "CC=$target_cc"
@@ -2187,6 +2194,9 @@ fi
 
 QEMU_GA_MSI_MINGW_BIN_PATH="$($pkg_config --variable=prefix glib-2.0)/bin"
 
+###
+# cross-compiled firmware targets
+
 # Set up build tree symlinks that point back into the source tree
 # (these can be both files and directories).
 # Caution: avoid adding files or directories here using wildcards. This
@@ -2213,19 +2223,27 @@ done
 
 # Mac OS X ships with a broken assembler
 roms=
-if { test "$cpu" = "i386" || test "$cpu" = "x86_64"; } && \
+probe_target_compilers i386 x86_64
+if test -n "$target_cc" &&
 test "$targetos" != "darwin" && test "$targetos" != "sunos" && \
 test "$targetos" != "haiku" && test "$softmmu" = yes ; then
 # Different host OS linkers have different ideas about the name of the ELF
 # emulation. Linux and OpenBSD/amd64 use 'elf_i386'; FreeBSD uses the _fbsd
 # variant; OpenBSD/i386 uses the _obsd variant; and Windows uses i386pe.
 for emu in elf_i386 elf_i386_fbsd elf_i386_obsd i386pe; do
-if "$ld" -verbose 2>&1 | grep -q "^[[:space:]]*$emu[[:space:]]*$"; then
+if "$target_ld" -verbose 2>&1 | grep -q 
"^[[:space:]]*$emu[[:space:]]*$"; then
 ld_i386_emulation="$emu"
-roms="optionrom"
 break
 fi
 done
+if test -n "$ld_i386_emulation"; then
+roms="optionrom"
+config_mak=pc-bios/optionrom/config.mak
+echo "# Automatically generated by configure - do not modify" > 
$config_mak
+echo "TOPSRC_DIR=$source_path" >> $config_mak
+echo "LD_I386_EMULATION=$ld_i386_emulation" >> $config_mak
+write_target_makefile >> $config_mak
+fi
 fi
 
 # Only build s390-ccw bios if the compiler has -march=z900 or -march=z10
@@ -2378,7 +2396,6 @@ echo "GLIB_CFLAGS=$glib_cflags" >> $config_host_mak
 echo "GLIB_LIBS=$glib_libs" >> $config_host_mak
 echo "GLIB_VERSION=$(pkg-config --modversion glib-2.0)" >> $config_host_mak
 echo "QEMU_LDFLAGS=$QEMU_LDFLAGS" >> $config_host_mak
-echo "LD_I386_EMULATION=$ld_i386_emulation" >> $config_host_mak
 echo "STRIP=$strip" >> $config_host_mak
 echo "EXESUF=$EXESUF" >> $config_host_mak
 
@@ -2568,10 +2585,6 @@ for target in $target_list; do
 done
 echo "TCG_TESTS_TARGETS=$tcg_tests_targets" >> $makefile)
 
-config_mak=pc-bios/optionrom/config.mak
-echo "# Automatically generated by configure - do not modify" > $config_mak
-echo "TOPSRC_DIR=$source_path" >> $config_mak
-
 if test "$skip_meson" = no; then
   cross="config-meson.cross.new"
   meson_quote() {
diff --git a/pc-bios/optionrom/Makefile b/pc-bios/optionrom/Makefile
index 17ccc76241..f639915b4f 100644
--- a/pc-bios/optionrom/Makefile
+++ b/pc-bios/optionrom/Makefile
@@ -6,7 +6,6 @@ all: multiboot.bin multiboot_dma.bin linuxboot.bin linuxboot_dma.bin kvmvapic.bi
 # Dummy command so that make thinks it has done something
@true
 
-include ../../config-host.mak
 CFLAGS = -O2 -g
 
 quiet-command = $(if $(V),$1,$(if $(2),@printf "  %-7s %s\n" $2 $3 && $1, @$1))
@@ -44,7 +43,6 @@ Wa = -Wa,
 override ASFLAGS += -32
 override CFLAGS += $(call cc-option, $(Wa)-32)
 
-LD_I386_EMULATION ?= elf_i386
 override LDFLAGS = -m $(LD_I386_EMULATION) -T $(SRC_DIR)/flat.lds
 
 pvh.img: pvh.o pvh_main.o
-- 
2.30.2




[PATCH v1 29/33] gitlab: convert Cirrus jobs to .base_job_template

2022-05-27 Thread Alex Bennée
From: Daniel P. Berrangé 

This folds the Cirrus job rules into the base job
template, introducing two new variables

  - QEMU_JOB_CIRRUS - identifies the job as making
use of Cirrus CI via cirrus-run

  - QEMU_JOB_OPTIONAL - identifies the job as one
that is not run by default, primarily due to
resource constraints. It can be manually invoked
by users if they wish to validate that scenario.

Signed-off-by: Daniel P. Berrangé 
Message-Id: <20220526110705.59952-3-berra...@redhat.com>
Signed-off-by: Alex Bennée 
---
 docs/devel/ci-jobs.rst.inc | 14 ++
 .gitlab-ci.d/base.yml  |  9 +
 .gitlab-ci.d/cirrus.yml| 16 
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/docs/devel/ci-jobs.rst.inc b/docs/devel/ci-jobs.rst.inc
index eb6a9e6122..a539f502da 100644
--- a/docs/devel/ci-jobs.rst.inc
+++ b/docs/devel/ci-jobs.rst.inc
@@ -52,6 +52,20 @@ Maintainer controlled job variables
 The following variables may be set when defining a job in the
 CI configuration file.
 
+QEMU_JOB_CIRRUS
+~~~
+
+The job makes use of Cirrus CI infrastructure, requiring the
+configuration setup for cirrus-run to be present in the repository
+
+QEMU_JOB_OPTIONAL
+~
+
+The job is expected to be successful in general, but is not run
+by default due to need to conserve limited CI resources. It is
+available to be started manually by the contributor in the CI
+pipelines UI.
+
 Contributor controlled runtime variables
 
 
diff --git a/.gitlab-ci.d/base.yml b/.gitlab-ci.d/base.yml
index 10eb6ab8bc..5734caf9fe 100644
--- a/.gitlab-ci.d/base.yml
+++ b/.gitlab-ci.d/base.yml
@@ -12,12 +12,21 @@
 # want jobs to run
 #
 
+# Cirrus jobs can't run unless the creds / target repo are set
+- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == "" || $CIRRUS_API_TOKEN == "")'
+  when: never
+
 
 #
 # Stage 2: fine tune execution of jobs in specific scenarios
 # where the catch all logic is inapprorpaite
 #
 
+# Optional jobs should not be run unless manually triggered
+- if: '$QEMU_JOB_OPTIONAL'
+  when: manual
+  allow_failure: true
+
 
 #
 # Stage 3: catch all logic applying to any job not matching
diff --git a/.gitlab-ci.d/cirrus.yml b/.gitlab-ci.d/cirrus.yml
index b96b22e269..609c364308 100644
--- a/.gitlab-ci.d/cirrus.yml
+++ b/.gitlab-ci.d/cirrus.yml
@@ -11,6 +11,7 @@
 # special care, because we can't just override it at the GitLab CI job
 # definition level or we risk breaking it completely.
 .cirrus_build_job:
+  extends: .base_job_template
   stage: build
   image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
   needs: []
@@ -40,11 +41,8 @@
   <.gitlab-ci.d/cirrus/build.yml >.gitlab-ci.d/cirrus/$NAME.yml
 - cat .gitlab-ci.d/cirrus/$NAME.yml
 - cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
-  rules:
-# Allow on 'staging' branch and 'stable-X.Y-staging' branches only
-- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH !~ /staging/'
-  when: never
-- if: "$CIRRUS_GITHUB_REPO && $CIRRUS_API_TOKEN"
+  variables:
+QEMU_JOB_CIRRUS: 1
 
 x64-freebsd-12-build:
   extends: .cirrus_build_job
@@ -90,11 +88,11 @@ x64-macos-11-base-build:
 
 # The following jobs run VM-based tests via KVM on a Linux-based Cirrus-CI job
 .cirrus_kvm_job:
+  extends: .base_job_template
   stage: build
   image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
   needs: []
   timeout: 80m
-  allow_failure: true
   script:
 - sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
   -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
@@ -105,8 +103,10 @@ x64-macos-11-base-build:
   <.gitlab-ci.d/cirrus/kvm-build.yml >.gitlab-ci.d/cirrus/$NAME.yml
 - cat .gitlab-ci.d/cirrus/$NAME.yml
 - cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
-  rules:
-- when: manual
+  variables:
+QEMU_JOB_CIRRUS: 1
+QEMU_JOB_OPTIONAL: 1
+
 
 x86-netbsd:
   extends: .cirrus_kvm_job
-- 
2.30.2




[PATCH v1 24/33] configure: enable cross-compilation of s390-ccw

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

While container-based cross compilers are not supported, this already makes
it possible to build s390-ccw on any machine that has s390x GCC and binutils
installed.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-14-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure| 18 +-
 pc-bios/s390-ccw/netboot.mak |  2 +-
 pc-bios/s390-ccw/Makefile|  9 +
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/configure b/configure
index 82c2ddc79a..99626df869 100755
--- a/configure
+++ b/configure
@@ -2228,24 +2228,32 @@ if { test "$cpu" = "i386" || test "$cpu" = "x86_64"; } && \
 done
 fi
 
-# Only build s390-ccw bios if we're on s390x and the compiler has -march=z900
-# or -march=z10 (which is the lowest architecture level that Clang supports)
-if test "$cpu" = "s390x" ; then
+# Only build s390-ccw bios if the compiler has -march=z900 or -march=z10
+# (which is the lowest architecture level that Clang supports)
+probe_target_compiler s390x
+if test -n "$target_cc" && test "$softmmu" = yes; then
   write_c_skeleton
-  compile_prog "-march=z900" ""
+  do_compiler "$target_cc" $target_cc_cflags -march=z900 -o $TMPO -c $TMPC
   has_z900=$?
-  if [ $has_z900 = 0 ] || compile_object "-march=z10 -msoft-float -Werror"; then
+  if [ $has_z900 = 0 ] || do_compiler "$target_cc" $target_cc_cflags -march=z10 -msoft-float -Werror -o $TMPO -c $TMPC; then
 if [ $has_z900 != 0 ]; then
   echo "WARNING: Your compiler does not support the z900!"
   echo " The s390-ccw bios will only work with guest CPUs >= z10."
 fi
 roms="$roms s390-ccw"
+config_mak=pc-bios/s390-ccw/config-host.mak
+echo "# Automatically generated by configure - do not modify" > $config_mak
+echo "SRC_PATH=$source_path/pc-bios/s390-ccw" >> $config_mak
+write_target_makefile >> $config_mak
 # SLOF is required for building the s390-ccw firmware on s390x,
 # since it is using the libnet code from SLOF for network booting.
 git_submodules="${git_submodules} roms/SLOF"
   fi
 fi
 
+###
+# generate config-host.mak
+
 # Check that the C++ compiler exists and works with the C compiler.
# All the QEMU_CXXFLAGS are based on QEMU_CFLAGS. Keep this at the end to don't miss any other that could be added.
 if has $cxx; then
diff --git a/pc-bios/s390-ccw/netboot.mak b/pc-bios/s390-ccw/netboot.mak
index 68b4d7edcb..1a06befa4b 100644
--- a/pc-bios/s390-ccw/netboot.mak
+++ b/pc-bios/s390-ccw/netboot.mak
@@ -1,5 +1,5 @@
 
-SLOF_DIR := $(SRC_PATH)/roms/SLOF
+SLOF_DIR := $(SRC_PATH)/../../roms/SLOF
 
 NETOBJS := start.o sclp.o cio.o virtio.o virtio-net.o jump2ipl.o netmain.o
 
diff --git a/pc-bios/s390-ccw/Makefile b/pc-bios/s390-ccw/Makefile
index 0eb68efc7b..6eb713bf37 100644
--- a/pc-bios/s390-ccw/Makefile
+++ b/pc-bios/s390-ccw/Makefile
@@ -2,8 +2,9 @@ all: build-all
 # Dummy command so that make thinks it has done something
@true
 
-include ../../config-host.mak
+include config-host.mak
 CFLAGS = -O2 -g
+MAKEFLAGS += -rR
 
 quiet-command = $(if $(V),$1,$(if $(2),@printf "  %-7s %s\n" $2 $3 && $1, @$1))
 cc-option = $(if $(shell $(CC) $1 $2 -S -o /dev/null -xc /dev/null \
@@ -11,7 +12,7 @@ cc-option = $(if $(shell $(CC) $1 $2 -S -o /dev/null -xc /dev/null \
 
 VPATH_SUFFIXES = %.c %.h %.S %.m %.mak %.sh %.rc Kconfig% %.json.in
 set-vpath = $(if $1,$(foreach PATTERN,$(VPATH_SUFFIXES),$(eval vpath $(PATTERN) $1)))
-$(call set-vpath, $(SRC_PATH)/pc-bios/s390-ccw)
+$(call set-vpath, $(SRC_PATH))
 
 # Flags for dependency generation
 QEMU_DGFLAGS = -MMD -MP -MT $@ -MF $(@D)/$(*F).d
@@ -49,8 +50,8 @@ s390-ccw.img: s390-ccw.elf
 
 $(OBJECTS): Makefile
 
-ifneq ($(wildcard $(SRC_PATH)/roms/SLOF/lib/libnet),)
-include $(SRC_PATH)/pc-bios/s390-ccw/netboot.mak
+ifneq ($(wildcard $(SRC_PATH)/../../roms/SLOF/lib/libnet),)
+include $(SRC_PATH)/netboot.mak
 else
 s390-netboot.img:
@echo "s390-netboot.img not built since roms/SLOF/ is not available."
-- 
2.30.2




[PATCH v1 28/33] gitlab: introduce a common base job template

2022-05-27 Thread Alex Bennée
From: Daniel P. Berrangé 

Currently job rules are spread across the various templates
and jobs, making it hard to understand exactly what runs in
what scenario. This leads to inconsistency in the rules and
increased maint burden.

The intent is that we introduce a common '.base_job_template'
which will have a general purpose 'rules:' block. No other
template or job should define 'rules:', but instead they must
rely on the inherited rules. To allow behaviour to be tweaked,
rules will be influenced by a number of variables with the
naming scheme 'QEMU_JOB_'.
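
The key property of the shared 'rules:' block is that GitLab evaluates rules in order and the first match wins, which is why the template is staged from most to least restrictive. A toy shell model of that evaluation (hypothetical and simplified; real GitLab rules support far richer expressions):

```shell
# First-match-wins rule evaluation, staged like .base_job_template.
# Args: QEMU_JOB_CIRRUS, CIRRUS_API_TOKEN, QEMU_JOB_OPTIONAL (empty = unset).
evaluate() {
    cirrus=$1 token=$2 optional=$3
    # Stage 1: exclude scenarios where the job must never run
    if [ -n "$cirrus" ] && [ -z "$token" ]; then echo never; return; fi
    # Stage 2: fine-tune specific scenarios
    if [ -n "$optional" ]; then echo manual; return; fi
    # Stage 3: catch-all
    echo on_success
}

evaluate 1 "" ""     # Cirrus job without credentials -> never
evaluate "" "" 1     # optional job -> manual
evaluate "" "" ""    # anything else -> on_success
```

Putting the catch-all last is what lets individual jobs opt in to earlier, stricter stages just by setting a QEMU_JOB_nnn variable.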

Signed-off-by: Daniel P. Berrangé 
Message-Id: <20220526110705.59952-2-berra...@redhat.com>
Signed-off-by: Alex Bennée 
---
 docs/devel/ci-jobs.rst.inc| 36 ++-
 .gitlab-ci.d/base.yml | 28 +++
 .gitlab-ci.d/qemu-project.yml |  1 +
 3 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 .gitlab-ci.d/base.yml

diff --git a/docs/devel/ci-jobs.rst.inc b/docs/devel/ci-jobs.rst.inc
index 92e25872aa..eb6a9e6122 100644
--- a/docs/devel/ci-jobs.rst.inc
+++ b/docs/devel/ci-jobs.rst.inc
@@ -28,7 +28,35 @@ For further information about how to set these variables, please refer to::
 
   
https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd
 
-Here is a list of the most used variables:
+Variable naming and grouping
+
+
+The variables used by QEMU's CI configuration are grouped together
+in a handful of namespaces
+
+ * QEMU_JOB_ - variables to be defined in individual jobs
+   or templates, to influence the shared rules defined in the
+   .base_job_template.
+
+ * QEMU_CI_nnn - variables to be set by contributors in their
+   repository CI settings, or as git push variables, to influence
+   which jobs get run in a pipeline
+
+ * nnn - other misc variables not falling into the above
+   categories, or using different names for historical reasons
+   and not yet converted.
+
+Maintainer controlled job variables
+---
+
+The following variables may be set when defining a job in the
+CI configuration file.
+
+Contributor controlled runtime variables
+
+
+The following variables may be set by contributors to control
+job execution
 
 QEMU_CI_AVOCADO_TESTING
 ~~~
@@ -38,6 +66,12 @@ these artifacts are not already cached, downloading them make the jobs
 reach the timeout limit). Set this variable to have the tests using the
 Avocado framework run automatically.
 
+Other misc variables
+
+
+These variables are primarily to control execution of jobs on
+private runners
+
 AARCH64_RUNNER_AVAILABLE
 
 If you've got access to an aarch64 host that can be used as a gitlab-CI
diff --git a/.gitlab-ci.d/base.yml b/.gitlab-ci.d/base.yml
new file mode 100644
index 00..10eb6ab8bc
--- /dev/null
+++ b/.gitlab-ci.d/base.yml
@@ -0,0 +1,28 @@
+
+# The order of rules defined here is critically important.
+# They are evaluated in order and first match wins.
+#
+# Thus we group them into a number of stages, ordered from
+# most restrictive to least restrictive
+#
+.base_job_template:
+  rules:
+#
+# Stage 1: exclude scenarios where we definitely don't
+# want jobs to run
+#
+
+
+#
+# Stage 2: fine tune execution of jobs in specific scenarios
+# where the catch all logic is inapprorpaite
+#
+
+
+#
+# Stage 3: catch all logic applying to any job not matching
+# an earlier criteria
+#
+
+# Jobs can run if any jobs they depend on were successfull
+- when: on_success
diff --git a/.gitlab-ci.d/qemu-project.yml b/.gitlab-ci.d/qemu-project.yml
index 871262fe0e..691d9bf5dc 100644
--- a/.gitlab-ci.d/qemu-project.yml
+++ b/.gitlab-ci.d/qemu-project.yml
@@ -2,6 +2,7 @@
 # https://gitlab.com/qemu-project/qemu/-/pipelines
 
 include:
+  - local: '/.gitlab-ci.d/base.yml'
   - local: '/.gitlab-ci.d/stages.yml'
   - local: '/.gitlab-ci.d/edk2.yml'
   - local: '/.gitlab-ci.d/opensbi.yml'
-- 
2.30.2




[PATCH v1 30/33] gitlab: convert static checks to .base_job_template

2022-05-27 Thread Alex Bennée
From: Daniel P. Berrangé 

This folds the static checks into using the base job
template rules, introducing one new variable

 - QEMU_JOB_ONLY_FORKS - a job that should never run
   on an upstream pipeline. The information it reports
   is only applicable to contributors in a pre-submission
   scenario, not at time of merge.

Signed-off-by: Daniel P. Berrangé 
Message-Id: <20220526110705.59952-4-berra...@redhat.com>
Signed-off-by: Alex Bennée 
---
 docs/devel/ci-jobs.rst.inc |  7 +++
 .gitlab-ci.d/base.yml  |  4 
 .gitlab-ci.d/static_checks.yml | 19 +++
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/docs/devel/ci-jobs.rst.inc b/docs/devel/ci-jobs.rst.inc
index a539f502da..4c7e30ab08 100644
--- a/docs/devel/ci-jobs.rst.inc
+++ b/docs/devel/ci-jobs.rst.inc
@@ -66,6 +66,13 @@ by default due to need to conserve limited CI resources. It is
 available to be started manually by the contributor in the CI
 pipelines UI.
 
+QEMU_JOB_ONLY_FORKS
+~~~
+
+The job results are only of interest to contributors prior to
+submitting code. They are not required as part of the gating
+CI pipeline.
+
 Contributor controlled runtime variables
 
 
diff --git a/.gitlab-ci.d/base.yml b/.gitlab-ci.d/base.yml
index 5734caf9fe..9a0b8d7f97 100644
--- a/.gitlab-ci.d/base.yml
+++ b/.gitlab-ci.d/base.yml
@@ -16,6 +16,10 @@
 - if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == "" || $CIRRUS_API_TOKEN == "")'
   when: never
 
+# Jobs only intended for forks should always be skipped on upstram
+- if: '$QEMU_JOB_ONLY_FORKS == "1" && $CI_PROJECT_NAMESPACE == "qemu-project"'
+  when: never
+
 
 #
 # Stage 2: fine tune execution of jobs in specific scenarios
diff --git a/.gitlab-ci.d/static_checks.yml b/.gitlab-ci.d/static_checks.yml
index 94858e3272..289ad1359e 100644
--- a/.gitlab-ci.d/static_checks.yml
+++ b/.gitlab-ci.d/static_checks.yml
@@ -1,4 +1,5 @@
 check-patch:
+  extends: .base_job_template
   stage: build
   image: python:3.10-alpine
   needs: []
@@ -6,15 +7,13 @@ check-patch:
 - .gitlab-ci.d/check-patch.py
   variables:
 GIT_DEPTH: 1000
+QEMU_JOB_ONLY_FORKS: 1
   before_script:
 - apk -U add git perl
-  rules:
-- if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
-  when: never
-- when: on_success
-  allow_failure: true
+  allow_failure: true
 
 check-dco:
+  extends: .base_job_template
   stage: build
   image: python:3.10-alpine
   needs: []
@@ -23,12 +22,9 @@ check-dco:
 GIT_DEPTH: 1000
   before_script:
 - apk -U add git
-  rules:
-- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
-  when: never
-- when: on_success
 
 check-python-pipenv:
+  extends: .base_job_template
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/python:latest
   script:
@@ -39,6 +35,7 @@ check-python-pipenv:
 job: python-container
 
 check-python-tox:
+  extends: .base_job_template
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/python:latest
   script:
@@ -46,8 +43,6 @@ check-python-tox:
   variables:
 GIT_DEPTH: 1
 QEMU_TOX_EXTRA_ARGS: --skip-missing-interpreters=false
+QEMU_JOB_OPTIONAL: 1
   needs:
 job: python-container
-  rules:
-- when: manual
-  allow_failure: true
-- 
2.30.2




[PATCH v1 15/33] build: do a full build before running TCG tests

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

TCG tests need both QEMU and firmware to be built, so do "ninja all" before
trying to run them.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Richard Henderson 
Message-Id: <20220517092616.1272238-5-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 tests/Makefile.include | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/Makefile.include b/tests/Makefile.include
index ec84b2ebc0..72ce0561f4 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -57,7 +57,7 @@ $(TCG_TESTS_TARGETS:%=build-tcg-tests-%): build-tcg-tests-%: $(BUILD_DIR)/tests/
 "BUILD","$* guest-tests")
 
 .PHONY: $(TCG_TESTS_TARGETS:%=run-tcg-tests-%)
-$(TCG_TESTS_TARGETS:%=run-tcg-tests-%): run-tcg-tests-%: build-tcg-tests-% $(if $(CONFIG_PLUGIN),test-plugins)
+$(TCG_TESTS_TARGETS:%=run-tcg-tests-%): run-tcg-tests-%: build-tcg-tests-%
$(call quiet-command, \
$(MAKE) -C tests/tcg/$* -f ../Makefile.target $(SUBDIR_MAKEFLAGS) \
 TARGET="$*" SRC_PATH="$(SRC_PATH)" SPEED=$(SPEED) run, 
\
@@ -74,6 +74,7 @@ $(TCG_TESTS_TARGETS:%=clean-tcg-tests-%): clean-tcg-tests-%:
 build-tcg: $(BUILD_TCG_TARGET_RULES)
 
 .PHONY: check-tcg
+.ninja-goals.check-tcg = all $(if $(CONFIG_PLUGIN),test-plugins)
 check-tcg: $(RUN_TCG_TARGET_RULES)
 
 .PHONY: clean-tcg
-- 
2.30.2




[PATCH v1 22/33] configure: include more binutils in tests/tcg makefile

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

Firmware builds require paths to all the binutils; it is not enough to
use only cc, or even as/ld as in the case of tests/tcg/tricore.
Adjust the cross-compiler configurator to detect also ar, nm, objcopy,
ranlib and strip.
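
configure stores these per-target tools in dynamically named variables (cross_cc_$1, cross_objcopy_$1, ...) and reads them back with eval. A minimal sketch of that eval-based indirection (function and variable names here are hypothetical, not configure's real helpers):

```shell
# Set/get a per-target tool variable whose name is built at runtime.
set_cross_var() {   # $1=target  $2=tool  $3=value
    eval "cross_${2}_${1}=\$3"
}
get_cross_var() {   # $1=target  $2=tool
    eval "printf '%s\n' \"\$cross_${2}_${1}\""
}

set_cross_var s390x objcopy s390x-linux-gnu-objcopy
get_cross_var s390x objcopy   # prints "s390x-linux-gnu-objcopy"
```

This is the same shape as compute_target_variable in the patch, just without the PATH probing.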

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-12-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure | 51 +++
 1 file changed, 51 insertions(+)

diff --git a/configure b/configure
index c2b16c17b9..b8c21e096c 100755
--- a/configure
+++ b/configure
@@ -1875,11 +1875,21 @@ probe_target_compiler() {
   container_image=
   container_hosts=
   container_cross_cc=
+  container_cross_ar=
   container_cross_as=
   container_cross_ld=
+  container_cross_nm=
+  container_cross_objcopy=
+  container_cross_ranlib=
+  container_cross_strip=
   target_cc=
+  target_ar=
   target_as=
   target_ld=
+  target_nm=
+  target_objcopy=
+  target_ranlib=
+  target_strip=
 
   case $1 in
 aarch64) container_hosts="x86_64 aarch64" ;;
@@ -2018,8 +2028,13 @@ probe_target_compiler() {
 ;;
 esac
 : ${container_cross_cc:=${container_cross_prefix}gcc}
+: ${container_cross_ar:=${container_cross_prefix}ar}
 : ${container_cross_as:=${container_cross_prefix}as}
 : ${container_cross_ld:=${container_cross_prefix}ld}
+: ${container_cross_nm:=${container_cross_prefix}nm}
+: ${container_cross_objcopy:=${container_cross_prefix}objcopy}
+: ${container_cross_ranlib:=${container_cross_prefix}ranlib}
+: ${container_cross_strip:=${container_cross_prefix}strip}
   done
 
   eval "target_cflags=\${cross_cc_cflags_$1}"
@@ -2030,12 +2045,26 @@ probe_target_compiler() {
   else
 compute_target_variable $1 target_cc gcc
   fi
+  target_ccas=$target_cc
+  compute_target_variable $1 target_ar ar
   compute_target_variable $1 target_as as
   compute_target_variable $1 target_ld ld
+  compute_target_variable $1 target_nm nm
+  compute_target_variable $1 target_objcopy objcopy
+  compute_target_variable $1 target_ranlib ranlib
+  compute_target_variable $1 target_strip strip
   if test "$1" = $cpu; then
 : ${target_cc:=$cc}
+: ${target_ccas:=$ccas}
 : ${target_as:=$as}
 : ${target_ld:=$ld}
+: ${target_ar:=$ar}
+: ${target_as:=$as}
+: ${target_ld:=$ld}
+: ${target_nm:=$nm}
+: ${target_objcopy:=$objcopy}
+: ${target_ranlib:=$ranlib}
+: ${target_strip:=$strip}
   fi
   if test -n "$target_cc"; then
 case $1 in
@@ -2051,6 +2080,10 @@ probe_target_compiler() {
 write_target_makefile() {
   if test -n "$target_cc"; then
 echo "CC=$target_cc"
+echo "CCAS=$target_ccas"
+  fi
+  if test -n "$target_ar"; then
+echo "AR=$target_ar"
   fi
   if test -n "$target_as"; then
 echo "AS=$target_as"
@@ -2058,14 +2091,32 @@ write_target_makefile() {
   if test -n "$target_ld"; then
 echo "LD=$target_ld"
   fi
+  if test -n "$target_nm"; then
+echo "NM=$target_nm"
+  fi
+  if test -n "$target_objcopy"; then
+echo "OBJCOPY=$target_objcopy"
+  fi
+  if test -n "$target_ranlib"; then
+echo "RANLIB=$target_ranlib"
+  fi
+  if test -n "$target_strip"; then
+echo "STRIP=$target_strip"
+  fi
 }
 
 write_container_target_makefile() {
   if test -n "$container_cross_cc"; then
 echo "CC=\$(DOCKER_SCRIPT) cc --cc $container_cross_cc -i 
qemu/$container_image -s $source_path --"
+echo "CCAS=\$(DOCKER_SCRIPT) cc --cc $container_cross_cc -i 
qemu/$container_image -s $source_path --"
   fi
+  echo "AR=\$(DOCKER_SCRIPT) cc --cc $container_cross_ar -i 
qemu/$container_image -s $source_path --"
   echo "AS=\$(DOCKER_SCRIPT) cc --cc $container_cross_as -i 
qemu/$container_image -s $source_path --"
   echo "LD=\$(DOCKER_SCRIPT) cc --cc $container_cross_ld -i 
qemu/$container_image -s $source_path --"
+  echo "NM=\$(DOCKER_SCRIPT) cc --cc $container_cross_nm -i 
qemu/$container_image -s $source_path --"
+  echo "OBJCOPY=\$(DOCKER_SCRIPT) cc --cc $container_cross_objcopy -i 
qemu/$container_image -s $source_path --"
+  echo "RANLIB=\$(DOCKER_SCRIPT) cc --cc $container_cross_ranlib -i 
qemu/$container_image -s $source_path --"
+  echo "STRIP=\$(DOCKER_SCRIPT) cc --cc $container_cross_strip -i 
qemu/$container_image -s $source_path --"
 }
 
 
-- 
2.30.2




[PATCH v1 32/33] gitlab: don't run CI jobs in forks by default

2022-05-27 Thread Alex Bennée
From: Daniel P. Berrangé 

To preserve CI shared runner credits we don't want to run
pipelines on every push.

This sets up the config so that pipelines are never created
for contributors by default. To override this the QEMU_CI
variable can be set to a non-zero value. If set to 1, the
pipeline will be created but all jobs will remain manually
started. The contributor can selectively run jobs that they
care about. If set to 2, the pipeline will be created and
all jobs will immediately start.

This behavior can be controlled using push variables

  git push -o ci.variable=QEMU_CI=1

To make this more convenient define an alias

   git config --local alias.push-ci "push -o ci.variable=QEMU_CI=1"
   git config --local alias.push-ci-now "push -o ci.variable=QEMU_CI=2"

Which lets you run

  git push-ci

to create the pipeline, or

  git push-ci-now

to create and run the pipeline

Signed-off-by: Daniel P. Berrangé 
Message-Id: <20220526110705.59952-6-berra...@redhat.com>
[AJB: fix typo, replicate alias tips in ci.rst]
Signed-off-by: Alex Bennée 
---
 docs/devel/ci-jobs.rst.inc | 38 ++
 .gitlab-ci.d/base.yml  |  9 +
 2 files changed, 47 insertions(+)

diff --git a/docs/devel/ci-jobs.rst.inc b/docs/devel/ci-jobs.rst.inc
index 0b4926e537..13d448b54d 100644
--- a/docs/devel/ci-jobs.rst.inc
+++ b/docs/devel/ci-jobs.rst.inc
@@ -28,6 +28,32 @@ For further information about how to set these variables, please refer to::
 
   
https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd
 
+Setting aliases in your git config
+----------------------------------
+
+You can use aliases to make it easier to push branches with different
+CI configurations. For example define an alias for triggering CI:
+
+.. code::
+
+   git config --local alias.push-ci "push -o ci.variable=QEMU_CI=1"
+   git config --local alias.push-ci-now "push -o ci.variable=QEMU_CI=2"
+
+Which lets you run:
+
+.. code::
+
+   git push-ci
+
+to create the pipeline, or:
+
+.. code::
+
+   git push-ci-now
+
+to create and run the pipeline
+
+  
 Variable naming and grouping
 
 
@@ -98,6 +124,18 @@ Contributor controlled runtime variables
 The following variables may be set by contributors to control
 job execution
 
+QEMU_CI
+~~~~~~~
+
+By default, no pipelines will be created on contributor forks
+in order to preserve CI credits
+
+Set this variable to 1 to create the pipelines, but leave all
+the jobs to be manually started from the UI
+
+Set this variable to 2 to create the pipelines and run all
+the jobs immediately, as was the historical behaviour
+
 QEMU_CI_AVOCADO_TESTING
~~~~~~~~~~~~~~~~~~~~~~~
 By default, tests using the Avocado framework are not run automatically in
diff --git a/.gitlab-ci.d/base.yml b/.gitlab-ci.d/base.yml
index 6a918abbda..62f2a850c3 100644
--- a/.gitlab-ci.d/base.yml
+++ b/.gitlab-ci.d/base.yml
@@ -28,6 +28,10 @@
- if: '$QEMU_JOB_ONLY_FORKS == "1" && $CI_PROJECT_NAMESPACE == "qemu-project"'
  when: never

+# Forks don't get pipelines unless QEMU_CI=1 or QEMU_CI=2 is set
+- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
+  when: never
+
# Avocado jobs don't run in forks unless $QEMU_CI_AVOCADO_TESTING is set
- if: '$QEMU_JOB_AVOCADO && $QEMU_CI_AVOCADO_TESTING != "1" && $CI_PROJECT_NAMESPACE != "qemu-project"'
  when: never
@@ -59,5 +63,10 @@
 # an earlier criteria
 #
 
+# Forks pipeline jobs don't start automatically unless
+# QEMU_CI=2 is set
+- if: '$QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
+  when: manual
+
# Jobs can run if any jobs they depend on were successful
 - when: on_success
-- 
2.30.2
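[Editorial aside, not part of the patch: the combined effect of the two new rules can be sketched as a small decision function. This is illustrative only; GitLab evaluates the YAML `rules:` list itself, not shell code.]

```shell
# Decision table implied by the two rules added above:
# namespace "qemu-project" keeps its existing behaviour, forks are
# gated on the QEMU_CI push variable.
pipeline_mode() {
  qemu_ci="$1"
  namespace="$2"
  if [ "$namespace" = "qemu-project" ]; then
    echo "auto"     # upstream repo: unchanged, jobs run as before
  elif [ "$qemu_ci" = "2" ]; then
    echo "auto"     # fork with QEMU_CI=2: create pipeline, run all jobs
  elif [ "$qemu_ci" = "1" ]; then
    echo "manual"   # fork with QEMU_CI=1: create pipeline, start jobs by hand
  else
    echo "none"     # fork with QEMU_CI unset: no pipeline at all
  fi
}
```

For example, `pipeline_mode "" myfork` yields `none`, which is the new default that preserves shared-runner credits.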




[PATCH v1 27/33] configure: remove unused variables from config-host.mak

2022-05-27 Thread Alex Bennée
From: Paolo Bonzini 

The only compiler variable that is still needed is $(CC), for
contrib/plugins/Makefile.  All firmware builds have their own
config-host.mak file.

Signed-off-by: Paolo Bonzini 
Message-Id: <20220517092616.1272238-17-pbonz...@redhat.com>
Signed-off-by: Alex Bennée 
---
 configure | 6 --
 1 file changed, 6 deletions(-)

diff --git a/configure b/configure
index 89a0470cc2..4c01625459 100755
--- a/configure
+++ b/configure
@@ -2394,11 +2394,6 @@ echo "GENISOIMAGE=$genisoimage" >> $config_host_mak
 echo "MESON=$meson" >> $config_host_mak
 echo "NINJA=$ninja" >> $config_host_mak
 echo "CC=$cc" >> $config_host_mak
-echo "AR=$ar" >> $config_host_mak
-echo "AS=$as" >> $config_host_mak
-echo "CCAS=$ccas" >> $config_host_mak
-echo "OBJCOPY=$objcopy" >> $config_host_mak
-echo "LD=$ld" >> $config_host_mak
 echo "QEMU_CFLAGS=$QEMU_CFLAGS" >> $config_host_mak
 echo "QEMU_CXXFLAGS=$QEMU_CXXFLAGS" >> $config_host_mak
 echo "QEMU_OBJCFLAGS=$QEMU_OBJCFLAGS" >> $config_host_mak
@@ -2406,7 +2401,6 @@ echo "GLIB_CFLAGS=$glib_cflags" >> $config_host_mak
 echo "GLIB_LIBS=$glib_libs" >> $config_host_mak
 echo "GLIB_VERSION=$(pkg-config --modversion glib-2.0)" >> $config_host_mak
 echo "QEMU_LDFLAGS=$QEMU_LDFLAGS" >> $config_host_mak
-echo "STRIP=$strip" >> $config_host_mak
 echo "EXESUF=$EXESUF" >> $config_host_mak
 
 # use included Linux headers
-- 
2.30.2




Re: [PATCH v1 07/33] tests/docker: update debian-armel-cross with lcitool

2022-05-27 Thread Daniel P . Berrangé
On Fri, May 27, 2022 at 04:35:37PM +0100, Alex Bennée wrote:
> Use lcitool to update debian-armel-cross to a Debian 11 based system.
> 
> Signed-off-by: Alex Bennée 
> ---
>  .gitlab-ci.d/container-cross.yml  |   3 +-
>  tests/docker/Makefile.include |   1 -
>  .../dockerfiles/debian-armel-cross.docker | 178 --
>  tests/lcitool/refresh |   5 +
>  4 files changed, 164 insertions(+), 23 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH] linux-user: Adjust child_tidptr on set_tid_address() syscall

2022-05-27 Thread Helge Deller
Keep track of the new child tidptr given by a set_tid_address() syscall.

Signed-off-by: Helge Deller 

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index f65045efe6..fdf5c1c03e 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -12202,7 +12202,11 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,

 #if defined(TARGET_NR_set_tid_address) && defined(__NR_set_tid_address)
     case TARGET_NR_set_tid_address:
-        return get_errno(set_tid_address((int *)g2h(cpu, arg1)));
+        {
+            TaskState *ts = cpu->opaque;
+            ts->child_tidptr = arg1;
+            return get_errno(set_tid_address((int *)g2h(cpu, ts->child_tidptr)));
+        }
 #endif

 case TARGET_NR_tkill:


