Re: [PATCH 1/1] migration: Terminate multifd threads on yank

2021-08-03 Thread Leonardo Bras Soares Passos
Hello Dave,

> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 377da78f5b..744a180dfe 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -1040,6 +1040,17 @@ void multifd_recv_sync_main(void)
> >  trace_multifd_recv_sync_main(multifd_recv_state->packet_num);
> >  }
> >
> > +void multifd_shutdown(void)
> > +{
> > +if (!migrate_use_multifd()) {
> > +return;
> > +}
> > +
> > +if (multifd_send_state) {
> > +multifd_send_terminate_threads(NULL);
> > +}
>
> That calls :
> for (i = 0; i < migrate_multifd_channels(); i++) {
> MultiFDSendParams *p = &multifd_send_state->params[i];
>
> qemu_mutex_lock(&p->mutex);
> p->quit = true;
> qemu_sem_post(&p->sem);
> qemu_mutex_unlock(&p->mutex);
> }
>
> so why doesn't this also get stuck in the same mutex you're trying to
> fix?

You are right, I got confused about the locks.
I need to take a closer look at the code and truly understand why this
patch fixes (?) the issue.

>
> Does the qio_channel_shutdown actually cause a shutdown on all fd's
> for the multifd?

As far as I tested, it shuts down a single fd, but whenever this fd
fails in its first sendmsg() it causes the migration to fail, and all
the other fds get shut down as well.

>
> (I've just seen the multifd/cancel test fail stuck in multifd_send_sync_main
> waiting on one of the locks).
>
> Dave
>

> >
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
>

I will do a little more reading / debugging in this code.
Thanks Dave!




Re: [PATCH for-6.1] qga-win/msi: fix missing libstdc++-6 DLL in MSI installer

2021-08-03 Thread Marc-André Lureau
On Tue, Aug 3, 2021 at 8:36 AM Michael Roth  wrote:

> libstdc++ is required for the qga-vss.dll that provides fsfreeze
> functionality. Currently it is not provided by the MSI installer,
> resulting in fsfreeze being disabled in guest environments where it has
> not been installed by other means.
>
> In the future this would be better handled via the gcc-cpp ComponentGroup
> provided by msitools, but that is best done as part of a general
> rework of DLL dependency handling in the installer build. Keep it
> simple for now to fix this regression.
>
> Tested with Fedora 34 mingw build environment.
>
> Cc: Gerd Hoffmann 
> Cc: Kostiantyn Kostiuk 
> Cc: Marc-André Lureau 
> Cc: Philippe Mathieu-Daudé 
> Signed-off-by: Michael Roth 
>

Reviewed-by: Marc-André Lureau 

---
>  qga/installer/qemu-ga.wxs | 4 
>  1 file changed, 4 insertions(+)
>
> diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
> index ce7b25b5e1..0950e8c6be 100644
> --- a/qga/installer/qemu-ga.wxs
> +++ b/qga/installer/qemu-ga.wxs
> @@ -84,6 +84,9 @@
>   Remove="uninstall" Name="QEMU-GA" Wait="yes" />
>
>
> +   Guid="{55E737B5-9127-4A11-9FC3-A29367714574}">
> + Source="$(var.Mingw_bin)/libstdc++-6.dll" KeyPath="yes" DiskId="1"/>
> +  
> Guid="{CB19C453-FABB-4BB1-ABAB-6B74F687BFBB}">
>   Source="$(env.BUILD_DIR)/qga/vss-win32/qga-vss.dll" KeyPath="yes"
> DiskId="1"/>
>
> @@ -164,6 +167,7 @@
>  
>
>
> +  
>
>
>
> --
> 2.25.1
>
>


Re: [PATCH 1/1] migration: Terminate multifd threads on yank

2021-08-03 Thread Leonardo Bras Soares Passos
Hello Lukas,

On Tue, Aug 3, 2021 at 3:42 AM Lukas Straub  wrote:
> Hi,
> There is an easier explanation: I forgot the send side of multifd
> altogether (I thought it was covered by migration_channel_connect()).
> So yank won't actually shutdown() the multifd sockets on the send side.

If I understood that correctly, it seems to abort the migration (and
therefore close all fds) once the fd that got qio_channel_shutdown()
reaches sendmsg(), which can take a while.
But it really does not close the fds before that.

>
> In the bugreport you wrote
> > (As a test, I called qio_channel_shutdown() in every multifd iochannel and 
> > yank worked just fine, but I could not retry migration, because it was 
> > still 'ongoing')
> That sounds like a bug in the error handling for multifd. But quickly
> looking at the code, it should properly fail the migration.

In the end, simply asking each thread to exit ended up giving me a
smoother migration abort.
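
A minimal sketch of that per-channel shutdown experiment from the bug
report could look like the following (an illustration only, not the
posted patch; it assumes the MultiFDSendParams / multifd_send_state
layout in migration/multifd.c):

    static void multifd_shutdown_all_send_channels(void)
    {
        int i;

        for (i = 0; i < migrate_multifd_channels(); i++) {
            MultiFDSendParams *p = &multifd_send_state->params[i];

            /* force any blocked sendmsg() on this channel to return early */
            if (p->c) {
                qio_channel_shutdown(p->c, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
            }
        }
    }
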
>
> BTW: You can shutdown outgoing sockets from outside of qemu with the
> 'ss' utility, like this: 'sudo ss -K dst <destination-address> dport =
> <destination-port>'

Very nice tool, thanks for sharing!

>
> Regards,
> Lukas Straub

Best regards,
Leonardo Bras




Re: need help with my config

2021-08-03 Thread Cédric Le Goater
Hello,

On 8/3/21 1:22 AM, Lindsay Ryan wrote:
> Hi Cedric, 
> Thanks for replying.
> I think I want to go down the PowerNV POWER9 route,
> for which I will need the OpenPOWER firmware.
> Looks like the webpage for downloading the prebuilt witherspoon and skiboot
> images is down/dead. It hasn't been working for me for 24 hours anyway.

Indeed :/

> Is that the only place to download that firmware?

Until we find a new hosting place, probably GH, you can try to build 
the FW images :

  https://github.com/open-power/op-build

from :

  https://github.com/open-power/op-build/archive/refs/tags/v2.6.tar.gz

or you can use these v2.6 files :

  https://www.kaod.org/qemu/powernv/

Cheers,

C. 




Re: [PATCH-for-6.2 v6 08/10] tests: Use QMP to check whether a TPM device model is available

2021-08-03 Thread Marc-André Lureau
Hi

On Tue, Aug 3, 2021 at 1:54 AM Stefan Berger  wrote:

> Use QMP to check whether a given TPM device model is available and if it
> is not the case then do not register the tests that require it.
>
> Signed-off-by: Stefan Berger 
>

lgtm
Reviewed-by: Marc-André Lureau 

---
>  tests/qtest/bios-tables-test.c |  8 +++
>  tests/qtest/tpm-emu.c  | 38 ++
>  tests/qtest/tpm-emu.h  |  2 ++
>  3 files changed, 43 insertions(+), 5 deletions(-)
>
> diff --git a/tests/qtest/bios-tables-test.c
> b/tests/qtest/bios-tables-test.c
> index 4ccbe56158..89bf55c838 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -1094,7 +1094,6 @@ uint64_t tpm_tis_base_addr;
>  static void test_acpi_tcg_tpm(const char *machine, const char *tpm_if,
>uint64_t base, enum TPMVersion tpm_version)
>  {
> -#ifdef CONFIG_TPM
>  gchar *tmp_dir_name =
> g_strdup_printf("qemu-test_acpi_%s_tcg_%s.XX",
>machine, tpm_if);
>  char *tmp_path = g_dir_make_tmp(tmp_dir_name, NULL);
> @@ -1140,9 +1139,6 @@ static void test_acpi_tcg_tpm(const char *machine,
> const char *tpm_if,
>  g_free(tmp_dir_name);
>  g_free(args);
>  free_test_data(&data);
> -#else
> -g_test_skip("TPM disabled");
> -#endif
>  }
>
>  static void test_acpi_q35_tcg_tpm_tis(void)
> @@ -1518,7 +1514,9 @@ int main(int argc, char *argv[])
>  return ret;
>  }
>  qtest_add_func("acpi/q35/oem-fields", test_acpi_oem_fields_q35);
> -qtest_add_func("acpi/q35/tpm-tis", test_acpi_q35_tcg_tpm_tis);
> +if (tpm_model_is_available("-machine q35", "tpm-tis")) {
> +qtest_add_func("acpi/q35/tpm2-tis",
> test_acpi_q35_tcg_tpm_tis);
> +}
>

(I noted the test doesn't use qos)

 qtest_add_func("acpi/piix4", test_acpi_piix4_tcg);
>  qtest_add_func("acpi/oem-fields", test_acpi_oem_fields_pc);
>  qtest_add_func("acpi/piix4/bridge", test_acpi_piix4_tcg_bridge);
> diff --git a/tests/qtest/tpm-emu.c b/tests/qtest/tpm-emu.c
> index 32c704194b..2994d1cf42 100644
> --- a/tests/qtest/tpm-emu.c
> +++ b/tests/qtest/tpm-emu.c
> @@ -16,6 +16,8 @@
>  #include "backends/tpm/tpm_ioctl.h"
>  #include "io/channel-socket.h"
>  #include "qapi/error.h"
> +#include "qapi/qmp/qlist.h"
> +#include "qapi/qmp/qstring.h"
>  #include "tpm-emu.h"
>
>  void tpm_emu_test_wait_cond(TPMTestState *s)
> @@ -192,3 +194,39 @@ void *tpm_emu_ctrl_thread(void *data)
>  object_unref(OBJECT(lioc));
>  return NULL;
>  }
> +
> +bool tpm_model_is_available(const char *args, const char *tpm_if)
> +{
> +QTestState *qts;
> +QDict *rsp_tpm;
> +bool ret = false;
> +
> +qts = qtest_init(args);
> +if (!qts) {
> +return false;
> +}
> +
> +rsp_tpm = qtest_qmp(qts, "{ 'execute': 'query-tpm'}");
> +if (!qdict_haskey(rsp_tpm, "error")) {
> +QDict *rsp_models = qtest_qmp(qts,
> +  "{ 'execute': 'query-tpm-models'}");
> +if (qdict_haskey(rsp_models, "return")) {
> +QList *models = qdict_get_qlist(rsp_models, "return");
> +QListEntry *e;
> +
> +QLIST_FOREACH_ENTRY(models, e) {
> +QString *s = qobject_to(QString, qlist_entry_obj(e));
> +const char *ename = qstring_get_str(s);
> +if (!strcmp(ename, tpm_if)) {
> +ret = true;
> +break;
> +}
> +}
> +}
> +qobject_unref(rsp_models);
> +}
> +qobject_unref(rsp_tpm);
> +qtest_quit(qts);
> +
> +return ret;
> +}
> diff --git a/tests/qtest/tpm-emu.h b/tests/qtest/tpm-emu.h
> index fcb5d7a1d6..c33d99af37 100644
> --- a/tests/qtest/tpm-emu.h
> +++ b/tests/qtest/tpm-emu.h
> @@ -22,6 +22,7 @@
>  #include "qemu/sockets.h"
>  #include "io/channel.h"
>  #include "sysemu/tpm.h"
> +#include "libqos/libqtest.h"
>
>  struct tpm_hdr {
>  uint16_t tag;
> @@ -50,5 +51,6 @@ typedef struct TPMTestState {
>
>  void tpm_emu_test_wait_cond(TPMTestState *s);
>  void *tpm_emu_ctrl_thread(void *data);
> +bool tpm_model_is_available(const char *args, const char *tpm_if);
>
>  #endif /* TESTS_TPM_EMU_H */
> --
> 2.31.1
>
>
>

-- 
Marc-André Lureau


[PATCH for-6.2 v4 07/14] machine: Use ms instead of global current_machine in sanity-check

2021-08-03 Thread Yanan Wang
In the sanity-check of smp_cpus and max_cpus against mc in function
machine_set_smp(), we are now using ms->smp.max_cpus for the check
but using current_machine->smp.max_cpus in the error message.
Tweak this by uniformly using the local ms.

Reviewed-by: Cornelia Huck 
Reviewed-by: Pankaj Gupta 
Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index a8173a0f45..e13a8f2f34 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -878,7 +878,7 @@ static void machine_set_smp(Object *obj, Visitor *v, const 
char *name,
 } else if (ms->smp.max_cpus > mc->max_cpus) {
 error_setg(errp, "Invalid SMP CPUs %d. The max CPUs "
"supported by machine '%s' is %d",
-   current_machine->smp.max_cpus,
+   ms->smp.max_cpus,
mc->name, mc->max_cpus);
 }
 
-- 
2.19.1




[PATCH for-6.2 v4 05/14] hw: Add compat machines for 6.2

2021-08-03 Thread Yanan Wang
Add 6.2 machine types for arm/i440fx/q35/s390x/spapr.

Reviewed-by: Pankaj Gupta 
Reviewed-by: Cornelia Huck 
Reviewed-by: Andrew Jones 
Acked-by: David Gibson 
Signed-off-by: Yanan Wang 
---
 hw/arm/virt.c  |  9 -
 hw/core/machine.c  |  3 +++
 hw/i386/pc.c   |  3 +++
 hw/i386/pc_piix.c  | 14 +-
 hw/i386/pc_q35.c   | 13 -
 hw/ppc/spapr.c | 15 +--
 hw/s390x/s390-virtio-ccw.c | 14 +-
 include/hw/boards.h|  3 +++
 include/hw/i386/pc.h   |  3 +++
 9 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 81eda46b0b..01165f7f53 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2788,10 +2788,17 @@ static void machvirt_machine_init(void)
 }
 type_init(machvirt_machine_init);
 
+static void virt_machine_6_2_options(MachineClass *mc)
+{
+}
+DEFINE_VIRT_MACHINE_AS_LATEST(6, 2)
+
 static void virt_machine_6_1_options(MachineClass *mc)
 {
+virt_machine_6_2_options(mc);
+compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
 }
-DEFINE_VIRT_MACHINE_AS_LATEST(6, 1)
+DEFINE_VIRT_MACHINE(6, 1)
 
 static void virt_machine_6_0_options(MachineClass *mc)
 {
diff --git a/hw/core/machine.c b/hw/core/machine.c
index e879163c3b..458d9736e3 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -37,6 +37,9 @@
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-pci.h"
 
+GlobalProperty hw_compat_6_1[] = {};
+const size_t hw_compat_6_1_len = G_N_ELEMENTS(hw_compat_6_1);
+
 GlobalProperty hw_compat_6_0[] = {
 { "gpex-pcihost", "allow-unmapped-accesses", "false" },
 { "i8042", "extended-state", "false"},
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index fcf6905219..afd8b9c283 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -94,6 +94,9 @@
 #include "trace.h"
 #include CONFIG_DEVICES
 
+GlobalProperty pc_compat_6_1[] = {};
+const size_t pc_compat_6_1_len = G_N_ELEMENTS(pc_compat_6_1);
+
 GlobalProperty pc_compat_6_0[] = {
 { "qemu64" "-" TYPE_X86_CPU, "family", "6" },
 { "qemu64" "-" TYPE_X86_CPU, "model", "6" },
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 30b8bd6ea9..fd5c2277f2 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -413,7 +413,7 @@ static void pc_i440fx_machine_options(MachineClass *m)
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE);
 }
 
-static void pc_i440fx_6_1_machine_options(MachineClass *m)
+static void pc_i440fx_6_2_machine_options(MachineClass *m)
 {
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
 pc_i440fx_machine_options(m);
@@ -422,6 +422,18 @@ static void pc_i440fx_6_1_machine_options(MachineClass *m)
 pcmc->default_cpu_version = 1;
 }
 
+DEFINE_I440FX_MACHINE(v6_2, "pc-i440fx-6.2", NULL,
+  pc_i440fx_6_2_machine_options);
+
+static void pc_i440fx_6_1_machine_options(MachineClass *m)
+{
+pc_i440fx_6_2_machine_options(m);
+m->alias = NULL;
+m->is_default = false;
+compat_props_add(m->compat_props, hw_compat_6_1, hw_compat_6_1_len);
+compat_props_add(m->compat_props, pc_compat_6_1, pc_compat_6_1_len);
+}
+
 DEFINE_I440FX_MACHINE(v6_1, "pc-i440fx-6.1", NULL,
   pc_i440fx_6_1_machine_options);
 
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 04b4a4788d..b45903b15e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -355,7 +355,7 @@ static void pc_q35_machine_options(MachineClass *m)
 m->max_cpus = 288;
 }
 
-static void pc_q35_6_1_machine_options(MachineClass *m)
+static void pc_q35_6_2_machine_options(MachineClass *m)
 {
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
 pc_q35_machine_options(m);
@@ -363,6 +363,17 @@ static void pc_q35_6_1_machine_options(MachineClass *m)
 pcmc->default_cpu_version = 1;
 }
 
+DEFINE_Q35_MACHINE(v6_2, "pc-q35-6.2", NULL,
+   pc_q35_6_2_machine_options);
+
+static void pc_q35_6_1_machine_options(MachineClass *m)
+{
+pc_q35_6_2_machine_options(m);
+m->alias = NULL;
+compat_props_add(m->compat_props, hw_compat_6_1, hw_compat_6_1_len);
+compat_props_add(m->compat_props, pc_compat_6_1, pc_compat_6_1_len);
+}
+
 DEFINE_Q35_MACHINE(v6_1, "pc-q35-6.1", NULL,
pc_q35_6_1_machine_options);
 
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 81699d4f8b..d39fd4e644 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4685,15 +4685,26 @@ static void 
spapr_machine_latest_class_options(MachineClass *mc)
 }\
 type_init(spapr_machine_register_##suffix)
 
+/*
+ * pseries-6.2
+ */
+static void spapr_machine_6_2_class_options(MachineClass *mc)
+{
+/* Defaults for the latest behaviour inherited from the base class */
+}
+
+DEFINE_SPAPR_MACHINE(6_2, "6.2", true);
+
 /*
  * pseries-6.1
  */
 static void spapr_machine_6_1_class_options(MachineClass *mc)
 {
-/* Defaults for the latest behaviour inherited from the base class */
+

[PATCH for-6.2 v4 04/14] machine: Improve the error reporting of smp parsing

2021-08-03 Thread Yanan Wang
We have two requirements for a valid SMP configuration:
the product of "sockets * cores * threads" must represent all the
possible cpus, i.e., max_cpus, and max_cpus must in turn include the
initially present cpus, i.e., smp_cpus.

So we only need to ensure 1) "sockets * cores * threads == maxcpus"
first, and then 2) "maxcpus >= cpus". With a reasonable order of the
sanity checks, we can simplify the error reporting code. When reporting
an error we also include the exact value of each topology member so
that users can easily see what is going on.
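
For illustration (hypothetical values, using the message format
introduced below): a fully-specified but inconsistent config such as
"-smp cpus=8,sockets=2,cores=2,threads=1,maxcpus=16" now fails check 1)
with:
  Invalid CPU topology: product of the hierarchy must match maxcpus:
  sockets (2) * cores (2) * threads (1) != maxcpus (16)
while "-smp cpus=8,sockets=2,cores=2,threads=1,maxcpus=4" passes check 1)
but fails check 2):
  Invalid CPU topology: maxcpus must be equal to or greater than smp:
  sockets (2) * cores (2) * threads (1) == maxcpus (4) < smp_cpus (8)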

Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 22 +-
 hw/i386/pc.c  | 24 ++--
 2 files changed, 19 insertions(+), 27 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 958e6e7107..e879163c3b 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -777,25 +777,21 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 maxcpus = maxcpus > 0 ? maxcpus : sockets * cores * threads;
 cpus = cpus > 0 ? cpus : maxcpus;
 
-if (sockets * cores * threads < cpus) {
-error_setg(errp, "cpu topology: "
-   "sockets (%u) * cores (%u) * threads (%u) < "
-   "smp_cpus (%u)",
-   sockets, cores, threads, cpus);
+if (sockets * cores * threads != maxcpus) {
+error_setg(errp, "Invalid CPU topology: "
+   "product of the hierarchy must match maxcpus: "
+   "sockets (%u) * cores (%u) * threads (%u) "
+   "!= maxcpus (%u)",
+   sockets, cores, threads, maxcpus);
 return;
 }
 
 if (maxcpus < cpus) {
-error_setg(errp, "maxcpus must be equal to or greater than smp");
-return;
-}
-
-if (sockets * cores * threads != maxcpus) {
 error_setg(errp, "Invalid CPU topology: "
+   "maxcpus must be equal to or greater than smp: "
"sockets (%u) * cores (%u) * threads (%u) "
-   "!= maxcpus (%u)",
-   sockets, cores, threads,
-   maxcpus);
+   "== maxcpus (%u) < smp_cpus (%u)",
+   sockets, cores, threads, maxcpus, cpus);
 return;
 }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 9ad7ae5254..fcf6905219 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -747,25 +747,21 @@ static void pc_smp_parse(MachineState *ms, 
SMPConfiguration *config, Error **err
 maxcpus = maxcpus > 0 ? maxcpus : sockets * dies * cores * threads;
 cpus = cpus > 0 ? cpus : maxcpus;
 
-if (sockets * dies * cores * threads < cpus) {
-error_setg(errp, "cpu topology: "
-   "sockets (%u) * dies (%u) * cores (%u) * threads (%u) < "
-   "smp_cpus (%u)",
-   sockets, dies, cores, threads, cpus);
+if (sockets * dies * cores * threads != maxcpus) {
+error_setg(errp, "Invalid CPU topology: "
+   "product of the hierarchy must match maxcpus: "
+   "sockets (%u) * dies (%u) * cores (%u) * threads (%u) "
+   "!= maxcpus (%u)",
+   sockets, dies, cores, threads, maxcpus);
 return;
 }
 
 if (maxcpus < cpus) {
-error_setg(errp, "maxcpus must be equal to or greater than smp");
-return;
-}
-
-if (sockets * dies * cores * threads != maxcpus) {
-error_setg(errp, "Invalid CPU topology deprecated: "
+error_setg(errp, "Invalid CPU topology: "
+   "maxcpus must be equal to or greater than smp: "
"sockets (%u) * dies (%u) * cores (%u) * threads (%u) "
-   "!= maxcpus (%u)",
-   sockets, dies, cores, threads,
-   maxcpus);
+   "== maxcpus (%u) < smp_cpus (%u)",
+   sockets, dies, cores, threads, maxcpus, cpus);
 return;
 }
 
-- 
2.19.1




[PATCH for-6.2 v4 13/14] machine: Split out the smp parsing code

2021-08-03 Thread Yanan Wang
We are going to introduce an unit test for the parser smp_parse()
in hw/core/machine.c, but now machine.c is only built in softmmu.

In order to solve the build dependency on the smp parsing code and
avoid building unrelated stuff for the unit tests, move the related
code from machine.c into a new common file, i.e., machine-smp.c.

Signed-off-by: Yanan Wang 
---
 MAINTAINERS   |   1 +
 hw/core/machine-smp.c | 199 ++
 hw/core/machine.c | 177 -
 hw/core/meson.build   |   1 +
 include/hw/boards.h   |   1 +
 5 files changed, 202 insertions(+), 177 deletions(-)
 create mode 100644 hw/core/machine-smp.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 42ac45c3e5..e38d6d5b9d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1629,6 +1629,7 @@ F: cpu.c
 F: hw/core/cpu.c
 F: hw/core/machine-qmp-cmds.c
 F: hw/core/machine.c
+F: hw/core/machine-smp.c
 F: hw/core/null-machine.c
 F: hw/core/numa.c
 F: hw/cpu/cluster.c
diff --git a/hw/core/machine-smp.c b/hw/core/machine-smp.c
new file mode 100644
index 00..a14bd4dec1
--- /dev/null
+++ b/hw/core/machine-smp.c
@@ -0,0 +1,199 @@
+/*
+ * QEMU Machine (related to SMP)
+ *
+ * Copyright (c) 2021 Huawei Technologies Co., Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see .
+ */
+
+#include "qemu/osdep.h"
+#include "hw/boards.h"
+#include "qapi/error.h"
+#include "qemu/cutils.h"
+
+static char *cpu_topology_hierarchy(MachineState *ms)
+{
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+SMPCompatProps *smp_props = &mc->smp_props;
+char topo_msg[256] = "";
+
+/*
+ * Topology members should be ordered from the largest to the smallest.
+ * Concept of sockets/cores/threads is supported by default and will be
+ * reported in the hierarchy. Unsupported members will not be reported.
+ */
+g_autofree char *sockets_msg = g_strdup_printf(
+" * sockets (%u)", ms->smp.sockets);
+pstrcat(topo_msg, sizeof(topo_msg), sockets_msg);
+
+if (smp_props->dies_supported) {
+g_autofree char *dies_msg = g_strdup_printf(
+" * dies (%u)", ms->smp.dies);
+pstrcat(topo_msg, sizeof(topo_msg), dies_msg);
+}
+
+g_autofree char *cores_msg = g_strdup_printf(
+" * cores (%u)", ms->smp.cores);
+pstrcat(topo_msg, sizeof(topo_msg), cores_msg);
+
+g_autofree char *threads_msg = g_strdup_printf(
+" * threads (%u)", ms->smp.threads);
+pstrcat(topo_msg, sizeof(topo_msg), threads_msg);
+
+return g_strdup_printf("%s", topo_msg + 3);
+}
+
+/*
+ * smp_parse - Generic function used to parse the given SMP configuration
+ *
+ * If not supported by the machine, a topology parameter must be omitted
+ * or specified equal to 1. Concept of sockets/cores/threads is supported
+ * by default. Unsupported members will not be reported in the topology
+ * hierarchy message.
+ *
+ * For compatibility, omitted arch-specific members (e.g. dies) will not
+ * be computed, but will directly default to 1 instead. This logic should
+ * also apply to future introduced ones.
+ *
+ * Omitted arch-neutral parameters (i.e. cpus/sockets/cores/threads/maxcpus)
+ * will be computed based on the provided ones. When both maxcpus and cpus
+ * are omitted, maxcpus will be computed from the given parameters and cpus
+ * will be set equal to maxcpus. When only one of maxcpus and cpus is given
+ * then the omitted one will be set to its given counterpart's value.
+ * Both maxcpus and cpus may be specified, but maxcpus must be equal to or
+ * greater than cpus.
+ *
+ * In calculation of omitted sockets/cores/threads, we prefer sockets over
+ * cores over threads before 6.2, while preferring cores over sockets over
+ * threads since 6.2.
+ */
+void smp_parse(MachineState *ms, SMPConfiguration *config, Error **errp)
+{
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+unsigned cpus= config->has_cpus ? config->cpus : 0;
+unsigned sockets = config->has_sockets ? config->sockets : 0;
+unsigned dies= config->has_dies ? config->dies : 0;
+unsigned cores   = config->has_cores ? config->cores : 0;
+unsigned threads = config->has_threads ? config->threads : 0;
+unsigned maxcpus = config->has_maxcpus ? config->maxcpus : 0;
+
+/*
+ * A specified topology parameter must be greater than zero,
+ * explicit configura

[PATCH for-6.2 v4 00/14] machine: smp parsing fixes and improvement

2021-08-03 Thread Yanan Wang
Hi,

This is the new version (v4) of the series [1] that I have posted to
introduce some fixes and improvements for SMP parsing.

[1] 
https://lore.kernel.org/qemu-devel/20210728034848.75228-1-wangyana...@huawei.com/

Most of this series is about the SMP parsers:
maxcpus is now uniformly used to calculate the omitted topology members,
the calculation of omitted maxcpus/cpus is improved, and the error
reporting is improved. It's also suggested that we should start to
prefer cores over sockets over threads on the newer machine types,
which will make the computed virtual topology more reflective of real
hardware.

In order to reduce code duplication and ease code maintenance, smp_parse
is now converted into a parser generic enough for all arches, so that the
PC-specific one can be removed. This also makes it convenient to introduce
more topology members into the generic parser in the future.

Finally, a QEMU unit test for the parsing of a given SMP configuration is
added. Since all the parsing logic is in the generic function smp_parse(),
this test passes different SMP configurations to the function and compares
the parsing result with what is expected. In the test, all possible
collections of the topology parameters and the corresponding expected
results are listed, including the valid and invalid ones. The preference
of sockets over cores, the preference of cores over sockets, and the
support for multiple dies are also taken into consideration.

---

Changelogs:

v3->v4:
- put all the sanity check into the parser
- refine the unit test and add it back to the series
- add the R-b/A-b tags for the reviewed/acked patches
- v3: 
https://lore.kernel.org/qemu-devel/20210728034848.75228-1-wangyana...@huawei.com/

v2->v3:
- apply the calculation improvement to smp_parse and pc_smp_parse
  separately and then convert the finally improved parsers into a
  generic one, so that patches can be reviewed separately.
- to ease review, drop the unit test part for a while until we have
  a good enough generic parser.
- send the patch "machine: Disallow specifying topology parameters as zero"
  for 6.1 separately.
- v2: 
https://lore.kernel.org/qemu-devel/20210719032043.25416-1-wangyana...@huawei.com/

v1->v2:
- disallow "anything=0" in the smp configuration (Andrew)
- make function smp_parse() a generic helper for all arches
- improve the error reporting in the parser
- start to prefer cores over sockets since 6.2 (Daniel)
- add a unit test for the smp parsing (Daniel)
- v1: https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg00259.html

---

Yanan Wang (14):
  machine: Minor refactor/cleanup for the smp parsers
  machine: Uniformly use maxcpus to calculate the omitted parameters
  machine: Set the value of cpus to match maxcpus if it's omitted
  machine: Improve the error reporting of smp parsing
  hw: Add compat machines for 6.2
  machine: Prefer cores over sockets in smp parsing since 6.2
  machine: Use ms instead of global current_machine in sanity-check
  machine: Tweak the order of topology members in struct CpuTopology
  machine: Make smp_parse generic enough for all arches
  machine: Remove smp_parse callback from MachineClass
  machine: Move smp_prefer_sockets to struct SMPCompatProps
  machine: Put all sanity-check in the generic SMP parser
  machine: Split out the smp parsing code
  tests/unit: Add a unit test for smp parsing

 MAINTAINERS |   2 +
 hw/arm/virt.c   |  10 +-
 hw/core/machine-smp.c   | 199 
 hw/core/machine.c   | 106 +
 hw/core/meson.build |   1 +
 hw/i386/pc.c|  66 +--
 hw/i386/pc_piix.c   |  15 +-
 hw/i386/pc_q35.c|  14 +-
 hw/ppc/spapr.c  |  16 +-
 hw/s390x/s390-virtio-ccw.c  |  15 +-
 include/hw/boards.h |  27 +-
 include/hw/i386/pc.h|   3 +
 qemu-options.hx |  14 +-
 tests/unit/meson.build  |   1 +
 tests/unit/test-smp-parse.c | 892 
 15 files changed, 1204 insertions(+), 177 deletions(-)
 create mode 100644 hw/core/machine-smp.c
 create mode 100644 tests/unit/test-smp-parse.c

--
2.19.1




[PATCH for-6.2 v4 08/14] machine: Tweak the order of topology members in struct CpuTopology

2021-08-03 Thread Yanan Wang
Now that all the possible topology parameters are integrated in struct
CpuTopology, tweak the order of topology members to be "cpus/sockets/
dies/cores/threads/maxcpus" for readability and consistency. We also
tweak the comment by adding an explanation of the dies parameter.

Reviewed-by: Pankaj Gupta 
Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c   | 8 
 include/hw/boards.h | 7 ---
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index e13a8f2f34..46eaf522ee 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -826,11 +826,11 @@ static void machine_get_smp(Object *obj, Visitor *v, 
const char *name,
 {
 MachineState *ms = MACHINE(obj);
 SMPConfiguration *config = &(SMPConfiguration){
-.has_cores = true, .cores = ms->smp.cores,
+.has_cpus = true, .cpus = ms->smp.cpus,
 .has_sockets = true, .sockets = ms->smp.sockets,
 .has_dies = true, .dies = ms->smp.dies,
+.has_cores = true, .cores = ms->smp.cores,
 .has_threads = true, .threads = ms->smp.threads,
-.has_cpus = true, .cpus = ms->smp.cpus,
 .has_maxcpus = true, .maxcpus = ms->smp.max_cpus,
 };
 if (!visit_type_SMPConfiguration(v, name, &config, &error_abort)) {
@@ -1057,10 +1057,10 @@ static void machine_initfn(Object *obj)
 /* default to mc->default_cpus */
 ms->smp.cpus = mc->default_cpus;
 ms->smp.max_cpus = mc->default_cpus;
-ms->smp.cores = 1;
+ms->smp.sockets = 1;
 ms->smp.dies = 1;
+ms->smp.cores = 1;
 ms->smp.threads = 1;
-ms->smp.sockets = 1;
 }
 
 static void machine_finalize(Object *obj)
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 2ae039b74f..2a1bba86c0 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -275,17 +275,18 @@ typedef struct DeviceMemoryState {
 /**
  * CpuTopology:
  * @cpus: the number of present logical processors on the machine
- * @cores: the number of cores in one package
- * @threads: the number of threads in one core
  * @sockets: the number of sockets on the machine
+ * @dies: the number of dies in one socket
+ * @cores: the number of cores in one die
+ * @threads: the number of threads in one core
  * @max_cpus: the maximum number of logical processors on the machine
  */
 typedef struct CpuTopology {
 unsigned int cpus;
+unsigned int sockets;
 unsigned int dies;
 unsigned int cores;
 unsigned int threads;
-unsigned int sockets;
 unsigned int max_cpus;
 } CpuTopology;
 
-- 
2.19.1




[PATCH for-6.2 v4 02/14] machine: Uniformly use maxcpus to calculate the omitted parameters

2021-08-03 Thread Yanan Wang
We are currently using maxcpus to calculate the omitted sockets
but using cpus to calculate the omitted cores/threads. This makes
cmdlines like:
  -smp cpus=8,maxcpus=16
  -smp cpus=8,cores=4,maxcpus=16
  -smp cpus=8,threads=2,maxcpus=16
work fine but the ones like:
  -smp cpus=8,sockets=2,maxcpus=16
  -smp cpus=8,sockets=2,cores=4,maxcpus=16
  -smp cpus=8,sockets=2,threads=2,maxcpus=16
break the sanity check.

Since we require for a valid config that the product of "sockets * cores
* threads" must equal maxcpus, we should uniformly use maxcpus to
calculate their omitted values.

Also the if-branch of "cpus == 0 || sockets == 0" was split into two
branches of "cpus == 0" and "sockets == 0" so that we can clearly read
that we are parsing the configuration with a preference on cpus over
sockets over cores over threads.

Note: the change in this patch won't affect any existing working cmdlines
but improves consistency and allows more incomplete configs to be valid.
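
As a worked example (hypothetical values, derived from the code below):
for "-smp cpus=8,sockets=2,maxcpus=16", the old code computed
cores = cpus / (sockets * threads) = 8 / 2 = 4, so that
sockets * cores * threads = 8 != maxcpus (16) and the config was rejected;
with this patch, cores = maxcpus / (sockets * threads) = 16 / 2 = 8,
giving 2 * 8 * 1 = 16 == maxcpus, and the 8 present cpus fit within it.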

Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 30 +++---
 hw/i386/pc.c  | 30 +++---
 2 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 696d9e8e47..69979c93dd 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -755,24 +755,26 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 }
 
 /* compute missing values, prefer sockets over cores over threads */
-if (cpus == 0 || sockets == 0) {
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+
+if (cpus == 0) {
+sockets = sockets > 0 ? sockets : 1;
 cores = cores > 0 ? cores : 1;
 threads = threads > 0 ? threads : 1;
-if (cpus == 0) {
-sockets = sockets > 0 ? sockets : 1;
-cpus = cores * threads * sockets;
-} else {
-maxcpus = maxcpus > 0 ? maxcpus : cpus;
-sockets = maxcpus / (cores * threads);
-}
+cpus = sockets * cores * threads;
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+} else if (sockets == 0) {
+cores = cores > 0 ? cores : 1;
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (cores * threads);
 } else if (cores == 0) {
 threads = threads > 0 ? threads : 1;
-cores = cpus / (sockets * threads);
-cores = cores > 0 ? cores : 1;
+cores = maxcpus / (sockets * threads);
 } else if (threads == 0) {
-threads = cpus / (cores * sockets);
-threads = threads > 0 ? threads : 1;
-} else if (sockets * cores * threads < cpus) {
+threads = maxcpus / (sockets * cores);
+}
+
+if (sockets * cores * threads < cpus) {
 error_setg(errp, "cpu topology: "
"sockets (%u) * cores (%u) * threads (%u) < "
"smp_cpus (%u)",
@@ -780,8 +782,6 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 return;
 }
 
-maxcpus = maxcpus > 0 ? maxcpus : cpus;
-
 if (maxcpus < cpus) {
 error_setg(errp, "maxcpus must be equal to or greater than smp");
 return;
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index acd31af452..a9ff9ef52c 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -725,24 +725,26 @@ static void pc_smp_parse(MachineState *ms, 
SMPConfiguration *config, Error **err
 dies = dies > 0 ? dies : 1;
 
 /* compute missing values, prefer sockets over cores over threads */
-if (cpus == 0 || sockets == 0) {
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+
+if (cpus == 0) {
+sockets = sockets > 0 ? sockets : 1;
 cores = cores > 0 ? cores : 1;
 threads = threads > 0 ? threads : 1;
-if (cpus == 0) {
-sockets = sockets > 0 ? sockets : 1;
-cpus = cores * threads * dies * sockets;
-} else {
-maxcpus = maxcpus > 0 ? maxcpus : cpus;
-sockets = maxcpus / (dies * cores * threads);
-}
+cpus = sockets * dies * cores * threads;
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+} else if (sockets == 0) {
+cores = cores > 0 ? cores : 1;
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (dies * cores * threads);
 } else if (cores == 0) {
 threads = threads > 0 ? threads : 1;
-cores = cpus / (sockets * dies * threads);
-cores = cores > 0 ? cores : 1;
+cores = maxcpus / (sockets * dies * threads);
 } else if (threads == 0) {
-threads = cpus / (cores * dies * sockets);
-threads = threads > 0 ? threads : 1;
-} else if (sockets * dies * cores * threads < cpus) {
+threads = maxcpus / (sockets * dies * cores);
+}
+
+if (sockets * dies * cores * threads < cpus) {
 error_setg(errp, "cpu topology: "
"sockets (%u) * dies (%u) * cores (%u) * threads (%u) < "
"smp_cpus (%u)",

[PATCH for-6.2 v4 06/14] machine: Prefer cores over sockets in smp parsing since 6.2

2021-08-03 Thread Yanan Wang
In the real SMP hardware topology world, it's much more likely that
we have high cores-per-socket counts and only a few sockets in total.
The current preference of sockets over cores in smp parsing, however,
results in a virtual cpu topology with low cores-per-socket counts and
a large number of sockets, which is just contrary to the real world.

Given that it is better to make the virtual cpu topology more
reflective of the real world, and also for the sake of compatibility,
we start to prefer cores over sockets over threads in smp parsing
since machine type 6.2 for the different arches.

In this patch, a boolean "smp_prefer_sockets" is added, and we keep
the old preference only on older machine types while enabling the new
one since type 6.2 for all arches, using the machine compat mechanism.
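
As a quick illustration (hypothetical config, based on the logic below):
with "-smp 16" and everything else omitted, a pre-6.2 machine type
(prefer_sockets) resolves to sockets=16, cores=1, threads=1, while a
6.2+ machine type (prefer cores) resolves to sockets=1, cores=16,
threads=1.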

Reviewed-by: Andrew Jones 
Acked-by: Cornelia Huck 
Acked-by: David Gibson 
Suggested-by: Daniel P. Berrange 
Signed-off-by: Yanan Wang 
---
 hw/arm/virt.c  |  1 +
 hw/core/machine.c  | 36 ++--
 hw/i386/pc.c   | 36 ++--
 hw/i386/pc_piix.c  |  1 +
 hw/i386/pc_q35.c   |  1 +
 hw/ppc/spapr.c |  1 +
 hw/s390x/s390-virtio-ccw.c |  1 +
 include/hw/boards.h|  1 +
 qemu-options.hx|  3 ++-
 9 files changed, 60 insertions(+), 21 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 01165f7f53..7babea40dc 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2797,6 +2797,7 @@ static void virt_machine_6_1_options(MachineClass *mc)
 {
 virt_machine_6_2_options(mc);
 compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
+mc->smp_prefer_sockets = true;
 }
 DEFINE_VIRT_MACHINE(6, 1)
 
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 458d9736e3..a8173a0f45 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -746,6 +746,7 @@ void machine_set_cpu_numa_node(MachineState *machine,
 
 static void smp_parse(MachineState *ms, SMPConfiguration *config, Error **errp)
 {
+MachineClass *mc = MACHINE_GET_CLASS(ms);
 unsigned cpus= config->has_cpus ? config->cpus : 0;
 unsigned sockets = config->has_sockets ? config->sockets : 0;
 unsigned cores   = config->has_cores ? config->cores : 0;
@@ -757,7 +758,7 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 return;
 }
 
-/* compute missing values, prefer sockets over cores over threads */
+/* compute missing values based on the provided ones */
 if (cpus == 0 && maxcpus == 0) {
 sockets = sockets > 0 ? sockets : 1;
 cores = cores > 0 ? cores : 1;
@@ -765,15 +766,30 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 } else {
 maxcpus = maxcpus > 0 ? maxcpus : cpus;
 
-if (sockets == 0) {
-cores = cores > 0 ? cores : 1;
-threads = threads > 0 ? threads : 1;
-sockets = maxcpus / (cores * threads);
-} else if (cores == 0) {
-threads = threads > 0 ? threads : 1;
-cores = maxcpus / (sockets * threads);
-} else if (threads == 0) {
-threads = maxcpus / (sockets * cores);
+if (mc->smp_prefer_sockets) {
+/* prefer sockets over cores over threads before 6.2 */
+if (sockets == 0) {
+cores = cores > 0 ? cores : 1;
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (cores * threads);
+} else if (cores == 0) {
+threads = threads > 0 ? threads : 1;
+cores = maxcpus / (sockets * threads);
+} else if (threads == 0) {
+threads = maxcpus / (sockets * cores);
+}
+} else {
+/* prefer cores over sockets over threads since 6.2 */
+if (cores == 0) {
+sockets = sockets > 0 ? sockets : 1;
+threads = threads > 0 ? threads : 1;
+cores = maxcpus / (sockets * threads);
+} else if (sockets == 0) {
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (cores * threads);
+} else if (threads == 0) {
+threads = maxcpus / (sockets * cores);
+}
 }
 }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index afd8b9c283..5d7c3efc43 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -717,6 +717,7 @@ void pc_acpi_smi_interrupt(void *opaque, int irq, int level)
  */
 static void pc_smp_parse(MachineState *ms, SMPConfiguration *config, Error 
**errp)
 {
+MachineClass *mc = MACHINE_GET_CLASS(ms);
 unsigned cpus= config->has_cpus ? config->cpus : 0;
 unsigned sockets = config->has_sockets ? config->sockets : 0;
 unsigned dies= config->has_dies ? config->dies : 0;
@@ -727,7 +728,7 @@ static void pc_smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **err
 /* directly de

[PATCH for-6.2 v4 03/14] machine: Set the value of cpus to match maxcpus if it's omitted

2021-08-03 Thread Yanan Wang
Currently we directly calculate the omitted cpus based on the given
incomplete collection of parameters. This makes some cmdlines like:
  -smp maxcpus=16
  -smp sockets=2,maxcpus=16
  -smp sockets=2,dies=2,maxcpus=16
  -smp sockets=2,cores=4,maxcpus=16
not work. We should set the value of cpus to match maxcpus if it's
omitted, which will make the above configs start to work.

So the calculation logic of cpus/maxcpus after this patch will be:
When both maxcpus and cpus are omitted, maxcpus will be calculated
from the given parameters and cpus will be set equal to maxcpus.
When only one of maxcpus and cpus is given then the omitted one
will be set to its counterpart's value. Both maxcpus and cpus may
be specified, but maxcpus must be equal to or greater than cpus.

Note: the change in this patch won't affect any existing working cmdlines
but allows more incomplete configs to be valid.
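
As a worked example (hypothetical values, derived from the code below):
"-smp sockets=2,maxcpus=16" now resolves to cores = 16 / (2 * 1) = 8 and
cpus = maxcpus = 16, i.e. 2 sockets * 8 cores * 1 thread with all 16 cpus
initially present, instead of being rejected.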

Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 29 -
 hw/i386/pc.c  | 29 -
 qemu-options.hx   | 11 ---
 3 files changed, 40 insertions(+), 29 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 69979c93dd..958e6e7107 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -755,25 +755,28 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 }
 
 /* compute missing values, prefer sockets over cores over threads */
-maxcpus = maxcpus > 0 ? maxcpus : cpus;
-
-if (cpus == 0) {
+if (cpus == 0 && maxcpus == 0) {
 sockets = sockets > 0 ? sockets : 1;
 cores = cores > 0 ? cores : 1;
 threads = threads > 0 ? threads : 1;
-cpus = sockets * cores * threads;
+} else {
 maxcpus = maxcpus > 0 ? maxcpus : cpus;
-} else if (sockets == 0) {
-cores = cores > 0 ? cores : 1;
-threads = threads > 0 ? threads : 1;
-sockets = maxcpus / (cores * threads);
-} else if (cores == 0) {
-threads = threads > 0 ? threads : 1;
-cores = maxcpus / (sockets * threads);
-} else if (threads == 0) {
-threads = maxcpus / (sockets * cores);
+
+if (sockets == 0) {
+cores = cores > 0 ? cores : 1;
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (cores * threads);
+} else if (cores == 0) {
+threads = threads > 0 ? threads : 1;
+cores = maxcpus / (sockets * threads);
+} else if (threads == 0) {
+threads = maxcpus / (sockets * cores);
+}
 }
 
+maxcpus = maxcpus > 0 ? maxcpus : sockets * cores * threads;
+cpus = cpus > 0 ? cpus : maxcpus;
+
 if (sockets * cores * threads < cpus) {
 error_setg(errp, "cpu topology: "
"sockets (%u) * cores (%u) * threads (%u) < "
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index a9ff9ef52c..9ad7ae5254 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -725,25 +725,28 @@ static void pc_smp_parse(MachineState *ms, 
SMPConfiguration *config, Error **err
 dies = dies > 0 ? dies : 1;
 
 /* compute missing values, prefer sockets over cores over threads */
-maxcpus = maxcpus > 0 ? maxcpus : cpus;
-
-if (cpus == 0) {
+if (cpus == 0 && maxcpus == 0) {
 sockets = sockets > 0 ? sockets : 1;
 cores = cores > 0 ? cores : 1;
 threads = threads > 0 ? threads : 1;
-cpus = sockets * dies * cores * threads;
+} else {
 maxcpus = maxcpus > 0 ? maxcpus : cpus;
-} else if (sockets == 0) {
-cores = cores > 0 ? cores : 1;
-threads = threads > 0 ? threads : 1;
-sockets = maxcpus / (dies * cores * threads);
-} else if (cores == 0) {
-threads = threads > 0 ? threads : 1;
-cores = maxcpus / (sockets * dies * threads);
-} else if (threads == 0) {
-threads = maxcpus / (sockets * dies * cores);
+
+if (sockets == 0) {
+cores = cores > 0 ? cores : 1;
+threads = threads > 0 ? threads : 1;
+sockets = maxcpus / (dies * cores * threads);
+} else if (cores == 0) {
+threads = threads > 0 ? threads : 1;
+cores = maxcpus / (sockets * dies * threads);
+} else if (threads == 0) {
+threads = maxcpus / (sockets * dies * cores);
+}
 }
 
+maxcpus = maxcpus > 0 ? maxcpus : sockets * dies * cores * threads;
+cpus = cpus > 0 ? cpus : maxcpus;
+
 if (sockets * dies * cores * threads < cpus) {
 error_setg(errp, "cpu topology: "
"sockets (%u) * dies (%u) * cores (%u) * threads (%u) < "
diff --git a/qemu-options.hx b/qemu-options.hx
index aee622f577..06f819177e 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -214,9 +214,14 @@ SRST
 Simulate a SMP system with '\ ``n``\ ' CPUs initially present on
 the machine type board. On boards supporting CPU hotplug, the optional
 '\ ``maxcpus``\ ' parame

[PATCH for-6.2 v4 11/14] machine: Move smp_prefer_sockets to struct SMPCompatProps

2021-08-03 Thread Yanan Wang
Now that we have a common structure SMPCompatProps used to store
SMP compatibility information, we can also move smp_prefer_sockets
there for cleaner code.

No functional change intended.

Reviewed-by: Andrew Jones 
Acked-by: David Gibson 
Signed-off-by: Yanan Wang 
---
 hw/arm/virt.c  | 2 +-
 hw/core/machine.c  | 2 +-
 hw/i386/pc_piix.c  | 2 +-
 hw/i386/pc_q35.c   | 2 +-
 hw/ppc/spapr.c | 2 +-
 hw/s390x/s390-virtio-ccw.c | 2 +-
 include/hw/boards.h| 3 ++-
 7 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 7babea40dc..ae029680da 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2797,7 +2797,7 @@ static void virt_machine_6_1_options(MachineClass *mc)
 {
 virt_machine_6_2_options(mc);
 compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
-mc->smp_prefer_sockets = true;
+mc->smp_props.prefer_sockets = true;
 }
 DEFINE_VIRT_MACHINE(6, 1)
 
diff --git a/hw/core/machine.c b/hw/core/machine.c
index ba970af9fd..443ae5aa1b 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -834,7 +834,7 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 } else {
 maxcpus = maxcpus > 0 ? maxcpus : cpus;
 
-if (mc->smp_prefer_sockets) {
+if (mc->smp_props.prefer_sockets) {
 /* prefer sockets over cores over threads before 6.2 */
 if (sockets == 0) {
 cores = cores > 0 ? cores : 1;
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 9b811fc6ca..a60ebfc2c1 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -432,7 +432,7 @@ static void pc_i440fx_6_1_machine_options(MachineClass *m)
 m->is_default = false;
 compat_props_add(m->compat_props, hw_compat_6_1, hw_compat_6_1_len);
 compat_props_add(m->compat_props, pc_compat_6_1, pc_compat_6_1_len);
-m->smp_prefer_sockets = true;
+m->smp_props.prefer_sockets = true;
 }
 
 DEFINE_I440FX_MACHINE(v6_1, "pc-i440fx-6.1", NULL,
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 88efb7fde4..4b622ffb82 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -372,7 +372,7 @@ static void pc_q35_6_1_machine_options(MachineClass *m)
 m->alias = NULL;
 compat_props_add(m->compat_props, hw_compat_6_1, hw_compat_6_1_len);
 compat_props_add(m->compat_props, pc_compat_6_1, pc_compat_6_1_len);
-m->smp_prefer_sockets = true;
+m->smp_props.prefer_sockets = true;
 }
 
 DEFINE_Q35_MACHINE(v6_1, "pc-q35-6.1", NULL,
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index a481fade51..efdea43c0d 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4702,7 +4702,7 @@ static void spapr_machine_6_1_class_options(MachineClass 
*mc)
 {
 spapr_machine_6_2_class_options(mc);
 compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
-mc->smp_prefer_sockets = true;
+mc->smp_props.prefer_sockets = true;
 }
 
 DEFINE_SPAPR_MACHINE(6_1, "6.1", false);
diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index b40e647883..5bdef9b4d7 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -809,7 +809,7 @@ static void ccw_machine_6_1_class_options(MachineClass *mc)
 {
 ccw_machine_6_2_class_options(mc);
 compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
-mc->smp_prefer_sockets = true;
+mc->smp_props.prefer_sockets = true;
 }
 DEFINE_CCW_MACHINE(6_1, "6.1", false);
 
diff --git a/include/hw/boards.h b/include/hw/boards.h
index fa284e01e9..5adbcbb99b 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -110,9 +110,11 @@ typedef struct {
 
 /**
  * SMPCompatProps:
+ * @prefer_sockets - whether sockets are preferred over cores in smp parsing
  * @dies_supported - whether dies are supported by the machine
  */
 typedef struct {
+bool prefer_sockets;
 bool dies_supported;
 } SMPCompatProps;
 
@@ -250,7 +252,6 @@ struct MachineClass {
 bool nvdimm_supported;
 bool numa_mem_supported;
 bool auto_enable_numa;
-bool smp_prefer_sockets;
 SMPCompatProps smp_props;
 const char *default_ram_id;
 
-- 
2.19.1




[PATCH for-6.2 v4 12/14] machine: Put all sanity-check in the generic SMP parser

2021-08-03 Thread Yanan Wang
Put both the sanity check of the input SMP configuration and the sanity
check of the output SMP configuration uniformly in the generic parser.
Then machine_set_smp() becomes cleaner, and all the invalid scenarios
can be tested just by calling the parser.

Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 63 +++
 1 file changed, 31 insertions(+), 32 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 443ae5aa1b..3dd6c6f81e 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -811,6 +811,20 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 unsigned threads = config->has_threads ? config->threads : 0;
 unsigned maxcpus = config->has_maxcpus ? config->maxcpus : 0;
 
+/*
+ * A specified topology parameter must be greater than zero,
+ * explicit configuration like "cpus=0" is not allowed.
+ */
+if ((config->has_cpus && config->cpus == 0) ||
+(config->has_sockets && config->sockets == 0) ||
+(config->has_dies && config->dies == 0) ||
+(config->has_cores && config->cores == 0) ||
+(config->has_threads && config->threads == 0) ||
+(config->has_maxcpus && config->maxcpus == 0)) {
+error_setg(errp, "CPU topology parameters must be greater than zero");
+return;
+}
+
 /*
  * If not supported by the machine, a topology parameter must be
  * omitted or specified equal to 1.
@@ -889,6 +903,22 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
topo_msg, maxcpus, cpus);
 return;
 }
+
+if (ms->smp.cpus < mc->min_cpus) {
+error_setg(errp, "Invalid SMP CPUs %d. The min CPUs "
+   "supported by machine '%s' is %d",
+   ms->smp.cpus,
+   mc->name, mc->min_cpus);
+return;
+}
+
+if (ms->smp.max_cpus > mc->max_cpus) {
+error_setg(errp, "Invalid SMP CPUs %d. The max CPUs "
+   "supported by machine '%s' is %d",
+   ms->smp.max_cpus,
+   mc->name, mc->max_cpus);
+return;
+}
 }
 
 static void machine_get_smp(Object *obj, Visitor *v, const char *name,
@@ -911,7 +941,6 @@ static void machine_get_smp(Object *obj, Visitor *v, const 
char *name,
 static void machine_set_smp(Object *obj, Visitor *v, const char *name,
 void *opaque, Error **errp)
 {
-MachineClass *mc = MACHINE_GET_CLASS(obj);
 MachineState *ms = MACHINE(obj);
 SMPConfiguration *config;
 ERRP_GUARD();
@@ -920,40 +949,10 @@ static void machine_set_smp(Object *obj, Visitor *v, 
const char *name,
 return;
 }
 
-/*
- * The CPU topology parameters must be specified greater than zero or
- * just omitted, explicit configuration like "cpus=0" is not allowed.
- */
-if ((config->has_cpus && config->cpus == 0) ||
-(config->has_sockets && config->sockets == 0) ||
-(config->has_dies && config->dies == 0) ||
-(config->has_cores && config->cores == 0) ||
-(config->has_threads && config->threads == 0) ||
-(config->has_maxcpus && config->maxcpus == 0)) {
-error_setg(errp, "CPU topology parameters must be greater than zero");
-goto out_free;
-}
-
 smp_parse(ms, config, errp);
 if (errp) {
-goto out_free;
+qapi_free_SMPConfiguration(config);
 }
-
-/* sanity-check smp_cpus and max_cpus against mc */
-if (ms->smp.cpus < mc->min_cpus) {
-error_setg(errp, "Invalid SMP CPUs %d. The min CPUs "
-   "supported by machine '%s' is %d",
-   ms->smp.cpus,
-   mc->name, mc->min_cpus);
-} else if (ms->smp.max_cpus > mc->max_cpus) {
-error_setg(errp, "Invalid SMP CPUs %d. The max CPUs "
-   "supported by machine '%s' is %d",
-   ms->smp.max_cpus,
-   mc->name, mc->max_cpus);
-}
-
-out_free:
-qapi_free_SMPConfiguration(config);
 }
 
 static void machine_class_init(ObjectClass *oc, void *data)
-- 
2.19.1




[PATCH for-6.2 v4 10/14] machine: Remove smp_parse callback from MachineClass

2021-08-03 Thread Yanan Wang
Now that we have a generic smp parser for all arches, and there will
not be any other arch-specific ones, let's remove the callback
from MachineClass and call the parser directly.

Reviewed-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c   | 3 +--
 include/hw/boards.h | 5 -
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index b214ef3e20..ba970af9fd 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -934,7 +934,7 @@ static void machine_set_smp(Object *obj, Visitor *v, const 
char *name,
 goto out_free;
 }
 
-mc->smp_parse(ms, config, errp);
+smp_parse(ms, config, errp);
 if (errp) {
 goto out_free;
 }
@@ -963,7 +963,6 @@ static void machine_class_init(ObjectClass *oc, void *data)
 /* Default 128 MB as guest ram size */
 mc->default_ram_size = 128 * MiB;
 mc->rom_file_has_mr = true;
-mc->smp_parse = smp_parse;
 
 /* numa node memory size aligned on 8MB by default.
  * On Linux, each node's border has to be 8MB aligned
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 72a23e4e0f..fa284e01e9 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -177,10 +177,6 @@ typedef struct {
  *kvm-type may be NULL if it is not needed.
  * @numa_mem_supported:
  *true if '--numa node.mem' option is supported and false otherwise
- * @smp_parse:
- *The function pointer to hook different machine specific functions for
- *parsing "smp-opts" from QemuOpts to MachineState::CpuTopology and more
- *machine specific topology fields, such as smp_dies for PCMachine.
  * @hotplug_allowed:
  *If the hook is provided, then it'll be called for each device
  *hotplug to check whether the device hotplug is allowed.  Return
@@ -217,7 +213,6 @@ struct MachineClass {
 void (*reset)(MachineState *state);
 void (*wakeup)(MachineState *state);
 int (*kvm_type)(MachineState *machine, const char *arg);
-void (*smp_parse)(MachineState *ms, SMPConfiguration *config, Error 
**errp);
 
 BlockInterfaceType block_default_type;
 int units_per_default_bus;
-- 
2.19.1




[PATCH for-6.2 v4 14/14] tests/unit: Add a unit test for smp parsing

2021-08-03 Thread Yanan Wang
Add a QEMU unit test for the parsing of given SMP configuration.
Since all the parsing logic is in the generic function smp_parse(),
this test passes different SMP configurations to the function
and compares the parsing result with what is expected.

In the test, all possible collections of the topology parameters
and the corresponding expected results are listed, including the
valid and invalid ones.

Both the preference of sockets over cores and the preference of cores
over sockets, as well as support for multiple dies, are also considered.
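
For illustration, a hypothetical table entry (not necessarily one from
the posted table) using the SMP_CONFIG_GENERIC()/CPU_TOPOLOGY_GENERIC()
helpers defined below could describe "-smp 8,maxcpus=16" as:

    {
        .config = SMP_CONFIG_GENERIC(T, 8, F, 0, F, 0, F, 0, T, 16),
        .expect_prefer_sockets = CPU_TOPOLOGY_GENERIC(8, 16, 1, 1, 16),
        .expect_prefer_cores   = CPU_TOPOLOGY_GENERIC(8, 1, 16, 1, 16),
    },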

Signed-off-by: Yanan Wang 
---
 MAINTAINERS |   1 +
 tests/unit/meson.build  |   1 +
 tests/unit/test-smp-parse.c | 892 
 3 files changed, 894 insertions(+)
 create mode 100644 tests/unit/test-smp-parse.c

diff --git a/MAINTAINERS b/MAINTAINERS
index e38d6d5b9d..a0d3258c09 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1639,6 +1639,7 @@ F: include/hw/boards.h
 F: include/hw/core/cpu.h
 F: include/hw/cpu/cluster.h
 F: include/sysemu/numa.h
+F: tests/unit/test-smp-parse.c
 T: git https://gitlab.com/ehabkost/qemu.git machine-next
 
 Xtensa Machines
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index 5736d285b2..e208173970 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -45,6 +45,7 @@ tests = {
   'test-uuid': [],
   'ptimer-test': ['ptimer-test-stubs.c', meson.source_root() / 
'hw/core/ptimer.c'],
   'test-qapi-util': [],
+  'test-smp-parse': [qom, meson.source_root() / 'hw/core/machine-smp.c'],
 }
 
 if have_system or have_tools
diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c
new file mode 100644
index 00..291fa3fa60
--- /dev/null
+++ b/tests/unit/test-smp-parse.c
@@ -0,0 +1,892 @@
+/*
+ * SMP parsing unit-tests
+ *
+ * Copyright (c) 2021 Huawei Technologies Co., Ltd
+ *
+ * Authors:
+ *  Yanan Wang 
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qom/object.h"
+#include "qemu/module.h"
+#include "qapi/error.h"
+
+#include "hw/boards.h"
+
+#define T true
+#define F false
+
+#define MIN_CPUS 1
+#define MAX_CPUS 512
+
+/* define a CPU topology hierarchy of sockets/cores/threads */
+#define SMP_CONFIG_GENERIC(ha, a, hb, b, hc, c, hd, d, he, e) \
+{ \
+.has_cpus= ha, .cpus= a,  \
+.has_sockets = hb, .sockets = b,  \
+.has_cores   = hc, .cores   = c,  \
+.has_threads = hd, .threads = d,  \
+.has_maxcpus = he, .maxcpus = e,  \
+}
+
+#define CPU_TOPOLOGY_GENERIC(a, b, c, d, e)   \
+{ \
+.cpus = a,\
+.sockets  = b,\
+.cores= c,\
+.threads  = d,\
+.max_cpus = e,\
+}
+
+/* define a CPU topology hierarchy of sockets/dies/cores/threads */
+#define SMP_CONFIG_WITH_DIES(ha, a, hb, b, hc, c, hd, d, he, e, hf, f) \
+{ \
+.has_cpus= ha, .cpus= a,  \
+.has_sockets = hb, .sockets = b,  \
+.has_dies= hc, .dies= c,  \
+.has_cores   = hd, .cores   = d,  \
+.has_threads = he, .threads = e,  \
+.has_maxcpus = hf, .maxcpus = f,  \
+}
+
+#define CPU_TOPOLOGY_WITH_DIES(a, b, c, d, e, f)  \
+{ \
+.cpus = a,\
+.sockets  = b,\
+.dies = c,\
+.cores= d,\
+.threads  = e,\
+.max_cpus = f,\
+}
+
+/**
+ * SMPTestData:
+ * @config - the given SMP configuration
+ * @expect_prefer_sockets - expected topology result for the valid
+ * configuration, when sockets are preferred over cores in parsing
+ * @expect_prefer_cores - expected topology result for the valid
+ * configuration, when cores are preferred over sockets in parsing
+ * @expect_error - expected error report for the invalid configuration
+ */
+typedef struct SMPTestData {
+SMPConfiguration config;
+CpuTopology expect_prefer_sockets;
+CpuTopology expect_prefer_cores;
+const char *expect_error;
+} SMPTestData;
+
+/* specific machine type in

Re: [PATCH v2 2/5] s390x: kvm: topology: interception of PTF instruction

2021-08-03 Thread Pierre Morel




On 7/22/21 7:42 PM, Pierre Morel wrote:

Interception of the PTF instruction depending on the new
KVM_CAP_S390_CPU_TOPOLOGY KVM extension.

Signed-off-by: Pierre Morel 
---
  hw/s390x/s390-virtio-ccw.c | 45 ++
  include/hw/s390x/s390-virtio-ccw.h |  7 +
  target/s390x/kvm/kvm.c | 21 ++
  3 files changed, 73 insertions(+)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index e4b18aef49..500e856974 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -404,6 +404,49 @@ static void s390_pv_prepare_reset(S390CcwMachineState *ms)
  s390_pv_prep_reset();
  }
  
+int s390_handle_ptf(S390CPU *cpu, uint8_t r1, uintptr_t ra)

+{
+S390CcwMachineState *ms = S390_CCW_MACHINE(qdev_get_machine());
+CPUS390XState *env = &cpu->env;
+uint64_t reg = env->regs[r1];
+uint8_t fc = reg & S390_TOPO_FC_MASK;
+
+if (!s390_has_feat(S390_FEAT_CONFIGURATION_TOPOLOGY)) {
+s390_program_interrupt(env, PGM_OPERAND, ra);
+return 0;
+}
+
+if (env->psw.mask & PSW_MASK_PSTATE) {
+s390_program_interrupt(env, PGM_PRIVILEGED, ra);
+return 0;
+}
+
+if (reg & ~S390_TOPO_FC_MASK) {
+s390_program_interrupt(env, PGM_SPECIFICATION, ra);
+return 0;
+}
+
+switch (fc) {
+case 0:/* Horizontal polarization is already set */
+env->regs[r1] = S390_PTF_REASON_DONE;
+return 2;
+case 1:/* Vertical polarization is not supported */
+env->regs[r1] = S390_PTF_REASON_NONE;
+return 2;
+case 2:/* Report if a topology change report is pending */
+if (ms->topology_change_report_pending) {
+ms->topology_change_report_pending = false;
+return 1;
+}
+return 0;
+default:
+s390_program_interrupt(env, PGM_SPECIFICATION, ra);
+break;
+}
+
+return 0;
+}
+


Hi all,

it seems the part where the topology change is made pending on CPU 
creation disappeared from this patch...:


diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index e02b2a8299..a9eeb11d1f 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -302,12 +302,14 @@ static void s390_cpu_plug(HotplugHandler *hotplug_dev,
 DeviceState *dev, Error **errp)
 {
 MachineState *ms = MACHINE(hotplug_dev);
+S390CcwMachineState *s390ms = S390_CCW_MACHINE(ms);
 S390CPU *cpu = S390_CPU(dev);

 g_assert(!ms->possible_cpus->cpus[cpu->env.core_id].cpu);
 ms->possible_cpus->cpus[cpu->env.core_id].cpu = OBJECT(dev);

 s390_topology_new_cpu(cpu->env.core_id);
+s390ms->topology_change_report_pending = true;

 if (dev->hotplugged) {
 raise_irq_cpu_hotplug();


I will add this on the next round.

Otherwise, the changes on the Linux side to implement interpretation do
not affect the QEMU implementation.


so... a gentle ping?

Pierre



  static void s390_machine_reset(MachineState *machine)
  {
  S390CcwMachineState *ms = S390_CCW_MACHINE(machine);
@@ -433,6 +476,8 @@ static void s390_machine_reset(MachineState *machine)
  run_on_cpu(cs, s390_do_cpu_ipl, RUN_ON_CPU_NULL);
  break;
  case S390_RESET_MODIFIED_CLEAR:
+/* clear topology_change_report pending condition on subsystem reset */
+ms->topology_change_report_pending = false;
  /*
   * Susbsystem reset needs to be done before we unshare memory
   * and lose access to VIRTIO structures in guest memory.
diff --git a/include/hw/s390x/s390-virtio-ccw.h 
b/include/hw/s390x/s390-virtio-ccw.h
index 3331990e02..fbde357332 100644
--- a/include/hw/s390x/s390-virtio-ccw.h
+++ b/include/hw/s390x/s390-virtio-ccw.h
@@ -27,9 +27,16 @@ struct S390CcwMachineState {
  bool aes_key_wrap;
  bool dea_key_wrap;
  bool pv;
+bool topology_change_report_pending;
  uint8_t loadparm[8];
  };
  
+#define S390_PTF_REASON_NONE (0x00 << 8)

+#define S390_PTF_REASON_DONE (0x01 << 8)
+#define S390_PTF_REASON_BUSY (0x02 << 8)
+#define S390_TOPO_FC_MASK 0xffUL
+int s390_handle_ptf(S390CPU *cpu, uint8_t r1, uintptr_t ra);
+
  struct S390CcwMachineClass {
  /*< private >*/
  MachineClass parent_class;
diff --git a/target/s390x/kvm/kvm.c b/target/s390x/kvm/kvm.c
index 5b1fdb55c4..9a0c13d4ac 100644
--- a/target/s390x/kvm/kvm.c
+++ b/target/s390x/kvm/kvm.c
@@ -97,6 +97,7 @@
  
  #define PRIV_B9_EQBS0x9c

  #define PRIV_B9_CLP 0xa0
+#define PRIV_B9_PTF 0xa2
  #define PRIV_B9_PCISTG  0xd0
  #define PRIV_B9_PCILG   0xd2
  #define PRIV_B9_RPCIT   0xd3
@@ -1452,6 +1453,16 @@ static int kvm_mpcifc_service_call(S390CPU *cpu, struct 
kvm_run *run)
  }
  }
  
+static int kvm_handle_ptf(S390CPU *cpu, struct kvm_run *run)

+{
+uint8_t r1 = (run->s390_sieic.ipb >> 20) & 0x0f;
+uint8_t ret;
+
+

[PATCH for-6.2 v4 01/14] machine: Minor refactor/cleanup for the smp parsers

2021-08-03 Thread Yanan Wang
To pave the way for the functional improvements in later patches,
do some refactoring/cleanup of the smp parsers, including using a
local maxcpus instead of ms->smp.max_cpus in the calculation,
defaulting dies to 0 initially like the other members, and cleaning
up the sanity check for dies.

No functional change intended.

Signed-off-by: Yanan Wang 
---
 hw/core/machine.c | 19 +++
 hw/i386/pc.c  | 23 ++-
 2 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index e1533dfc47..696d9e8e47 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -747,9 +747,11 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 unsigned sockets = config->has_sockets ? config->sockets : 0;
 unsigned cores   = config->has_cores ? config->cores : 0;
 unsigned threads = config->has_threads ? config->threads : 0;
+unsigned maxcpus = config->has_maxcpus ? config->maxcpus : 0;
 
-if (config->has_dies && config->dies != 0 && config->dies != 1) {
+if (config->has_dies && config->dies > 1) {
 error_setg(errp, "dies not supported by this machine's CPU topology");
+return;
 }
 
 /* compute missing values, prefer sockets over cores over threads */
@@ -760,8 +762,8 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 sockets = sockets > 0 ? sockets : 1;
 cpus = cores * threads * sockets;
 } else {
-ms->smp.max_cpus = config->has_maxcpus ? config->maxcpus : cpus;
-sockets = ms->smp.max_cpus / (cores * threads);
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+sockets = maxcpus / (cores * threads);
 }
 } else if (cores == 0) {
 threads = threads > 0 ? threads : 1;
@@ -778,26 +780,27 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 return;
 }
 
-ms->smp.max_cpus = config->has_maxcpus ? config->maxcpus : cpus;
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
 
-if (ms->smp.max_cpus < cpus) {
+if (maxcpus < cpus) {
 error_setg(errp, "maxcpus must be equal to or greater than smp");
 return;
 }
 
-if (sockets * cores * threads != ms->smp.max_cpus) {
+if (sockets * cores * threads != maxcpus) {
 error_setg(errp, "Invalid CPU topology: "
"sockets (%u) * cores (%u) * threads (%u) "
"!= maxcpus (%u)",
sockets, cores, threads,
-   ms->smp.max_cpus);
+   maxcpus);
 return;
 }
 
 ms->smp.cpus = cpus;
+ms->smp.sockets = sockets;
 ms->smp.cores = cores;
 ms->smp.threads = threads;
-ms->smp.sockets = sockets;
+ms->smp.max_cpus = maxcpus;
 }
 
 static void machine_get_smp(Object *obj, Visitor *v, const char *name,
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c2b9d62a35..acd31af452 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -716,9 +716,13 @@ static void pc_smp_parse(MachineState *ms, 
SMPConfiguration *config, Error **err
 {
 unsigned cpus= config->has_cpus ? config->cpus : 0;
 unsigned sockets = config->has_sockets ? config->sockets : 0;
-unsigned dies= config->has_dies ? config->dies : 1;
+unsigned dies= config->has_dies ? config->dies : 0;
 unsigned cores   = config->has_cores ? config->cores : 0;
 unsigned threads = config->has_threads ? config->threads : 0;
+unsigned maxcpus = config->has_maxcpus ? config->maxcpus : 0;
+
+/* directly default dies to 1 if it's omitted */
+dies = dies > 0 ? dies : 1;
 
 /* compute missing values, prefer sockets over cores over threads */
 if (cpus == 0 || sockets == 0) {
@@ -728,8 +732,8 @@ static void pc_smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **err
 sockets = sockets > 0 ? sockets : 1;
 cpus = cores * threads * dies * sockets;
 } else {
-ms->smp.max_cpus = config->has_maxcpus ? config->maxcpus : cpus;
-sockets = ms->smp.max_cpus / (cores * threads * dies);
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
+sockets = maxcpus / (dies * cores * threads);
 }
 } else if (cores == 0) {
 threads = threads > 0 ? threads : 1;
@@ -746,27 +750,28 @@ static void pc_smp_parse(MachineState *ms, 
SMPConfiguration *config, Error **err
 return;
 }
 
-ms->smp.max_cpus = config->has_maxcpus ? config->maxcpus : cpus;
+maxcpus = maxcpus > 0 ? maxcpus : cpus;
 
-if (ms->smp.max_cpus < cpus) {
+if (maxcpus < cpus) {
 error_setg(errp, "maxcpus must be equal to or greater than smp");
 return;
 }
 
-if (sockets * dies * cores * threads != ms->smp.max_cpus) {
+if (sockets * dies * cores * threads != maxcpus) {
 error_setg(errp, "Invalid CPU topology deprecated: "
"sockets (%u) * dies (%u) * cores (%u)

Re: [PATCH] vhost: use large iotlb entry if no IOMMU translation is needed

2021-08-03 Thread Jason Wang



On 2021/8/3 1:51 PM, Chao Gao wrote:

On Tue, Aug 03, 2021 at 12:43:58PM +0800, Jason Wang wrote:

On 2021/8/3 12:29 PM, Chao Gao wrote:

Ping. Could someone help to review this patch?

Thanks
Chao

On Wed, Jul 21, 2021 at 03:54:02PM +0800, Chao Gao wrote:

If guest enables IOMMU_PLATFORM for virtio-net, severe network
performance drop is observed even if there is no IOMMU.


We see such reports internally and we're testing a patch series to disable
vhost IOTLB in this case.

Will post a patch soon.

OK. put me in the CC list. I would like to test with TDX to ensure your patch
fix the performance issue I am facing.



Sure.








   And disabling
vhost can mitigate the perf issue. Finally, we found the culprit is
frequent iotlb misses: kernel vhost-net has 2048 entries and each
entry is 4K (qemu uses 4K for i386 if no IOMMU); vhost-net can cache
translations for up to 8M (i.e. 4K*2048) IOVAs. If guest uses >8M
memory for DMA, there are some iotlb misses.

If there is no IOMMU or IOMMU is disabled or IOMMU works in pass-thru
mode, we can optimistically use large, unaligned iotlb entries instead
of 4K-aligned entries to reduce iotlb pressure.


Instead of introducing new general facilities like unaligned IOTLB entries, I
wonder if we could optimize vtd_iommu_translate() to use e.g. 1G pages instead?

using 1G iotlb entry looks feasible.



Want to send a patch?





     } else {
     /* DMAR disabled, passthrough, use 4k-page*/
     iotlb.iova = addr & VTD_PAGE_MASK_4K;
     iotlb.translated_addr = addr & VTD_PAGE_MASK_4K;
     iotlb.addr_mask = ~VTD_PAGE_MASK_4K;
     iotlb.perm = IOMMU_RW;
     success = true;
     }
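
(For illustration only: the 1G variant being discussed would mostly just
change the mask used in that passthrough branch. A rough, untested sketch,
where VTD_PAGE_MASK_1G is assumed to be ~((1ULL << 30) - 1) if the header
does not already provide it:

    } else {
        /* DMAR disabled, passthrough: use a 1G granule to reduce IOTLB misses */
        iotlb.iova = addr & VTD_PAGE_MASK_1G;
        iotlb.translated_addr = addr & VTD_PAGE_MASK_1G;
        iotlb.addr_mask = ~VTD_PAGE_MASK_1G;
        iotlb.perm = IOMMU_RW;
        success = true;
    }
)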



   Actually, vhost-net
in kernel supports unaligned iotlb entry. The alignment requirement is
imposed by address_space_get_iotlb_entry() and flatview_do_translate().


For the passthrough case, is there anyway to detect them and then disable
device IOTLB in those case?

yes. I guess so; qemu knows the presence and status of iommu. Currently,
in flatview_do_translate(), memory_region_get_iommu() tells whether a memory
region is behind an iommu.



The issues are:

1) how to know the passthrough mode is enabled (note that passthrough 
mode doesn't mean it doesn't sit behind IOMMU)
2) can passthrough mode be disabled on the fly? If yes, we need to deal 
with them


Thanks




Thanks
Chao






[PATCH for-6.2 v4 09/14] machine: Make smp_parse generic enough for all arches

2021-08-03 Thread Yanan Wang
Currently the only difference between smp_parse and pc_smp_parse
is the support of the dies parameter and the related error reporting.
With some arch compat variables like "bool dies_supported", we can
make smp_parse generic enough for all arches, and the PC-specific
one can be removed.

Making smp_parse() generic enough reduces code duplication and eases
code maintenance, and also allows extending the topology with more
arch-specific members (e.g., clusters) in the future.

Reviewed-by: Andrew Jones 
Suggested-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
 hw/core/machine.c   | 112 +++-
 hw/i386/pc.c|  83 +---
 include/hw/boards.h |   9 
 3 files changed, 101 insertions(+), 103 deletions(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 46eaf522ee..b214ef3e20 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -15,6 +15,7 @@
 #include "qapi/qmp/qerror.h"
 #include "sysemu/replay.h"
 #include "qemu/units.h"
+#include "qemu/cutils.h"
 #include "hw/boards.h"
 #include "hw/loader.h"
 #include "qapi/error.h"
@@ -744,20 +745,87 @@ void machine_set_cpu_numa_node(MachineState *machine,
 }
 }
 
+static char *cpu_topology_hierarchy(MachineState *ms)
+{
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+SMPCompatProps *smp_props = &mc->smp_props;
+char topo_msg[256] = "";
+
+/*
+ * Topology members should be ordered from the largest to the smallest.
+ * Concept of sockets/cores/threads is supported by default and will be
+ * reported in the hierarchy. Unsupported members will not be reported.
+ */
+g_autofree char *sockets_msg = g_strdup_printf(
+" * sockets (%u)", ms->smp.sockets);
+pstrcat(topo_msg, sizeof(topo_msg), sockets_msg);
+
+if (smp_props->dies_supported) {
+g_autofree char *dies_msg = g_strdup_printf(
+" * dies (%u)", ms->smp.dies);
+pstrcat(topo_msg, sizeof(topo_msg), dies_msg);
+}
+
+g_autofree char *cores_msg = g_strdup_printf(
+" * cores (%u)", ms->smp.cores);
+pstrcat(topo_msg, sizeof(topo_msg), cores_msg);
+
+g_autofree char *threads_msg = g_strdup_printf(
+" * threads (%u)", ms->smp.threads);
+pstrcat(topo_msg, sizeof(topo_msg), threads_msg);
+
+return g_strdup_printf("%s", topo_msg + 3);
+}
+
+/*
+ * smp_parse - Generic function used to parse the given SMP configuration
+ *
+ * If not supported by the machine, a topology parameter must be omitted
+ * or specified equal to 1. Concept of sockets/cores/threads is supported
+ * by default. Unsupported members will not be reported in the topology
+ * hierarchy message.
+ *
+ * For compatibility, omitted arch-specific members (e.g. dies) will not
+ * be computed, but will directly default to 1 instead. This logic should
+ * also apply to future introduced ones.
+ *
+ * Omitted arch-neutral parameters (i.e. cpus/sockets/cores/threads/maxcpus)
+ * will be computed based on the provided ones. When both maxcpus and cpus
+ * are omitted, maxcpus will be computed from the given parameters and cpus
+ * will be set equal to maxcpus. When only one of maxcpus and cpus is given
+ * then the omitted one will be set to its given counterpart's value.
+ * Both maxcpus and cpus may be specified, but maxcpus must be equal to or
+ * greater than cpus.
+ *
+ * In calculation of omitted sockets/cores/threads, we prefer sockets over
+ * cores over threads before 6.2, while preferring cores over sockets over
+ * threads since 6.2.
+ */
 static void smp_parse(MachineState *ms, SMPConfiguration *config, Error **errp)
 {
 MachineClass *mc = MACHINE_GET_CLASS(ms);
 unsigned cpus= config->has_cpus ? config->cpus : 0;
 unsigned sockets = config->has_sockets ? config->sockets : 0;
+unsigned dies= config->has_dies ? config->dies : 0;
 unsigned cores   = config->has_cores ? config->cores : 0;
 unsigned threads = config->has_threads ? config->threads : 0;
 unsigned maxcpus = config->has_maxcpus ? config->maxcpus : 0;
 
-if (config->has_dies && config->dies > 1) {
+/*
+ * If not supported by the machine, a topology parameter must be
+ * omitted or specified equal to 1.
+ */
+if (!mc->smp_props.dies_supported && dies > 1) {
 error_setg(errp, "dies not supported by this machine's CPU topology");
 return;
 }
 
+/*
+ * Omitted arch-specific members will not be computed, but will
+ * directly default to 1 instead.
+ */
+dies = dies > 0 ? dies : 1;
+
 /* compute missing values based on the provided ones */
 if (cpus == 0 && maxcpus == 0) {
 sockets = sockets > 0 ? sockets : 1;
@@ -771,54 +839,56 @@ static void smp_parse(MachineState *ms, SMPConfiguration 
*config, Error **errp)
 if (sockets == 0) {
 cores = cores > 0 ? cores : 1;
 threads = threads > 0 ? threads : 1;
-sockets = maxcpus / 

Re: [PATCH v6 0/2] target/s390x: Fix SIGILL and SIGFPE psw.addr reporting

2021-08-03 Thread Cornelia Huck
On Mon, Jul 05 2021, Ilya Leoshkevich  wrote:

> qemu-s390x puts a wrong value into SIGILL's siginfo_t's psw.addr: it
> should be a pointer to the instruction following the illegal
> instruction, but at the moment it is a pointer to the illegal
> instruction itself. This breaks OpenJDK, which relies on this value.
> A similar problem exists for SIGFPE.
>
> Patch 1 fixes the issue, patch 2 adds a test.
>
> v1: https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06592.html
> v1 -> v2: Use a better buglink (Cornelia), simplify the inline asm
>   magic in the test and add an explanation (David).
>
> v2: https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06649.html
> v2 -> v3: Fix SIGSEGV handling (found when trying to run valgrind under
>   qemu-user).
>
> v3: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg00299.html
> v3 -> v4: Fix compiling the test on Ubuntu 20.04 (Jonathan).
>
> v4: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg05848.html
> v4 -> v5: Greatly simplify the fix (Ulrich).
>
> v5: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg06244.html
> v5 -> v6: Fix breakpoints (David). Add gdbstub test.
>
> Note: the compare-and-trap SIGFPE issue is being fixed separately.
> https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg05690.html
>
> Ilya Leoshkevich (2):
>   target/s390x: Fix SIGILL and SIGFPE psw.addr reporting
>   tests/tcg/s390x: Test SIGILL and SIGSEGV handling
>
>  linux-user/s390x/cpu_loop.c   |  12 +-
>  tests/tcg/s390x/Makefile.target   |  18 +-
>  tests/tcg/s390x/gdbstub/test-signals-s390x.py |  76 
>  tests/tcg/s390x/signals-s390x.c   | 165 ++
>  4 files changed, 269 insertions(+), 2 deletions(-)
>  create mode 100644 tests/tcg/s390x/gdbstub/test-signals-s390x.py
>  create mode 100644 tests/tcg/s390x/signals-s390x.c

So, I'd like to see this merged, but I'm unsure on what we agreed -- I
thought this would go via linux-user. Do I misremember?




[RFC PATCH: v3 1/2] add mi device in qemu

2021-08-03 Thread Padmakar Kalghatgi
From: padmakar 

This patch contains the implementation of certain commands
of the NVMe-MI specification. The MI commands are useful to
manage/configure/monitor the device. Even though the MI commands
can be sent via the in-band NVMe-MI send/receive commands, the idea
here is to emulate the sideband interface for MI.

The changes here include the i2c/smbus interface for the
NVMe-MI protocol. We have used the i2c address 0x15, through
which the guest VM can send and receive the NVMe-MI commands.
Since the nvme-mi device uses I2C_SLAVE as its parent, we use
the send and receive callbacks by which the nvme-mi device
gets the required notifications. With the callback approach,
we have eliminated the use of threads.

One needs to add the following option to the command line to
create the nvme-mi device, link it to an nvme device and assign the i2c address.
<-device nvme-mi,nvme=nvme0,address=0x15>

This module makes use of the NvmeCtrl structure of the nvme module
to fetch relevant information about the nvme device, which is used in
some of the MI commands. Even though certain commands might require
modifications to the nvme module, we have currently refrained
from making changes to it.

cmd-name  cmd-type
*
read nvme-mi dsnvme-mi
configuration set  nvme-mi
configuration get  nvme-mi
vpd read   nvme-mi
vpd write  nvme-mi
controller health status poll  nvme-mi
nvme subsystem health status poll  nvme-mi
identify   nvme-admin
get log page   nvme-admin
get features   nvme-admin

Signed-off-by: Padmakar Kalghatgi 
Reviewed-by: Klaus Birkelund Jensen 
Reviewed-by: Jaegyu Choi 

v3
-- rebased on master

---
 hw/nvme/meson.build |   2 +-
 hw/nvme/nvme-mi.c   | 650 
 hw/nvme/nvme-mi.h   | 297 
 3 files changed, 948 insertions(+), 1 deletion(-)
 create mode 100644 hw/nvme/nvme-mi.c
 create mode 100644 hw/nvme/nvme-mi.h

diff --git a/hw/nvme/meson.build b/hw/nvme/meson.build
index 3cf4004..8dac50e 100644
--- a/hw/nvme/meson.build
+++ b/hw/nvme/meson.build
@@ -1 +1 @@
-softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('ctrl.c', 'dif.c', 
'ns.c', 'subsys.c'))
+softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('ctrl.c', 'dif.c', 
'ns.c', 'subsys.c', 'nvme-mi.c'))
diff --git a/hw/nvme/nvme-mi.c b/hw/nvme/nvme-mi.c
new file mode 100644
index 000..a90ce90
--- /dev/null
+++ b/hw/nvme/nvme-mi.c
@@ -0,0 +1,650 @@
+/*
+ * QEMU NVMe-MI Controller
+ *
+ * Copyright (c) 2021, Samsung Electronics co Ltd.
+ *
+ * Written by Padmakar Kalghatgi 
+ *Arun Kumar Agasar 
+ *Saurav Kumar 
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ */
+
+/**
+ * Reference Specs: http://www.nvmexpress.org,
+ *
+ *
+ * Usage
+ * -
+ * The nvme-mi device has to be used along with nvme device only
+ *
+ * Add options:
+ *-device  nvme-mi,nvme=,address=0x15",
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "hw/qdev-core.h"
+#include "hw/block/block.h"
+#include "hw/pci/msix.h"
+#include "nvme.h"
+#include "nvme-mi.h"
+#include "qemu/crc32c.h"
+
+#define NVME_TEMPERATURE 0x143
+#define NVME_TEMPERATURE_WARNING 0x157
+#define NVME_TEMPERATURE_CRITICAL 0x175
+
+static void nvme_mi_send_resp(NvmeMiCtrl *ctrl_mi, uint8_t *resp, uint32_t 
size)
+{
+uint32_t crc_value = crc32c(0x, resp, size);
+size += 4;
+uint32_t offset = 0;
+uint32_t ofst = 0;
+uint32_t som = 1;
+uint32_t eom = 0;
+uint32_t pktseq = 0;
+uint32_t mtus = ctrl_mi->mctp_unit_size;
+while (size > 0) {
+uint32_t sizesent = size > mtus ? mtus : size;
+size -= sizesent;
+eom = size > 0 ? 0 : 1;
+g_autofree uint8_t *buf = (uint8_t *)g_malloc0(sizesent + 8);
+buf[2] = sizesent + 5;
+buf[7] = (som << 7) | (eom << 6) | (pktseq << 5);
+som = 0;
+memcpy(buf + 8, resp + offset, sizesent);
+offset += sizesent;
+if (size <= 0) {
+memcpy(buf + 8 + offset , &crc_value, sizeof(crc_value));
+}
+memcpy(ctrl_mi->misendrecv.sendrecvbuf + ofst, buf, sizesent + 8);
+ofst += sizesent + 8;
+}
+}
+
+static void nvme_mi_resp_hdr_init(NvmeMIResponse *resp, bool bAdminCommand)
+{
+resp->msg_header.msgtype = 4;
+resp->msg_header.ic = 1;
+resp->msg_header.csi = 0;
+resp->msg_header.reserved = 0;
+resp->msg_header.nmimt = bAdminCommand ? 2 : 1;
+resp->msg_header.ror = 1;
+resp->msg_header.reserved1 = 0;
+}
+static void nvme_mi_nvm_subsys_ds(NvmeMiCtrl *ctrl_mi, NvmeMIRequest *req)
+{
+NvmeMIResponse resp;
+NvmMiSubsysInfoDs ds;
+ds.nump = 1;
+ds.mjr = (ctrl_mi->n->bar.vs & 0xFF) >> 16;
+ds.mnr = (ctrl

Re: [PATCH for-6.2 v4 01/14] machine: Minor refactor/cleanup for the smp parsers

2021-08-03 Thread Andrew Jones
On Tue, Aug 03, 2021 at 04:05:14PM +0800, Yanan Wang wrote:
> To pave the way for the functional improvements in later patches,
> do some refactoring/cleanup of the smp parsers, including using a
> local maxcpus instead of ms->smp.max_cpus in the calculation,
> defaulting dies to 0 initially like the other members, and cleaning
> up the sanity check for dies.
> 
> No functional change intended.
> 
> Signed-off-by: Yanan Wang 
> ---
>  hw/core/machine.c | 19 +++
>  hw/i386/pc.c  | 23 ++-
>  2 files changed, 25 insertions(+), 17 deletions(-)
>

Reviewed-by: Andrew Jones 




Re: [PATCH for-6.2 v4 12/14] machine: Put all sanity-check in the generic SMP parser

2021-08-03 Thread Andrew Jones
On Tue, Aug 03, 2021 at 04:05:25PM +0800, Yanan Wang wrote:
> Put both sanity-check of the input SMP configuration and sanity-check
> of the output SMP configuration uniformly in the generic parser. Then
> machine_set_smp() will become cleaner, also all the invalid scenarios
> can be tested only by calling the parser.
> 
> Signed-off-by: Yanan Wang 
> ---
>  hw/core/machine.c | 63 +++
>  1 file changed, 31 insertions(+), 32 deletions(-)
>

Reviewed-by: Andrew Jones 




Re: [PATCH 1/1] migration: Terminate multifd threads on yank

2021-08-03 Thread Lukas Straub
On Tue, 3 Aug 2021 04:18:42 -0300
Leonardo Bras Soares Passos  wrote:

> Hello Lukas,
> 
> On Tue, Aug 3, 2021 at 3:42 AM Lukas Straub  wrote:
> > Hi,
> > There is an easier explanation: I forgot the send side of multifd
> > altogether (I thought it was covered by migration_channel_connect()).
> > So yank won't actually shutdown() the multifd sockets on the send side.  
> 
> If I could get that correctly, it seems to abort migration (and
> therefore close all fds) if the fd that ends up in qio_channel_shutdown()
> gets to sendmsg(), which can take a while.

How long is "can take a while"? Until some TCP connection times out?
That would mean that it is hanging somewhere else.

I mean, in precopy migration the multifd send threads should be fully
utilized and always sending something until the migration finishes. In
that case it is likely that all the threads become stuck in
qio_channel_write_all() if the connection breaks silently (i.e. it
discards packets or the destination is powered off, with no connection
reset), since there are no TCP ACKs arriving from the destination side
-> kernel tcp buffer becomes full -> qio_channel_write_all() blocks.
Thus, shutdown() on the sockets should be enough to get the threads
unstuck so they notice that the connection broke.
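
Roughly something like the following, as an untested sketch (assuming the
usual MultiFDSendParams layout with a QIOChannel *c member):

    int i;

    for (i = 0; i < migrate_multifd_channels(); i++) {
        MultiFDSendParams *p = &multifd_send_state->params[i];

        if (p->c) {
            /* kicks a thread blocked in qio_channel_write_all() */
            qio_channel_shutdown(p->c, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
        }
    }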

If something else hangs, the question is where...

> But it really does not close the fds before that.

Note: shutdown() is not close().

> >
> > In the bugreport you wrote  
> > > (As a test, I called qio_channel_shutdown() in every multifd iochannel 
> > > and yank worked just fine, but I could not retry migration, because it 
> > > was still 'ongoing')  
> > That sounds like a bug in the error handling for multifd. But quickly
> > looking at the code, it should properly fail the migration.  
> 
> In the end, just asking each thread to exit ended up getting me a
> smoother migration abort.
> >
> > BTW: You can shutdown outgoing sockets from outside of qemu with the
> > 'ss' utility, like this: 'sudo ss -K dst <destination address> dport = <destination port>'
> 
> Very nice tool, thanks for sharing!
> 
> >
> > Regards,
> > Lukas Straub  
> 
> Best regards,
> Leonardo Bras
> 



-- 





Re: [PATCH for-6.2 v4 13/14] machine: Split out the smp parsing code

2021-08-03 Thread Andrew Jones
On Tue, Aug 03, 2021 at 04:05:26PM +0800, Yanan Wang wrote:
> We are going to introduce a unit test for the parser smp_parse()
> in hw/core/machine.c, but now machine.c is only built in softmmu.
> 
> In order to solve the build dependency on the smp parsing code and
> avoid building unrelated stuff for the unit tests, move the related
> code from machine.c into a new common file, i.e., machine-smp.c.
> 
> Signed-off-by: Yanan Wang 
> ---
>  MAINTAINERS   |   1 +
>  hw/core/machine-smp.c | 199 ++
>  hw/core/machine.c | 177 -
>  hw/core/meson.build   |   1 +
>  include/hw/boards.h   |   1 +
>  5 files changed, 202 insertions(+), 177 deletions(-)
>  create mode 100644 hw/core/machine-smp.c
>

Reviewed-by: Andrew Jones 




Re: [PULL 0/1] Libslirp update

2021-08-03 Thread Marc-André Lureau
Hi

On Tue, Aug 3, 2021 at 12:55 AM Peter Maydell 
wrote:

> On Mon, 2 Aug 2021 at 19:58, Marc-André Lureau
>  wrote:
> >
> > Hi Peter
> >
> > On Sun, Aug 1, 2021 at 4:10 PM Peter Maydell 
> wrote:
> >>
> >> On Wed, 28 Jul 2021 at 16:47, Marc-André Lureau
> >>  wrote:
> >> > I wish my previous pull request with the submodule change would
> >> > receive more help or attention, as I either couldn't reproduce the
> >> > failure (neither CI) or it was just some one-time warnings due to the
> >> > transition...
> >>
> >> Well, I reported the failures back to you. I can't do a lot more,
> >> because libslirp development is now much more opaque to me because
> >> it doesn't happen in-tree. So instead of "some small change happens and
> >> we pick up issues with it early", you have to deal with all of
> >> the accumulated problems at once when you update the submodule :-(
> >>
> >> rc2 is on Tuesday, so we're starting to run short on time to
> >> get an updated slirp in for 6.1.
> >>
> >
> > Do you mind checking the https://github.com/elmarco/qemu/tree/libslirp
> branch?
> >
> > From https://mail.gnu.org/archive/html/qemu-devel/2021-06/msg00031.html,
> there would still be the one-time warnings from git, but osx and dist error
> should be gone.
>
> Yep, I still see the git "warning: unable to rmdir 'slirp': Directory
> not empty", but I think we can ignore that.
>
> I also see
> config-host.mak is out-of-date, running configure
>   GIT ui/keycodemapdb meson tests/fp/berkeley-testfloat-3
> tests/fp/berkeley-softfloat-3 dtc capstone slirp
> warn: ignoring non-existent submodule slirp
>
> but I think that is also a one-off?
>

yes


> > Only one left as a mystery is the Ubuntu-ASAN link issue.
>
> This one is still here:
>
> subprojects/libslirp/libslirp.so.0.3.1.p/src_arp_table.c.o: In
> function `arp_table_add':
>
> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> undefined reference to `__ubsan_handle_type_mismatch_v1'
>
> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> undefined reference to `__ubsan_handle_type_mismatch_v1'
>
> when building the subprojects/libslirp/libslirp.so.0.3.1
>
> configure options:
> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
> '--extra-cflags=-fsanitize=undefined  -fno-sanitize=shift-base
> -Werror'
>

I am not able to reproduce. Could you check the value of default_library
for libslirp when you run "meson configure". It should be "static".

I tested with "make vm-build-ubuntu.amd64", with tests/vm/ubuntu.amd64:
import sys
import basevm
import ubuntuvm

DEFAULT_CONFIG = {
'install_cmds' : "apt-get update,"\
 "apt-get build-dep -y qemu,"\
 "apt-get install -y libfdt-dev language-pack-en
ninja-build clang",
}

class UbuntuX64VM(ubuntuvm.UbuntuVM):
name = "ubuntu.amd64"
arch = "x86_64"
image_link="https://cloud-images.ubuntu.com/releases/bionic/"\
   "release-20191114/ubuntu-18.04-server-cloudimg-amd64.img"

image_sha256="0c55fded766f3e4efb082f604ed71dd58c8e81f04bd1a66b4ced80ad62617547"
BUILD_SCRIPT = """
set -e;
cd $(mktemp -d);
sudo chmod a+r /dev/vdb;
tar -xf /dev/vdb;
./configure {configure_opts} --cc=clang --cxx=clang++
--host-cc=clang --extra-cflags='-fsanitize=undefined
 -fno-sanitize=shift-base -Werror'
make --output-sync {target} -j{jobs} {verbose};
"""

if __name__ == "__main__":
sys.exit(basevm.main(UbuntuX64VM, DEFAULT_CONFIG))



> This happens because (as noted in the clang documentation for the
> sanitizer: https://clang.llvm.org/docs/AddressSanitizer.html)
> when linking a shared library with the sanitizers, clang does not
> link in the sanitizer runtime library. That library is linked in
> with the executable, and the shared library's references to the
> sanitizer runtime functions are satisfied that way. However
> you/meson are building libslirp.so with -Wl,--no-undefined
> so the link of the .so fails.
> (This does not happen with gcc, because gcc chose to make the
> default for sanitizers to be to link against a shared libasan,
> not a static one, the reverse of clang's default.)
>
> What I don't understand is why we're building the .so at all.
> I just tried a fresh build with
> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
> '--enable-sanitizers'
> to check that telling configure (and possibly thus meson) about
> the sanitizers more directly still demonstrated the problem:
> but that sidesteps it because it never builds the .so.
> My other build directories (the ones that do plain old gcc
> builds with no sanitizer) seem to have built the .so file
> as well, though, so this isn't related to either clang or to
> the sanitizers -- meson just doesn't seem to be consistent
> about what we build.
>
> A related meson bug:
> https://github.com/mesonbuild/meson/issues/764
> (which was closed by just making meson wa

Re: [PATCH for-6.2 v4 14/14] tests/unit: Add a unit test for smp parsing

2021-08-03 Thread Andrew Jones
On Tue, Aug 03, 2021 at 04:05:27PM +0800, Yanan Wang wrote:
> Add a QEMU unit test for the parsing of given SMP configuration.
> Since all the parsing logic is in generic function smp_parse(),
> this test passes different SMP configurations to the function
> and compare the parsing result with what is expected.
> 
> In the test, all possible combinations of the topology parameters
> and the corresponding expected results are listed, including both
> valid and invalid ones.
> 
> The preference of sockets over cores, the preference of cores over
> sockets, and the support for multiple dies are also considered.
> 
> Signed-off-by: Yanan Wang 
> ---
>  MAINTAINERS |   1 +
>  tests/unit/meson.build  |   1 +
>  tests/unit/test-smp-parse.c | 892 
>  3 files changed, 894 insertions(+)
>  create mode 100644 tests/unit/test-smp-parse.c
>

Reviewed-by: Andrew Jones 




Re: [PATCH 1/4] chardev: fix qemu_chr_open_fd() being called with fd=-1

2021-08-03 Thread Daniel P . Berrangé
On Fri, Jul 23, 2021 at 02:28:22PM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> The "file" chardev may call qemu_chr_open_fd() with fd_in=-1. This may
> cause invalid system calls, as the QIOChannel is assumed to be properly
> initialized later on.
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  chardev/char-fd.c | 22 +-
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/chardev/char-fd.c b/chardev/char-fd.c
> index 1cd62f2779..ee85a52c06 100644
> --- a/chardev/char-fd.c
> +++ b/chardev/char-fd.c
> @@ -133,15 +133,19 @@ void qemu_chr_open_fd(Chardev *chr,
>  FDChardev *s = FD_CHARDEV(chr);
>  char *name;
>  
> -s->ioc_in = QIO_CHANNEL(qio_channel_file_new_fd(fd_in));
> -name = g_strdup_printf("chardev-file-in-%s", chr->label);
> -qio_channel_set_name(QIO_CHANNEL(s->ioc_in), name);
> -g_free(name);
> -s->ioc_out = QIO_CHANNEL(qio_channel_file_new_fd(fd_out));
> -name = g_strdup_printf("chardev-file-out-%s", chr->label);
> -qio_channel_set_name(QIO_CHANNEL(s->ioc_out), name);
> -g_free(name);
> -qemu_set_nonblock(fd_out);
> +if (fd_in >= 0) {
> +s->ioc_in = QIO_CHANNEL(qio_channel_file_new_fd(fd_in));
> +name = g_strdup_printf("chardev-file-in-%s", chr->label);
> +qio_channel_set_name(QIO_CHANNEL(s->ioc_in), name);
> +g_free(name);
> +}
> +if (fd_out >= 0) {
> +s->ioc_out = QIO_CHANNEL(qio_channel_file_new_fd(fd_out));
> +name = g_strdup_printf("chardev-file-out-%s", chr->label);
> +qio_channel_set_name(QIO_CHANNEL(s->ioc_out), name);
> +g_free(name);
> +qemu_set_nonblock(fd_out);
> +}

Other code in this file assumes ioc_out is non-NULL, so this looks
like an incomplete fix.
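
For instance, the write path presumably needs a similar guard before using
it; purely as an illustrative sketch:

    if (!s->ioc_out) {
        /* nothing to write to; illustrative handling only */
        return len;
    }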

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[Question] Reduce the msix_load cost for VFIO device

2021-08-03 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi Alex,

We found that msix_load() will cost 40~50ms if the VF has 60+ interrupts;
the following code path costs too much for each interrupt:

msix_load:
  for (vector = 0; vector < 60; vector++)
msix_handle_mask_update
  vfio_msix_vector_do_use
vfio_add_kvm_msi_virq
  kvm_irqchip_add_msi_route
kvm_irqchip_commit_routes <-- cost 0.8ms each time

In IRQ remapping mode, the VF interrupts are in fact not routed through the
KVM irqchip, so maybe we can reduce this cost with "x-no-kvm-msix=true", right?
Are there any risks if we do it this way?
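
For reference, we would test this by simply adding the property to the
assigned device, e.g. (the host PCI address below is only a placeholder):

    -device vfio-pci,host=0000:3b:00.1,x-no-kvm-msix=true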

Looking forward to your reply, thanks.



Re: [PATCH 2/4] chardev: fix qemu_chr_open_fd() with fd_in==fd_out

2021-08-03 Thread Daniel P . Berrangé
On Fri, Jul 23, 2021 at 02:28:23PM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> The "serial" chardev calls qemu_chr_open_fd() with the same fd. This
> may lead to double-close as each QIOChannel owns the fd.
> 
> Instead, share the reference to the same QIOChannel.
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  chardev/char-fd.c | 15 +--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/chardev/char-fd.c b/chardev/char-fd.c
> index ee85a52c06..32166182bf 100644
> --- a/chardev/char-fd.c
> +++ b/chardev/char-fd.c
> @@ -139,13 +139,24 @@ void qemu_chr_open_fd(Chardev *chr,
>  qio_channel_set_name(QIO_CHANNEL(s->ioc_in), name);
>  g_free(name);
>  }
> -if (fd_out >= 0) {
> +
> +if (fd_out < 0) {
> +return;
> +}
> +
> +if (fd_out == fd_in) {
> +s->ioc_out = QIO_CHANNEL(object_ref(s->ioc_in));
> +name = g_strdup_printf("chardev-file-%s", chr->label);
> +qio_channel_set_name(QIO_CHANNEL(s->ioc_in), name);
> +g_free(name);

This is overwriting the name set a few lines earlier. I think the
code ought to be refactored to eliminate this duplication.

ie

  if (fd_out == fd_in) {
s->ioc_out = s->ioc_in = QIO_CHANNEL(qio_channel_file_new_fd(fd_in));

  } else {
s->ioc_in = QIO_CHANNEL(qio_channel_file_new_fd(fd_in));
...
s->ioc_out = QIO_CHANNEL(qio_channel_file_new_fd(fd_out));
...
  }

> +} else {
>  s->ioc_out = QIO_CHANNEL(qio_channel_file_new_fd(fd_out));
>  name = g_strdup_printf("chardev-file-out-%s", chr->label);
>  qio_channel_set_name(QIO_CHANNEL(s->ioc_out), name);
>  g_free(name);
> -qemu_set_nonblock(fd_out);
>  }
> +
> +qemu_set_nonblock(fd_out);
>  }
>  
>  static void char_fd_class_init(ObjectClass *oc, void *data)
> -- 
> 2.32.0.264.g75ae10bc75
> 

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 3/4] chardev: remove needless class method

2021-08-03 Thread Daniel P . Berrangé
On Fri, Jul 23, 2021 at 02:28:24PM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> "chr_option_parsed" is only implemented by the "mux" chardev, we can
> specialize the code there to avoid the needless generic class method.
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  include/chardev/char.h | 1 -
>  chardev/char-mux.c | 6 ++
>  2 files changed, 2 insertions(+), 5 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 4/4] chardev: add some comments about the class methods

2021-08-03 Thread Daniel P . Berrangé
On Fri, Jul 23, 2021 at 02:28:25PM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  include/chardev/char.h | 33 +
>  1 file changed, 33 insertions(+)
> 
> diff --git a/include/chardev/char.h b/include/chardev/char.h
> index 589e7fe46d..2e4c16f82f 100644
> --- a/include/chardev/char.h
> +++ b/include/chardev/char.h
> @@ -254,24 +254,57 @@ struct ChardevClass {
>  
>  bool internal; /* TODO: eventually use TYPE_USER_CREATABLE */
>  bool supports_yank;
> +
> +/* parse command line options and populate QAPI @backend */
>  void (*parse)(QemuOpts *opts, ChardevBackend *backend, Error **errp);
>  
> +/* called after construction, open/starts the backend */
>  void (*open)(Chardev *chr, ChardevBackend *backend,
>   bool *be_opened, Error **errp);
>  
> +/* write buf to the backend */
>  int (*chr_write)(Chardev *s, const uint8_t *buf, int len);
> +
> +/*
> + * Read from the backend (blocking). A typical front-end will instead 
> rely
> + * on char_can_read/chr_read being called when polling/looping.
> + */

chr_can_read

>  int (*chr_sync_read)(Chardev *s, const uint8_t *buf, int len);
> +
> +/* create a watch on the backend */
>  GSource *(*chr_add_watch)(Chardev *s, GIOCondition cond);
> +
> +/* update the backend internal sources */
>  void (*chr_update_read_handler)(Chardev *s);
> +
> +/* send an ioctl to the backend */
>  int (*chr_ioctl)(Chardev *s, int cmd, void *arg);
> +
> +/* get ancillary-received fds during last read */
>  int (*get_msgfds)(Chardev *s, int* fds, int num);
> +
> +/* set ancillary fds to be sent with next write */
>  int (*set_msgfds)(Chardev *s, int *fds, int num);
> +
> +/* accept the given fd */
>  int (*chr_add_client)(Chardev *chr, int fd);
> +
> +/* wait for a connection */
>  int (*chr_wait_connected)(Chardev *chr, Error **errp);
> +
> +/* disconnect a connection */
>  void (*chr_disconnect)(Chardev *chr);
> +
> +/* called by frontend when it can read */
>  void (*chr_accept_input)(Chardev *chr);
> +
> +/* set terminal echo */
>  void (*chr_set_echo)(Chardev *chr, bool echo);
> +
> +/* notify the backend of frontend open state */
>  void (*chr_set_fe_open)(Chardev *chr, int fe_open);
> +
> +/* handle various events */
>  void (*chr_be_event)(Chardev *s, QEMUChrEvent event);
>  };

Reviewed-by: Daniel P. Berrangé 

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH-for-6.1 v2 2/2] hw/sd/sdcard: Fix assertion accessing out-of-range addresses with CMD30

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 00:55, Philippe Mathieu-Daudé  wrote:
>
> OSS-Fuzz found sending illegal addresses when querying the write
> protection bits triggers the assertion added in commit 84816fb63e5
> ("hw/sd/sdcard: Assert if accessing an illegal group"):
>
>   qemu-fuzz-i386-target-generic-fuzz-sdhci-v3: ../hw/sd/sd.c:824: uint32_t 
> sd_wpbits(SDState *, uint64_t):
>   Assertion `wpnum < sd->wpgrps_size' failed.
>   #3 0x7f62a8b22c91 in __assert_fail
>   #4 0x5569adcec405 in sd_wpbits hw/sd/sd.c:824:9
>   #5 0x5569adce5f6d in sd_normal_command hw/sd/sd.c:1389:38
>   #6 0x5569adce3870 in sd_do_command hw/sd/sd.c:1737:17
>   #7 0x5569adcf1566 in sdbus_do_command hw/sd/core.c:100:16
>   #8 0x5569adcfc192 in sdhci_send_command hw/sd/sdhci.c:337:12
>   #9 0x5569adcfa3a3 in sdhci_write hw/sd/sdhci.c:1186:9
>   #10 0x5569adfb3447 in memory_region_write_accessor softmmu/memory.c:492:5
>
> It is legal for the CMD30 to query for out-of-range addresses.
> Such invalid addresses are simply ignored in the response (write
> protection bits set to 0).
>
> In commit 84816fb63e5 ("hw/sd/sdcard: Assert if accessing an illegal
> group") we misplaced the assertion *before* we test the address is
> in range. Move it *after*.
>
> Include the qtest reproducer provided by Alexander Bulekov:
>
>   $ make check-qtest-i386
>   ...
>   Running test qtest-i386/fuzz-sdcard-test
>   qemu-system-i386: ../hw/sd/sd.c:824: sd_wpbits: Assertion `wpnum < 
> sd->wpgrps_size' failed.
>
> Cc: qemu-sta...@nongnu.org
> Reported-by: OSS-Fuzz (Issue 29225)
> Suggested-by: Peter Maydell 
> Fixes: 84816fb63e5 ("hw/sd/sdcard: Assert if accessing an illegal group")
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/495
> Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Peter Maydell 

thanks
-- PMM



Re: [PATCH-for-6.1 v2 1/2] hw/sd/sdcard: Document out-of-range addresses for SEND_WRITE_PROT

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 00:55, Philippe Mathieu-Daudé  wrote:
>
> Per the 'Physical Layer Simplified Specification Version 3.01',
> Table 4-22: 'Block Oriented Write Protection Commands'
>
>   SEND_WRITE_PROT (CMD30)
>
>   If the card provides write protection features, this command asks
>   the card to send the status of the write protection bits [1].
>
>   [1] 32 write protection bits (representing 32 write protect groups
>   starting at the specified address) [...]
>   The last (least significant) bit of the protection bits corresponds
>   to the first addressed group. If the addresses of the last groups
>   are outside the valid range, then the corresponding write protection
>   bits shall be set to 0.
>
> Split the if() statement (without changing the behaviour of the code)
> to better position the description comment.
>
> Reviewed-by: Alexander Bulekov 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  hw/sd/sd.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/hw/sd/sd.c b/hw/sd/sd.c
> index 1f964e022b1..707dcc12a14 100644
> --- a/hw/sd/sd.c
> +++ b/hw/sd/sd.c
> @@ -822,7 +822,14 @@ static uint32_t sd_wpbits(SDState *sd, uint64_t addr)
>
>  for (i = 0; i < 32; i++, wpnum++, addr += WPGROUP_SIZE) {
>  assert(wpnum < sd->wpgrps_size);
> -if (addr < sd->size && test_bit(wpnum, sd->wp_groups)) {
> +if (addr >= sd->size) {
> +/*
> + * If the addresses of the last groups are outside the valid 
> range,
> + * then the corresponding write protection bits shall be set to 
> 0.
> + */
> +continue;
> +}
> +if (test_bit(wpnum, sd->wp_groups)) {
>  ret |= (1 << i);


Reviewed-by: Peter Maydell 

thanks
-- PMM



Re: [PATCH v2 0/3] Add support for Fujitsu A64FX processor

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 01:37, ishii.shuuic...@fujitsu.com
 wrote:
>
> > I'm afraid this isn't the way a v2 patchseries should be structured.
> > The idea is that a v2 series should be complete in itself, not based on 
> > whatever v1
> > was. So when you make the changes requested in review of v1, you update the
> > commits in your local git branch, and then you send out the patches as the 
> > v2. v2
> > should apply cleanly on to master, and all the patches in it should be 
> > logically
> > separated out changes (with no "patch 1 makes a change and then patch 2
> > changes the code that was added in patch 1" effects).
>
> Thank you for comments.
> We apologize for the inconvenience caused by our lack of understanding.
> I understood your point.
>
> Just to confirm,
> should I update to v3 and resubmit it as a patch series based on the points 
> you mentioned?

Yes, please.

thanks
-- PMM



Re: [PULL 0/1] Libslirp update

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 09:30, Marc-André Lureau
 wrote:
> On Tue, Aug 3, 2021 at 12:55 AM Peter Maydell  
> wrote:
>> This one is still here:
>>
>> subprojects/libslirp/libslirp.so.0.3.1.p/src_arp_table.c.o: In
>> function `arp_table_add':
>> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
>> undefined reference to `__ubsan_handle_type_mismatch_v1'
>> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
>> undefined reference to `__ubsan_handle_type_mismatch_v1'
>>
>> when building the subprojects/libslirp/libslirp.so.0.3.1
>>
>> configure options:
>> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
>> '--extra-cflags=-fsanitize=undefined  -fno-sanitize=shift-base
>> -Werror'
>
>
> I am not able to reproduce. Could you check the value of default_library for 
> libslirp when you run "meson configure". It should be "static".

I never run "meson configure". I just run the QEMU "make".

Are you testing by starting with a before-the-libslirp-merge
change QEMU, doing a build, then updating to the libslirp
changes, and then doing a 'make' without reconfigure or
'make clean' ? I suspect this is perhaps to do with it being
an incremental build.

>> This happens because (as noted in the clang documentation for the
>> sanitizer: https://clang.llvm.org/docs/AddressSanitizer.html)
>> when linking a shared library with the sanitizers, clang does not
>> link in the sanitizer runtime library. That library is linked in
>> with the executable, and the shared library's references to the
>> sanitizer runtime functions are satisfied that way. However
>> you/meson are building libslirp.so with -Wl,--no-undefined
>> so the link of the .so fails.
>> (This does not happen with gcc, because gcc chose to make the
>> default for sanitizers to be to link against a shared libasan,
>> not a static one, the reverse of clang's default.)
>>
>> What I don't understand is why we're building the .so at all.
>> I just tried a fresh build with
>> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
>> '--enable-sanitizers'
>> to check that telling configure (and possibly thus meson) about
>> the sanitizers more directly still demonstrated the problem:
>> but that sidesteps it because it never builds the .so.
>> My other build directories (the ones that do plain old gcc
>> builds with no sanitizer) seem to have built the .so file
>> as well, though, so this isn't related to either clang or to
>> the sanitizers -- meson just doesn't seem to be consistent
>> about what we build.
>>
>> A related meson bug:
>> https://github.com/mesonbuild/meson/issues/764
>> (which was closed by just making meson warn if you tell it
>> to both use --no-undefined (which is the default) and to use
>> the sanitizer.)
>>
>> The ideal fix seems to me to be to figure out why we're
>> building the libslirp .so and not do that.
>>
>> A simple fix/workaround would be to set "b_lundef = false" in
>> default_options in your meson.build (which will suppress the
>> -Wl,--no-undefined option). That does mean you won't get
>> any warnings if you accidentally make libslirp use a function
>> that is provided by the QEMU executable, I suppose.
>>
>
> What if you pass --extra-ldflags='-fsanitize=undefined' then?

-fsanitize=undefined is already in the ldflags. That doesn't
help because clang is still going to decide not to statically
link libasan and give an error because of -Wl,--noundefined.

-- PMM



Re: [PATCH] hw/net: Discard overly fragmented packets

2021-08-03 Thread Thomas Huth

On 05/07/2021 10.40, Philippe Mathieu-Daudé wrote:

Our infrastructure can handle fragmented packets up to
NET_MAX_FRAG_SG_LIST (64) pieces. This hard limit has
been proven enough in production for years. If it is
reached, it is likely an evil crafted packet. Discard it.

Include the qtest reproducer provided by Alexander Bulekov:

   $ make check-qtest-i386
   ...
   Running test qtest-i386/fuzz-vmxnet3-test
   qemu-system-i386: net/eth.c:334: void eth_setup_ip4_fragmentation(const void 
*, size_t, void *, size_t, size_t, size_t, _Bool):
   Assertion `frag_offset % IP_FRAG_UNIT_SIZE == 0' failed.

Cc: qemu-sta...@nongnu.org
Reported-by: OSS-Fuzz (Issue 35799)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/460
Signed-off-by: Philippe Mathieu-Daudé 
---
  hw/net/net_tx_pkt.c |   8 ++
  tests/qtest/fuzz-vmxnet3-test.c | 195 
  MAINTAINERS |   1 +
  tests/qtest/meson.build |   1 +
  4 files changed, 205 insertions(+)
  create mode 100644 tests/qtest/fuzz-vmxnet3-test.c


Reviewed-by: Thomas Huth 

Jason, I think this would even still qualify for QEMU v6.1?




Re: migration-test hang, s390x host, aarch64 guest

2021-08-03 Thread Dr. David Alan Gilbert
* Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> * Peter Maydell (peter.mayd...@linaro.org) wrote:
> > migration-test hung again during 'make check'.
> 
> ccing in Peter Xu

I've not managed to reproduce this here; I've had the migration tests
running in a loop for the last day, just running the aarch64 migration
test on s390; I've hit a multifd/cancel hang a few times (which I
think might be the same one Leo is fighting), but no problems in
the postcopy test.

[2 vcpu, ~2G RAM zvm guest???]

Dave

> > Process tree:
> > 
> > ubuntu   42067  0.0  0.0   5460  3156 ?S13:55   0:00
> >\_ bash -o pipefail -c echo
> > 'MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
> > QTEST_QEMU_IMG=./qemu-img
> > G_TEST_DBUS_DAEMON=/home/ubuntu/qemu/tests/dbus-vmstate-daemon.sh
> > QTEST_QEMU_BINARY=./qemu-system-aarch64
> > QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> > tests/qtest/migration-test --tap -k' &&
> > MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
> > QTEST_QEMU_IMG=./qemu-img
> > G_TEST_DBUS_DAEMON=/home/ubuntu/qemu/tests/dbus-vmstate-daemon.sh
> > QTEST_QEMU_BINARY=./qemu-system-aarch64
> > QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> > tests/qtest/migration-test --tap -k -m quick < /dev/null |
> > ./scripts/tap-driver.pl --test-name="qtest-aarch64/migration-test"
> > ubuntu   42070  0.0  0.0  78844  3184 ?Sl   13:55   0:01
> >\_ tests/qtest/migration-test --tap -k -m quick
> > ubuntu   43248  0.0  4.1 1401368 160852 ?  Sl   13:55   0:03
> >|   \_ ./qemu-system-aarch64 -qtest
> > unix:/tmp/qtest-42070.sock -qtest-log /dev/null -chardev
> > socket,path=/tmp/qtest-42070.qmp,id=char0 -mon
> > chardev=char0,mode=control -display none -accel kvm -accel tcg
> > -machine virt,gic-version=max -name source,debug-threads=on -m 150M
> > -serial file:/tmp/migration-test-SL7EkN/src_serial -cpu max -kernel
> > /tmp/migration-test-SL7EkN/bootsect -accel qtest
> > ubuntu   43256  0.0  1.4 1394208 57648 ?   Sl   13:55   0:00
> >|   \_ ./qemu-system-aarch64 -qtest
> > unix:/tmp/qtest-42070.sock -qtest-log /dev/null -chardev
> > socket,path=/tmp/qtest-42070.qmp,id=char0 -mon
> > chardev=char0,mode=control -display none -accel kvm -accel tcg
> > -machine virt,gic-version=max -name target,debug-threads=on -m 150M
> > -serial file:/tmp/migration-test-SL7EkN/dest_serial -incoming
> > unix:/tmp/migration-test-SL7EkN/migsocket -cpu max -kernel
> > /tmp/migration-test-SL7EkN/bootsect -accel qtest
> > ubuntu   42071  0.0  0.2  14116 11372 ?S13:55   0:00
> >\_ perl ./scripts/tap-driver.pl
> > --test-name=qtest-aarch64/migration-test
> > 
> > Backtrace, migration-test process:
> > 
> > Thread 2 (Thread 0x3ff8ff7f910 (LWP 42075)):
> > #0  syscall () at ../sysdeps/unix/sysv/linux/s390/s390-64/syscall.S:53
> > #1  0x02aa1d0d1c1c in qemu_futex_wait (val=,
> > f=) at /home/ubuntu/qemu/include/qemu/futex.h:29
> > #2  qemu_event_wait (ev=ev@entry=0x2aa1d0fda58 )
> > at ../../util/qemu-thread-posix.c:480
> > #3  0x02aa1d0d0884 in call_rcu_thread (opaque=opaque@entry=0x0) at
> > ../../util/rcu.c:258
> > #4  0x02aa1d0d0c32 in qemu_thread_start (args=) at
> > ../../util/qemu-thread-posix.c:541
> > #5  0x03ff90207aa8 in start_thread (arg=0x3ff8ff7f910) at
> > pthread_create.c:463
> > #6  0x03ff900fa11e in thread_start () at
> > ../sysdeps/unix/sysv/linux/s390/s390-64/clone.S:65
> > 
> > Thread 1 (Thread 0x3ff905f7c00 (LWP 42070)):
> > #0  0x03ff90212978 in __waitpid (pid=,
> > stat_loc=stat_loc@entry=0x2aa1d5e06bc, options=options@entry=0)
> > at ../sysdeps/unix/sysv/linux/waitpid.c:30
> > #1  0x02aa1d0a2176 in qtest_kill_qemu (s=0x2aa1d5e06b0) at
> > ../../tests/qtest/libqtest.c:144
> > #2  0x03ff903c0de6 in g_hook_list_invoke () from
> > /usr/lib/s390x-linux-gnu/libglib-2.0.so.0
> > #3  
> > #4  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
> > #5  0x03ff9003f3ca in __GI_abort () at abort.c:79
> > #6  0x03ff903fcbb2 in g_assertion_message () from
> > /usr/lib/s390x-linux-gnu/libglib-2.0.so.0
> > #7  0x03ff903fd2b4 in g_assertion_message_cmpstr () from
> > /usr/lib/s390x-linux-gnu/libglib-2.0.so.0
> > #8  0x02aa1d0a10f6 in check_migration_status
> > (ungoals=0x3dfe198, goal=0x2aa1d0d75c0 "postcopy-paused",
> > who=0x2aa1d5dee90)
> 
> g_assert_cmpstr(current_status, !=,  *ungoal);
> 
> > at ../../tests/qtest/migration-helpers.c:146
> > #9  wait_for_migration_status (who=0x2aa1d5dee90, goal= > out>, ungoals=0x3dfe198)
> > at ../../tests/qtest/migration-helpers.c:156
> > #10 0x02aa1d09f610 in test_postcopy_recovery () at
> > ../../tests/qtest/migration-test.c:773
> 
> wait_for_migration_status(from, "postcopy-paused",
>   (const char * []) { "failed", "active",
> 

Re: [PULL 0/1] Libslirp update

2021-08-03 Thread Marc-André Lureau
Hi

On Tue, Aug 3, 2021 at 1:09 PM Peter Maydell 
wrote:

> On Tue, 3 Aug 2021 at 09:30, Marc-André Lureau
>  wrote:
> > On Tue, Aug 3, 2021 at 12:55 AM Peter Maydell 
> wrote:
> >> This one is still here:
> >>
> >> subprojects/libslirp/libslirp.so.0.3.1.p/src_arp_table.c.o: In
> >> function `arp_table_add':
> >>
> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >>
> /mnt/nvmedisk/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >>
> >> when building the subprojects/libslirp/libslirp.so.0.3.1
> >>
> >> configure options:
> >> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
> >> '--extra-cflags=-fsanitize=undefined  -fno-sanitize=shift-base
> >> -Werror'
> >
> >
> > I am not able to reproduce. Could you check the value of default_library
> for libslirp when you run "meson configure". It should be "static".
>
> I never run "meson configure". I just run the QEMU "make".
>
>
Yes, here running "meson configure" after configure/make allows me to check
what meson has actually recorded.


> Are you testing by starting with a before-the-libslirp-merge
> change QEMU, doing a build, then updating to the libslirp
> changes, and then doing a 'make' without reconfigure or
> 'make clean' ? I suspect this is perhaps to do with it being
> an incremental build.
>
>
I tested with the "make vm-build-ubuntu.amd64" setup I gave before, so it
is a fresh build. Doing incremental build test is tedious, but I can give
it a try.

> >> This happens because (as noted in the clang documentation for the
> >> sanitizer: https://clang.llvm.org/docs/AddressSanitizer.html)
> >> when linking a shared library with the sanitizers, clang does not
> >> link in the sanitizer runtime library. That library is linked in
> >> with the executable, and the shared library's references to the
> >> sanitizer runtime functions are satisfied that way. However
> >> you/meson are building libslirp.so with -Wl,--no-undefined
> >> so the link of the .so fails.
> >> (This does not happen with gcc, because gcc chose to make the
> >> default for sanitizers to be to link against a shared libasan,
> >> not a static one, the reverse of clang's default.)
> >>
> >> What I don't understand is why we're building the .so at all.
> >> I just tried a fresh build with
> >> '../../configure' '--cc=clang' '--cxx=clang++' '--enable-gtk'
> >> '--enable-sanitizers'
> >> to check that telling configure (and possibly thus meson) about
> >> the sanitizers more directly still demonstrated the problem:
> >> but that sidesteps it because it never builds the .so.
> >> My other build directories (the ones that do plain old gcc
> >> builds with no sanitizer) seem to have built the .so file
> >> as well, though, so this isn't related to either clang or to
> >> the sanitizers -- meson just doesn't seem to be consistent
> >> about what we build.
> >>
> >> A related meson bug:
> >> https://github.com/mesonbuild/meson/issues/764
> >> (which was closed by just making meson warn if you tell it
> >> to both use --no-undefined (which is the default) and to use
> >> the sanitizer.)
> >>
> >> The ideal fix seems to me to be to figure out why we're
> >> building the libslirp .so and not do that.
> >>
> >> A simple fix/workaround would be to set "b_lundef = false" in
> >> default_options in your meson.build (which will suppress the
> >> -Wl,--no-undefined option). That does mean you won't get
> >> any warnings if you accidentally make libslirp use a function
> >> that is provided by the QEMU executable, I suppose.
> >>
> >
> > What if you pass --extra-ldflags='-fsanitize=undefined' then?
>
> -fsanitize=undefined is already in the ldflags. That doesn't
> help because clang is still going to decide not to statically
> link libasan and give an error because of -Wl,--noundefined.
>

Ok


-- 
Marc-André Lureau


Re: [PULL 0/1] Libslirp update

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 10:08, Peter Maydell  wrote:
> Are you testing by starting with a before-the-libslirp-merge
> change QEMU, doing a build, then updating to the libslirp
> changes, and then doing a 'make' without reconfigure or
> 'make clean' ? I suspect this is perhaps to do with it being
> an incremental build.

More specifically:

$ git checkout master
$ mkdir build/slirptest
$ (cd build/slirptest && ../../configure --target-list=arm-softmmu
--enable-debug --disable-docs --disable-tools)
$ make -C build/slirptest -j8
$ git checkout remotes/elmarco/libslirp
$ make -C build/slirptest
$ mkdir build/slirp2
$ (cd build/slirp2 && ../../configure --target-list=arm-softmmu
--enable-debug --disable-docs --disable-tools)
$ make -C build/slirp2

The build/slirptest directory has built a .so, and the
build/slirp2 directory has built a static .a.

(Both builds succeed because they're not hitting the clang
static analyzer thing, but they shouldn't be building different
types of library.)

Running '../../meson/meson.py configure' in slirp2 gives
(ignoring the non libslirp parts of the output):

Subproject libslirp:

  Core options              Current Value  Possible Values         Description
  ------------              -------------  ---------------         -----------
  libslirp:default_library  static         [shared, static, both]  Default library type
  libslirp:werror           true           [true, false]           Treat warnings as errors

  Project options           Current Value  Possible Values         Description
  ---------------           -------------  ---------------         -----------
  libslirp:version_suffix                                          Suffix to append to SLIRP_VERSION_STRING

In slirptest I get only:

Subproject libslirp:

  Project options           Current Value  Possible Values         Description
  ---------------           -------------  ---------------         -----------
  libslirp:version_suffix                                          Suffix to append to SLIRP_VERSION_STRING


So somehow meson has failed to apply the default_library and werror options in
the incremental build case.

-- PMM



Re: [PATCH] hw/net: Discard overly fragmented packets

2021-08-03 Thread Philippe Mathieu-Daudé
On 8/3/21 11:33 AM, Thomas Huth wrote:
> On 05/07/2021 10.40, Philippe Mathieu-Daudé wrote:
>> Our infrastructure can handle fragmented packets up to
>> NET_MAX_FRAG_SG_LIST (64) pieces. This hard limit has
>> been proven enough in production for years. If it is
>> reached, it is likely an evil crafted packet. Discard it.
>>
>> Include the qtest reproducer provided by Alexander Bulekov:
>>
>>    $ make check-qtest-i386
>>    ...
>>    Running test qtest-i386/fuzz-vmxnet3-test
>>    qemu-system-i386: net/eth.c:334: void
>> eth_setup_ip4_fragmentation(const void *, size_t, void *, size_t,
>> size_t, size_t, _Bool):
>>    Assertion `frag_offset % IP_FRAG_UNIT_SIZE == 0' failed.
>>
>> Cc: qemu-sta...@nongnu.org
>> Reported-by: OSS-Fuzz (Issue 35799)
>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/460
>> Signed-off-by: Philippe Mathieu-Daudé 
>> ---
>>   hw/net/net_tx_pkt.c |   8 ++
>>   tests/qtest/fuzz-vmxnet3-test.c | 195 
>>   MAINTAINERS |   1 +
>>   tests/qtest/meson.build |   1 +
>>   4 files changed, 205 insertions(+)
>>   create mode 100644 tests/qtest/fuzz-vmxnet3-test.c
> 
> Reviewed-by: Thomas Huth 
> 
> Jason, I think this would even still qualify for QEMU v6.1 ?

Yes, easy one for 6.1.
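
For anyone skimming the thread, here is a minimal sketch of the kind of guard
the quoted commit message describes. It is purely illustrative and not the
actual hw/net/net_tx_pkt.c hunk; the helper name and fragment counter are
made up, only NET_MAX_FRAG_SG_LIST comes from the text above.

    #include <stdbool.h>
    #include <stddef.h>

    #define NET_MAX_FRAG_SG_LIST 64  /* hard limit quoted in the commit message */

    /* Hypothetical helper: refuse packets that would need more scatter-gather
     * entries than the infrastructure supports, so the caller can simply
     * discard them instead of tripping the assertion in net/eth.c. */
    static bool frag_count_acceptable(size_t nfrags)
    {
        return nfrags < NET_MAX_FRAG_SG_LIST;
    }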




Re: [PATCH v2 39/55] target/sparc: Use cpu_*_mmu instead of helper_*_mmu

2021-08-03 Thread Philippe Mathieu-Daudé
On 8/3/21 6:14 AM, Richard Henderson wrote:
> The helper_*_mmu functions were the only thing available
> when this code was written.  This could have been adjusted
> when we added cpu_*_mmuidx_ra, but now we can most easily
> use the newest set of interfaces.
> 
> Cc: Mark Cave-Ayland 
> Signed-off-by: Richard Henderson 
> ---
>  target/sparc/ldst_helper.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 02/55] hw/core: Make do_unaligned_access available to user-only

2021-08-03 Thread Philippe Mathieu-Daudé
On 8/3/21 6:13 AM, Richard Henderson wrote:
> We shouldn't be ignoring SIGBUS for user-only.
> 
> Move our existing TCGCPUOps hook out from CONFIG_SOFTMMU.
> Move the wrapper, cpu_unaligned_access, to cpu-exec-common.c.
> 
> Signed-off-by: Richard Henderson 
> ---
>  accel/tcg/internal.h  |  4 
>  include/hw/core/tcg-cpu-ops.h | 16 
>  accel/tcg/cpu-exec-common.c   | 12 
>  accel/tcg/cputlb.c|  9 -
>  4 files changed, 24 insertions(+), 17 deletions(-)
> 
> diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
> index 881bc1ede0..a5e70cd91d 100644
> --- a/accel/tcg/internal.h
> +++ b/accel/tcg/internal.h
> @@ -19,4 +19,8 @@ void QEMU_NORETURN cpu_io_recompile(CPUState *cpu, 
> uintptr_t retaddr);
>  void page_init(void);
>  void tb_htable_init(void);
>  
> +void QEMU_NORETURN cpu_unaligned_access(CPUState *cpu, vaddr addr,
> +MMUAccessType access_type,
> +int mmu_idx, uintptr_t retaddr);

Thanks for using noreturn :)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 01/55] hw/core: Make do_unaligned_access noreturn

2021-08-03 Thread Philippe Mathieu-Daudé
On 8/3/21 6:13 AM, Richard Henderson wrote:
> While we may have had some thought of allowing system-mode
> to return from this hook, we have no guests that require this.
> 
> Signed-off-by: Richard Henderson 
> ---
>  include/hw/core/tcg-cpu-ops.h  | 3 ++-
>  target/alpha/cpu.h | 4 ++--
>  target/arm/internals.h | 3 ++-
>  target/microblaze/cpu.h| 2 +-
>  target/mips/tcg/tcg-internal.h | 4 ++--
>  target/nios2/cpu.h | 4 ++--
>  target/ppc/internal.h  | 4 ++--
>  target/riscv/cpu.h | 2 +-
>  target/s390x/s390x-internal.h  | 4 ++--
>  target/sh4/cpu.h   | 4 ++--
>  target/xtensa/cpu.h| 4 ++--
>  target/hppa/cpu.c  | 7 ---
>  12 files changed, 24 insertions(+), 21 deletions(-)

> diff --git a/target/alpha/cpu.h b/target/alpha/cpu.h
> index 82df108967..6eb3fcc63e 100644
> --- a/target/alpha/cpu.h
> +++ b/target/alpha/cpu.h
> @@ -283,8 +283,8 @@ hwaddr alpha_cpu_get_phys_page_debug(CPUState *cpu, vaddr 
> addr);
>  int alpha_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
>  int alpha_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
>  void alpha_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
> -   MMUAccessType access_type,
> -   int mmu_idx, uintptr_t retaddr);
> +   MMUAccessType access_type, int mmu_idx,
> +   uintptr_t retaddr) QEMU_NORETURN;
>  
>  #define cpu_list alpha_cpu_list
>  #define cpu_signal_handler cpu_alpha_signal_handler
> diff --git a/target/arm/internals.h b/target/arm/internals.h
> index cd2ea8a388..3da9b1c61e 100644
> --- a/target/arm/internals.h
> +++ b/target/arm/internals.h
> @@ -594,7 +594,8 @@ bool arm_s1_regime_using_lpae_format(CPUARMState *env, 
> ARMMMUIdx mmu_idx);
>  /* Raise a data fault alignment exception for the specified virtual address 
> */
>  void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
>   MMUAccessType access_type,
> - int mmu_idx, uintptr_t retaddr);
> + int mmu_idx, uintptr_t retaddr)
> +QEMU_NORETURN;

This one ended misaligned, I'd align as:

QEMU_NORETURN;

Otherwise:
Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v6 0/2] target/s390x: Fix SIGILL and SIGFPE psw.addr reporting

2021-08-03 Thread Laurent Vivier
Le 03/08/2021 à 10:13, Cornelia Huck a écrit :
> On Mon, Jul 05 2021, Ilya Leoshkevich  wrote:
> 
>> qemu-s390x puts a wrong value into SIGILL's siginfo_t's psw.addr: it
>> should be a pointer to the instruction following the illegal
>> instruction, but at the moment it is a pointer to the illegal
>> instruction itself. This breaks OpenJDK, which relies on this value.
>> A similar problem exists for SIGFPE.
>>
>> Patch 1 fixes the issue, patch 2 adds a test.
>>
>> v1: https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06592.html
>> v1 -> v2: Use a better buglink (Cornelia), simplify the inline asm
>>   magic in the test and add an explanation (David).
>>
>> v2: https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06649.html
>> v2 -> v3: Fix SIGSEGV handling (found when trying to run valgrind under
>>   qemu-user).
>>
>> v3: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg00299.html
>> v3 -> v4: Fix compiling the test on Ubuntu 20.04 (Jonathan).
>>
>> v4: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg05848.html
>> v4 -> v5: Greatly simplify the fix (Ulrich).
>>
>> v5: https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg06244.html
>> v5 -> v6: Fix breakpoints (David). Add gdbstub test.
>>
>> Note: the compare-and-trap SIGFPE issue is being fixed separately.
>> https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg05690.html
>>
>> Ilya Leoshkevich (2):
>>   target/s390x: Fix SIGILL and SIGFPE psw.addr reporting
>>   tests/tcg/s390x: Test SIGILL and SIGSEGV handling
>>
>>  linux-user/s390x/cpu_loop.c   |  12 +-
>>  tests/tcg/s390x/Makefile.target   |  18 +-
>>  tests/tcg/s390x/gdbstub/test-signals-s390x.py |  76 
>>  tests/tcg/s390x/signals-s390x.c   | 165 ++
>>  4 files changed, 269 insertions(+), 2 deletions(-)
>>  create mode 100644 tests/tcg/s390x/gdbstub/test-signals-s390x.py
>>  create mode 100644 tests/tcg/s390x/signals-s390x.c
> 
> So, I'd like to see this merged, but I'm unsure on what we agreed -- I
> thought this would go via linux-user. Do I misremember?
> 

Please, take them via the s390x branch.

Thanks,
Laurent



[RFC PATCH 1/3] configure: don't override the selected host test compiler if defined

2021-08-03 Thread Alex Bennée
There are not many cases where you would want to do this, but one is if
you want to use a test-friendly compiler like gcc instead of a system
compiler like clang. Either way, we should honour the user's choice if
they have made it.

Signed-off-by: Alex Bennée 
Cc: Warner Losh 
---
 configure | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/configure b/configure
index 9a79a004d7..eadc434144 100755
--- a/configure
+++ b/configure
@@ -1691,8 +1691,11 @@ case "$cpu" in
 # No special flags required for other host CPUs
 esac
 
-eval "cross_cc_${cpu}=\$cc"
-cross_cc_vars="$cross_cc_vars cross_cc_${cpu}"
+if eval test -z "\${cross_cc_$cpu}"; then
+eval "cross_cc_${cpu}=\$cc"
+cross_cc_vars="$cross_cc_vars cross_cc_${cpu}"
+fi
+
 QEMU_CFLAGS="$CPU_CFLAGS $QEMU_CFLAGS"
 
 # For user-mode emulation the host arch has to be one we explicitly
-- 
2.30.2




[RFC PATCH 0/3] check-tcg hacks for BSD

2021-08-03 Thread Alex Bennée
Hi Warner,

Here are some hacks I made to nominally get the check-tcg system
working on the BSD user builds. The first step was installing GCC as
we skip clang for x86 builds due to inline assembly issues:

  ../src/configure --disable-system --enable-user \
--python=/usr/local/bin/python3.7 --cross-cc-x86_64=/usr/local/bin/gcc10

and then at least "gmake build-tcg" generates some binaries. You
should also be able to drop a simple helloworld.c in
tests/tcg/multiarch and have something simple to start with. At the
moment all apart from sha1 segfault when run native. None of them run
under the user mode emulation. I leave figuring that out to the BSD
experts.
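
For reference, the sort of file meant by "a simple helloworld.c" above would
be something like the following (a hypothetical example, not taken from this
series):

    /* tests/tcg/multiarch/helloworld.c (hypothetical) */
    #include <stdio.h>

    int main(void)
    {
        printf("hello world\n");
        return 0;
    }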

Alex Bennée (3):
  configure: don't override the selected host test compiler if defined
  tests/tcg/sha1: remove endian include
  tests/tcg: commit Makefile atrocities in the name of portability

 configure   | 7 +--
 tests/tcg/multiarch/sha1.c  | 1 -
 tests/tcg/multiarch/Makefile.target | 6 +-
 tests/tcg/x86_64/Makefile.target| 4 
 4 files changed, 14 insertions(+), 4 deletions(-)

-- 
2.30.2




[RFC PATCH 3/3] tests/tcg: commit Makefile atrocities in the name of portability

2021-08-03 Thread Alex Bennée
Not all of the multiarch tests are pure POSIX, so elide those
tests on a non-Linux system. This allows at least some of the
tests to be nominally usable by *BSD user builds.

Signed-off-by: Alex Bennée 
Cc: Warner Losh 
---
 tests/tcg/multiarch/Makefile.target | 6 +-
 tests/tcg/x86_64/Makefile.target| 4 
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tests/tcg/multiarch/Makefile.target 
b/tests/tcg/multiarch/Makefile.target
index 85a6fb7a2e..38ee0f1dec 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -10,7 +10,11 @@ MULTIARCH_SRC=$(SRC_PATH)/tests/tcg/multiarch
 # Set search path for all sources
 VPATH  += $(MULTIARCH_SRC)
 MULTIARCH_SRCS   =$(notdir $(wildcard $(MULTIARCH_SRC)/*.c))
-MULTIARCH_TESTS  =$(filter-out float_helpers, $(MULTIARCH_SRCS:.c=))
+MULTIARCH_SKIP=float_helpers
+ifeq ($(CONFIG_LINUX),)
+MULTIARCH_SKIP+=linux-test
+endif
+MULTIARCH_TESTS  =$(filter-out $(MULTIARCH_SKIP),$(MULTIARCH_SRCS:.c=))
 
 #
 # The following are any additional rules needed to build things
diff --git a/tests/tcg/x86_64/Makefile.target b/tests/tcg/x86_64/Makefile.target
index 2151ea6302..d7a7385583 100644
--- a/tests/tcg/x86_64/Makefile.target
+++ b/tests/tcg/x86_64/Makefile.target
@@ -8,8 +8,12 @@
 
 include $(SRC_PATH)/tests/tcg/i386/Makefile.target
 
+ifneq ($(CONFIG_LINUX),)
 X86_64_TESTS += vsyscall
 TESTS=$(MULTIARCH_TESTS) $(X86_64_TESTS) test-x86_64
+else
+TESTS=$(MULTIARCH_TESTS)
+endif
 QEMU_OPTS += -cpu max
 
 test-x86_64: LDFLAGS+=-lm -lc
-- 
2.30.2




[RFC PATCH 2/3] tests/tcg/sha1: remove endian include

2021-08-03 Thread Alex Bennée
This doesn't exist in the BSD world and doesn't seem to be needed
either.

Signed-off-by: Alex Bennée 
Cc: Warner Losh 
---
 tests/tcg/multiarch/sha1.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/tcg/multiarch/sha1.c b/tests/tcg/multiarch/sha1.c
index 87bfbcdf52..0081bd7657 100644
--- a/tests/tcg/multiarch/sha1.c
+++ b/tests/tcg/multiarch/sha1.c
@@ -43,7 +43,6 @@ void SHA1Init(SHA1_CTX* context);
 void SHA1Update(SHA1_CTX* context, const unsigned char* data, uint32_t len);
 void SHA1Final(unsigned char digest[20], SHA1_CTX* context);
 /*  end of sha1.h  */
-#include <endian.h>
 
 #define rol(value, bits) (((value) << (bits)) | ((value) >> (32 - (bits))))
 
-- 
2.30.2




Re: [PATCH v6 05/11] qapi: introduce QAPISchemaIfCond.cgen()

2021-08-03 Thread Markus Armbruster
Markus Armbruster  writes:

> marcandre.lur...@redhat.com writes:
>
>> From: Marc-André Lureau 
>>
>> Instead of building prepocessor conditions from a list of string, use
>> the result generated from QAPISchemaIfCond.cgen() and hide the
>> implementation details.
>>
>> Signed-off-by: Marc-André Lureau 
>
> Please mention that the patch changes generated code.  See below for
> details.

[...]

> This patch does three things:
>
> (1) Change gen_if(), gen_endif() to always generate a single #if,
> #endif.  This enables:
>
> (2) Factor cgen_ifcond() out of gen_if() and gen_endif()
>
> (3) Lift the call of cgen_ifcond() into gen_if()'s, gen_endif()'s
> callers.
>
> I'd split the patch.  This is *not* a demand.
>
> The motivation for (3) is unclear.  Is it so gen_if() doesn't depend on
> QAPISchemaIfCond?
>
> Step (1) affects the generated code.  When @ifcond is [COND1, COND2, ...],
> gen_if()'s value changes from
>
> #if COND1
> #if COND2
> ...
>
> to
>
> #if (COND1) && (COND2)

[...]

> The common case: when it's just [COND], the value changes from
>
> #if COND
>
> to
>
> #if (COND)
>
> which is a bit ugly.
>
> Example: in qapi-types-block-export.c

[...]

> Avoiding the redundant pair of parentheses takes another special case.
> Let's do it.

Looks like PATCH 07 does it.  Avoiding the temporary change would be
nice, but isn't required.




Re: [PATCH v6 05/11] qapi: introduce QAPISchemaIfCond.cgen()

2021-08-03 Thread Markus Armbruster
Markus Armbruster  writes:

[...]

>> Avoiding the redundant pair of parentheses takes another special case.
>> Let's do it.
>
> Looks like PATCH 07 does it.  Avoiding the temporary change would be
> nice, but isn't required.

Make that PATCH 08.




Re: [PATCH v6 05/11] qapi: introduce QAPISchemaIfCond.cgen()

2021-08-03 Thread Markus Armbruster
Markus Armbruster  writes:

> Markus Armbruster  writes:
>
> [...]
>
>>> Avoiding the redundant pair of parentheses takes another special case.
>>> Let's do it.
>>
>> Looks like PATCH 07 does it.  Avoiding the temporary change would be
>> nice, but isn't required.
>
> Make that PATCH 08.

Scratch that, it's 07.  Sorry for the noise.




Re: [PATCH v3] hw/acpi: add an assertion check for non-null return from acpi_get_i386_pci_host

2021-08-03 Thread Ani Sinha
ping ...

On Thu, 29 Jul 2021, Ani Sinha wrote:

>
>
> On Thu, 29 Jul 2021, Ani Sinha wrote:
>
> >
> >
> > On Wed, 28 Jul 2021, Michael S. Tsirkin wrote:
> >
> > > On Mon, Jul 26, 2021 at 10:27:43PM +0530, Ani Sinha wrote:
> > > > All existing code using acpi_get_i386_pci_host() checks for a non-null
> > > > return value from this function call. Instead of returning early when 
> > > > the value
> > > > returned is NULL, assert instead. Since there are only two possible 
> > > > host buses
> > > > for i386 - q35 and i440fx, a null value return from the function does 
> > > > not make
> > > > sense in most cases and is likely an error situation.
> > >
> > > add "on i386"?
> > >
> > > > Fixes: c0e427d6eb5fef ("hw/acpi/ich9: Enable ACPI PCI hot-plug")
> > >
> > > This that seems inappropriate, this is not a bugfix.
> > >
> >
> > Forgot to answer this. I started this patch because I saw a gap that was
> > introduced with the above patch. In acpi_pcihp_disable_root_bus(), Julia's
> > code did not check for null return value from acpi_get_i386_pci_host().
> > See v2. Hence, I added the fixes tag. Then Igor suggested that I assert
> > instead and I also thought perhaps assertion is a better idea. Hence v3. I
> > am not conflicted after reading your argument. We should assert only when
> > a certain invariant is always respected. Otherwise we should not assert.
> > If you think acpi_get_i386_pci_host() can be called from non-i386 path as
> > well, maybe v2 approach is better.
>
> Also I should point out that at this moment, only ich9 and piix4 end up
> calling acpi_pcihp_disable_root_bus(). Hence, we are ok either way for
> now. In the future, if other archs end of calling this function, then the
> question is, do we gracefully fail by simply returning in case of null
> host bridge or do we assert? In its current form, it will ungracefully
> crash somewhere.
>
>



Re: [PATCH] hw/i386/ich9: add comment explaining an argument to acpi_pcihp_reset call

2021-08-03 Thread Ani Sinha
Ping ...

On Thu, 29 Jul 2021, Ani Sinha wrote:

> ping ...
>
> On Tue, 27 Jul 2021, Ani Sinha wrote:
>
> > acpi_pcihp_reset() call from ich9/pm_reset() passes an unconditional truth 
> > value
> > as the second argument. Added a comment here to explain the reason why the
> > argument is being passed unconditionally.
> >
> > Signed-off-by: Ani Sinha 
> > ---
> >  hw/acpi/ich9.c | 5 +
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
> > index 778e27b659..b2e3c46075 100644
> > --- a/hw/acpi/ich9.c
> > +++ b/hw/acpi/ich9.c
> > @@ -281,6 +281,11 @@ static void pm_reset(void *opaque)
> >  pm->smi_en_wmask = ~0;
> >
> >  if (pm->use_acpi_hotplug_bridge) {
> > +/*
> > + * PCI Express root buses do not support hot-plug, for
> > + * details see docs/pcie.txt. Hence, the second argument
> > + * is unconditionally true.
> > + */
> >  acpi_pcihp_reset(&pm->acpi_pci_hotplug, true);
> >  }
> >
> > --
> > 2.25.1
> >
> >
>



Re: [PATCH v2 38/55] target/s390x: Use cpu_*_mmu instead of helper_*_mmu

2021-08-03 Thread David Hildenbrand

On 03.08.21 06:14, Richard Henderson wrote:

The helper_*_mmu functions were the only thing available
when this code was written.  This could have been adjusted
when we added cpu_*_mmuidx_ra, but now we can most easily
use the newest set of interfaces.

Cc: qemu-s3...@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé 
Signed-off-by: Richard Henderson 
---


Reviewed-by: David Hildenbrand 

--
Thanks,

David / dhildenb




Re: [PULL 0/1] Libslirp update

2021-08-03 Thread Marc-André Lureau
Hi

On Tue, Aug 3, 2021 at 1:50 PM Peter Maydell 
wrote:

> On Tue, 3 Aug 2021 at 10:08, Peter Maydell 
> wrote:
> > Are you testing by starting with a before-the-libslirp-merge
> > change QEMU, doing a build, then updating to the libslirp
> > changes, and then doing a 'make' without reconfigure or
> > 'make clean' ? I suspect this is perhaps to do with it being
> > an incremental build.
>
> More specifically:
>
> $ git checkout master
> $ mkdir build/slirptest
> $ (cd build/slirptest && ../../configure --target-list=arm-softmmu
> --enable-debug --disable-docs --disable-tools)
> $ make -C build/slirptest -j8
> $ git checkout remotes/elmarco/libslirp
> $ make -C build/slirptest
> $ mkdir build/slirp2
> $ (cd build/slirp2 && ../../configure --target-list=arm-softmmu
> --enable-debug --disable-docs --disable-tools)
> $ make -C build/slirp2
>
> The build/slirptest directory has built a .so, and the
> build/slirp2 directory has built a static .a.
>
> (Both builds succeed because they're not hitting the clang
> static analyzer thing, but they shouldn't be building different
> types of library.)
>
> Running '../../meson/meson.py configure' in slirp2 gives
> (ignoring the non libslirp parts of the output):
>
> Subproject libslirp:
>
>   Core options  Current Value
> Possible Values
> Description
>     -
> ---
> ---
>   libslirp:default_library  static
> [shared, static, both]Default
> library type
>   libslirp:werror   true   [true,
> false] Treat warnings
> as errors
>
>   Project options   Current Value
> Possible Values
> Description
>   ---   -
> ---
> ---
>   libslirp:version_suffix
>   Suffix to append
> to SLIRP_VERSION_STRING
>
> In slirptest I get only:
>
> Subproject libslirp:
>
>   Project options   Current Value
> Possible Values
> Description
>   ---   -
> ---
> ---
>   libslirp:version_suffix
>   Suffix to append
> to SLIRP_VERSION_STRING
>
>
> So somehow meson has failed to apply the default_library and werror
> options in
> the incremental build case.
>
>
Thanks for the detailed steps. It turns out that incremental build with
subprojects is broken with meson < 0.57.2 (
https://github.com/mesonbuild/meson/pull/8424).

Either we acknowledge that, or we fix the qemu meson.build for now with the
missing -lresolv for osx/bsd.

I am going to work on a patch for the second option, but leave the decision
open.





-- 
Marc-André Lureau


[PATCH] chardev: give some context on chardev-add error

2021-08-03 Thread marcandre . lureau
From: Marc-André Lureau 

Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1984721

Signed-off-by: Marc-André Lureau 
---
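As a side note, here is a minimal sketch of the error pattern the hunk below
relies on. The function and helper names are made up for illustration and are
not part of the patch; ERRP_GUARD() is what makes error_prepend() safe even
when the caller passed NULL or &error_fatal for errp.

    #include "qemu/osdep.h"
    #include "qapi/error.h"

    /* Hypothetical example, not part of this patch. */
    static bool frobnicate(Error **errp)
    {
        error_setg(errp, "frobnication not supported");
        return false;
    }

    void do_thing(const char *id, Error **errp)
    {
        ERRP_GUARD();

        if (!frobnicate(errp)) {
            /* add context, as qmp_chardev_add() now does for chardev ids */
            error_prepend(errp, "Failed to add chardev '%s': ", id);
            return;
        }
    }

With the patch applied, a failing chardev-add should report
"Failed to add chardev '<id>': ..." instead of only the low-level error.
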
 chardev/char.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/chardev/char.c b/chardev/char.c
index d959eec522..f59a61774b 100644
--- a/chardev/char.c
+++ b/chardev/char.c
@@ -1031,27 +1031,26 @@ Chardev *qemu_chardev_new(const char *id, const char 
*typename,
 ChardevReturn *qmp_chardev_add(const char *id, ChardevBackend *backend,
Error **errp)
 {
+ERRP_GUARD();
 const ChardevClass *cc;
 ChardevReturn *ret;
-Chardev *chr;
+g_autoptr(Chardev) chr = NULL;
 
 cc = char_get_class(ChardevBackendKind_str(backend->type), errp);
 if (!cc) {
-return NULL;
+goto err;
 }
 
 chr = chardev_new(id, object_class_get_name(OBJECT_CLASS(cc)),
   backend, NULL, false, errp);
 if (!chr) {
-return NULL;
+goto err;
 }
 
 if (!object_property_try_add_child(get_chardevs_root(), id, OBJECT(chr),
errp)) {
-object_unref(OBJECT(chr));
-return NULL;
+goto err;
 }
-object_unref(OBJECT(chr));
 
 ret = g_new0(ChardevReturn, 1);
 if (CHARDEV_IS_PTY(chr)) {
@@ -1060,6 +1059,10 @@ ChardevReturn *qmp_chardev_add(const char *id, 
ChardevBackend *backend,
 }
 
 return ret;
+
+err:
+error_prepend(errp, "Failed to add chardev '%s': ", id);
+return NULL;
 }
 
 ChardevReturn *qmp_chardev_change(const char *id, ChardevBackend *backend,
-- 
2.32.0.264.g75ae10bc75




Re: [PATCH for-6.1? v2 5/7] job: Add job_cancel_requested()

2021-08-03 Thread Vladimir Sementsov-Ogievskiy

02.08.2021 13:23, Max Reitz wrote:

On 27.07.21 17:47, Vladimir Sementsov-Ogievskiy wrote:

27.07.2021 18:39, Max Reitz wrote:

On 27.07.21 15:04, Vladimir Sementsov-Ogievskiy wrote:

26.07.2021 17:46, Max Reitz wrote:

Most callers of job_is_cancelled() actually want to know whether the job
is on its way to immediate termination.  For example, we refuse to pause
jobs that are cancelled; but this only makes sense for jobs that are
really actually cancelled.

A mirror job that is cancelled during READY with force=false should
absolutely be allowed to pause.  This "cancellation" (which is actually
a kind of completion) may take an indefinite amount of time, and so
should behave like any job during normal operation.  For example, with
on-target-error=stop, the job should stop on write errors. (In
contrast, force-cancelled jobs should not get write errors, as they
should just terminate and not do further I/O.)

Therefore, redefine job_is_cancelled() to only return true for jobs that
are force-cancelled (which as of HEAD^ means any job that interprets the
cancellation request as a request for immediate termination), and add
job_cancel_request() as the general variant, which returns true for any


job_cancel_requested()


jobs which have been requested to be cancelled, whether it be
immediately or after an arbitrarily long completion phase.

Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
Signed-off-by: Max Reitz 
---
  include/qemu/job.h |  8 +++-
  block/mirror.c | 10 --
  job.c  |  7 ++-
  3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 8aa90f7395..032edf3c5f 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -436,9 +436,15 @@ const char *job_type_str(const Job *job);
  /** Returns true if the job should not be visible to the management layer. */
  bool job_is_internal(Job *job);
  -/** Returns whether the job is scheduled for cancellation. */
+/** Returns whether the job is being cancelled. */
  bool job_is_cancelled(Job *job);
  +/**
+ * Returns whether the job is scheduled for cancellation (at an
+ * indefinite point).
+ */
+bool job_cancel_requested(Job *job);
+
  /** Returns whether the job is in a completed state. */
  bool job_is_completed(Job *job);
  diff --git a/block/mirror.c b/block/mirror.c
index e93631a9f6..72e02fa34e 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -936,7 +936,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
  /* Transition to the READY state and wait for complete. */
  job_transition_to_ready(&s->common.job);
  s->actively_synced = true;
-    while (!job_is_cancelled(&s->common.job) && !s->should_complete) {
+    while (!job_cancel_requested(&s->common.job) && !s->should_complete) {
  job_yield(&s->common.job);
  }
  s->common.job.cancelled = false;
@@ -1043,7 +1043,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
  }
    should_complete = s->should_complete ||
-    job_is_cancelled(&s->common.job);
+ job_cancel_requested(&s->common.job);
  cnt = bdrv_get_dirty_count(s->dirty_bitmap);
  }
  @@ -1087,7 +1087,7 @@ static int coroutine_fn mirror_run(Job *job, Error 
**errp)
  trace_mirror_before_sleep(s, cnt, job_is_ready(&s->common.job),
    delay_ns);
  job_sleep_ns(&s->common.job, delay_ns);
-    if (job_is_cancelled(&s->common.job) && s->common.job.force_cancel) {
+    if (job_is_cancelled(&s->common.job)) {
  break;
  }
  s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
@@ -1099,9 +1099,7 @@ immediate_exit:
   * or it was cancelled prematurely so that we do not guarantee that
   * the target is a copy of the source.
   */
-    assert(ret < 0 ||
-   (s->common.job.force_cancel &&
-    job_is_cancelled(&s->common.job)));
+    assert(ret < 0 || job_is_cancelled(&s->common.job));


(As a note, I hope this does the job regarding your suggestions for patch 4. :))


  assert(need_drain);
  mirror_wait_for_all_io(s);
  }
diff --git a/job.c b/job.c
index e78d893a9c..dba17a680f 100644
--- a/job.c
+++ b/job.c
@@ -216,6 +216,11 @@ const char *job_type_str(const Job *job)
  }
    bool job_is_cancelled(Job *job)
+{
+    return job->cancelled && job->force_cancel;


can job->cancelled be false when job->force_cancel is true ? I think not and 
worth an assertion here. Something like

if (job->force_cancel) {
   assert(job->cancelled);
   return true;
}

return false;


Sounds good, why not.




+}
+
+bool job_cancel_requested(Job *job)
  {
  return job->cancelled;
  }
@@ -1015,7 +1020,7 @@ void job_complete(Job *job, Error **errp)
  if (job_apply_verb(job, JOB_VERB_COMPLETE, errp)) {
  return;
  }
-    if (job_is_cancelled(job) || !job->driver->

'make check-acceptance' eats lots of disk space and never cleans it up

2021-08-03 Thread Peter Maydell
It looks like 'make check-acceptance' creates directories in
build/clang/tests/results which are huge and which it never
cleans up. For example one of my build directories (configured
just for arm targets) has over 350 'job-[timestamp]' directories,
many of which are 2.5GB or more in size.

I assume most of this is artefacts (disk images etc) needed to
rerun the tests. That's useful to keep around so you can manually
run a test. However, we should be sharing this between runs, not
creating a fresh copy for every time check-acceptance is
run, surely ?

I just freed 72 GB (!) of disk on my local box just by doing
rm -rf build/arm-clang/tests/results/ ...

thanks
-- PMM



Re: 'make check-acceptance' eats lots of disk space and never cleans it up

2021-08-03 Thread Cleber Rosa
On Tue, Aug 3, 2021 at 8:43 AM Peter Maydell  wrote:
>
> It looks like 'make check-acceptance' creates directories in
> build/clang/tests/results which are huge and which it never
> cleans up. For example one of my build directories (configured
> just for arm targets) has over 350 'job-[timestamp]' directories,
> many of which are 2.5GB or more in size.
>

Every "job-[timestamp]" directory is the result of an "avocado run"
invocation, that is, one "make check-acceptance" command.

> I assume most of this is artefacts (disk images etc) needed to
> rerun the tests. That's useful to keep around so you can manually
> run a test. However, we should be sharing this between runs, not
> creating a fresh copy for every time check-acceptance is
> run, surely ?
>

They contain results and files needed for debugging the results of
tests, not artefacts needed to re-run them.  Everything that is
shareable is in the "~/avocado/data/caches" directory.

> I just freed 72 GB (!) of disk on my local box just by doing
> rm -rf build/arm-clang/tests/results/ ...
>
> thanks
> -- PMM
>

There's the "make check-clean" rule, which will clear everything too.
We can also add a flag to *not* save the results from the beginning,
but I guess one would miss them when needed.

Any other ideas?

Thanks,
- Cleber.




Re: [PATCH v6 07/11] qapi: replace if condition list with dict {'all': [...]}

2021-08-03 Thread Markus Armbruster
marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Replace the simple list sugar form with a recursive structure that will
> accept other operators in the following commits (all, any or not).
>
> Signed-off-by: Marc-André Lureau 
> ---
>  scripts/qapi/common.py| 23 +--
>  scripts/qapi/expr.py  | 52 ++--
>  scripts/qapi/schema.py|  2 +-
>  tests/qapi-schema/bad-if-empty-list.json  |  2 +-
>  tests/qapi-schema/bad-if-list.json|  2 +-
>  tests/qapi-schema/bad-if.err  |  3 +-
>  tests/qapi-schema/doc-good.json   |  3 +-
>  tests/qapi-schema/doc-good.out| 13 ++--
>  tests/qapi-schema/doc-good.txt|  6 ++
>  tests/qapi-schema/enum-if-invalid.err |  3 +-
>  tests/qapi-schema/features-if-invalid.err |  2 +-
>  tests/qapi-schema/qapi-schema-test.json   | 25 
>  tests/qapi-schema/qapi-schema-test.out| 62 +--
>  .../qapi-schema/struct-member-if-invalid.err  |  2 +-
>  .../qapi-schema/union-branch-if-invalid.json  |  2 +-
>  15 files changed, 119 insertions(+), 83 deletions(-)
>
> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
> index 5181a0f167..51463510c9 100644
> --- a/scripts/qapi/common.py
> +++ b/scripts/qapi/common.py
> @@ -13,7 +13,8 @@
>  
>  import re
>  from typing import (
> -List,
> +Any,
> +Dict,
>  Match,
>  Optional,
>  Union,
> @@ -199,16 +200,28 @@ def guardend(name: str) -> str:
>   name=c_fname(name).upper())
>  
>  
> -def cgen_ifcond(ifcond: Union[str, List[str]]) -> str:
> +def docgen_ifcond(ifcond: Union[str, Dict[str, Any]]) -> str:

Uh, why do you swap cgen_ifcond() and docgen_ifcond()?  Accident?

>  if not ifcond:
>  return ''
> -return '(' + ') && ('.join(ifcond) + ')'
> +if isinstance(ifcond, str):
> +return ifcond
>  
> +oper, operands = next(iter(ifcond.items()))
> +oper = {'all': ' and '}[oper]
> +operands = [docgen_ifcond(o) for o in operands]
> +return '(' + oper.join(operands) + ')'

What a nice review speedbump you buried here...

The whole block boils down to the much less exciting

   operands = [docgen_ifcond(o) for o in ifcond['all']]
   return '(' + ' and '.join(operands) + ')'

Peeking ahead, I understand that you did it this way here so you can
extend it trivially there.  Matter of taste; what counts is the final
result and minimizing reviewer WTFs/minute along the way.

Since the WTFs/minute is a done deed now, what remains is the final
result, which I expect to review shortly.  But please try a bit harder
to be boring next time ;)

>  
> -def docgen_ifcond(ifcond: Union[str, List[str]]) -> str:
> +
> +def cgen_ifcond(ifcond: Union[str, Dict[str, Any]]) -> str:
>  if not ifcond:
>  return ''
> -return ' and '.join(ifcond)
> +if isinstance(ifcond, str):
> +return ifcond

This is what gets rid of the redundant parentheses in the common case
"single condition string".

> +
> +oper, operands = next(iter(ifcond.items()))
> +oper = {'all': '&&'}[oper]
> +operands = [cgen_ifcond(o) for o in operands]
> +return '(' + (') ' + oper + ' (').join(operands) + ')'

This line is hard to read.  Easier, I think:

   oper = {'all': ' && '}[oper]
   operands = ['(' + cgen_ifcond(o) + ')' for o in operands]
   return oper.join(operands)

Neither your version nor mine gets rid of the redundant parentheses in
the (uncommon) case "complex condition expression".
tests/test-qapi-introspect.c still has

#if (defined(TEST_IF_COND_1)) && (defined(TEST_IF_COND_2))
QLIT_QSTR("feature1"),
#endif /* (defined(TEST_IF_COND_1)) && (defined(TEST_IF_COND_2)) */

Mildly annoying.  I'm willing to leave this for later.

Code smell: cgen_ifcond() and docgen_ifcond() are almost identical.  Can
also be left for later.

>  
>  
>  def gen_if(cond: str) -> str:
> diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
> index 496f7e0333..3ee66c5f62 100644
> --- a/scripts/qapi/expr.py
> +++ b/scripts/qapi/expr.py
> @@ -259,14 +259,12 @@ def check_flags(expr: _JSONObject, info: 
> QAPISourceInfo) -> None:
>  
>  def check_if(expr: _JSONObject, info: QAPISourceInfo, source: str) -> None:
>  """
> -Normalize and validate the ``if`` member of an object.
> +Validate the ``if`` member of an object.
>  
> -The ``if`` member may be either a ``str`` or a ``List[str]``.
> -A ``str`` value will be normalized to ``List[str]``.
> +The ``if`` member may be either a ``str`` or a dict.
>  
>  :forms:
> -  :sugared: ``Union[str, List[str]]``
> -  :canonical: ``List[str]``
> +  :canonical: ``Union[str, dict]``

Does this :forms: thing make sense without any :sugared:?  John, you
added (invented?) it in commit a48653638fa, but no explanation made it
into the tree.

>  
>  :param expr

Re: [PATCH v6 07/11] qapi: replace if condition list with dict {'all': [...]}

2021-08-03 Thread Markus Armbruster
One more thing...

marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Replace the simple list sugar form with a recursive structure that will
> accept other operators in the following commits (all, any or not).
>
> Signed-off-by: Marc-André Lureau 
> ---

[...]

> diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
> index 496f7e0333..3ee66c5f62 100644
> --- a/scripts/qapi/expr.py
> +++ b/scripts/qapi/expr.py
> @@ -259,14 +259,12 @@ def check_flags(expr: _JSONObject, info: 
> QAPISourceInfo) -> None:
>  
>  def check_if(expr: _JSONObject, info: QAPISourceInfo, source: str) -> None:
>  """
> -Normalize and validate the ``if`` member of an object.
> +Validate the ``if`` member of an object.
>  
> -The ``if`` member may be either a ``str`` or a ``List[str]``.
> -A ``str`` value will be normalized to ``List[str]``.
> +The ``if`` member may be either a ``str`` or a dict.
>  
>  :forms:
> -  :sugared: ``Union[str, List[str]]``
> -  :canonical: ``List[str]``
> +  :canonical: ``Union[str, dict]``
>  
>  :param expr: The expression containing the ``if`` member to validate.
>  :param info: QAPI schema source file information.
> @@ -275,31 +273,45 @@ def check_if(expr: _JSONObject, info: QAPISourceInfo, 
> source: str) -> None:
>  :raise QAPISemError:
>  When the "if" member fails validation, or when there are no
>  non-empty conditions.
> -:return: None, ``expr`` is normalized in-place as needed.
> +:return: None
>  """
>  ifcond = expr.get('if')
>  if ifcond is None:
>  return
>  
> -if isinstance(ifcond, list):
> -if not ifcond:
> -raise QAPISemError(
> -info, "'if' condition [] of %s is useless" % source)
> -else:
> -# Normalize to a list
> -ifcond = expr['if'] = [ifcond]
> +def _check_if(cond: Union[str, object]) -> None:
> +if isinstance(cond, str):
> +if not cond.strip():
> +raise QAPISemError(
> +info,
> +"'if' condition '%s' of %s makes no sense"
> +% (cond, source))
> +return
>  
> -for elt in ifcond:
> -if not isinstance(elt, str):
> +if not isinstance(cond, dict):
>  raise QAPISemError(
>  info,
> -"'if' condition of %s must be a string or a list of strings"
> -% source)
> -if not elt.strip():
> +"'if' condition of %s must be a string or a dict" % source)
> +if len(cond) != 1:
>  raise QAPISemError(
>  info,
> -"'if' condition '%s' of %s makes no sense"
> -% (elt, source))
> +"'if' condition dict of %s must have one key: "
> +"'all'" % source)

Not covered by the negative tests in tests/qapi-schema/.  Please
double-check that all new errors are covered.

> +check_keys(cond, info, "'if' condition", [],
> +   ["all"])
> +
> +oper, operands = next(iter(cond.items()))
> +if not operands:
> +raise QAPISemError(
> +info, "'if' condition [] of %s is useless" % source)
> +
> +if oper in ("all") and not isinstance(operands, list):
> +raise QAPISemError(
> +info, "'%s' condition of %s must be a list" % (oper, source))
> +for operand in operands:
> +_check_if(operand)
> +
> +_check_if(ifcond)
>  
>  

[...]




Re: [PATCH v6 08/11] qapi: add 'any' condition

2021-08-03 Thread Markus Armbruster
marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Signed-off-by: Marc-André Lureau 
> ---
>  tests/unit/test-qmp-cmds.c  | 1 +
>  scripts/qapi/common.py  | 4 ++--
>  scripts/qapi/expr.py| 6 +++---
>  tests/qapi-schema/bad-if.err| 2 +-
>  tests/qapi-schema/doc-good.json | 3 ++-
>  tests/qapi-schema/doc-good.out  | 2 +-
>  tests/qapi-schema/doc-good.txt  | 3 ++-
>  tests/qapi-schema/enum-if-invalid.err   | 2 +-
>  tests/qapi-schema/qapi-schema-test.json | 7 ++-
>  tests/qapi-schema/qapi-schema-test.out  | 5 +
>  10 files changed, 24 insertions(+), 11 deletions(-)
>
> diff --git a/tests/unit/test-qmp-cmds.c b/tests/unit/test-qmp-cmds.c
> index 1b0b7d99df..83efa39720 100644
> --- a/tests/unit/test-qmp-cmds.c
> +++ b/tests/unit/test-qmp-cmds.c
> @@ -51,6 +51,7 @@ FeatureStruct1 *qmp_test_features0(bool has_fs0, 
> FeatureStruct0 *fs0,
> bool has_cfs1, CondFeatureStruct1 *cfs1,
> bool has_cfs2, CondFeatureStruct2 *cfs2,
> bool has_cfs3, CondFeatureStruct3 *cfs3,
> +   bool has_cfs4, CondFeatureStruct4 *cfs4,
> Error **errp)
>  {
>  return g_new0(FeatureStruct1, 1);
> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
> index 51463510c9..018d2f6996 100644
> --- a/scripts/qapi/common.py
> +++ b/scripts/qapi/common.py
> @@ -207,7 +207,7 @@ def docgen_ifcond(ifcond: Union[str, Dict[str, Any]]) -> 
> str:
>  return ifcond
>  
>  oper, operands = next(iter(ifcond.items()))
> -oper = {'all': ' and '}[oper]
> +oper = {'all': ' and ', 'any': ' or '}[oper]
>  operands = [docgen_ifcond(o) for o in operands]
>  return '(' + oper.join(operands) + ')'
>  
> @@ -219,7 +219,7 @@ def cgen_ifcond(ifcond: Union[str, Dict[str, Any]]) -> 
> str:
>  return ifcond
>  
>  oper, operands = next(iter(ifcond.items()))
> -oper = {'all': '&&'}[oper]
> +oper = {'all': '&&', 'any': '||'}[oper]
>  operands = [cgen_ifcond(o) for o in operands]
>  return '(' + (') ' + oper + ' (').join(operands) + ')'
>  
> diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
> index 3ee66c5f62..e10a803dc2 100644
> --- a/scripts/qapi/expr.py
> +++ b/scripts/qapi/expr.py
> @@ -296,16 +296,16 @@ def _check_if(cond: Union[str, object]) -> None:
>  raise QAPISemError(
>  info,
>  "'if' condition dict of %s must have one key: "
> -"'all'" % source)
> +"'all' or 'any'" % source)
>  check_keys(cond, info, "'if' condition", [],
> -   ["all"])
> +   ["all", "any"])
>  
>  oper, operands = next(iter(cond.items()))
>  if not operands:
>  raise QAPISemError(
>  info, "'if' condition [] of %s is useless" % source)
>  
> -if oper in ("all") and not isinstance(operands, list):
> +if oper in ("all", "any") and not isinstance(operands, list):
>  raise QAPISemError(
>  info, "'%s' condition of %s must be a list" % (oper, source))
>  for operand in operands:
> diff --git a/tests/qapi-schema/bad-if.err b/tests/qapi-schema/bad-if.err
> index 8278c49368..3624e79b0c 100644
> --- a/tests/qapi-schema/bad-if.err
> +++ b/tests/qapi-schema/bad-if.err
> @@ -1,3 +1,3 @@
>  bad-if.json: In struct 'TestIfStruct':
>  bad-if.json:2: 'if' condition has unknown key 'value'
> -Valid keys are 'all'.
> +Valid keys are 'all', 'any'.
> diff --git a/tests/qapi-schema/doc-good.json b/tests/qapi-schema/doc-good.json
> index 25b1053e8a..a67d4d9467 100644
> --- a/tests/qapi-schema/doc-good.json
> +++ b/tests/qapi-schema/doc-good.json
> @@ -103,7 +103,8 @@
>'features': [ 'union-feat1' ],
>'base': 'Base',
>'discriminator': 'base1',
> -  'data': { 'one': 'Variant1', 'two': { 'type': 'Variant2', 'if': 'IFTWO' } 
> } }
> +  'data': { 'one': 'Variant1', 'two': { 'type': 'Variant2',
> +'if': { 'any': ['IFONE', 'IFTWO'] } 
> } } }

Easier to read:

 'data': { 'one': 'Variant1',
   'two': { 'type': 'Variant2',
'if': { 'any': ['IFONE', 'IFTWO'] } } } }

>  
>  ##
>  # @SugaredUnion:
> diff --git a/tests/qapi-schema/doc-good.out b/tests/qapi-schema/doc-good.out
> index 689d084f3a..c44c346ec8 100644
> --- a/tests/qapi-schema/doc-good.out
> +++ b/tests/qapi-schema/doc-good.out
> @@ -30,7 +30,7 @@ object Object
>  tag base1
>  case one: Variant1
>  case two: Variant2
> -if IFTWO
> +if OrderedDict([('any', ['IFONE', 'IFTWO'])])
>  feature union-feat1
>  object q_obj_Variant1-wrapper
>  member data: Variant1 optional=False
> diff --git a/tests/qapi-schema/doc-good.txt b/tests/qapi-schema/doc-good.txt
> index 4490108cb7..251e9b746c 10064

[RFC PATCH : v3 2/2] Implementation of nvme-mi plugin in nvme-cli

2021-08-03 Thread Mohit Kapoor
From: mohit kapoor 

Subject: [RFC PATCH : v3 2/2] Implementation of nvme-mi plugin in nvme-cli

The enclosed patch contains the implementation of a new
nvme mi (Management Interface) plugin. By adding the mi plugin to the
nvme-cli application, users gain the ability to test the mi commands and to
control and manage the nvme subsystem using the nvme-cli application.
 
There are 3 components to the mi plugin:
1. The mi plugin command layer, which is responsible for forming command
packets with the nvme-mi header information and passing them on to the
utility layer.
2. The utility layer, which is responsible for adding the smbus/i2c header
information on top of the nvme-mi command packet and for communicating with
the HAL layer to send commands and receive responses.
3. The HAL (Hardware Abstraction Layer), which communicates with the nvme
subsystem via a sideband interface. The HAL layer makes it possible to
support multiple sideband hardware back ends in the nvme-cli mi plugin. HAL
uses a structure of function pointers for abstraction (a rough sketch of
such a table follows below). Currently qemu-mi support is added in the HAL
layer as the default sideband interface. The mi plugin of nvme-cli
communicates with qemu-mi using i2c on bus 0, slave address 0x15.
The prerequisite for executing via the i2c interface is to add qemu-mi
to the i2c bus using the 'i2cdetect 0' command, which is part of i2ctools.
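
A rough sketch of what such a function-pointer table might look like is shown
below; the names are illustrative guesses only and are not taken from the
patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical illustration of the HAL abstraction described above. */
    struct nvme_mi_hal_ops {
        bool (*open)(void);                              /* bring up the sideband link */
        bool (*send)(const uint8_t *buf, uint32_t len);  /* write an MCTP/SMBus packet */
        bool (*recv)(uint8_t *buf, uint32_t len);        /* read a response packet     */
        void (*close)(void);                             /* tear the link down again   */
    };

The idea is that the qemu-mi/i2c back end fills in one such table with its own
functions, and other sideband hardware can be supported later by registering a
different table, without touching the command or utility layers.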


For executing mi commands in the nvme-cli application, one needs to use the
following command format:
1. For complete list of commands : nvme mi help
2. For command help : nvme mi  -h
3. For executing command : nvme mi   

The mi plugin implementation is completely new and there are no changes to
other modules of nvme-cli, hence there will be no impact on any other
functionality of the nvme-cli application. Currently the mi plugin implements
the mi commands of the nvme-mi specification that are supported by qemu-mi.
Please review and suggest any improvements that could be made to the design
that has been followed.

v3
--rebased on master

diff --git a/qemu_nvme/nvme-cli/Makefile b/qemu_nvme/nvme-cli/Makefile
index 86eb7c6..3ea82dd 100644
--- a/qemu_nvme/nvme-cli/Makefile
+++ b/qemu_nvme/nvme-cli/Makefile
@@ -83,6 +83,13 @@ PLUGIN_OBJS :=   \
plugins/wdc/wdc-utils.o \
plugins/huawei/huawei-nvme.o\
plugins/netapp/netapp-nvme.o\
+   plugins/mi/util/hal/mi-nvme-hal-main.o  \
+   plugins/mi/util/hal/mi-nvme-qemu/mi-nvme-qemu.o \
+   plugins/mi/util/mi-nvme-util-base.o \
+   plugins/mi/util/mi-nvme-util-crc.o  \
+   plugins/mi/util/mi-nvme-util-tool.o \
+   plugins/mi/mi-nvme.o\
+   plugins/mi/mi-nvme-cmd.o\
plugins/toshiba/toshiba-nvme.o  \
plugins/micron/micron-nvme.o\
plugins/seagate/seagate-nvme.o  \
diff --git a/qemu_nvme/nvme-cli/plugins/mi/mi-nvme-cmd.c 
b/qemu_nvme/nvme-cli/plugins/mi/mi-nvme-cmd.c
new file mode 100644
index 000..8852004
--- /dev/null
+++ b/qemu_nvme/nvme-cli/plugins/mi/mi-nvme-cmd.c
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2021 Samsung Electronics Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * ~~
+ * mi-nvme-cmd.c - Implementation of NVMe Management Interface commands in Nvme
+ *
+ * Developer: Mohit Kapoor 
+ */
+
+#include "mi-nvme-cmd.h"
+
+uint32_t gettransmissionunitsize()
+{
+return MCTP_TRANS_UNIT_SIZE_VAL_DEF;
+}
+
+int executecommand(__u8 *cmdobj)
+{
+struct nvme_mi_message message;
+memset(&message, 0, sizeof(struct nvme_mi_message));
+nvme_mi_admin_message adminmessage;
+memset(&adminmessage, 0, sizeof(nvme_mi_admin_message));
+bool ret = false;
+uint32_t RequestDataSize = 0;
+struct gencmd_nvmemi *micmd;
+struct gencmd_nvmemi_admin *miadmincmd;
+uint32_t uiMCTP_TUS = gettransmissionunitsize();
+
+ret = initialize(uiMCTP_TUS);
+if (ret == false) {
+return -1;
+}
+
+streply.length = 0;
+streply.errorcode = 0;
+memset(streply.replybuf, 0, sizeof(streply.replybuf));
+memcpy(&message.msgPld, cmdobj, sizeof(struct nvme_mi_message) - 8);
+
+/*Check if the incoming command is an MI/MI-Admin Command*/
+if (message.msgPld.nvme_mi_message_header & 0x1000) {
+miadmincmd = (struct gencmd_nvmemi_admin *)cmdobj;
+memcpy(&adminmessage.msgPld, cmdobj, sizeof(nvme_mi_admin_message) - 
8);
+  

Re: [PATCH v6 09/11] qapi: convert 'if' C-expressions to the new syntax tree

2021-08-03 Thread Markus Armbruster
marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Signed-off-by: Marc-André Lureau 
> Reviewed-by: Stefan Hajnoczi 
> Tested-by: John Snow 

Possibly clearer:

qapi: Use 'if': { 'any': ... } where appropriate

Reviewed-by: Markus Armbruster 




[PULL for-6.1 0/6] qemu-ga patch queue for hard-freeze

2021-08-03 Thread Michael Roth
Hi Peter,

Sorry for the late submission. These patches affect only the w32 build of
qemu-ga. A number of these patches I've had queued for some time, but a bug
in the MSI installer (only recently fixed) was blocking testing. Now that it
is working again I am hoping to get these in along with a couple of other
fixes that have come in since then.

The following changes since commit 7f1cab9c628a798ae2607940993771e6300e9e00:

  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into 
staging (2021-08-02 17:21:50 +0100)

are available in the Git repository at:

  git://github.com/mdroth/qemu.git tags/qga-pull-2021-08-03-pull-tag

for you to fetch changes up to e300858ed4a6d0cbd52b7fb5082d3c69cc371965:

  qga-win/msi: fix missing libstdc++-6 DLL in MSI installer (2021-08-03 
07:01:36 -0500)


qemu-ga patch queue for hard-freeze

* w32: Fix missing/incorrect DLLs in MSI installer
* w32: Fix memory leaks in guest-get-osinfo/guest-get-fsinfo
* w32: Increase timeout for guest-fsfreeze-freeze


Basil Salman (3):
  qga-win: Increase VSS freeze timeout to 60 secs instead of 10
  qga-win: Fix build_guest_fsinfo() close of nonexistent
  qga-win: Fix handle leak in ga_get_win_product_name()

Gerd Hoffmann (1):
  qemu-ga/msi: fix w32 libgcc name

Kostiantyn Kostiuk (1):
  qga-win: Free GMatchInfo properly

Michael Roth (1):
  qga-win/msi: fix missing libstdc++-6 DLL in MSI installer

 qga/commands-win32.c| 18 --
 qga/installer/qemu-ga.wxs   |  6 +-
 qga/vss-win32/requester.cpp |  2 +-
 3 files changed, 18 insertions(+), 8 deletions(-)





[PULL for-6.1 2/6] qga-win: Fix build_guest_fsinfo() close of nonexistent

2021-08-03 Thread Michael Roth
From: Basil Salman 

On the current error path of build_guest_fsinfo(), a nonexistent handle
is passed to CloseHandle().

This patch adds initialization of hLocalDiskHandle to
INVALID_HANDLE_VALUE, and checks for handle validity before the handle
is closed.

Signed-off-by: Basil Salman 
Signed-off-by: Basil Salman 
Signed-off-by: Michael Roth 
---
 qga/commands-win32.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/qga/commands-win32.c b/qga/commands-win32.c
index a099acb34d..763186efd4 100644
--- a/qga/commands-win32.c
+++ b/qga/commands-win32.c
@@ -1091,7 +1091,7 @@ static GuestFilesystemInfo *build_guest_fsinfo(char 
*guid, Error **errp)
 size_t len;
 uint64_t i64FreeBytesToCaller, i64TotalBytes, i64FreeBytes;
 GuestFilesystemInfo *fs = NULL;
-HANDLE hLocalDiskHandle = NULL;
+HANDLE hLocalDiskHandle = INVALID_HANDLE_VALUE;
 
 GetVolumePathNamesForVolumeName(guid, (LPCH)&mnt, 0, &info_size);
 if (GetLastError() != ERROR_MORE_DATA) {
@@ -1149,7 +1149,9 @@ static GuestFilesystemInfo *build_guest_fsinfo(char 
*guid, Error **errp)
 fs->type = g_strdup(fs_name);
 fs->disk = build_guest_disk_info(guid, errp);
 free:
-CloseHandle(hLocalDiskHandle);
+if (hLocalDiskHandle != INVALID_HANDLE_VALUE) {
+CloseHandle(hLocalDiskHandle);
+}
 g_free(mnt_point);
 return fs;
 }
-- 
2.25.1




[PULL for-6.1 1/6] qga-win: Increase VSS freeze timeout to 60 secs instead of 10

2021-08-03 Thread Michael Roth
From: Basil Salman 

Currently the Requester freeze times out after 10 seconds, while
the default timeout for Writer Freeze is 60 seconds, according to
the VSS documentation [1].
[1]: 
https://docs.microsoft.com/en-us/windows/win32/vss/overview-of-processing-a-backup-under-vss

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1909073

Signed-off-by: Basil Salman 
Signed-off-by: Basil Salman 
Signed-off-by: Michael Roth 
---
 qga/vss-win32/requester.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qga/vss-win32/requester.cpp b/qga/vss-win32/requester.cpp
index 5378c55d23..940a2c8f55 100644
--- a/qga/vss-win32/requester.cpp
+++ b/qga/vss-win32/requester.cpp
@@ -18,7 +18,7 @@
 #include 
 
 /* Max wait time for frozen event (VSS can only hold writes for 10 seconds) */
-#define VSS_TIMEOUT_FREEZE_MSEC 10000
+#define VSS_TIMEOUT_FREEZE_MSEC 60000
 
 /* Call QueryStatus every 10 ms while waiting for frozen event */
 #define VSS_TIMEOUT_EVENT_MSEC 10
-- 
2.25.1




[PULL for-6.1 6/6] qga-win/msi: fix missing libstdc++-6 DLL in MSI installer

2021-08-03 Thread Michael Roth
libstdc++ is required for the qga-vss.dll that provides fsfreeze
functionality. Currently it is not provided by the MSI installer,
resulting in fsfreeze being disabled in guest environments where it has
not been installed by other means.

In the future this would be better handled via gcc-cpp ComponentGroup
provided by msitools, but that would be better handled with a general
rework of DLL dependency handling in the installer build. Keep it
simple for now to fix this regression.

Tested with Fedora 34 mingw build environment.

Cc: Gerd Hoffmann 
Cc: Kostiantyn Kostiuk 
Cc: Marc-André Lureau 
Cc: Philippe Mathieu-Daudé 
Reviewed-by: Marc-André Lureau 
Signed-off-by: Michael Roth 
---
 qga/installer/qemu-ga.wxs | 4 
 1 file changed, 4 insertions(+)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index ce7b25b5e1..0950e8c6be 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -84,6 +84,9 @@
 
   
   
+  
+
+  
   
 
   
@@ -164,6 +167,7 @@
 
   
   
+  
   
   
   
-- 
2.25.1




[PULL for-6.1 5/6] qemu-ga/msi: fix w32 libgcc name

2021-08-03 Thread Michael Roth
From: Gerd Hoffmann 

This is what I find on my Fedora 34 mingw install.

Signed-off-by: Gerd Hoffmann 
Signed-off-by: Michael Roth 
---
 qga/installer/qemu-ga.wxs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 9cb4c3d733..ce7b25b5e1 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -31,7 +31,7 @@
   
 
   
-
+
 
   
 
-- 
2.25.1




[PULL for-6.1 4/6] qga-win: Free GMatchInfo properly

2021-08-03 Thread Michael Roth
From: Kostiantyn Kostiuk 

The g_regex_match function creates match_info even if it
returns FALSE. So we should always call g_match_info_free.
A better solution is using g_autoptr for match_info variable.

Signed-off-by: Kostiantyn Kostiuk 
Reviewed-by: Daniel P. Berrangé 
Reviewed-by: Philippe Mathieu-Daudé 
Signed-off-by: Michael Roth 
---
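For illustration, a small standalone example of the pattern being adopted
(hypothetical code, not part of the patch): with g_autoptr the GMatchInfo is
freed on every return path, including the non-match case where
g_regex_match() still created it.

    #include <glib.h>

    static gboolean id_matches(GRegex *re, const char *str)
    {
        g_autoptr(GMatchInfo) match_info = NULL;

        /* match_info is populated even when this returns FALSE, so it
         * always needs freeing; g_autoptr handles that automatically. */
        return g_regex_match(re, str, 0, &match_info);
    }
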
 qga/commands-win32.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/qga/commands-win32.c b/qga/commands-win32.c
index 098211e724..7bac0c5d42 100644
--- a/qga/commands-win32.c
+++ b/qga/commands-win32.c
@@ -2459,7 +2459,7 @@ GuestDeviceInfoList *qmp_guest_get_devices(Error **errp)
 continue;
 }
 for (j = 0; hw_ids[j] != NULL; j++) {
-GMatchInfo *match_info;
+g_autoptr(GMatchInfo) match_info;
 GuestDeviceIdPCI *id;
 if (!g_regex_match(device_pci_re, hw_ids[j], 0, &match_info)) {
 continue;
@@ -2476,7 +2476,6 @@ GuestDeviceInfoList *qmp_guest_get_devices(Error **errp)
 id->vendor_id = g_ascii_strtoull(vendor_id, NULL, 16);
 id->device_id = g_ascii_strtoull(device_id, NULL, 16);
 
-g_match_info_free(match_info);
 break;
 }
 if (skip) {
-- 
2.25.1




[PULL for-6.1 3/6] qga-win: Fix handle leak in ga_get_win_product_name()

2021-08-03 Thread Michael Roth
From: Basil Salman 

In ga_get_win_product_name() a handle to a Registry key was opened but not
closed.

In this patch the handle is closed as part of the free routine.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1929144

Signed-off-by: Basil Salman 
Signed-off-by: Basil Salman 
Signed-off-by: Michael Roth 
---
 qga/commands-win32.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/qga/commands-win32.c b/qga/commands-win32.c
index 763186efd4..098211e724 100644
--- a/qga/commands-win32.c
+++ b/qga/commands-win32.c
@@ -2231,7 +2231,7 @@ static char *ga_get_win_name(OSVERSIONINFOEXW const 
*os_version, bool id)
 
 static char *ga_get_win_product_name(Error **errp)
 {
-HKEY key = NULL;
+HKEY key = INVALID_HANDLE_VALUE;
 DWORD size = 128;
 char *result = g_malloc0(size);
 LONG err = ERROR_SUCCESS;
@@ -2241,7 +2241,8 @@ static char *ga_get_win_product_name(Error **errp)
   &key);
 if (err != ERROR_SUCCESS) {
 error_setg_win32(errp, err, "failed to open registry key");
-goto fail;
+g_free(result);
+return NULL;
 }
 
 err = RegQueryValueExA(key, "ProductName", NULL, NULL,
@@ -2262,9 +2263,13 @@ static char *ga_get_win_product_name(Error **errp)
 goto fail;
 }
 
+RegCloseKey(key);
 return result;
 
 fail:
+if (key != INVALID_HANDLE_VALUE) {
+RegCloseKey(key);
+}
 g_free(result);
 return NULL;
 }
-- 
2.25.1




Re: [PATCH v6 11/11] qapi: make 'if' condition strings simple identifiers

2021-08-03 Thread Markus Armbruster
marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Change the 'if' condition strings to be C-agnostic and be simple
> identifiers.

This is less general, and that's okay, we're doing it for a purpose.
But the commit message should mention / explain all this.

> Signed-off-by: Marc-André Lureau 
> Reviewed-by: Stefan Hajnoczi 
> Tested-by: John Snow 
> ---

[...]

> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
> index f8718e201b..0c718e43c9 100644
> --- a/scripts/qapi/common.py
> +++ b/scripts/qapi/common.py
> @@ -218,7 +218,7 @@ def cgen_ifcond(ifcond: Union[str, Dict[str, Any]]) -> 
> str:
>  if not ifcond:
>  return ''
>  if isinstance(ifcond, str):
> -return ifcond
> +return 'defined(' + ifcond + ')'
>  
>  oper, operands = next(iter(ifcond.items()))
>  if oper == 'not':
> diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
> index d2bd52c49f..d355cbc8c1 100644
> --- a/scripts/qapi/expr.py
> +++ b/scripts/qapi/expr.py
> @@ -281,10 +281,10 @@ def check_if(expr: _JSONObject, info: QAPISourceInfo, 
> source: str) -> None:
>  
>  def _check_if(cond: Union[str, object]) -> None:
>  if isinstance(cond, str):
> -if not cond.strip():
> +if not cond.isidentifier():

This accepts *Python* identifiers:

$ python
Python 3.9.6 (default, Jul 16 2021, 00:00:00)
[...]
>>> 'André'.isidentifier()
True

These may or may not work for the languages we generate.  Wouldn't
restricting identifiers to something like /[A-Z][A-Z0-9_]*/ make more
sense?

>  raise QAPISemError(
>  info,
> -"'if' condition '%s' of %s makes no sense"
> +"'if' condition '%s' of %s is not a valid identifier"
>  % (cond, source))
>  return
>  
> diff --git a/tests/qapi-schema/alternate-branch-if-invalid.err 
> b/tests/qapi-schema/alternate-branch-if-invalid.err
> index d384929c51..03bad877a3 100644
> --- a/tests/qapi-schema/alternate-branch-if-invalid.err
> +++ b/tests/qapi-schema/alternate-branch-if-invalid.err
> @@ -1,2 +1,2 @@
>  alternate-branch-if-invalid.json: In alternate 'Alt':
> -alternate-branch-if-invalid.json:2: 'if' condition ' ' of 'data' member 
> 'branch' makes no sense
> +alternate-branch-if-invalid.json:2: 'if' condition ' ' of 'data' member 
> 'branch' is not a valid identifier

[...]




Re: [PATCH v6 00/11] qapi: untie 'if' conditions from C preprocessor

2021-08-03 Thread Markus Armbruster
marcandre.lur...@redhat.com writes:

> From: Marc-André Lureau 
>
> Hi,
>
> This series makes the 'if' conditions less liberal, by formalizing a simple
> expression tree based on bare boolean logic of configure option identifiers.
>
> (this allows expressing conditions in Rust in my QAPI-Rust PoC series)
>
> thanks

I like this overall.

The commit messages are rather terse in places.  I have a few questions,
I asked for a few minor tweaks, and I also noted a few possible
improvements that can be done on top.  I wonder whether we can drop
PATCH 04.

Let's discuss my findings, then decide whether we want a respin.

Thank you!




Re: Status of stable release effort ?

2021-08-03 Thread Daniel P . Berrangé
Ping, anyone ?

On Mon, Jul 26, 2021 at 11:13:41AM +0100, Daniel P. Berrangé wrote:
> We are currently in the final lead up to shipping the 6.1.0 release
> 
> AFAICT from git tags, the latest stable release was 5.0.1 in Sep 2020
> 
>https://gitlab.com/qemu-project/qemu/-/tags
> 
> There have been many patches sent to qemu-stable each month e.g.
> 
>   https://lists.nongnu.org/archive/html/qemu-stable/2021-06/threads.html
> 
> but no 5.1.1, 5.2.1 or 6.0.1 releases IIUC
> 
> Is QEMU stable still an active effort that we need to CC patches to ?


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH] hw/dma/pl330: Add memory region to replace default address_space_memory

2021-08-03 Thread Wen, Jianxian
From f780b0ee2ee36c562ab814915fff0e7217b25e63 Mon Sep 17 00:00:00 2001
From: Jianxian Wen 
Date: Tue, 3 Aug 2021 09:44:35 +0800
Subject: [PATCH] hw/dma/pl330: Add memory region to replace default
address_space_memory

PL330 needs a memory region property so that it can be connected to an SMMU's
IOMMU translate region, in order to support the SMMU.

Signed-off-by: Jianxian Wen 
---
hw/arm/exynos4210.c  |  3 +++
hw/arm/xilinx_zynq.c |  2 ++
hw/dma/pl330.c   | 24 
3 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
index 5c7a51bba..af0e4820d 100644
--- a/hw/arm/exynos4210.c
+++ b/hw/arm/exynos4210.c
@@ -171,8 +171,11 @@ static DeviceState *pl330_create(uint32_t base, 
qemu_or_irq *orgate,
 SysBusDevice *busdev;
 DeviceState *dev;
 int i;
+MemoryRegion *sysmem = get_system_memory();
 dev = qdev_new("pl330");
+object_property_set_link(OBJECT(dev), "memory",
+OBJECT(sysmem), &error_fatal);
 qdev_prop_set_uint8(dev, "num_events", nevents);
 qdev_prop_set_uint8(dev, "num_chnls",  8);
 qdev_prop_set_uint8(dev, "num_periph_req",  nreq);
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index 245af81bb..e0b3a73b9 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -312,6 +312,8 @@ static void zynq_init(MachineState *machine)
 sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[39-IRQ_OFFSET]);
 dev = qdev_new("pl330");
+object_property_set_link(OBJECT(dev), "memory",
+OBJECT(address_space_mem), &error_fatal);
 qdev_prop_set_uint8(dev, "num_chnls",  8);
 qdev_prop_set_uint8(dev, "num_periph_req",  4);
 qdev_prop_set_uint8(dev, "num_events",  16);
diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c
index 944ba296b..06747ca0b 100644
--- a/hw/dma/pl330.c
+++ b/hw/dma/pl330.c
@@ -269,6 +269,9 @@ struct PL330State {
 uint8_t num_faulting;
 uint8_t periph_busy[PL330_PERIPH_NUM];
+/* Memory region that DMA operation access */
+MemoryRegion *mem_mr;
+AddressSpace mem_as;
};
 #define TYPE_PL330 "pl330"
@@ -1108,7 +,8 @@ static inline const PL330InsnDesc 
*pl330_fetch_insn(PL330Chan *ch)
 uint8_t opcode;
 int i;
-dma_memory_read(&address_space_memory, ch->pc, &opcode, 1);
+address_space_read(&ch->parent->mem_as, ch->pc,
+MEMTXATTRS_UNSPECIFIED, &opcode, 1);
 for (i = 0; insn_desc[i].size; i++) {
 if ((opcode & insn_desc[i].opmask) == insn_desc[i].opcode) {
 return &insn_desc[i];
@@ -1122,7 +1126,8 @@ static inline void pl330_exec_insn(PL330Chan *ch, const 
PL330InsnDesc *insn)
 uint8_t buf[PL330_INSN_MAXSIZE];
 assert(insn->size <= PL330_INSN_MAXSIZE);
-dma_memory_read(&address_space_memory, ch->pc, buf, insn->size);
+address_space_read(&ch->parent->mem_as, ch->pc,
+MEMTXATTRS_UNSPECIFIED, buf, insn->size);
 insn->exec(ch, buf[0], &buf[1], insn->size - 1);
}
@@ -1186,7 +1191,8 @@ static int pl330_exec_cycle(PL330Chan *channel)
 if (q != NULL && q->len <= pl330_fifo_num_free(&s->fifo)) {
 int len = q->len - (q->addr & (q->len - 1));
-dma_memory_read(&address_space_memory, q->addr, buf, len);
+address_space_read(&s->mem_as, q->addr,
+MEMTXATTRS_UNSPECIFIED, buf, len);
 trace_pl330_exec_cycle(q->addr, len);
 if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
 pl330_hexdump(buf, len);
@@ -1217,7 +1223,8 @@ static int pl330_exec_cycle(PL330Chan *channel)
 fifo_res = pl330_fifo_get(&s->fifo, buf, len, q->tag);
 }
 if (fifo_res == PL330_FIFO_OK || q->z) {
-dma_memory_write(&address_space_memory, q->addr, buf, len);
+address_space_write(&s->mem_as, q->addr,
+MEMTXATTRS_UNSPECIFIED, buf, len);
 trace_pl330_exec_cycle(q->addr, len);
 if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
 pl330_hexdump(buf, len);
@@ -1562,6 +1569,12 @@ static void pl330_realize(DeviceState *dev, Error **errp)
   "dma", PL330_IOMEM_SIZE);
 sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
+if (!s->mem_mr) {
+error_setg(errp, "'mem_mr' link is not set");
+return;
+}
+address_space_init(&s->mem_as, s->mem_mr, "pl330-memory");
+
 s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pl330_exec_cycle_timer, s);
 s->cfg[0] = (s->mgr_ns_at_rst ? 0x4 : 0) |
@@ -1656,6 +1669,9 @@ static Property pl330_properties[] = {
 DEFINE_PROP_UINT8("rd_q_dep", PL330State, rd_q_dep, 16),
 DEFINE_PROP_UINT16("data_buffer_dep", PL330State, data_buffer_dep, 256),
+DEFINE_PROP_LINK("memory", PL330State, mem_mr,
+ TYPE_MEMORY_REGION, MemoryRegion *),
+
 DEFINE_PROP_END_OF_LIST(),
};
--


Re: [PATCH-for-6.1 v2 0/2] hw/sd/sdcard: Fix assertion accessing out-of-range addresses with CMD30

2021-08-03 Thread Alexander Bulekov
On 210803 0155, Philippe Mathieu-Daudé wrote:
> Fix an assertion reported by OSS-Fuzz, add corresponding qtest.
> 
> The change is (now) simple enough for the next rc.
> 
> Since v1:
> - Simplified/corrected following Peter's suggestion
> 
> Philippe Mathieu-Daudé (2):
>   hw/sd/sdcard: Document out-of-range addresses for SEND_WRITE_PROT
>   hw/sd/sdcard: Fix assertion accessing out-of-range addresses with
> CMD30
> 

Fuzzed this for 20 mins, based on the OSS-Fuzz corpus, without finding
anything.

./qemu-fuzz-i386 --fuzz-target=generic-fuzz-sdhci-v3 -jobs=4 -workers=4 \
-focus_function=sd_wpbits \
~/oss-fuzz/qemu_qemu-fuzz-i386-target-generic-fuzz-sdhci-v3/  

Tested-by: Alexander Bulekov 

Thanks!

>  hw/sd/sd.c |  9 -
>  tests/qtest/fuzz-sdcard-test.c | 36 ++
>  2 files changed, 44 insertions(+), 1 deletion(-)
> 
> -- 
> 2.31.1
> 



Re: 'make check-acceptance' eats lots of disk space and never cleans it up

2021-08-03 Thread Peter Maydell
On Tue, 3 Aug 2021 at 13:58, Cleber Rosa  wrote:
>
> On Tue, Aug 3, 2021 at 8:43 AM Peter Maydell  wrote:
> >
> > It looks like 'make check-acceptance' creates directories in
> > build/clang/tests/results which are huge and which it never
> > cleans up. For example one of my build directories (configured
> > just for arm targets) has over 350 'job-[timestamp]' directories,
> > many of which are 2.5GB or more in size.
> >
>
> Every "job-[timestamp]" directory is the result of an "avocado run"
> invocation, that is, one "make check-acceptance" command.
>
> > I assume most of this is artefacts (disk images etc) needed to
> > rerun the tests. That's useful to keep around so you can manually
> > run a test. However, we should be sharing this between runs, not
> > creating a fresh copy for every time check-acceptance is
> > run, surely ?
> >
>
> They contain results and files needed for debugging the results of
> tests, not artefacts needed to re-run them.  Everything that is
> shareable is in the "~/avocado/data/caches" directory.

This doesn't in practice seem to be the case. Picking a subdirectory
at random:

./build/clang/tests/results/job-2021-07-30T11.20-63bd0a6/test-results/tmp_dir4_a3m36o/091-tests_acceptance_machine_sparc64_sun4u.py_Sun4uMachine.test_sparc64_sun4u/day23

This contains (among other things) a vmlinux file which I assume is
the one we run on the guest. It looks to me like this is a directory
where we unzipped/untarred a downloaded file with the guest image.

And another:

./build/clang/tests/results/job-2021-07-30T11.20-63bd0a6/test-results/tmp_dirwowk1bzp/026-tests_acceptance_boot_linux_console.py_BootLinuxConsole.test_arm_cubieboard_initrd/

This seems to contain a rootfilesystem for some test or other,
with a boot/, lib/, usr/, etc.

These all look like artefacts to me, in the sense that they're
the same every time.

I notice that all these have 'tmp_dir*' directories in the paths. Is the
problem just that we're failing to clean up a tempdir in some situations?

thanks
-- PMM



Re: [RFC PATCH 2/3] tests/tcg/sha1: remove endian include

2021-08-03 Thread Warner Losh
On Tue, Aug 3, 2021, 5:02 AM Alex Bennée  wrote:

> This doesn't exist in BSD world and doesn't seem to be needed by
> either.
>

sys/endian.h is common. FreeBSD has endian.h, but others don't.
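
For reference, when a test really does need the byte-order helpers, the
portable spelling is usually a small guard along these lines (the exact set
of platform macros below is an assumption, not something this test needs):

    #if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
    #include <sys/endian.h>   /* the BSDs keep the byte-order helpers here */
    #else
    #include <endian.h>       /* glibc and friends */
    #endif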

Signed-off-by: Alex Bennée 
> Cc: Warner Losh 
>

Acked by: Warner Losh 

> ---
>  tests/tcg/multiarch/sha1.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/tests/tcg/multiarch/sha1.c b/tests/tcg/multiarch/sha1.c
> index 87bfbcdf52..0081bd7657 100644
> --- a/tests/tcg/multiarch/sha1.c
> +++ b/tests/tcg/multiarch/sha1.c
> @@ -43,7 +43,6 @@ void SHA1Init(SHA1_CTX* context);
>  void SHA1Update(SHA1_CTX* context, const unsigned char* data, uint32_t
> len);
>  void SHA1Final(unsigned char digest[20], SHA1_CTX* context);
>  /*  end of sha1.h  */
> -#include 
>
>  #define rol(value, bits) (((value) << (bits)) | ((value) >> (32 -
> (bits
>
> --
> 2.30.2
>
>


Re: [PATCH for-6.1? v2 3/7] job: @force parameter for job_cancel_sync{,_all}()

2021-08-03 Thread Kevin Wolf
Am 26.07.2021 um 16:46 hat Max Reitz geschrieben:
> Callers should be able to specify whether they want job_cancel_sync() to
> force-cancel the job or not.
> 
> In fact, almost all invocations do not care about consistency of the
> result and just want the job to terminate as soon as possible, so they
> should pass force=true.  The replication block driver is the exception.
> 
> This changes some iotest outputs, because quitting qemu while a mirror
> job is active will now lead to it being cancelled instead of completed,
> which is what we want.  (Cancelling a READY mirror job with force=false
> may take an indefinite amount of time, which we do not want when
> quitting.  If users want consistent results, they must have all jobs be
> done before they quit qemu.)
> 
> Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
> Signed-off-by: Max Reitz 
> Reviewed-by: Vladimir Sementsov-Ogievskiy 

> diff --git a/job.c b/job.c
> index e7a5d28854..9e971d64cf 100644
> --- a/job.c
> +++ b/job.c
> @@ -763,7 +763,12 @@ static void job_completed_txn_abort(Job *job)
>  if (other_job != job) {
>  ctx = other_job->aio_context;
>  aio_context_acquire(ctx);
> -job_cancel_async(other_job, false);
> +/*
> + * This is a transaction: If one job failed, no result will 
> matter.
> + * Therefore, pass force=true to terminate all other jobs as 
> quickly
> + * as possible.
> + */
> +job_cancel_async(other_job, true);
>  aio_context_release(ctx);
>  }
>  }

Sneaking in a hunk that is unrelated to what the commit message
promises? How naughty! :-)

(But I guess the change makes sense.)

Kevin




Re: [PATCH v4] failover: unregister ROM on unplug

2021-08-03 Thread Michael S. Tsirkin
On Wed, Jul 21, 2021 at 06:09:05PM +0200, Laurent Vivier wrote:
> The intent of failover is to allow migrating a VM with a VFIO
> networking card without disrupting the network operation, by switching
> to a virtio-net device during the migration.
> 
> This simple change allows testing failover with a simulated device
> like e1000e rather than a vfio device; even if it's useless in real
> life, it can help to debug failover.
> 
> This is interesting to developers that want to test failover on
> a system with no vfio device. Moreover it simplifies host networking
> configuration as we can use the same bridge for virtio-net and
> the other failover networking device.
> 
> Without this change the migration of a system configured with failover
> fails with:
> 
>   ...
>   -device virtio-net-pci,id=virtionet0,failover=on,...  \
>   -device e1000,failover_pair_id=virtionet0,... \
>   ...
> 
>   (qemu) migrate ...
> 
>   Unknown ramblock ":00:01.1:00.0/e1000e.rom", cannot accept migration
>   error while loading state for instance 0x0 of device 'ram'
>   load of migration failed: Invalid argument
> 
> This happens because QEMU correctly unregisters the interface vmstate but
> not the ROM one. This patch fixes that.
> 
> Signed-off-by: Laurent Vivier 


Build fails on qemu-system-m68k:

/usr/bin/ld: libqemu-m68k-softmmu.fa.p/hw_net_virtio-net.c.o: in function 
`virtio_net_handle_migration_primary':
/scm/qemu/build/../hw/net/virtio-net.c:3259: undefined reference to 
`pci_del_option_rom'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
make[1]: *** [Makefile:154: run-ninja] Error 1

It's not pretty to poke at pci from generic virtio.
Should we maybe wrap vmstate_unregister and pci_del_option_rom
to allow removing all migratable things related to the device
in one go somehow?
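
Something along those lines could look roughly like the sketch below. This
is only an illustration: the helper name is made up, and where it should
live (qdev core vs. hw/pci) is an open question.

    /* Hypothetical helper: unregister every migration-visible piece of
     * state for a device in one call, so callers like virtio-net's
     * failover code don't have to reach into PCI internals themselves.
     * Assumes the usual qdev/vmstate/pci headers. */
    void qdev_unregister_migration_state(DeviceState *dev)
    {
        vmstate_unregister(VMSTATE_IF(dev), qdev_get_vmsd(dev), dev);

        if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
            pci_del_option_rom(PCI_DEVICE(dev));
        }
    }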


> ---
> 
> Notes:
> v4:
>   export and use pci_del_option_rom()
> 
> v3:
>   remove useless space before comma
> 
> v2:
>   reset has_rom to false
>   update commit log message
> 
>  include/hw/pci/pci.h | 2 ++
>  hw/net/virtio-net.c  | 1 +
>  hw/pci/pci.c | 3 +--
>  3 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> index d0f4266e3725..84707034cbf8 100644
> --- a/include/hw/pci/pci.h
> +++ b/include/hw/pci/pci.h
> @@ -369,6 +369,8 @@ void pci_register_vga(PCIDevice *pci_dev, MemoryRegion 
> *mem,
>  void pci_unregister_vga(PCIDevice *pci_dev);
>  pcibus_t pci_get_bar_addr(PCIDevice *pci_dev, int region_num);
>  
> +void pci_del_option_rom(PCIDevice *pdev);
> +
>  int pci_add_capability(PCIDevice *pdev, uint8_t cap_id,
> uint8_t offset, uint8_t size,
> Error **errp);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 16d20cdee52a..d6f03633f1b3 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3256,6 +3256,7 @@ static void 
> virtio_net_handle_migration_primary(VirtIONet *n, MigrationState *s)
>  if (migration_in_setup(s) && !should_be_hidden) {
>  if (failover_unplug_primary(n, dev)) {
>  vmstate_unregister(VMSTATE_IF(dev), qdev_get_vmsd(dev), dev);
> +pci_del_option_rom(PCI_DEVICE(dev));
>  qapi_event_send_unplug_primary(dev->id);
>  qatomic_set(&n->failover_primary_hidden, true);
>  } else {
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index 23d2ae2ab232..c210d92b5ba7 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -228,7 +228,6 @@ static PCIBus *pci_find_bus_nr(PCIBus *bus, int bus_num);
>  static void pci_update_mappings(PCIDevice *d);
>  static void pci_irq_handler(void *opaque, int irq_num, int level);
>  static void pci_add_option_rom(PCIDevice *pdev, bool is_default_rom, Error 
> **);
> -static void pci_del_option_rom(PCIDevice *pdev);
>  
>  static uint16_t pci_default_sub_vendor_id = PCI_SUBVENDOR_ID_REDHAT_QUMRANET;
>  static uint16_t pci_default_sub_device_id = PCI_SUBDEVICE_ID_QEMU;
> @@ -2429,7 +2428,7 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
> is_default_rom,
>  pci_register_bar(pdev, PCI_ROM_SLOT, 0, &pdev->rom);
>  }
>  
> -static void pci_del_option_rom(PCIDevice *pdev)
> +void pci_del_option_rom(PCIDevice *pdev)
>  {
>  if (!pdev->has_rom)
>  return;
> -- 
> 2.31.1




[PATCH 1/2] doc: Clarify serial parameters

2021-08-03 Thread Axel Heider

There is a difference between 'null' and 'none'. Add a sentence to
highlight this, so this does not get mixed up.

Signed-off-by: Axel Heider 
---
 qemu-options.hx | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 83aa59a920..e3f256fa72 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -3702,7 +3702,8 @@ SRST
 [Linux only] Pseudo TTY (a new PTY is automatically allocated)

 ``none``
-No device is allocated.
+No serial devices are allocated at all. Use ``null`` to disable a
+specific serial device.

 ``null``
 void device
--
2.17.1
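
As a concrete illustration of the difference documented above (the machine
type and the second backend are arbitrary examples, not part of the patch):

    # the first serial port exists but is a void device, the second goes to stdio
    qemu-system-x86_64 -serial null -serial stdio

    # no serial devices are created at all
    qemu-system-x86_64 -serial none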



[PATCH 2/2] doc: Remove trailing spaces

2021-08-03 Thread Axel Heider

Signed-off-by: Axel Heider 
---
 qemu-options.hx | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index e3f256fa72..ed91246114 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -244,15 +244,15 @@ DEF("numa", HAS_ARG, QEMU_OPTION_numa,
 QEMU_ARCH_ALL)
 SRST
 ``-numa 
node[,mem=size][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
-  \
+  \
 ``-numa 
node[,memdev=id][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
   \
 ``-numa dist,src=source,dst=destination,val=distance``
-  \
+  \
 ``-numa cpu,node-id=node[,socket-id=x][,core-id=y][,thread-id=z]``
-  \
+  \
 ``-numa 
hmat-lb,initiator=node,target=node,hierarchy=hierarchy,data-type=tpye[,latency=lat][,bandwidth=bw]``
-  \
+  \
 ``-numa 
hmat-cache,node-id=node,size=size,level=level[,associativity=str][,policy=str][,line=size]``
 Define a NUMA node and assign RAM and VCPUs to it. Set the NUMA
 distance from a source node to a destination node. Set the ACPI
@@ -447,7 +447,7 @@ DEF("global", HAS_ARG, QEMU_OPTION_global,
 QEMU_ARCH_ALL)
 SRST
 ``-global driver.prop=value``
-  \
+  \
 ``-global driver=driver,property=property,value=value``
 Set default value of driver's property prop to value, e.g.:

@@ -1026,9 +1026,9 @@ SRST
 ``-hda file``
   \
 ``-hdb file``
-  \
+  \
 ``-hdc file``
-  \
+  \
 ``-hdd file``
 Use file as hard disk 0, 1, 2 or 3 image (see the :ref:`disk images`
 chapter in the System Emulation Users Guide).
@@ -1518,7 +1518,7 @@ DEF("fsdev", HAS_ARG, QEMU_OPTION_fsdev,

 SRST
 ``-fsdev local,id=id,path=path,security_model=security_model 
[,writeout=writeout][,readonly=on][,fmode=fmode][,dmode=dmode] 
[,throttling.option=value[,throttling.option=value[,...]]]``
-  \
+  \
 ``-fsdev proxy,id=id,socket=socket[,writeout=writeout][,readonly=on]``
   \
 ``-fsdev proxy,id=id,sock_fd=sock_fd[,writeout=writeout][,readonly=on]``
@@ -1639,9 +1639,9 @@ DEF("virtfs", HAS_ARG, QEMU_OPTION_virtfs,

 SRST
 ``-virtfs local,path=path,mount_tag=mount_tag 
,security_model=security_model[,writeout=writeout][,readonly=on] 
[,fmode=fmode][,dmode=dmode][,multidevs=multidevs]``
-  \
+  \
 ``-virtfs proxy,socket=socket,mount_tag=mount_tag 
[,writeout=writeout][,readonly=on]``
-  \
+  \
 ``-virtfs proxy,sock_fd=sock_fd,mount_tag=mount_tag 
[,writeout=writeout][,readonly=on]``
   \
 ``-virtfs synth,mount_tag=mount_tag``
@@ -3873,10 +3873,10 @@ DEF("mon", HAS_ARG, QEMU_OPTION_mon, \
 "-mon [chardev=]name[,mode=readline|control][,pretty[=on|off]]\n", 
QEMU_ARCH_ALL)
 SRST
 ``-mon [chardev=]name[,mode=readline|control][,pretty[=on|off]]``
-Setup monitor on chardev name. ``mode=control`` configures
+Setup monitor on chardev name. ``mode=control`` configures
 a QMP monitor (a JSON RPC-style protocol) and it is not the
 same as HMP, the human monitor that has a "(qemu)" prompt.
-``pretty`` is only valid when ``mode=control``,
+``pretty`` is only valid when ``mode=control``,
 turning on JSON pretty printing to ease
 human reading and debugging.
 ERST
@@ -3937,7 +3937,7 @@ DEF("overcommit", HAS_ARG, QEMU_OPTION_overcommit,
 QEMU_ARCH_ALL)
 SRST
 ``-overcommit mem-lock=on|off``
-  \
+  \
 ``-overcommit cpu-pm=on|off``
 Run qemu with hints about host resource overcommit. The default is
 to assume that host overcommits all resources.
@@ -4323,7 +4323,7 @@ DEF("incoming", HAS_ARG, QEMU_OPTION_incoming, \
 QEMU_ARCH_ALL)
 SRST
 ``-incoming tcp:[host]:port[,to=maxport][,ipv4=on|off][,ipv6=on|off]``
-  \
+  \
 ``-incoming rdma:host:port[,ipv4=on|off][,ipv6=on|off]``
 Prepare for incoming migration, listen on a given tcp port.

@@ -5072,7 +5072,7 @@ SRST
[...]

 ``-object 
secret,id=id,data=string,format=raw|base64[,keyid=secretid,iv=string]``
-  \
+  \
 ``-object 
secret,id=id,file=filename,format=raw|base64[,keyid=secretid,iv=string]``
 Defines a secret to store a password, encryption key, or some
 other sensitive data. The sensitive data can either be passed
--
2.17.1



Re: [PATCH] hw/i386/ich9: add comment explaining an argument to acpi_pcihp_reset call

2021-08-03 Thread Michael S. Tsirkin
On Tue, Jul 27, 2021 at 10:15:46AM +0530, Ani Sinha wrote:
> The acpi_pcihp_reset() call from ich9/pm_reset() passes an unconditional
> truth value as the second argument. Added a comment here to explain the
> reason why the argument is being passed unconditionally.
> 
> Signed-off-by: Ani Sinha 
> ---
>  hw/acpi/ich9.c | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
> index 778e27b659..b2e3c46075 100644
> --- a/hw/acpi/ich9.c
> +++ b/hw/acpi/ich9.c
> @@ -281,6 +281,11 @@ static void pm_reset(void *opaque)
>  pm->smi_en_wmask = ~0;
>  
>  if (pm->use_acpi_hotplug_bridge) {
> +/*
> + * PCI Express root buses do not support hot-plug, for
> + * details see docs/pcie.txt. Hence, the second argument
> + * is unconditionally true.
> + */
>  acpi_pcihp_reset(&pm->acpi_pci_hotplug, true);
>  }


I don't see this comment as especially illuminating, sorry.

>  
> -- 
> 2.25.1




[PULL v4 0/1] Libslirp patches

2021-08-03 Thread marcandre . lureau
From: Marc-André Lureau 

The following changes since commit 7f1cab9c628a798ae2607940993771e6300e9e00:

  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into 
staging (2021-08-02 17:21:50 +0100)

are available in the Git repository at:

  g...@github.com:elmarco/qemu.git tags/libslirp-pull-request

for you to fetch changes up to 43f547b73dfaf108c9aaacb0b36200e2e7a200f1:

  Update libslirp to v4.6.1 (2021-08-03 16:07:22 +0400)


Update libslirp

Hi,

v4:
 - drop subproject patch
 - fix OSX linking issue

v3:
 - rebased
 - (checked compilation with P. Maydell extra-cflags reported failure & gitlab 
CI)

v2:
 - fix unused variables on macos
 - fork_exec_child_setup: improve signal handling



Marc-André Lureau (1):
  Update libslirp to v4.6.1

 meson.build | 2 ++
 slirp   | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

-- 
2.32.0.264.g75ae10bc75





Re: [Question] Reduce the msix_load cost for VFIO device

2021-08-03 Thread Alex Williamson
On Tue, 3 Aug 2021 16:43:07 +0800
"Longpeng (Mike, Cloud Infrastructure Service Product Dept.)"
 wrote:

> Hi Alex,
> 
> We found that msix_load() will cost 40~50ms if the VF has 60+ interrupts;
> the following code costs too much for each interrupt:
> 
> msix_load:
>   for (vector = 0; vector < 60; vector++)
> msix_handle_mask_update
>   vfio_msix_vector_do_use
> vfio_add_kvm_msi_virq
>   kvm_irqchip_add_msi_route
> kvm_irqchip_commit_routes <-- cost 0.8ms each time
> 
> In irq remapping mode, the VF interrupts are not routed through KVM irqchip

I'm not sure what this means.  Your MSIX interrupts are going through
QEMU anyway?  Why?

> in fact, so maybe we can reduce this cost by setting "x-no-kvm-msix=true", right?
> Are there any risks if we do in this way ?

You're obviously free to configure your device this way, but the
expectation is that any sort of startup latency is more than offset by
improved runtime latency through the KVM route.  This option is usually
reserved for debugging, when we want to see all interaction with the
device in QEMU.

If there's a case where we're not detecting that a KVM route is
ineffective, then we should figure out how to detect that and skip this
code, but again the expectation is that the KVM route is worthwhile.

If this is specifically about kvm_irqchip_commit_routes(), maybe the
setup path needs a way to batch multiple routes and defer the commit,
if that's possible.  Thanks,

Alex
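
A rough sketch of that batching idea, for illustration only: the _nocommit
variant below is hypothetical and does not exist in QEMU today; the point is
simply to pay the KVM_SET_GSI_ROUTING cost once instead of once per vector.

    /* Hypothetical sketch: add every MSI route first, then commit the
     * routing table a single time at the end. */
    static void msix_routes_batch_setup(PCIDevice *pdev, int nr_vectors,
                                        int *virq)
    {
        int vector;

        for (vector = 0; vector < nr_vectors; vector++) {
            virq[vector] = kvm_irqchip_add_msi_route_nocommit(kvm_state,
                                                              vector, pdev);
        }
        kvm_irqchip_commit_routes(kvm_state);  /* one ioctl, not nr_vectors */
    }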




[PULL v4 1/1] Update libslirp to v4.6.1

2021-08-03 Thread marcandre . lureau
From: Marc-André Lureau 

Switch from stable-4.2 branch to upstream v4.6.1 release + fixes.

## [Unreleased]

### Fixed

 - Haiku fixes. !98 !99
 - Fix a minor DHCP regression introduced in 4.6.0. !97

## [4.6.1] - 2021-06-18

### Fixed

 - Fix DHCP regression introduced in 4.6.0. !95

## [4.6.0] - 2021-06-14

### Added

 - mbuf: Add debugging helpers for allocation. !90

### Changed

 -  Revert "Set macOS deployment target to macOS 10.4". !93

### Fixed

 - mtod()-related buffer overflows (CVE-2021-3592 #44, CVE-2021-3593 #45,
   CVE-2021-3594 #47, CVE-2021-3595 #46).
 - poll_fd: add missing fd registration for UDP and ICMP
 - ncsi: make ncsi_calculate_checksum work with unaligned data. !89
 - Various typos and doc fixes. !88

## [4.5.0] - 2021-05-18

### Added

 - IPv6 forwarding. !62 !75 !77
 - slirp_neighbor_info() to dump the ARP/NDP tables. !71

### Changed

 - Lazy guest address resolution for IPv6. !81
 - Improve signal handling when spawning a child. !61
 - Set macOS deployment target to macOS 10.4. !72
 - slirp_add_hostfwd: Ensure all error paths set errno. !80
 - More API documentation.

### Fixed

 - Assertion failure on unspecified IPv6 address. !86
 - Disable polling for PRI on MacOS, fixing some closing streams issues. !73
 - Various memory leak fixes on fastq/batchq. !68
 - Memory leak on IPv6 fast-send. !67
 - Slow socket response on Windows. !64
 - Misc build and code cleanups. !60 !63 !76 !79 !84

## [4.4.0] - 2020-12-02

### Added

 - udp, udp6, icmp: handle TTL value. !48
 - Enable forwarding ICMP errors. !49
 - Add DNS resolving for iOS. !54

### Changed

 - Improve meson subproject() support. !53
 - Removed Makefile-based build system. !56

### Fixed

 - socket: consume empty packets. !55
 - check pkt_len before reading protocol header (CVE-2020-29129). !57
 - ip_stripoptions use memmove (fixes undefined behaviour). !47
 - various Coverity-related changes/fixes.

## [4.3.1] - 2020-07-08

### Changed

 - A silent truncation could occur in `slirp_fmt()`, which will now print a
   critical message. See also #22.

### Fixed

 - CVE-2020-10756 - Drop bogus IPv6 messages that could lead to data leakage.
   See !44 and !42.
 - Fix win32 builds by using the SLIRP_PACKED definition.
 - Various coverity scan errors fixed. !41
 - Fix new GCC warnings. !43

## [4.3.0] - 2020-04-22

### Added

 - `SLIRP_VERSION_STRING` macro, with the git sha suffix when building from git
 - `SlirpConfig.disable_dns`, to disable DNS redirection #16

### Changed

 - `slirp_version_string()` now has the git sha suffix when building from git
 - Limit DNS redirection to port 53 #16

### Fixed

 - Fix build regression with mingw & NetBSD
 - Fix use-after-free in `ip_reass()` (CVE-2020-1983)

Signed-off-by: Marc-André Lureau 
Reviewed-by: Doug Evans 
---
 meson.build | 2 ++
 slirp   | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index f2e148eaf9..af9bbb83db 100644
--- a/meson.build
+++ b/meson.build
@@ -1824,6 +1824,8 @@ if have_system
 slirp_deps = []
 if targetos == 'windows'
   slirp_deps = cc.find_library('iphlpapi')
+elif targetos == 'darwin'
+  slirp_deps = cc.find_library('resolv')
 endif
 slirp_conf = configuration_data()
 slirp_conf.set('SLIRP_MAJOR_VERSION', 
meson.project_version().split('.')[0])
diff --git a/slirp b/slirp
index 8f43a99191..a88d9ace23 160000
--- a/slirp
+++ b/slirp
@@ -1 +1 @@
-Subproject commit 8f43a99191afb47ca3f3c6972f6306209f367ece
+Subproject commit a88d9ace234a24ce1c17189642ef9104799425e0
-- 
2.32.0.264.g75ae10bc75




Re: [RFC PATCH 2/3] tests/tcg/sha1: remove endian include

2021-08-03 Thread Warner Losh
On Tue, Aug 3, 2021 at 7:55 AM Warner Losh  wrote:

>
>
> On Tue, Aug 3, 2021, 5:02 AM Alex Bennée  wrote:
>
>> This doesn't exist in BSD world and doesn't seem to be needed by
>> either.
>>
>
> sys/endian.h is common. FreeBSD has endian.h, but others don't.
>
> Signed-off-by: Alex Bennée 
>> Cc: Warner Losh 
>>
>
> Acked by: Warner Losh 
>

On second thought, this is

Reviewed by: Warner Losh 

since I know the change is good.

Warner


> ---
>>  tests/tcg/multiarch/sha1.c | 1 -
>>  1 file changed, 1 deletion(-)
>>
>> diff --git a/tests/tcg/multiarch/sha1.c b/tests/tcg/multiarch/sha1.c
>> index 87bfbcdf52..0081bd7657 100644
>> --- a/tests/tcg/multiarch/sha1.c
>> +++ b/tests/tcg/multiarch/sha1.c
>> @@ -43,7 +43,6 @@ void SHA1Init(SHA1_CTX* context);
>>  void SHA1Update(SHA1_CTX* context, const unsigned char* data, uint32_t
>> len);
>>  void SHA1Final(unsigned char digest[20], SHA1_CTX* context);
>>  /*  end of sha1.h  */
>> -#include 
>>
>>  #define rol(value, bits) (((value) << (bits)) | ((value) >> (32 -
>> (bits
>>
>> --
>> 2.30.2
>>
>>


[PULL 0/3] s390x fixes

2021-08-03 Thread Thomas Huth
 Hi Peter!

The following changes since commit 7f1cab9c628a798ae2607940993771e6300e9e00:

  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into 
staging (2021-08-02 17:21:50 +0100)

are available in the Git repository at:

  https://gitlab.com/thuth/qemu.git tags/pull-request-2021-08-03

for you to fetch changes up to 50e36dd61652a4a4f2af245655ed3ca08ef0a3ed:

  tests/tcg: Test that compare-and-trap raises SIGFPE (2021-08-03 15:17:38 
+0200)


* Fixes for SIGILL and SIGFPE of the s390x linux-user target


Ilya Leoshkevich (1):
  target/s390x: Fix SIGILL and SIGFPE psw.addr reporting

Jonathan Albrecht (2):
  linux-user/s390x: signal with SIGFPE on compare-and-trap
  tests/tcg: Test that compare-and-trap raises SIGFPE

 linux-user/s390x/cpu_loop.c |  66 +-
 tests/tcg/s390x/Makefile.target |   2 +-
 tests/tcg/s390x/trap.c  | 102 
 3 files changed, 148 insertions(+), 22 deletions(-)
 create mode 100644 tests/tcg/s390x/trap.c




[PULL 1/3] target/s390x: Fix SIGILL and SIGFPE psw.addr reporting

2021-08-03 Thread Thomas Huth
From: Ilya Leoshkevich 

For SIGILL, SIGFPE and SIGTRAP the PSW must point after the
instruction, and at the instruction for other signals. Currently under
qemu-user for SIGILL and SIGFPE it points at the instruction.

Fix by advancing psw.addr for these signals.

Co-developed-by: Ulrich Weigand 
Signed-off-by: Ilya Leoshkevich 
Reviewed-by: David Hildenbrand 
Buglink: https://gitlab.com/qemu-project/qemu/-/issues/319
Message-Id: <20210705210434.45824-2-...@linux.ibm.com>
Signed-off-by: Thomas Huth 
---
 linux-user/s390x/cpu_loop.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/linux-user/s390x/cpu_loop.c b/linux-user/s390x/cpu_loop.c
index f2d1215fb1..22f2e89c62 100644
--- a/linux-user/s390x/cpu_loop.c
+++ b/linux-user/s390x/cpu_loop.c
@@ -64,7 +64,13 @@ void cpu_loop(CPUS390XState *env)
 case EXCP_DEBUG:
 sig = TARGET_SIGTRAP;
 n = TARGET_TRAP_BRKPT;
-goto do_signal_pc;
+/*
+ * For SIGTRAP the PSW must point after the instruction, which it
+ * already does thanks to s390x_tr_tb_stop(). si_addr doesn't need
+ * to be filled.
+ */
+addr = 0;
+goto do_signal;
 case EXCP_PGM:
 n = env->int_pgm_code;
 switch (n) {
@@ -132,6 +138,10 @@ void cpu_loop(CPUS390XState *env)
 
 do_signal_pc:
 addr = env->psw.addr;
+/*
+ * For SIGILL and SIGFPE the PSW must point after the instruction.
+ */
+env->psw.addr += env->int_pgm_ilen;
 do_signal:
 info.si_signo = sig;
 info.si_errno = 0;
-- 
2.27.0




[PULL 3/3] tests/tcg: Test that compare-and-trap raises SIGFPE

2021-08-03 Thread Thomas Huth
From: Jonathan Albrecht 

Signed-off-by: Jonathan Albrecht 
Message-Id: <20210709160459.4962-3-jonathan.albre...@linux.vnet.ibm.com>
Signed-off-by: Thomas Huth 
---
 tests/tcg/s390x/Makefile.target |   2 +-
 tests/tcg/s390x/trap.c  | 102 
 2 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 tests/tcg/s390x/trap.c

diff --git a/tests/tcg/s390x/Makefile.target b/tests/tcg/s390x/Makefile.target
index 5d3de1b27a..bd084c7840 100644
--- a/tests/tcg/s390x/Makefile.target
+++ b/tests/tcg/s390x/Makefile.target
@@ -8,4 +8,4 @@ TESTS+=exrl-trtr
 TESTS+=pack
 TESTS+=mvo
 TESTS+=mvc
-
+TESTS+=trap
diff --git a/tests/tcg/s390x/trap.c b/tests/tcg/s390x/trap.c
new file mode 100644
index 00..d4c61c7f52
--- /dev/null
+++ b/tests/tcg/s390x/trap.c
@@ -0,0 +1,102 @@
+/*
+ * Copyright 2021 IBM Corp.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static void error1(const char *filename, int line, const char *fmt, ...)
+{
+va_list ap;
+va_start(ap, fmt);
+fprintf(stderr, "%s:%d: ", filename, line);
+vfprintf(stderr, fmt, ap);
+fprintf(stderr, "\n");
+va_end(ap);
+exit(1);
+}
+
+static int __chk_error(const char *filename, int line, int ret)
+{
+if (ret < 0) {
+error1(filename, line, "%m (ret=%d, errno=%d/%s)",
+   ret, errno, strerror(errno));
+}
+return ret;
+}
+
+#define error(fmt, ...) error1(__FILE__, __LINE__, fmt, ## __VA_ARGS__)
+
+#define chk_error(ret) __chk_error(__FILE__, __LINE__, (ret))
+
+int sigfpe_count;
+int sigill_count;
+
+static void sig_handler(int sig, siginfo_t *si, void *puc)
+{
+if (sig == SIGFPE) {
+if (si->si_code != 0) {
+error("unexpected si_code: 0x%x != 0", si->si_code);
+}
+++sigfpe_count;
+return;
+}
+
+if (sig == SIGILL) {
+++sigill_count;
+return;
+}
+
+error("unexpected signal 0x%x\n", sig);
+}
+
+int main(int argc, char **argv)
+{
+sigfpe_count = sigill_count = 0;
+
+struct sigaction act;
+
+/* Set up SIG handler */
+act.sa_sigaction = sig_handler;
+sigemptyset(&act.sa_mask);
+act.sa_flags = SA_SIGINFO;
+chk_error(sigaction(SIGFPE, &act, NULL));
+chk_error(sigaction(SIGILL, &act, NULL));
+
+uint64_t z = 0x0ull;
+uint64_t lz = 0xffffffffffffffffull;
+asm volatile (
+"lg %%r13,%[lz]\n"
+"cgitne %%r13,0\n" /* SIGFPE */
+"lg %%r13,%[z]\n"
+"cgitne %%r13,0\n" /* no trap */
+"nopr\n"
+"lg %%r13,%[lz]\n"
+"citne %%r13,0\n" /* SIGFPE */
+"lg %%r13,%[z]\n"
+"citne %%r13,0\n" /* no trap */
+"nopr\n"
+:
+: [z] "m" (z), [lz] "m" (lz)
+: "memory", "r13");
+
+if (sigfpe_count != 2) {
+error("unexpected SIGFPE count: %d != 2", sigfpe_count);
+}
+if (sigill_count != 0) {
+error("unexpected SIGILL count: %d != 0", sigill_count);
+}
+
+printf("PASS\n");
+return 0;
+}
-- 
2.27.0




[PULL 2/3] linux-user/s390x: signal with SIGFPE on compare-and-trap

2021-08-03 Thread Thomas Huth
From: Jonathan Albrecht 

Currently when a compare-and-trap instruction is executed, qemu will
always raise a SIGILL signal. On real hardware, a SIGFPE is raised.

Change the PGM_DATA case in cpu_loop to follow the behavior in
linux kernel /arch/s390/kernel/traps.c.
 * Only raise SIGILL if DXC == 0
 * If DXC matches a non-simulated IEEE exception, raise SIGFPE with
   correct si_code
 * Raise SIGFPE with si_code == 0 for everything else

When applied on 20210705210434.45824-2-...@linux.ibm.com, this fixes
crashes in the java jdk such as the linked bug.

Signed-off-by: Jonathan Albrecht 
Reviewed-by: Richard Henderson 
Buglink: https://bugs.launchpad.net/qemu/+bug/1920913
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/319
Message-Id: <20210709160459.4962-2-jonathan.albre...@linux.vnet.ibm.com>
Signed-off-by: Thomas Huth 
---
 linux-user/s390x/cpu_loop.c | 54 +++--
 1 file changed, 34 insertions(+), 20 deletions(-)

diff --git a/linux-user/s390x/cpu_loop.c b/linux-user/s390x/cpu_loop.c
index 22f2e89c62..6a69a6dd26 100644
--- a/linux-user/s390x/cpu_loop.c
+++ b/linux-user/s390x/cpu_loop.c
@@ -25,6 +25,35 @@
 /* s390x masks the fault address it reports in si_addr for SIGSEGV and SIGBUS 
*/
 #define S390X_FAIL_ADDR_MASK -4096LL
 
+static int get_pgm_data_si_code(int dxc_code)
+{
+switch (dxc_code) {
+/* Non-simulated IEEE exceptions */
+case 0x80:
+return TARGET_FPE_FLTINV;
+case 0x40:
+return TARGET_FPE_FLTDIV;
+case 0x20:
+case 0x28:
+case 0x2c:
+return TARGET_FPE_FLTOVF;
+case 0x10:
+case 0x18:
+case 0x1c:
+return TARGET_FPE_FLTUND;
+case 0x08:
+case 0x0c:
+return TARGET_FPE_FLTRES;
+}
+/*
+ * Non-IEEE and simulated IEEE:
+ * Includes compare-and-trap, quantum exception, etc.
+ * Simulated IEEE are included here to match current
+ * s390x linux kernel.
+ */
+return 0;
+}
+
 void cpu_loop(CPUS390XState *env)
 {
 CPUState *cs = env_cpu(env);
@@ -106,29 +135,14 @@ void cpu_loop(CPUS390XState *env)
 
 case PGM_DATA:
 n = (env->fpc >> 8) & 0xff;
-if (n == 0xff) {
-/* compare-and-trap */
+if (n == 0) {
 goto do_sigill_opn;
-} else {
-/* An IEEE exception, simulated or otherwise.  */
-if (n & 0x80) {
-n = TARGET_FPE_FLTINV;
-} else if (n & 0x40) {
-n = TARGET_FPE_FLTDIV;
-} else if (n & 0x20) {
-n = TARGET_FPE_FLTOVF;
-} else if (n & 0x10) {
-n = TARGET_FPE_FLTUND;
-} else if (n & 0x08) {
-n = TARGET_FPE_FLTRES;
-} else {
-/* ??? Quantum exception; BFP, DFP error.  */
-goto do_sigill_opn;
-}
-sig = TARGET_SIGFPE;
-goto do_signal_pc;
 }
 
+sig = TARGET_SIGFPE;
+n = get_pgm_data_si_code(n);
+goto do_signal_pc;
+
 default:
 fprintf(stderr, "Unhandled program exception: %#x\n", n);
 cpu_dump_state(cs, stderr, 0);
-- 
2.27.0




Re: [PATCH for-6.1? v2 5/7] job: Add job_cancel_requested()

2021-08-03 Thread Kevin Wolf
Am 26.07.2021 um 16:46 hat Max Reitz geschrieben:
> Most callers of job_is_cancelled() actually want to know whether the job
> is on its way to immediate termination.  For example, we refuse to pause
> jobs that are cancelled; but this only makes sense for jobs that are
> really actually cancelled.
> 
> A mirror job that is cancelled during READY with force=false should
> absolutely be allowed to pause.  This "cancellation" (which is actually
> a kind of completion) may take an indefinite amount of time, and so
> should behave like any job during normal operation.  For example, with
> on-target-error=stop, the job should stop on write errors.  (In
> contrast, force-cancelled jobs should not get write errors, as they
> should just terminate and not do further I/O.)
> 
> Therefore, redefine job_is_cancelled() to only return true for jobs that
> are force-cancelled (which as of HEAD^ means any job that interprets the
> cancellation request as a request for immediate termination), and add
> job_cancel_requested() as the general variant, which returns true for any
> jobs which have been requested to be cancelled, whether it be
> immediately or after an arbitrarily long completion phase.
> 
> Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
> Signed-off-by: Max Reitz 
> ---
>  include/qemu/job.h |  8 +++-
>  block/mirror.c | 10 --
>  job.c  |  7 ++-
>  3 files changed, 17 insertions(+), 8 deletions(-)
> 
> diff --git a/include/qemu/job.h b/include/qemu/job.h
> index 8aa90f7395..032edf3c5f 100644
> --- a/include/qemu/job.h
> +++ b/include/qemu/job.h
> @@ -436,9 +436,15 @@ const char *job_type_str(const Job *job);
>  /** Returns true if the job should not be visible to the management layer. */
>  bool job_is_internal(Job *job);
>  
> -/** Returns whether the job is scheduled for cancellation. */
> +/** Returns whether the job is being cancelled. */
>  bool job_is_cancelled(Job *job);
>  
> +/**
> + * Returns whether the job is scheduled for cancellation (at an
> + * indefinite point).
> + */
> +bool job_cancel_requested(Job *job);

I don't think non-force blockdev-cancel for mirror should actually be
considered cancellation, so what is the question that this function
answers?

"Is this a cancelled job, or a mirror block job that is supposed to
complete soon, but only if it doesn't switch over the users to the
target on completion"?

Is this ever a reasonable question to ask, except maybe inside the
mirror implementation itself?

job_complete() is the only function outside of mirror that seems to use
it. But even there, it feels wrong to make a difference. Either we
accept redundant completion requests, or we don't. It doesn't really
matter how the job reconfigures the graph on completion. (Also, I feel
this should really have been part of the state machine, but I'm not sure
if we want to touch it now...)

Kevin
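
For reference, the split described in the commit message boils down to
roughly this on the job.c side (a simplified sketch, not the verbatim patch):

    bool job_is_cancelled(Job *job)
    {
        /* true only for force-cancelled jobs heading for immediate termination */
        return job->force_cancel;
    }

    bool job_cancel_requested(Job *job)
    {
        /* true for any cancel request, including mirror's soft cancel after READY */
        return job->cancelled;
    }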




Re: 'make check-acceptance' eats lots of disk space and never cleans it up

2021-08-03 Thread Cleber Rosa
On Tue, Aug 3, 2021 at 9:47 AM Peter Maydell  wrote:
>
> On Tue, 3 Aug 2021 at 13:58, Cleber Rosa  wrote:
> >
> > On Tue, Aug 3, 2021 at 8:43 AM Peter Maydell  
> > wrote:
> > >
> > > It looks like 'make check-acceptance' creates directories in
> > > build/clang/tests/results which are huge and which it never
> > > cleans up. For example one of my build directories (configured
> > > just for arm targets) has over 350 'job-[timestamp]' directories,
> > > many of which are 2.5GB or more in size.
> > >
> >
> > Every "job-[timestamp]" directory is the result of an "avocado run"
> > invocation, that is, one "make check-acceptance" command.
> >
> > > I assume most of this is artefacts (disk images etc) needed to
> > > rerun the tests. That's useful to keep around so you can manually
> > > run a test. However, we should be sharing this between runs, not
> > > creating a fresh copy for every time check-acceptance is
> > > run, surely ?
> > >
> >
> > They contain results and files needed for debugging the results of
> > tests, not artefacts needed to re-run them.  Everything that is
> > shareable is in the "~/avocado/data/caches" directory.
>
> This doesn't in practice seem to be the case. Picking a subdirectory
> at random:
>
> ./build/clang/tests/results/job-2021-07-30T11.20-63bd0a6/test-results/tmp_dir4_a3m36o/091-tests_acceptance_machine_sparc64_sun4u.py_Sun4uMachine.test_sparc64_sun4u/day23
>
> This contains (among other things) a vmlinux file which I assume is
> the one we run on the guest. It looks to me like this is a directory
> where we unzipped/untarred a downloaded file with the guest image.
>
> And another:
>
> ./build/clang/tests/results/job-2021-07-30T11.20-63bd0a6/test-results/tmp_dirwowk1bzp/026-tests_acceptance_boot_linux_console.py_BootLinuxConsole.test_arm_cubieboard_initrd/
>
> This seems to contain a rootfilesystem for some test or other,
> with a boot/, lib/, usr/, etc.
>
> These all look like artefacts to me, in the sense that they're
> the same every time.
>
> I notice that all these have 'tmp_dir*' directories in the paths. Is the
> problem just that we're failing to clean up a tempdir in some situations?
>

These are all directories meant to be temporary (the name gives it
away) and meant to be cleaned up.  You actually found a bug in the
"avocado_qemu.Test" class that is *not* calling the base
"avocado.Test" class tearDown().  It's a trivial one liner fix:

---

diff --git a/tests/acceptance/avocado_qemu/__init__.py
b/tests/acceptance/avocado_qemu/__init__.py
index 2c4fef3e14..1e807e2e55 100644
--- a/tests/acceptance/avocado_qemu/__init__.py
+++ b/tests/acceptance/avocado_qemu/__init__.py
@@ -276,6 +276,7 @@ def tearDown(self):
for vm in self._vms.values():
vm.shutdown()
self._sd = None
+super(Test, self).tearDown()

def fetch_asset(self, name,
asset_hash=None, algorithm=None,

---

> thanks
> -- PMM
>

Thanks a lot for spotting that, I'll send a fix to the ML right away.

Best regards,
- Cleber.




Re: [PATCH for-6.1] hw/arm/boot: Report error if there is no fw_cfg device in the machine

2021-08-03 Thread Leif Lindholm
On Mon, Jul 26, 2021 at 17:33:51 +0100, Peter Maydell wrote:
> If the user provides both a BIOS/firmware image and also a guest
> kernel filename, arm_setup_firmware_boot() will pass the
> kernel image to the firmware via the fw_cfg device. However we
> weren't checking whether there really was a fw_cfg device present,
> and if there wasn't we would crash.
> 
> This crash can be provoked with a command line such as
>  qemu-system-aarch64 -M raspi3 -kernel /dev/null -bios /dev/null -display none
> 
> It is currently only possible on the raspi3 machine, because unless
> the machine sets info->firmware_loaded we won't call
> arm_setup_firmware_boot(), and the only machines which set that are:
>  * virt (has a fw-cfg device)
>  * sbsa-ref (checks itself for kernel_filename && firmware_loaded)
>  * raspi3 (crashes)
> 
> But this is an unfortunate beartrap to leave for future machine
> model implementors, so we should handle this situation in boot.c.
> 
> Check in arm_setup_firmware_boot() whether the fw-cfg device exists
> before trying to load files into it, and if it doesn't exist then
> exit with a hopefully helpful error message.
> 
> Because we now handle this check in a machine-agnostic way, we
> can remove the check from sbsa-ref.
> 
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/503
> Signed-off-by: Peter Maydell 

Reviewed-by: Leif Lindholm 

However, the subject line threw me slightly. How about:?
"Report error if trying to load kernel with no fw_cfg"

> ---
> I didn't reuse exactly the same wording sbsa-ref used to have,
> because the bit about loading an OS from hard disk might not
> apply to all machine types.
> ---
>  hw/arm/boot.c | 9 +
>  hw/arm/sbsa-ref.c | 7 ---
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/arm/boot.c b/hw/arm/boot.c
> index d7b059225e6..57efb61ee41 100644
> --- a/hw/arm/boot.c
> +++ b/hw/arm/boot.c
> @@ -1243,6 +1243,15 @@ static void arm_setup_firmware_boot(ARMCPU *cpu, 
> struct arm_boot_info *info)
>  bool try_decompressing_kernel;
>  
>  fw_cfg = fw_cfg_find();
> +
> +if (!fw_cfg) {
> +error_report("This machine type does not support loading both "
> + "a guest firmware/BIOS image and a guest kernel at "
> + "the same time. You should change your QEMU command 
> "
> + "line to specify one or the other, but not both.");
> +exit(1);
> +}
> +
>  try_decompressing_kernel = arm_feature(&cpu->env,
> ARM_FEATURE_AARCH64);
>  
> diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
> index 43c19b49234..c1629df6031 100644
> --- a/hw/arm/sbsa-ref.c
> +++ b/hw/arm/sbsa-ref.c
> @@ -691,13 +691,6 @@ static void sbsa_ref_init(MachineState *machine)
>  
>  firmware_loaded = sbsa_firmware_init(sms, sysmem, secure_sysmem);
>  
> -if (machine->kernel_filename && firmware_loaded) {
> -error_report("sbsa-ref: No fw_cfg device on this machine, "
> - "so -kernel option is not supported when firmware 
> loaded, "
> - "please load OS from hard disk instead");
> -exit(1);
> -}
> -
>  /*
>   * This machine has EL3 enabled, external firmware should supply PSCI
>   * implementation, so the QEMU's internal PSCI is disabled.
> -- 
> 2.20.1
> 



Re: Status of stable release effort ?

2021-08-03 Thread Michael Roth
Quoting Daniel P. Berrangé (2021-07-26 05:13:41)
> We are currently in the final lead up to shipping the 6.1.0 release
> 
> AFAICT from git tags, the latest stable release was 5.0.1 in Sep 2020
> 
>https://gitlab.com/qemu-project/qemu/-/tags
> 
> There have been many patches sent to qemu-stable each month e.g.
> 
>   https://lists.nongnu.org/archive/html/qemu-stable/2021-06/threads.html
> 
> but no 5.1.1, 5.2.1 or 6.0.1 releases IIUC
> 
> Is QEMU stable still an active effort that we need to CC patches to ?

Hi Daniel,

Long story short: I tossed away my test setup when I switched employers
last year with plans to make better use of the new gitlab infrastructure
and make it easier to swap in different maintainers as-needed, sort of
like what Peter has been working on for qemu mainline. Things ramped up
a bit faster than expected here however and it's been on my queue ever
since.

My plan has been to resume regular stable releases starting with 6.0.1
within the next few weeks, but I still need to take a good hard look at the
testing situation and how we can make the process more resilient. I'd
hoped to find time to dig into laying out an overall test plan a bit more
before responding but will find some cycles to do so as I work on getting
the 6.0.1 release together.

I apologize to anyone who spent time curating patches for
qemu-sta...@nongu.org; the ones targeted for anything prior to 6.0.1 likely
won't make a stable release, but if there's something I should reconsider
please let me know. And please don't let my poor handling of things affect
the overall process: there have been others in the community who have shown
interest in helping with stable in the past, but the testing situation has
always made it difficult to move forward with things, so if that can be
improved then I think we could expect the process to improve.

Thanks,

Mike

> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
> 
>



Re: [PATCH v6 2/2] tests/tcg/s390x: Test SIGILL and SIGSEGV handling

2021-08-03 Thread Thomas Huth

On 05/07/2021 23.04, Ilya Leoshkevich wrote:

Verify that s390x-specific uc_mcontext.psw.addr is reported correctly
and that signal handling interacts properly with debugging.

Signed-off-by: Ilya Leoshkevich 
---
  tests/tcg/s390x/Makefile.target   |  18 +-
  tests/tcg/s390x/gdbstub/test-signals-s390x.py |  76 
  tests/tcg/s390x/signals-s390x.c   | 165 ++
  3 files changed, 258 insertions(+), 1 deletion(-)
  create mode 100644 tests/tcg/s390x/gdbstub/test-signals-s390x.py
  create mode 100644 tests/tcg/s390x/signals-s390x.c

diff --git a/tests/tcg/s390x/Makefile.target b/tests/tcg/s390x/Makefile.target
index 0036b8a505..0a5b25c156 100644
--- a/tests/tcg/s390x/Makefile.target
+++ b/tests/tcg/s390x/Makefile.target
@@ -1,4 +1,5 @@
-VPATH+=$(SRC_PATH)/tests/tcg/s390x
+S390X_SRC=$(SRC_PATH)/tests/tcg/s390x
+VPATH+=$(S390X_SRC)
  CFLAGS+=-march=zEC12 -m64
  TESTS+=hello-s390x
  TESTS+=csst
@@ -12,3 +13,18 @@ TESTS+=mvc
  # This triggers failures on s390x hosts about 4% of the time
  run-signals: signals
$(call skip-test, $<, "BROKEN awaiting sigframe clean-ups")
+
+TESTS+=signals-s390x
+
+ifneq ($(HAVE_GDB_BIN),)
+GDB_SCRIPT=$(SRC_PATH)/tests/guest-debug/run-test.py
+
+run-gdbstub-signals-s390x: signals-s390x
+   $(call run-test, $@, $(GDB_SCRIPT) \
+   --gdb $(HAVE_GDB_BIN) \
+   --qemu $(QEMU) --qargs "$(QEMU_OPTS)" \
+   --bin $< --test $(S390X_SRC)/gdbstub/test-signals-s390x.py, \
+   "mixing signals and debugging on s390x")
+
+EXTRA_RUNS += run-gdbstub-signals-s390x
+endif


 Hi Ilya,

FYI, I've put patch 1/2 into a pull request today, but this testing patch 
has a non-trivial conflict with commit eba61056e4cc ... could you please 
check how to get this enabled properly again and then respin this patch 2/2
here? Thanks!

 Thomas



