Re: [PATCH v3] memory: Directly dispatch alias accesses on origin memory region

2021-04-20 Thread Mark Cave-Ayland

On 19/04/2021 21:58, Philippe Mathieu-Daudé wrote:


Hi Mark,

On 4/19/21 10:13 PM, Mark Cave-Ayland wrote:

On 17/04/2021 15:02, Philippe Mathieu-Daudé wrote:


Since commit 2cdfcf272d ("memory: assign MemoryRegionOps to all
regions"), all newly created regions are assigned
unassigned_mem_ops (which might then be overwritten).

When using aliased container regions, and there is no region mapped
at address 0 in the container, the memory_region_dispatch_read()
and memory_region_dispatch_write() calls incorrectly use the
container's unassigned_mem_ops, because the alias offset is not applied.

The memory_region_init_alias() flow is:

    memory_region_init_alias()
    -> memory_region_init()
   -> object_initialize(TYPE_MEMORY_REGION)
  -> memory_region_initfn()
     -> mr->ops = &unassigned_mem_ops;

Later when accessing the alias, the memory_region_dispatch_read()
flow is:

    memory_region_dispatch_read(offset)
    -> memory_region_access_valid(mr)   <- offset is ignored
   -> mr->ops->valid.accepts()
  -> unassigned_mem_accepts()
  <- false
   <- false
     <- MEMTX_DECODE_ERROR

The caller gets a MEMTX_DECODE_ERROR even though the access is valid.

Fix by dispatching aliases recursively, accessing the origin region
after adding the alias offset.
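
For illustration, a sketch of the kind of access that triggers this
(the device type and fields below are hypothetical, not code from QEMU):

/* Hypothetical device code, for illustration only. */
static void mydev_init_regions(MyDeviceState *s)
{
    /* Map a 0x100-byte window at offset 0x1000 of the container. */
    memory_region_init_alias(&s->alias, OBJECT(s), "window-alias",
                             &s->container, 0x1000, 0x100);
}

static MemTxResult mydev_read_through_alias(MyDeviceState *s, hwaddr addr,
                                            uint64_t *val)
{
    /*
     * Before this patch, the dispatch below validated the access against
     * the container's unassigned_mem_ops and ignored alias_offset, so it
     * returned MEMTX_DECODE_ERROR even though container offset
     * 0x1000 + addr is backed by a real region.
     */
    return memory_region_dispatch_read(&s->alias, addr, val,
                                       MO_32, MEMTXATTRS_UNSPECIFIED);
}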

Signed-off-by: Philippe Mathieu-Daudé 
---
v3:
- reworded, mentioning the "alias to container" case
- use recursive call instead of while(), because easier when debugging
    therefore reset Richard R-b tag.
v2:
- use while()
---
   softmmu/memory.c | 10 ++
   1 file changed, 10 insertions(+)

diff --git a/softmmu/memory.c b/softmmu/memory.c
index d4493ef9e43..23bdbfac079 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1442,6 +1442,11 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     unsigned size = memop_size(op);
     MemTxResult r;
 
+    if (mr->alias) {
+        return memory_region_dispatch_read(mr->alias,
+                                           addr + mr->alias_offset,
+                                           pval, op, attrs);
+    }
     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
         return MEMTX_DECODE_ERROR;
@@ -1486,6 +1491,11 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
 {
     unsigned size = memop_size(op);
 
+    if (mr->alias) {
+        return memory_region_dispatch_write(mr->alias,
+                                            addr + mr->alias_offset,
+                                            data, op, attrs);
+    }
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;


Whilst working on my q800 patches I realised that this was a similar
problem to the one I had with my macio.alias implementation at [1],
except that in my case the unassigned_mem_ops mr->ops->valid.accepts()
function was being invoked on a ROM memory region instead of an alias. I
think this is exactly the same issue that you are attempting to fix with
your related patch at
https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg03190.html
("memory: Initialize MemoryRegionOps for RAM memory regions").


So if 2 contributors hit similar issues, there is something wrong with
the API. I don't see your use case or mine as forbidden by the
documentation in "exec/memory.h".

My patch might not be the proper fix, but we need to figure out how
to prevent others from hitting the same problem, as it is very hard to debug.


I agree with this sentiment: it has taken me a while to figure out what was 
happening, and that was only because I spotted accesses being rejected with -d 
guest_errors.


From my perspective the names memory_region_dispatch_read() and 
memory_region_dispatch_write() were the misleading part, although I remember thinking 
it odd whilst trying to use them that I had to start delving into sections etc. just 
to recurse a memory access.



At least an assertion and a comment.


I eventually realised that I needed functions that could dispatch
reads/writes to both IO memory regions and ROM memory regions, and that
functionality is covered by the address_space_*() access functions.
Using the address_space_*() functions I was then able to come up with
the working implementation at [2] that handles accesses to both IO
memory regions and ROM memory regions correctly.
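
For instance, something along these lines (the device, field and region
names here are made up for illustration, not my actual q800/macio code):

/*
 * Illustrative only: wrap the container in a private AddressSpace at
 * realize time, then let the address_space_*() helpers resolve aliases,
 * RAM/ROM and MMIO regions on every access.
 */
static void mydev_realize(DeviceState *dev, Error **errp)
{
    MyDeviceState *s = MYDEV(dev);       /* hypothetical device state */

    address_space_init(&s->alias_as, &s->container, "alias-as");
}

static uint64_t mydev_alias_read(void *opaque, hwaddr addr, unsigned size)
{
    MyDeviceState *s = opaque;
    uint64_t val = 0;

    /* Endianness handling is elided in this sketch. */
    address_space_read(&s->alias_as, addr, MEMTXATTRS_UNSPECIFIED,
                       &val, size);
    return val;
}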

The reason I initially used the
memory_region_dispatch_read()/memory_region_dispatch_write() functions
was because I could see that was how the virtio devices dispatched
accesses through the proxy. However I'm wondering now if this API can
only be used for terminating IO memory regions, and so the alias_offset
that you're applying above should actually be applied elsewhere instead.


I figured out the AddressSpace API makes these cases simpler, but IIRC
there is some overhead, some function tries to resolve / update
s

[Bug 1808565] Re: Reading /proc/self/task//maps is not remapped to the target

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808565

Title:
  Reading /proc/self/task//maps is not remapped to the target

Status in QEMU:
  Incomplete

Bug description:
  Seeing this in qemu-user 3.1.0

  The code in is_proc_myself which supports remapping of /proc/self/maps
  and /proc//maps does not support remapping of
  /proc/self/task//maps or /proc//task//maps. Extending
  is_proc_myself to cover these cases causes the maps to be rewritten
  correctly.

  These are useful in multithreaded programs to avoid freezing the
  entire program to capture the maps for a single tid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808565/+subscriptions



[Bug 1808563] Re: Listing the contents of / lists QEMU_LD_PREFIX instead

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808563

Title:
  Listing the contents of / lists QEMU_LD_PREFIX instead

Status in QEMU:
  Incomplete

Bug description:
  Seeing this in qemu-user version 3.1.0

  Demo:
  $ QEMU_LD_PREFIX=$(pwd)/usr/armv7a-cros-linux-gnueabi ../run/qemu-arm 
/tmp/coreutils --coreutils-prog=ls / 
  etc  lib  usr
  $ ls /
  boot  etc   lib     lib64   lost+found  mnt   root  sbin  sys  usr
  bin   dev   export  home    lib32       net   proc  run   tmp  var
  $ ls usr/armv7a-cros-linux-gnueabi
  etc  lib  usr

  In strace, the openat for "/" is remapped to the directory specified in 
QEMU_LD_PREFIX:
  [pid  5302] openat(AT_FDCWD, "/tmp/qemu/usr/armv7a-cros-linux-gnueabi", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3

  As an aside, if I change the code to do chdir("/"); opendir("."); it
  works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808563/+subscriptions



[Bug 1808824] Re: Mouse leaves VM window when Grab on Hover isn't selected Windows 10 and Intel HAX

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808824

Title:
  Mouse leaves VM window when Grab on Hover isn't selected Windows 10
  and Intel HAX

Status in QEMU:
  Incomplete

Bug description:
  On Windows 10.0.17134 I have been having the problem that the mouse
  will leave the VM window after a short time when grab on hover isn't
  selected.  The VM will then try to grab on Hover and the mouse will
  grab in weird places and it will become very unwieldy to control the
  mouse in the VM window.

  This is exacerbated by super slow response, making it nearly unusable
  if the Intel® Hardware Accelerated Execution Manager (Intel® HAXM) is
  not currently installed on my machine.

  I know they are different things, but they compound on each other:
  when you have a mouse that is not staying in the VM window and the
  VM's virtualized CPU is acting VERY slow, the system is unusable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808824/+subscriptions



Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Paul Durrant

On 20/04/2021 04:35, Igor Druzhinin wrote:

When we're replacing the existing mapping there is a possibility of a race
on the memory map with other threads doing mmap operations - the address
being unmapped/re-mapped could be occupied by another thread in between.

The Linux mmap man page recommends keeping the existing mapping in place to
reserve the address range and instead relying on the fact that the next mmap
operation with the MAP_FIXED flag will implicitly destroy the existing
mapping behind the chosen address. This behavior is guaranteed by POSIX / BSD
and is therefore portable.

Note that it wouldn't make the replacement atomic for parallel accesses to
the replaced region - those might still fail with SIGBUS due to
xenforeignmemory_map not being atomic. So we're still not expecting those.

Tested-by: Anthony PERARD 
Signed-off-by: Igor Druzhinin 


Reviewed-by: Paul Durrant 


---
  hw/i386/xen/xen-mapcache.c | 15 ++-
  1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
index 5b120ed..e82b7dc 100644
--- a/hw/i386/xen/xen-mapcache.c
+++ b/hw/i386/xen/xen-mapcache.c
@@ -171,7 +171,20 @@ static void xen_remap_bucket(MapCacheEntry *entry,
     if (!(entry->flags & XEN_MAPCACHE_ENTRY_DUMMY)) {
         ram_block_notify_remove(entry->vaddr_base, entry->size);
     }
-    if (munmap(entry->vaddr_base, entry->size) != 0) {
+
+    /*
+     * If an entry is being replaced by another mapping and we're using
+     * MAP_FIXED flag for it - there is possibility of a race for vaddr
+     * address with another thread doing an mmap call itself
+     * (see man 2 mmap). To avoid that we skip explicit unmapping here
+     * and allow the kernel to destroy the previous mappings by replacing
+     * them in mmap call later.
+     *
+     * Non-identical replacements are not allowed therefore.
+     */
+    assert(!vaddr || (entry->vaddr_base == vaddr && entry->size == size));
+
+    if (!vaddr && munmap(entry->vaddr_base, entry->size) != 0) {
         perror("unmap fails");
         exit(-1);
     }
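
For reference, a standalone sketch of the mmap(MAP_FIXED) replacement idiom
described above (illustrative code, not part of the patch):

/*
 * Replacing a mapping with munmap() + mmap() leaves a window in which
 * another thread may mmap() the freed range; a single mmap(MAP_FIXED)
 * call atomically replaces whatever is currently mapped at "vaddr",
 * as guaranteed by POSIX / BSD.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

static void *replace_mapping(void *vaddr, size_t size)
{
    /*
     * Racy variant (what the patch removes):
     *     munmap(vaddr, size);
     *     mmap(vaddr, size, ...);   // vaddr may already be taken here
     */
    void *p = mmap(vaddr, size, PROT_READ | PROT_WRITE,
                   MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_FIXED)");
        exit(EXIT_FAILURE);
    }
    return p;   /* p == vaddr on success */
}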






[PATCH v2 01/12] ui: Get the fd associated with udmabuf driver

2021-04-20 Thread Vivek Kasireddy
Try to open the udmabuf dev node for the first time or return the
fd if the device was previously opened.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 include/ui/console.h |  3 +++
 ui/meson.build   |  1 +
 ui/udmabuf.c | 40 
 3 files changed, 44 insertions(+)
 create mode 100644 ui/udmabuf.c

diff --git a/include/ui/console.h b/include/ui/console.h
index ca3c7af6a6..b30b63976a 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -471,4 +471,7 @@ bool vnc_display_reload_certs(const char *id,  Error 
**errp);
 /* input.c */
 int index_from_key(const char *key, size_t key_length);
 
+/* udmabuf.c */
+int udmabuf_fd(void);
+
 #endif
diff --git a/ui/meson.build b/ui/meson.build
index e8d3ff41b9..7a709ff548 100644
--- a/ui/meson.build
+++ b/ui/meson.build
@@ -11,6 +11,7 @@ softmmu_ss.add(files(
   'kbd-state.c',
   'keymaps.c',
   'qemu-pixman.c',
+  'udmabuf.c',
 ))
 softmmu_ss.add([spice_headers, files('spice-module.c')])
 
diff --git a/ui/udmabuf.c b/ui/udmabuf.c
new file mode 100644
index 00..e6234fd86f
--- /dev/null
+++ b/ui/udmabuf.c
@@ -0,0 +1,40 @@
+/*
+ * udmabuf helper functions.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "ui/console.h"
+
+#ifdef CONFIG_LINUX
+
+#include 
+#include 
+
+int udmabuf_fd(void)
+{
+static bool first = true;
+static int udmabuf;
+
+if (!first) {
+return udmabuf;
+}
+first = false;
+
+udmabuf = open("/dev/udmabuf", O_RDWR);
+if (udmabuf < 0) {
+warn_report("open /dev/udmabuf: %s", strerror(errno));
+}
+return udmabuf;
+}
+
+#else
+
+int udmabuf_fd(void)
+{
+return -1;
+}
+
+#endif
-- 
2.26.2




[PATCH v2 04/12] virtio-gpu: Refactor virtio_gpu_set_scanout

2021-04-20 Thread Vivek Kasireddy
Store the metadata associated with a FB in a new object
(struct virtio_gpu_framebuffer) and pass the object to set_scanout.
Also move the code in set_scanout into a do_set_scanout function.
This will be helpful when adding the set_scanout_blob API.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu.c| 149 +++--
 include/hw/virtio/virtio-gpu.h |   8 ++
 2 files changed, 94 insertions(+), 63 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index de7462a515..5e1152aa2a 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -544,95 +544,118 @@ static void virtio_unref_resource(pixman_image_t *image, 
void *data)
 pixman_image_unref(data);
 }
 
-static void virtio_gpu_set_scanout(VirtIOGPU *g,
-   struct virtio_gpu_ctrl_command *cmd)
+static void virtio_gpu_do_set_scanout(VirtIOGPU *g,
+  uint32_t scanout_id,
+  struct virtio_gpu_framebuffer *fb,
+  struct virtio_gpu_simple_resource *res,
+  struct virtio_gpu_rect *r,
+  uint32_t *error)
 {
-struct virtio_gpu_simple_resource *res, *ores;
+struct virtio_gpu_simple_resource *ores;
 struct virtio_gpu_scanout *scanout;
-pixman_format_code_t format;
-uint32_t offset;
-int bpp;
-struct virtio_gpu_set_scanout ss;
-
-VIRTIO_GPU_FILL_CMD(ss);
-virtio_gpu_bswap_32(&ss, sizeof(ss));
-trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
- ss.r.width, ss.r.height, ss.r.x, ss.r.y);
+uint8_t *data;
 
-if (ss.scanout_id >= g->parent_obj.conf.max_outputs) {
+if (scanout_id >= g->parent_obj.conf.max_outputs) {
 qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout id specified %d",
-  __func__, ss.scanout_id);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
-return;
-}
-
-g->parent_obj.enable = 1;
-if (ss.resource_id == 0) {
-virtio_gpu_disable_scanout(g, ss.scanout_id);
-return;
-}
-
-/* create a surface for this scanout */
-res = virtio_gpu_find_check_resource(g, ss.resource_id, true,
- __func__, &cmd->error);
-if (!res) {
+  __func__, scanout_id);
+*error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
 return;
 }
+scanout = &g->parent_obj.scanout[scanout_id];
 
-if (ss.r.x > res->width ||
-ss.r.y > res->height ||
-ss.r.width < 16 ||
-ss.r.height < 16 ||
-ss.r.width > res->width ||
-ss.r.height > res->height ||
-ss.r.x + ss.r.width > res->width ||
-ss.r.y + ss.r.height > res->height) {
+if (r->x > fb->width ||
+r->y > fb->height ||
+r->width < 16 ||
+r->height < 16 ||
+r->width > fb->width ||
+r->height > fb->height ||
+r->x + r->width > fb->width ||
+r->y + r->height > fb->height) {
 qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout %d bounds for"
-  " resource %d, (%d,%d)+%d,%d vs %d %d\n",
-  __func__, ss.scanout_id, ss.resource_id, ss.r.x, ss.r.y,
-  ss.r.width, ss.r.height, res->width, res->height);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+  " resource %d, rect (%d,%d)+%d,%d, fb %d %d\n",
+  __func__, scanout_id, res->resource_id,
+  r->x, r->y, r->width, r->height,
+  fb->width, fb->height);
+*error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
 return;
 }
 
-scanout = &g->parent_obj.scanout[ss.scanout_id];
+g->parent_obj.enable = 1;
+data = (uint8_t *)pixman_image_get_data(res->image);
 
-format = pixman_image_get_format(res->image);
-bpp = DIV_ROUND_UP(PIXMAN_FORMAT_BPP(format), 8);
-offset = (ss.r.x * bpp) + ss.r.y * pixman_image_get_stride(res->image);
-if (!scanout->ds || surface_data(scanout->ds)
-!= ((uint8_t *)pixman_image_get_data(res->image) + offset) ||
-scanout->width != ss.r.width ||
-scanout->height != ss.r.height) {
+/* create a surface for this scanout */
+if (!scanout->ds ||
+surface_data(scanout->ds) != data + fb->offset ||
+scanout->width != r->width ||
+scanout->height != r->height) {
 pixman_image_t *rect;
-void *ptr = (uint8_t *)pixman_image_get_data(res->image) + offset;
-rect = pixman_image_create_bits(format, ss.r.width, ss.r.height, ptr,
-pixman_image_get_stride(res->image));
-pixman_image_ref(res->image);
-pixman_image_set_destroy_function(rect, virtio_unref_resource,
- 

[PATCH v2 06/12] virtio-gpu: Add initial definitions for blob resources

2021-04-20 Thread Vivek Kasireddy
Add the property bit, configuration flag and other relevant
macros and definitions associated with this feature.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu-base.c   |  3 +++
 hw/display/virtio-gpu.c| 14 ++
 include/hw/virtio/virtio-gpu.h |  3 +++
 3 files changed, 20 insertions(+)

diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 25f8920fdb..70e4d53d26 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -210,6 +210,9 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t 
features,
 if (virtio_gpu_edid_enabled(g->conf)) {
 features |= (1 << VIRTIO_GPU_F_EDID);
 }
+if (virtio_gpu_blob_enabled(g->conf)) {
+features |= (1 << VIRTIO_GPU_F_RESOURCE_BLOB);
+}
 
 return features;
 }
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 85c8eb749d..8b67d39889 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1176,6 +1176,18 @@ static void virtio_gpu_device_realize(DeviceState *qdev, 
Error **errp)
 #endif
 }
 
+if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
+if (!virtio_gpu_have_udmabuf()) {
+error_setg(errp, "cannot enable blob resources without udmabuf");
+return;
+}
+
+if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
+error_setg(errp, "blobs and virgl are not compatible (yet)");
+return;
+}
+}
+
 if (!virtio_gpu_base_device_realize(qdev,
 virtio_gpu_handle_ctrl_cb,
 virtio_gpu_handle_cursor_cb,
@@ -1292,6 +1304,8 @@ static Property virtio_gpu_properties[] = {
 DEFINE_PROP_BIT("stats", VirtIOGPU, parent_obj.conf.flags,
 VIRTIO_GPU_FLAG_STATS_ENABLED, false),
 #endif
+DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
+VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
 DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 879f4cfff6..415aeddb07 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -85,6 +85,7 @@ enum virtio_gpu_base_conf_flags {
 VIRTIO_GPU_FLAG_STATS_ENABLED,
 VIRTIO_GPU_FLAG_EDID_ENABLED,
 VIRTIO_GPU_FLAG_DMABUF_ENABLED,
+VIRTIO_GPU_FLAG_BLOB_ENABLED,
 };
 
 #define virtio_gpu_virgl_enabled(_cfg) \
@@ -95,6 +96,8 @@ enum virtio_gpu_base_conf_flags {
 (_cfg.flags & (1 << VIRTIO_GPU_FLAG_EDID_ENABLED))
 #define virtio_gpu_dmabuf_enabled(_cfg) \
 (_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
+#define virtio_gpu_blob_enabled(_cfg) \
+(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
 
 struct virtio_gpu_base_conf {
 uint32_t max_outputs;
-- 
2.26.2




[PATCH v2 03/12] virtio-gpu: Add virtio_gpu_find_check_resource

2021-04-20 Thread Vivek Kasireddy
Move finding the resource and validating its backing storage into one
function.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu.c | 66 +
 1 file changed, 47 insertions(+), 19 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index c9f5e36fd0..de7462a515 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -35,6 +35,10 @@
 
 static struct virtio_gpu_simple_resource*
 virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+static struct virtio_gpu_simple_resource *
+virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
+   bool require_backing,
+   const char *caller, uint32_t *error);
 
 static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
struct virtio_gpu_simple_resource *res);
@@ -63,7 +67,8 @@ static void update_cursor_data_simple(VirtIOGPU *g,
 struct virtio_gpu_simple_resource *res;
 uint32_t pixels;
 
-res = virtio_gpu_find_resource(g, resource_id);
+res = virtio_gpu_find_check_resource(g, resource_id, false,
+ __func__, NULL);
 if (!res) {
 return;
 }
@@ -158,6 +163,37 @@ virtio_gpu_find_resource(VirtIOGPU *g, uint32_t 
resource_id)
 return NULL;
 }
 
+static struct virtio_gpu_simple_resource *
+virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
+   bool require_backing,
+   const char *caller, uint32_t *error)
+{
+struct virtio_gpu_simple_resource *res;
+
+res = virtio_gpu_find_resource(g, resource_id);
+if (!res) {
+qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid resource specified %d\n",
+  caller, resource_id);
+if (error) {
+*error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+}
+return NULL;
+}
+
+if (require_backing) {
+if (!res->iov || !res->image) {
+qemu_log_mask(LOG_GUEST_ERROR, "%s: no backing storage %d\n",
+  caller, resource_id);
+if (error) {
+*error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+}
+return NULL;
+}
+}
+
+return res;
+}
+
 void virtio_gpu_ctrl_response(VirtIOGPU *g,
   struct virtio_gpu_ctrl_command *cmd,
   struct virtio_gpu_ctrl_hdr *resp,
@@ -396,11 +432,9 @@ static void virtio_gpu_transfer_to_host_2d(VirtIOGPU *g,
 virtio_gpu_t2d_bswap(&t2d);
 trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
 
-res = virtio_gpu_find_resource(g, t2d.resource_id);
-if (!res || !res->iov) {
-qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal resource specified %d\n",
-  __func__, t2d.resource_id);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+res = virtio_gpu_find_check_resource(g, t2d.resource_id, true,
+ __func__, &cmd->error);
+if (!res) {
 return;
 }
 
@@ -454,11 +488,9 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
 trace_virtio_gpu_cmd_res_flush(rf.resource_id,
rf.r.width, rf.r.height, rf.r.x, rf.r.y);
 
-res = virtio_gpu_find_resource(g, rf.resource_id);
+res = virtio_gpu_find_check_resource(g, rf.resource_id, false,
+ __func__, &cmd->error);
 if (!res) {
-qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal resource specified %d\n",
-  __func__, rf.resource_id);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
 return;
 }
 
@@ -541,11 +573,9 @@ static void virtio_gpu_set_scanout(VirtIOGPU *g,
 }
 
 /* create a surface for this scanout */
-res = virtio_gpu_find_resource(g, ss.resource_id);
+res = virtio_gpu_find_check_resource(g, ss.resource_id, true,
+ __func__, &cmd->error);
 if (!res) {
-qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal resource specified %d\n",
-  __func__, ss.resource_id);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
 return;
 }
 
@@ -737,11 +767,9 @@ virtio_gpu_resource_detach_backing(VirtIOGPU *g,
 virtio_gpu_bswap_32(&detach, sizeof(detach));
 trace_virtio_gpu_cmd_res_back_detach(detach.resource_id);
 
-res = virtio_gpu_find_resource(g, detach.resource_id);
-if (!res || !res->iov) {
-qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal resource specified %d\n",
-  __func__, detach.resource_id);
-cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+res = virtio_gpu_find_check_resource(g, detach.resource_id, true,
+ __func__, &cmd->error);
+if (!res) {
 return;
 }

[PATCH v2 00/12] virtio-gpu: Add support for Blob resources feature

2021-04-20 Thread Vivek Kasireddy
Enabling this feature would eliminate data copies from the resource
object in the Guest to the shadow resource in Qemu. This patch series
however adds support only for Blobs of type
VIRTIO_GPU_BLOB_MEM_GUEST with property VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE.

Most of the patches in this series are rebased, refactored and bugfixed
versions of Gerd Hoffmann's patches located here:
https://gitlab.freedesktop.org/virgl/qemu/-/commits/virtio-gpu-next

v2:
- Moved dpy_gl_update from set_scanout to resource_flush
- Dropped the modifier
- Rebase and other minor refactoring

Cc: Gerd Hoffmann 
Cc: Marc-André Lureau 
Cc: Dongwon Kim 
Cc: Tina Zhang 

Vivek Kasireddy (12):
  ui: Get the fd associated with udmabuf driver
  virtio-gpu: Add udmabuf helpers
  virtio-gpu: Add virtio_gpu_find_check_resource
  virtio-gpu: Refactor virtio_gpu_set_scanout
  virtio-gpu: Refactor virtio_gpu_create_mapping_iov
  virtio-gpu: Add initial definitions for blob resources
  virtio-gpu: Add virtio_gpu_resource_create_blob
  ui/pixman: Add qemu_pixman_to_drm_format()
  virtio-gpu: Add helpers to create and destroy dmabuf objects
  virtio-gpu: Factor out update scanout
  virtio-gpu: Add virtio_gpu_set_scanout_blob
  virtio-gpu: Update cursor data using blob

 hw/display/meson.build   |   2 +-
 hw/display/trace-events  |   2 +
 hw/display/virtio-gpu-3d.c   |   3 +-
 hw/display/virtio-gpu-base.c |   3 +
 hw/display/virtio-gpu-udmabuf.c  | 275 ++
 hw/display/virtio-gpu.c  | 436 ++-
 include/hw/virtio/virtio-gpu-bswap.h |  16 +
 include/hw/virtio/virtio-gpu.h   |  40 ++-
 include/standard-headers/linux/udmabuf.h |  32 ++
 include/ui/console.h |   3 +
 include/ui/qemu-pixman.h |   1 +
 scripts/update-linux-headers.sh  |   3 +
 ui/meson.build   |   1 +
 ui/qemu-pixman.c |  35 +-
 ui/udmabuf.c |  40 +++
 15 files changed, 781 insertions(+), 111 deletions(-)
 create mode 100644 hw/display/virtio-gpu-udmabuf.c
 create mode 100644 include/standard-headers/linux/udmabuf.h
 create mode 100644 ui/udmabuf.c

-- 
2.26.2




[PATCH v2 02/12] virtio-gpu: Add udmabuf helpers

2021-04-20 Thread Vivek Kasireddy
Add helper functions to create a dmabuf for a resource and mmap it.
To be able to create a dmabuf using the udmabuf driver, Qemu needs
to be launched with the memfd memory backend like this:

qemu-system-x86_64 -m 8192m -object memory-backend-memfd,id=mem1,size=8192M
-machine memory-backend=mem1
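
For background, a sketch of the underlying kernel interface these helpers
rely on (illustrative code, not part of this patch; it assumes the
linux/udmabuf.h uapi header and a memfd sealed with F_SEAL_SHRINK):

/*
 * Create a dma-buf from a range of a sealed memfd via the /dev/udmabuf
 * driver. This is why QEMU must be launched with the memfd memory
 * backend - only memfd-backed guest RAM can be wrapped this way.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/udmabuf.h>

static int dmabuf_from_memfd(int memfd, __u64 offset, __u64 size)
{
    struct udmabuf_create create = {
        .memfd  = memfd,     /* must be sealed with F_SEAL_SHRINK */
        .offset = offset,    /* page aligned */
        .size   = size,      /* page aligned */
    };
    int dev = open("/dev/udmabuf", O_RDWR);
    int buf;

    if (dev < 0) {
        return -1;
    }
    buf = ioctl(dev, UDMABUF_CREATE, &create);   /* returns a dma-buf fd */
    close(dev);
    return buf;
}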

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/meson.build   |   2 +-
 hw/display/virtio-gpu-udmabuf.c  | 183 +++
 include/hw/virtio/virtio-gpu.h   |  10 ++
 include/standard-headers/linux/udmabuf.h |  32 
 scripts/update-linux-headers.sh  |   3 +
 5 files changed, 229 insertions(+), 1 deletion(-)
 create mode 100644 hw/display/virtio-gpu-udmabuf.c
 create mode 100644 include/standard-headers/linux/udmabuf.h

diff --git a/hw/display/meson.build b/hw/display/meson.build
index 9d79e3951d..e0fa88d62f 100644
--- a/hw/display/meson.build
+++ b/hw/display/meson.build
@@ -56,7 +56,7 @@ softmmu_ss.add(when: [pixman, 'CONFIG_ATI_VGA'], if_true: 
files('ati.c', 'ati_2d
 if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
   virtio_gpu_ss = ss.source_set()
   virtio_gpu_ss.add(when: 'CONFIG_VIRTIO_GPU',
-if_true: [files('virtio-gpu-base.c', 'virtio-gpu.c'), 
pixman, virgl])
+if_true: [files('virtio-gpu-base.c', 'virtio-gpu.c', 
'virtio-gpu-udmabuf.c'), pixman, virgl])
   virtio_gpu_ss.add(when: ['CONFIG_VIRTIO_GPU', 'CONFIG_VIRGL'],
 if_true: [files('virtio-gpu-3d.c'), pixman, virgl])
   virtio_gpu_ss.add(when: 'CONFIG_VHOST_USER_GPU', if_true: 
files('vhost-user-gpu.c'))
diff --git a/hw/display/virtio-gpu-udmabuf.c b/hw/display/virtio-gpu-udmabuf.c
new file mode 100644
index 00..2c0e7b2455
--- /dev/null
+++ b/hw/display/virtio-gpu-udmabuf.c
@@ -0,0 +1,183 @@
+/*
+ * Virtio GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2014
+ *
+ * Authors:
+ * Dave Airlie 
+ * Gerd Hoffmann 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qemu-common.h"
+#include "qemu/iov.h"
+#include "ui/console.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
+#include "trace.h"
+
+#ifdef CONFIG_LINUX
+
+#include "exec/ramblock.h"
+#include "sysemu/hostmem.h"
+#include 
+#include 
+#include 
+#include "standard-headers/linux/udmabuf.h"
+
+static void virtio_gpu_create_udmabuf(struct virtio_gpu_simple_resource *res)
+{
+struct udmabuf_create_list *list;
+RAMBlock *rb;
+ram_addr_t offset;
+int udmabuf, i;
+
+udmabuf = udmabuf_fd();
+if (udmabuf < 0) {
+return;
+}
+
+list = g_malloc0(sizeof(struct udmabuf_create_list) +
+ sizeof(struct udmabuf_create_item) * res->iov_cnt);
+
+for (i = 0; i < res->iov_cnt; i++) {
+rcu_read_lock();
+rb = qemu_ram_block_from_host(res->iov[i].iov_base, false, &offset);
+rcu_read_unlock();
+
+if (!rb || rb->fd < 0) {
+g_free(list);
+return;
+}
+
+list->list[i].memfd  = rb->fd;
+list->list[i].offset = offset;
+list->list[i].size   = res->iov[i].iov_len;
+}
+
+list->count = res->iov_cnt;
+list->flags = UDMABUF_FLAGS_CLOEXEC;
+
+res->dmabuf_fd = ioctl(udmabuf, UDMABUF_CREATE_LIST, list);
+if (res->dmabuf_fd < 0) {
+warn_report("%s: UDMABUF_CREATE_LIST: %s", __func__,
+strerror(errno));
+}
+g_free(list);
+}
+
+static void virtio_gpu_remap_udmabuf(struct virtio_gpu_simple_resource *res)
+{
+res->remapsz = res->width * res->height * 4;
+res->remapsz = QEMU_ALIGN_UP(res->remapsz, qemu_real_host_page_size);
+
+res->remapped = mmap(NULL, res->remapsz, PROT_READ,
+ MAP_SHARED, res->dmabuf_fd, 0);
+if (res->remapped == MAP_FAILED) {
+warn_report("%s: dmabuf mmap failed: %s", __func__,
+strerror(errno));
+res->remapped = NULL;
+}
+}
+
+static void virtio_gpu_destroy_udmabuf(struct virtio_gpu_simple_resource *res)
+{
+if (res->remapped) {
+munmap(res->remapped, res->remapsz);
+res->remapped = NULL;
+}
+if (res->dmabuf_fd >= 0) {
+close(res->dmabuf_fd);
+res->dmabuf_fd = -1;
+}
+}
+
+static int find_memory_backend_type(Object *obj, void *opaque)
+{
+bool *memfd_backend = opaque;
+int ret;
+
+if (object_dynamic_cast(obj, TYPE_MEMORY_BACKEND)) {
+HostMemoryBackend *backend = MEMORY_BACKEND(obj);
+RAMBlock *rb = backend->mr.ram_block;
+
+if (rb && rb->fd > 0) {
+ret = fcntl(rb->fd, F_GET_SEALS);
+if (ret > 0) {
+*memfd_backend = true;
+}
+}
+}
+
+return 0;
+}
+
+bool virtio_gpu_have_udmabuf(void)
+{
+Object *memdev_root;
+int udmabuf;
+bool memfd_b

[PATCH v2 08/12] ui/pixman: Add qemu_pixman_to_drm_format()

2021-04-20 Thread Vivek Kasireddy
This new function to get the drm_format associated with a pixman
format will be useful while creating a dmabuf.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 include/ui/qemu-pixman.h |  1 +
 ui/qemu-pixman.c | 35 ---
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/include/ui/qemu-pixman.h b/include/ui/qemu-pixman.h
index 87737a6f16..806ddcd7cd 100644
--- a/include/ui/qemu-pixman.h
+++ b/include/ui/qemu-pixman.h
@@ -62,6 +62,7 @@ typedef struct PixelFormat {
 PixelFormat qemu_pixelformat_from_pixman(pixman_format_code_t format);
 pixman_format_code_t qemu_default_pixman_format(int bpp, bool native_endian);
 pixman_format_code_t qemu_drm_format_to_pixman(uint32_t drm_format);
+uint32_t qemu_pixman_to_drm_format(pixman_format_code_t pixman);
 int qemu_pixman_get_type(int rshift, int gshift, int bshift);
 pixman_format_code_t qemu_pixman_get_format(PixelFormat *pf);
 bool qemu_pixman_check_format(DisplayChangeListener *dcl,
diff --git a/ui/qemu-pixman.c b/ui/qemu-pixman.c
index 85f2945e88..3ab7e2e958 100644
--- a/ui/qemu-pixman.c
+++ b/ui/qemu-pixman.c
@@ -89,21 +89,34 @@ pixman_format_code_t qemu_default_pixman_format(int bpp, 
bool native_endian)
 }
 
 /* Note: drm is little endian, pixman is native endian */
+static const struct {
+uint32_t drm_format;
+pixman_format_code_t pixman_format;
+} drm_format_pixman_map[] = {
+{ DRM_FORMAT_RGB888,   PIXMAN_LE_r8g8b8   },
+{ DRM_FORMAT_ARGB, PIXMAN_LE_a8r8g8b8 },
+{ DRM_FORMAT_XRGB, PIXMAN_LE_x8r8g8b8 }
+};
+
 pixman_format_code_t qemu_drm_format_to_pixman(uint32_t drm_format)
 {
-static const struct {
-uint32_t drm_format;
-pixman_format_code_t pixman;
-} map[] = {
-{ DRM_FORMAT_RGB888,   PIXMAN_LE_r8g8b8   },
-{ DRM_FORMAT_ARGB, PIXMAN_LE_a8r8g8b8 },
-{ DRM_FORMAT_XRGB, PIXMAN_LE_x8r8g8b8 }
-};
 int i;
 
-for (i = 0; i < ARRAY_SIZE(map); i++) {
-if (drm_format == map[i].drm_format) {
-return map[i].pixman;
+for (i = 0; i < ARRAY_SIZE(drm_format_pixman_map); i++) {
+if (drm_format == drm_format_pixman_map[i].drm_format) {
+return drm_format_pixman_map[i].pixman_format;
+}
+}
+return 0;
+}
+
+uint32_t qemu_pixman_to_drm_format(pixman_format_code_t pixman_format)
+{
+int i;
+
+for (i = 0; i < ARRAY_SIZE(drm_format_pixman_map); i++) {
+if (pixman_format == drm_format_pixman_map[i].pixman_format) {
+return drm_format_pixman_map[i].drm_format;
 }
 }
 return 0;
-- 
2.26.2




[PATCH v2 05/12] virtio-gpu: Refactor virtio_gpu_create_mapping_iov

2021-04-20 Thread Vivek Kasireddy
Instead of passing the attach_backing object to extract nr_entries
and offset, explicitly pass these as arguments to this function.
This will be helpful when adding the create_blob API.

Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu-3d.c |  3 ++-
 hw/display/virtio-gpu.c| 22 +++---
 include/hw/virtio/virtio-gpu.h |  2 +-
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/hw/display/virtio-gpu-3d.c b/hw/display/virtio-gpu-3d.c
index d98964858e..87ca661a26 100644
--- a/hw/display/virtio-gpu-3d.c
+++ b/hw/display/virtio-gpu-3d.c
@@ -288,7 +288,8 @@ static void virgl_resource_attach_backing(VirtIOGPU *g,
 VIRTIO_GPU_FILL_CMD(att_rb);
 trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
 
-ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs);
+ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
+cmd, NULL, &res_iovs);
 if (ret != 0) {
 cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
 return;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e1152aa2a..85c8eb749d 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -659,7 +659,7 @@ static void virtio_gpu_set_scanout(VirtIOGPU *g,
 }
 
 int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
-  struct virtio_gpu_resource_attach_backing 
*ab,
+  uint32_t nr_entries, uint32_t offset,
   struct virtio_gpu_ctrl_command *cmd,
   uint64_t **addr, struct iovec **iov)
 {
@@ -667,17 +667,17 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
 size_t esize, s;
 int i;
 
-if (ab->nr_entries > 16384) {
+if (nr_entries > 16384) {
 qemu_log_mask(LOG_GUEST_ERROR,
   "%s: nr_entries is too big (%d > 16384)\n",
-  __func__, ab->nr_entries);
+  __func__, nr_entries);
 return -1;
 }
 
-esize = sizeof(*ents) * ab->nr_entries;
+esize = sizeof(*ents) * nr_entries;
 ents = g_malloc(esize);
 s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
-   sizeof(*ab), ents, esize);
+   offset, ents, esize);
 if (s != esize) {
 qemu_log_mask(LOG_GUEST_ERROR,
   "%s: command data size incorrect %zu vs %zu\n",
@@ -686,11 +686,11 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
 return -1;
 }
 
-*iov = g_malloc0(sizeof(struct iovec) * ab->nr_entries);
+*iov = g_malloc0(sizeof(struct iovec) * nr_entries);
 if (addr) {
-*addr = g_malloc0(sizeof(uint64_t) * ab->nr_entries);
+*addr = g_malloc0(sizeof(uint64_t) * nr_entries);
 }
-for (i = 0; i < ab->nr_entries; i++) {
+for (i = 0; i < nr_entries; i++) {
 uint64_t a = le64_to_cpu(ents[i].addr);
 uint32_t l = le32_to_cpu(ents[i].length);
 hwaddr len = l;
@@ -702,8 +702,7 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
 }
 if (!(*iov)[i].iov_base || len != l) {
 qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"
-  " resource %d element %d\n",
-  __func__, ab->resource_id, i);
+  " element %d\n", __func__, i);
 if ((*iov)[i].iov_base) {
 i++; /* cleanup the 'i'th map */
 }
@@ -770,7 +769,8 @@ virtio_gpu_resource_attach_backing(VirtIOGPU *g,
 return;
 }
 
-ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov);
+ret = virtio_gpu_create_mapping_iov(g, ab.nr_entries, sizeof(ab),
+cmd, &res->addrs, &res->iov);
 if (ret != 0) {
 cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
 return;
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 6fd86d6b92..879f4cfff6 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -220,7 +220,7 @@ void virtio_gpu_get_display_info(VirtIOGPU *g,
 void virtio_gpu_get_edid(VirtIOGPU *g,
  struct virtio_gpu_ctrl_command *cmd);
 int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
-  struct virtio_gpu_resource_attach_backing 
*ab,
+  uint32_t nr_entries, uint32_t offset,
   struct virtio_gpu_ctrl_command *cmd,
   uint64_t **addr, struct iovec **iov);
 void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
-- 
2.26.2




[PATCH v2 10/12] virtio-gpu: Factor out update scanout

2021-04-20 Thread Vivek Kasireddy
Creating a small helper function for updating the scanout
will be useful in the next patch where this needs to be
done early in do_set_scanout before returning.

Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu.c | 35 +++
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 511addf811..0b432feb00 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -601,6 +601,28 @@ static void virtio_unref_resource(pixman_image_t *image, 
void *data)
 pixman_image_unref(data);
 }
 
+static void virtio_gpu_update_scanout(VirtIOGPU *g,
+  uint32_t scanout_id,
+  struct virtio_gpu_simple_resource *res,
+  struct virtio_gpu_rect *r)
+{
+struct virtio_gpu_simple_resource *ores;
+struct virtio_gpu_scanout *scanout;
+
+scanout = &g->parent_obj.scanout[scanout_id];
+ores = virtio_gpu_find_resource(g, scanout->resource_id);
+if (ores) {
+ores->scanout_bitmask &= ~(1 << scanout_id);
+}
+
+res->scanout_bitmask |= (1 << scanout_id);
+scanout->resource_id = res->resource_id;
+scanout->x = r->x;
+scanout->y = r->y;
+scanout->width = r->width;
+scanout->height = r->height;
+}
+
 static void virtio_gpu_do_set_scanout(VirtIOGPU *g,
   uint32_t scanout_id,
   struct virtio_gpu_framebuffer *fb,
@@ -608,7 +630,6 @@ static void virtio_gpu_do_set_scanout(VirtIOGPU *g,
   struct virtio_gpu_rect *r,
   uint32_t *error)
 {
-struct virtio_gpu_simple_resource *ores;
 struct virtio_gpu_scanout *scanout;
 uint8_t *data;
 
@@ -668,17 +689,7 @@ static void virtio_gpu_do_set_scanout(VirtIOGPU *g,
 scanout->ds);
 }
 
-ores = virtio_gpu_find_resource(g, scanout->resource_id);
-if (ores) {
-ores->scanout_bitmask &= ~(1 << scanout_id);
-}
-
-res->scanout_bitmask |= (1 << scanout_id);
-scanout->resource_id = res->resource_id;
-scanout->x = r->x;
-scanout->y = r->y;
-scanout->width = r->width;
-scanout->height = r->height;
+virtio_gpu_update_scanout(g, scanout_id, res, r);
 }
 
 static void virtio_gpu_set_scanout(VirtIOGPU *g,
-- 
2.26.2




[PATCH v2 07/12] virtio-gpu: Add virtio_gpu_resource_create_blob

2021-04-20 Thread Vivek Kasireddy
This API allows Qemu to register the blob allocated by the Guest
as a new resource and map its backing storage.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/trace-events  |  1 +
 hw/display/virtio-gpu-udmabuf.c  |  9 +++-
 hw/display/virtio-gpu.c  | 74 ++--
 include/hw/virtio/virtio-gpu-bswap.h |  9 
 include/hw/virtio/virtio-gpu.h   |  2 +
 5 files changed, 90 insertions(+), 5 deletions(-)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index 957b8ba994..99e5256aac 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -42,6 +42,7 @@ virtio_gpu_cmd_get_edid(uint32_t scanout) "scanout %d"
 virtio_gpu_cmd_set_scanout(uint32_t id, uint32_t res, uint32_t w, uint32_t h, 
uint32_t x, uint32_t y) "id %d, res 0x%x, w %d, h %d, x %d, y %d"
 virtio_gpu_cmd_res_create_2d(uint32_t res, uint32_t fmt, uint32_t w, uint32_t 
h) "res 0x%x, fmt 0x%x, w %d, h %d"
 virtio_gpu_cmd_res_create_3d(uint32_t res, uint32_t fmt, uint32_t w, uint32_t 
h, uint32_t d) "res 0x%x, fmt 0x%x, w %d, h %d, d %d"
+virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" 
PRId64
 virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
diff --git a/hw/display/virtio-gpu-udmabuf.c b/hw/display/virtio-gpu-udmabuf.c
index 2c0e7b2455..8bbce08550 100644
--- a/hw/display/virtio-gpu-udmabuf.c
+++ b/hw/display/virtio-gpu-udmabuf.c
@@ -72,7 +72,10 @@ static void virtio_gpu_create_udmabuf(struct 
virtio_gpu_simple_resource *res)
 
 static void virtio_gpu_remap_udmabuf(struct virtio_gpu_simple_resource *res)
 {
-res->remapsz = res->width * res->height * 4;
+if (res->blob_size) {
+res->remapsz = res->blob_size;
+}
+
 res->remapsz = QEMU_ALIGN_UP(res->remapsz, qemu_real_host_page_size);
 
 res->remapped = mmap(NULL, res->remapsz, PROT_READ,
@@ -152,7 +155,9 @@ void virtio_gpu_init_udmabuf(struct 
virtio_gpu_simple_resource *res)
 pdata = res->remapped;
 }
 
-(void) pdata;
+if (pdata) {
+res->blob = pdata;
+}
 }
 
 void virtio_gpu_fini_udmabuf(struct virtio_gpu_simple_resource *res)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 8b67d39889..511addf811 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -181,7 +181,7 @@ virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t 
resource_id,
 }
 
 if (require_backing) {
-if (!res->iov || !res->image) {
+if (!res->iov || (!res->image && !res->blob)) {
 qemu_log_mask(LOG_GUEST_ERROR, "%s: no backing storage %d\n",
   caller, resource_id);
 if (error) {
@@ -357,6 +357,63 @@ static void virtio_gpu_resource_create_2d(VirtIOGPU *g,
 g->hostmem += res->hostmem;
 }
 
+static void virtio_gpu_resource_create_blob(VirtIOGPU *g,
+struct virtio_gpu_ctrl_command 
*cmd)
+{
+struct virtio_gpu_simple_resource *res;
+struct virtio_gpu_resource_create_blob cblob;
+int ret;
+
+VIRTIO_GPU_FILL_CMD(cblob);
+virtio_gpu_create_blob_bswap(&cblob);
+trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+if (cblob.resource_id == 0) {
+qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+  __func__);
+cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+return;
+}
+
+res = virtio_gpu_find_resource(g, cblob.resource_id);
+if (res) {
+qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+  __func__, cblob.resource_id);
+cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+return;
+}
+
+res = g_new0(struct virtio_gpu_simple_resource, 1);
+res->resource_id = cblob.resource_id;
+res->blob_size = cblob.size;
+
+if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_GUEST &&
+cblob.blob_flags != VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE) {
+qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid memory type\n",
+  __func__);
+cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+g_free(res);
+return;
+}
+
+if (res->iov) {
+cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+return;
+}
+
+ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
+cmd, &res->addrs, &res->iov);
+if (ret != 0) {
+cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+return;
+}
+
+res->iov_cnt = cblob.nr_entries;
+virtio_gpu_init_udmabuf(res);
+
+QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
 static void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id)
 {
 struct virtio_gpu_scanout *scanout = &g->parent_obj.scanout[scanout_id];
@@ -434,7 +491,7 @@ static void virti

[PATCH v2 09/12] virtio-gpu: Add helpers to create and destroy dmabuf objects

2021-04-20 Thread Vivek Kasireddy
These helpers can be useful for creating dmabuf objects from blobs
and submitting them to the UI.

Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu-udmabuf.c | 87 +
 include/hw/virtio/virtio-gpu.h  | 15 ++
 2 files changed, 102 insertions(+)

diff --git a/hw/display/virtio-gpu-udmabuf.c b/hw/display/virtio-gpu-udmabuf.c
index 8bbce08550..40a5da0d30 100644
--- a/hw/display/virtio-gpu-udmabuf.c
+++ b/hw/display/virtio-gpu-udmabuf.c
@@ -167,6 +167,84 @@ void virtio_gpu_fini_udmabuf(struct 
virtio_gpu_simple_resource *res)
 }
 }
 
+static void virtio_gpu_free_one_dmabuf(VirtIOGPU *g, VGPUDMABuf *dmabuf,
+   uint32_t scanout_id)
+{
+struct virtio_gpu_scanout *scanout = &g->parent_obj.scanout[scanout_id];
+
+QTAILQ_REMOVE(&g->dmabuf.bufs, dmabuf, next);
+dpy_gl_release_dmabuf(scanout->con, &dmabuf->buf);
+g_free(dmabuf);
+}
+
+static void virtio_gpu_free_dmabufs(VirtIOGPU *g,
+uint32_t scanout_id)
+{
+VGPUDMABuf *dmabuf, *tmp;
+
+QTAILQ_FOREACH_SAFE(dmabuf, &g->dmabuf.bufs, next, tmp) {
+if (dmabuf != g->dmabuf.primary &&
+dmabuf->scanout_id == scanout_id) {
+virtio_gpu_free_one_dmabuf(g, dmabuf, scanout_id);
+break;
+}
+}
+}
+
+static VGPUDMABuf
+*virtio_gpu_create_dmabuf(VirtIOGPU *g,
+  uint32_t scanout_id,
+  struct virtio_gpu_simple_resource *res,
+  struct virtio_gpu_framebuffer *fb)
+{
+VGPUDMABuf *dmabuf;
+
+if (res->dmabuf_fd < 0) {
+return NULL;
+}
+
+dmabuf = g_new0(VGPUDMABuf, 1);
+dmabuf->buf.width = fb->width;
+dmabuf->buf.height = fb->height;
+dmabuf->buf.stride = fb->stride;
+dmabuf->buf.fourcc = qemu_pixman_to_drm_format(fb->format);
+dmabuf->buf.fd = res->dmabuf_fd;
+
+dmabuf->scanout_id = scanout_id;
+QTAILQ_INSERT_HEAD(&g->dmabuf.bufs, dmabuf, next);
+
+return dmabuf;
+}
+
+int virtio_gpu_update_dmabuf(VirtIOGPU *g,
+ uint32_t scanout_id,
+ struct virtio_gpu_simple_resource *res,
+ struct virtio_gpu_framebuffer *fb)
+{
+struct virtio_gpu_scanout *scanout = &g->parent_obj.scanout[scanout_id];
+VGPUDMABuf *primary;
+bool free_bufs = false;
+
+primary = virtio_gpu_create_dmabuf(g, scanout_id, res, fb);
+if (!primary) {
+return -EINVAL;
+}
+
+if (g->dmabuf.primary != primary) {
+g->dmabuf.primary = primary;
+qemu_console_resize(scanout->con,
+primary->buf.width, primary->buf.height);
+dpy_gl_scanout_dmabuf(scanout->con, &primary->buf);
+free_bufs = true;
+}
+
+if (free_bufs) {
+virtio_gpu_free_dmabufs(g, scanout_id);
+}
+
+return 0;
+}
+
 #else
 
 bool virtio_gpu_have_udmabuf(void)
@@ -185,4 +263,13 @@ void virtio_gpu_fini_udmabuf(struct 
virtio_gpu_simple_resource *res)
 /* nothing (stub) */
 }
 
+int virtio_gpu_update_dmabuf(VirtIOGPU *g,
+ uint32_t scanout_id,
+ struct virtio_gpu_simple_resource *res,
+ struct virtio_gpu_framebuffer *fb)
+{
+/* nothing (stub) */
+return 0;
+}
+
 #endif
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index a65215dc52..1efb97136d 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -149,6 +149,12 @@ struct VirtIOGPUBaseClass {
 DEFINE_PROP_UINT32("xres", _state, _conf.xres, 1024), \
 DEFINE_PROP_UINT32("yres", _state, _conf.yres, 768)
 
+typedef struct VGPUDMABuf {
+QemuDmaBuf buf;
+uint32_t scanout_id;
+QTAILQ_ENTRY(VGPUDMABuf) next;
+} VGPUDMABuf;
+
 struct VirtIOGPU {
 VirtIOGPUBase parent_obj;
 
@@ -179,6 +185,11 @@ struct VirtIOGPU {
 uint32_t req_3d;
 uint32_t bytes_3d;
 } stats;
+
+struct {
+QTAILQ_HEAD(, VGPUDMABuf) bufs;
+VGPUDMABuf *primary;
+} dmabuf;
 };
 
 struct VhostUserGPU {
@@ -236,6 +247,10 @@ void virtio_gpu_process_cmdq(VirtIOGPU *g);
 bool virtio_gpu_have_udmabuf(void);
 void virtio_gpu_init_udmabuf(struct virtio_gpu_simple_resource *res);
 void virtio_gpu_fini_udmabuf(struct virtio_gpu_simple_resource *res);
+int virtio_gpu_update_dmabuf(VirtIOGPU *g,
+ uint32_t scanout_id,
+ struct virtio_gpu_simple_resource *res,
+ struct virtio_gpu_framebuffer *fb);
 
 /* virtio-gpu-3d.c */
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
-- 
2.26.2




[PATCH v2 11/12] virtio-gpu: Add virtio_gpu_set_scanout_blob

2021-04-20 Thread Vivek Kasireddy
This API allows Qemu to set the blob allocated by the Guest as
the scanout buffer. If OpenGL support is available, the scanout
buffer is submitted as a dmabuf to the UI; if not, a pixman image
is created from the scanout buffer and submitted to the UI via the
display surface.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/trace-events  |   1 +
 hw/display/virtio-gpu.c  | 105 +--
 include/hw/virtio/virtio-gpu-bswap.h |   7 ++
 3 files changed, 105 insertions(+), 8 deletions(-)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index 99e5256aac..96fe1ea3de 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -40,6 +40,7 @@ virtio_gpu_features(bool virgl) "virgl %d"
 virtio_gpu_cmd_get_display_info(void) ""
 virtio_gpu_cmd_get_edid(uint32_t scanout) "scanout %d"
 virtio_gpu_cmd_set_scanout(uint32_t id, uint32_t res, uint32_t w, uint32_t h, 
uint32_t x, uint32_t y) "id %d, res 0x%x, w %d, h %d, x %d, y %d"
+virtio_gpu_cmd_set_scanout_blob(uint32_t id, uint32_t res, uint32_t w, 
uint32_t h, uint32_t x, uint32_t y) "id %d, res 0x%x, w %d, h %d, x %d, y %d"
 virtio_gpu_cmd_res_create_2d(uint32_t res, uint32_t fmt, uint32_t w, uint32_t 
h) "res 0x%x, fmt 0x%x, w %d, h %d"
 virtio_gpu_cmd_res_create_3d(uint32_t res, uint32_t fmt, uint32_t w, uint32_t 
h, uint32_t d) "res 0x%x, fmt 0x%x, w %d, h %d, d %d"
 virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" 
PRId64
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 0b432feb00..4399fb78f7 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -448,7 +448,9 @@ static void virtio_gpu_resource_destroy(VirtIOGPU *g,
 }
 }
 
-pixman_image_unref(res->image);
+if (res->image) {
+pixman_image_unref(res->image);
+}
 virtio_gpu_cleanup_mapping(g, res);
 QTAILQ_REMOVE(&g->reslist, res, next);
 g->hostmem -= res->hostmem;
@@ -537,6 +539,7 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
 {
 struct virtio_gpu_simple_resource *res;
 struct virtio_gpu_resource_flush rf;
+struct virtio_gpu_scanout *scanout;
 pixman_region16_t flush_region;
 int i;
 
@@ -547,16 +550,28 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
 
 res = virtio_gpu_find_check_resource(g, rf.resource_id, false,
  __func__, &cmd->error);
-if (!res || res->blob) {
+if (!res) {
 return;
 }
 
-if (rf.r.x > res->width ||
+if (res->blob && display_opengl) {
+for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
+scanout = &g->parent_obj.scanout[i];
+if (scanout->resource_id == res->resource_id) {
+dpy_gl_update(scanout->con, 0, 0, scanout->width,
+  scanout->height);
+return;
+}
+}
+}
+
+if (!res->blob &&
+(rf.r.x > res->width ||
 rf.r.y > res->height ||
 rf.r.width > res->width ||
 rf.r.height > res->height ||
 rf.r.x + rf.r.width > res->width ||
-rf.r.y + rf.r.height > res->height) {
+rf.r.y + rf.r.height > res->height)) {
 qemu_log_mask(LOG_GUEST_ERROR, "%s: flush bounds outside resource"
   " bounds for resource %d: %d %d %d %d vs %d %d\n",
   __func__, rf.resource_id, rf.r.x, rf.r.y,
@@ -568,7 +583,6 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
 pixman_region_init_rect(&flush_region,
 rf.r.x, rf.r.y, rf.r.width, rf.r.height);
 for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
-struct virtio_gpu_scanout *scanout;
 pixman_region16_t region, finalregion;
 pixman_box16_t *extents;
 
@@ -659,10 +673,23 @@ static void virtio_gpu_do_set_scanout(VirtIOGPU *g,
 }
 
 g->parent_obj.enable = 1;
-data = (uint8_t *)pixman_image_get_data(res->image);
+
+if (res->blob) {
+if (display_opengl) {
+if (!virtio_gpu_update_dmabuf(g, scanout_id, res, fb)) {
+virtio_gpu_update_scanout(g, scanout_id, res, r);
+return;
+}
+}
+
+data = res->blob;
+} else {
+data = (uint8_t *)pixman_image_get_data(res->image);
+}
 
 /* create a surface for this scanout */
-if (!scanout->ds ||
+if ((res->blob && !display_opengl) ||
+!scanout->ds ||
 surface_data(scanout->ds) != data + fb->offset ||
 scanout->width != r->width ||
 scanout->height != r->height) {
@@ -726,6 +753,61 @@ static void virtio_gpu_set_scanout(VirtIOGPU *g,
   &fb, res, &ss.r, &cmd->error);
 }
 
+static void virtio_gpu_set_scanout_blob(VirtIOGPU *g,
+struct virtio_gpu_ctrl_command *cmd)
+{
+struct virtio_gpu_simple_resource *res;
+stru

[PATCH v2 12/12] virtio-gpu: Update cursor data using blob

2021-04-20 Thread Vivek Kasireddy
If a blob is available for the cursor, copy the data from the blob.

Based-on-patch-by: Gerd Hoffmann 
Signed-off-by: Vivek Kasireddy 
---
 hw/display/virtio-gpu.c | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 4399fb78f7..30be1f2602 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -66,6 +66,7 @@ static void update_cursor_data_simple(VirtIOGPU *g,
 {
 struct virtio_gpu_simple_resource *res;
 uint32_t pixels;
+void *data;
 
 res = virtio_gpu_find_check_resource(g, resource_id, false,
  __func__, NULL);
@@ -73,14 +74,22 @@ static void update_cursor_data_simple(VirtIOGPU *g,
 return;
 }
 
-if (pixman_image_get_width(res->image)  != s->current_cursor->width ||
-pixman_image_get_height(res->image) != s->current_cursor->height) {
-return;
+if (res->blob_size) {
+if (res->blob_size < (s->current_cursor->width *
+  s->current_cursor->height * 4)) {
+return;
+}
+data = res->blob;
+} else {
+if (pixman_image_get_width(res->image)  != s->current_cursor->width ||
+pixman_image_get_height(res->image) != s->current_cursor->height) {
+return;
+}
+data = pixman_image_get_data(res->image);
 }
 
 pixels = s->current_cursor->width * s->current_cursor->height;
-memcpy(s->current_cursor->data,
-   pixman_image_get_data(res->image),
+memcpy(s->current_cursor->data, data,
pixels * sizeof(uint32_t));
 }
 
-- 
2.26.2




Re: [PATCH 01/14] hw/block/nvme: rename __nvme_zrm_open

2021-04-20 Thread Klaus Jensen

On Apr 20 07:53, Thomas Huth wrote:

On 19/04/2021 21.27, Klaus Jensen wrote:

From: Klaus Jensen 

Get rid of the (reserved) double underscore use.

Cc: Philippe Mathieu-Daudé 
Cc: Thomas Huth 
Signed-off-by: Klaus Jensen 
---
 hw/block/nvme.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)


I think it would be good to mention the change with NVME_ZRM_AUTO in 
the patch description, too.


Apart from that:
Reviewed-by: Thomas Huth 



Makes sense, thanks!




Re: [RFC PATCH 00/11] RISC-V: support clic v0.9 specification

2021-04-20 Thread LIU Zhiwei



On 2021/4/20 下午2:26, Alistair Francis wrote:

On Tue, Apr 20, 2021 at 11:45 AM LIU Zhiwei  wrote:


On 2021/4/20 上午7:30, Alistair Francis wrote:

On Fri, Apr 9, 2021 at 5:56 PM LIU Zhiwei  wrote:

This patch set gives an implementation of "RISC-V Core-Local Interrupt
Controller(CLIC) Version 0.9-draft-20210217". It comes from [1], where
you can find the pdf format or the source code.

I took over the job from Michael Clark, who gave the first implementation
of the clic-v0.7 specification. If there is any copyright question, please
let me know.

You need to make sure you leave all original copyright notices and SoB in place.

OK.

Is it OK to keep the original copyright notices for new files and
your SoB in every patch, Michael?


Features:
1. support four kinds of trigger types.
2. Preserve the CSR WARL/WPRI semantics.
3. Option to select different modes, such as M/S/U or M/U.
4. At most 4096 interrupts.
5. At most 1024 apertures.

Todo:
1. Encode the interrupt trigger information to exccode.
2. Support complete CSR mclicbase when its number is fixed.
3. Give each aperture an independent address.

It has passed my qtest case and FreeRTOS test. You are welcome to give it
a try on your hardware.

It doesn't seem to be connected to any machine. How are you testing this?

There is a machine called SMARTL in my repository [1]. The qtest case
is tests/qtest/test-riscv32-clic.c. If it's better, I can upstream the
machine together with it in the next version.

I don't really want to add a new hardware device when it isn't
connected to a machine. It would be great if we could connect it to a
machine. If not SMARTL, maybe we could add it as an option to the virt
machine?

Currently it is good to connect CLIC to the virt machine. I can fix it in
the next version if it is OK for you.

What is SMARTL? Is that a publically available board?


SMARTL is an FPGA evaluation board. We usually use it to debug programs
for the XuanTie CPU series.
It has a 32-bit CPU, 1 UART, 4 timers, and the CLIC interrupt
controller. I will provide detailed documentation when I upstream it.

There are still many other RISC-V boards, but they are more complex. I plan to
upstream the XuanTie CPU and some widely used boards after the P extension
and CLIC.

Zhiwei



Alistair


Zhiwei

[1]https://github.com/romanheros/qemu, branch: riscv-clic-upstream-rfc



Alistair


Any advice is welcomed. Thanks very much.

Zhiwei

[1] specification website: https://github.com/riscv/riscv-fast-interrupt.
[2] Michael Clark origin work: 
https://github.com/sifive/riscv-qemu/tree/sifive-clic.


LIU Zhiwei (11):
target/riscv: Add CLIC CSR mintstatus
target/riscv: Update CSR xintthresh in CLIC mode
hw/intc: Add CLIC device
target/riscv: Update CSR xie in CLIC mode
target/riscv: Update CSR xip in CLIC mode
target/riscv: Update CSR xtvec in CLIC mode
target/riscv: Update CSR xtvt in CLIC mode
target/riscv: Update CSR xnxti in CLIC mode
target/riscv: Update CSR mclicbase in CLIC mode
target/riscv: Update interrupt handling in CLIC mode
target/riscv: Update interrupt return in CLIC mode

   default-configs/devices/riscv32-softmmu.mak |   1 +
   default-configs/devices/riscv64-softmmu.mak |   1 +
   hw/intc/Kconfig |   3 +
   hw/intc/meson.build |   1 +
   hw/intc/riscv_clic.c| 836 
   include/hw/intc/riscv_clic.h| 103 +++
   target/riscv/cpu.h  |   9 +
   target/riscv/cpu_bits.h |  32 +
   target/riscv/cpu_helper.c   | 117 ++-
   target/riscv/csr.c  | 247 +-
   target/riscv/op_helper.c|  25 +
   11 files changed, 1363 insertions(+), 12 deletions(-)
   create mode 100644 hw/intc/riscv_clic.c
   create mode 100644 include/hw/intc/riscv_clic.h

--
2.25.1






Re: [PATCH v3] memory: Directly dispatch alias accesses on origin memory region

2021-04-20 Thread Mark Cave-Ayland

On 19/04/2021 22:11, Philippe Mathieu-Daudé wrote:


My patch might not be the proper fix, but we need to figure out how
to avoid others to hit the same problem, as it is very hard to debug.

At least an assertion and a comment.


Something like:

-- >8 --
diff --git a/softmmu/memory.c b/softmmu/memory.c
index d4493ef9e43..e031ac6e074 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1442,6 +1442,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
 unsigned size = memop_size(op);
 MemTxResult r;

+assert(!(mr->alias && !mr->alias_offset)); /* Use AddressSpace API instead */
 if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
 *pval = unassigned_mem_read(mr, addr, size);
 return MEMTX_DECODE_ERROR;


AFAICT the dispatch functions don't handle rom/ram or aliases so you might need to go 
a bit stronger. The check is further complicated by the fact that you can have 
rom/ram devices which do define the underlying mr->ops.


I'm wondering for memory_region_dispatch_read() if this would work?

   assert(!mr->alias && !memory_access_is_direct(mr, false));
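
Purely as an illustration (untested, and assuming memory_access_is_direct()
is usable from softmmu/memory.c), that stronger check would sit right at the
top of the dispatch function, e.g.:

-- >8 --
diff --git a/softmmu/memory.c b/softmmu/memory.c
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1442,6 +1442,8 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
 unsigned size = memop_size(op);
 MemTxResult r;
 
+/* Aliases and direct RAM/ROM should go through the AddressSpace API. */
+assert(!mr->alias && !memory_access_is_direct(mr, false));
 if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
 *pval = unassigned_mem_read(mr, addr, size);
 return MEMTX_DECODE_ERROR;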


ATB,

Mark.



[Bug 1914236] Re: QEMU: scsi: use-after-free in mptsas_process_scsi_io_request() of mptsas1068 emulator

2021-04-20 Thread Mauro Matteo Cascella
Upstream commit:
https://git.qemu.org/?p=qemu.git;a=commit;h=3791642c8d60029adf9b00bcb4e34d7d8a1aea4d

** Changed in: qemu
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1914236

Title:
  QEMU: scsi: use-after-free in mptsas_process_scsi_io_request() of
  mptsas1068 emulator

Status in QEMU:
  Fix Committed

Bug description:
  * Cheolwoo Myung of Seoul National University reported a use-after-free issue
in the mptsas1068 SCSI emulator of QEMU.

  * It occurs while handling mptsas_process_scsi_io_request(), as it does not
check a list in s->pending.

  * This was found in version 5.2.0 (master)

  ==31872==ERROR: AddressSanitizer: heap-use-after-free on address
  0x60c000107568 at pc 0x564514950c7c bp 0x7fff524ef4b0 sp 0x7fff524ef4a0 WRITE 
of size 8 at 0x60c000107568 thread T0
  #0 0x564514950c7b in mptsas_process_scsi_io_request ../hw/scsi/mptsas.c:306
  #1 0x564514950c7b in mptsas_fetch_request ../hw/scsi/mptsas.c:775
  #2 0x564514950c7b in mptsas_fetch_requests ../hw/scsi/mptsas.c:790
  #3 0x56451585c25d in aio_bh_poll ../util/async.c:164
  #4 0x5645158d7e7d in aio_dispatch ../util/aio-posix.c:381
  #5 0x56451585be2d in aio_ctx_dispatch ../util/async.c:306
  #6 0x7f1cc8af4416 in g_main_context_dispatch
  (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x4c416)
  #7 0x56451583f059 in glib_pollfds_poll ../util/main-loop.c:221
  #8 0x56451583f059 in os_host_main_loop_wait ../util/main-loop.c:244
  #9 0x56451583f059 in main_loop_wait ../util/main-loop.c:520
  #10 0x56451536b181 in qemu_main_loop ../softmmu/vl.c:1537
  #11 0x5645143ddd3d in main ../softmmu/main.c:50
  #12 0x7f1cc2650b96 in __libc_start_main
  (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
  #13 0x5645143eece9 in _start
  (/home/cwmyung/prj/hyfuzz/src/qemu-repro/build/qemu-system-i386+0x1d55ce9)

  0x60c000107568 is located 104 bytes inside of 120-byte region
  [0x60c000107500,0x60c000107578)
  freed by thread T0 here:
  #0 0x7f1cca9777a8 in __interceptor_free
  (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xde7a8)
  #1 0x56451495008b in mptsas_process_scsi_io_request ../hw/scsi/mptsas.c:358
  #2 0x56451495008b in mptsas_fetch_request ../hw/scsi/mptsas.c:775
  #3 0x56451495008b in mptsas_fetch_requests ../hw/scsi/mptsas.c:790
  #4 0x7fff524ef8bf ()

  previously allocated by thread T0 here:
  #0 0x7f1cca977d28 in __interceptor_calloc
  (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded28)
  #1 0x7f1cc8af9b10 in g_malloc0
  (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x51b10)
  #2 0x7fff524ef8bf ()

  SUMMARY: AddressSanitizer: heap-use-after-free ../hw/scsi/mptsas.c:306
  in mptsas_process_scsi_io_request
  Shadow bytes around the buggy address:
  0x0c1880018e50: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c1880018e60: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  0x0c1880018e70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c1880018e80: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c1880018e90: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  =>0x0c1880018ea0: fd fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd fa
  0x0c1880018eb0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c1880018ec0: 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa fa
  0x0c1880018ed0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c1880018ee0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c1880018ef0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable: 00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone: fa
  Freed heap region: fd
  Stack left redzone: f1
  Stack mid redzone: f2
  Stack right redzone: f3
  Stack after return: f5
  Stack use after scope: f8
  Global redzone: f9
  Global init order: f6
  Poisoned by user: f7
  Container overflow: fc
  Array cookie: ac
  Intra object redzone: bb
  ASan internal: fe
  Left alloca redzone: ca
  Right alloca redzone: cb
  ==31872==ABORTING

  
  To reproduce this issue, please run the QEMU with the following command
  line.

  
  # To enable ASan option, please set configuration with the following command
  $ ./configure --target-list=i386-softmmu --disable-werror --enable-sanitizers
  $ make

  # To reproduce this issue, please run the QEMU process with the
  following command line.
  $ ./qemu-system-i386 -m 512 -drive
  file=./hyfuzz.img,index=0,media=disk,format=raw -device
  mptsas1068,id=scsi -device scsi-hd,drive=SysDisk -drive
  id=SysDisk,if=none,file=./disk.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1914236/+subscriptions



[PATCH v2] hw/riscv: Fix OT IBEX reset vector

2021-04-20 Thread Alexander Wagner
The IBEX documentation [1] specifies the reset vector to be "the most
significant 3 bytes of the boot address and the reset value (0x80) as
the least significant byte".

[1] 
https://github.com/lowRISC/ibex/blob/master/doc/03_reference/exception_interrupts.rst
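
As a stand-alone illustration (not part of the patch; the 0x8000 boot address
is only an assumed example), the documented rule gives:

#include <stdint.h>
#include <stdio.h>

/* Most significant 3 bytes of the boot address, reset value 0x80 as LSB. */
static uint32_t ibex_reset_vector(uint32_t boot_addr)
{
    return (boot_addr & 0xFFFFFF00u) | 0x80u;
}

int main(void)
{
    /* prints 0x00008080, matching the value set below */
    printf("0x%08x\n", (unsigned)ibex_reset_vector(0x8000));
    return 0;
}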

Signed-off-by: Alexander Wagner 
Reviewed-by: Alistair Francis 
---
 hw/riscv/opentitan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/riscv/opentitan.c b/hw/riscv/opentitan.c
index e168bffe69..ca4c1be6f6 100644
--- a/hw/riscv/opentitan.c
+++ b/hw/riscv/opentitan.c
@@ -120,7 +120,7 @@ static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, 
Error **errp)
 &error_abort);
 object_property_set_int(OBJECT(&s->cpus), "num-harts", ms->smp.cpus,
 &error_abort);
-object_property_set_int(OBJECT(&s->cpus), "resetvec", 0x8090, 
&error_abort);
+object_property_set_int(OBJECT(&s->cpus), "resetvec", 0x8080, 
&error_abort);
 sysbus_realize(SYS_BUS_DEVICE(&s->cpus), &error_abort);
 
 /* Boot ROM */
-- 
2.25.1




[Bug 1814381] Re: qemu can't resolve ::1 when no network is available

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814381

Title:
  qemu can't resolve ::1 when no network is available

Status in QEMU:
  Incomplete

Bug description:
  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:

    $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
    qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:

    qemu-img: Could not open
  'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
  socket: Connection refused

  I can ‘ping6 ::1’ fine.

  It also works if I replace ‘::1’ with ‘localhost6’.

  My /etc/hosts contains:

  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
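
  To narrow down whether this is qemu or getaddrinfo itself, a minimal
  stand-alone check could look like the sketch below (not from the original
  report; QEMU's NBD code may pass different hint flags, so this is only an
  approximation):

  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  int main(void)
  {
      struct addrinfo hints;
      struct addrinfo *res = NULL;
      int rc;

      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;
      hints.ai_socktype = SOCK_STREAM;

      rc = getaddrinfo("::1", "1234", &hints, &res);
      if (rc != 0) {
          printf("getaddrinfo failed: %s\n", gai_strerror(rc));
      } else {
          printf("resolved ::1 fine\n");
          freeaddrinfo(res);
      }
      return 0;
  }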

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814381/+subscriptions



[Bug 1814343] Re: Initrd not loaded on riscv32

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: Confirmed => Incomplete

** Tags added: riscv

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814343

Title:
  Initrd not loaded on riscv32

Status in QEMU:
  Incomplete

Bug description:
  I attempted to run qemu with a ram disk. However, when reading the
  contents of the disk from within the VM I only get back zeros.

  I was able to trace the issue to a mismatch of expectations on line 93
  of hw/riscv/virt.c. Specifically, when running in 32-bit mode the
  value of kernel_entry is sign extended to 64-bits, but
  load_image_targphys expects the start address to not be sign extended.

  Straw man patch (works for 32-bit but would probably break 64-bit
  VMs?):

  diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
  index e7f0716fb6..32216f993c 100644
  --- a/hw/riscv/virt.c
  +++ b/hw/riscv/virt.c
  @@ -90,7 +90,7 @@ static hwaddr load_initrd(const char *filename, uint64_t 
mem_size,
* halfway into RAM, and for boards with 256MB of RAM or more we put
* the initrd at 128MB.
*/
  -*start = kernel_entry + MIN(mem_size / 2, 128 * MiB);
   +*start = (kernel_entry & 0xffffffff) + MIN(mem_size / 2, 128 * MiB);
   
   size = load_ramdisk(filename, *start, mem_size - *start);
   if (size == -1) {

  
  Run command:

  $ qemu/build/riscv32-softmmu/qemu-system-riscv32 -machine virt -kernel
  mykernel.elf -nographic -initrd payload

  Commit hash:

  3a183e330dbd7dbcac3841737ac874979552cca2

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814343/+subscriptions



[Bug 1815993] Re: drive-backup with iscsi will cause vm disk no response

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1815993

Title:
  drive-backup with iscsi will cause vm disk no response

Status in QEMU:
  Incomplete

Bug description:
  virsh qemu-monitor-command ${DOMAIN} '{ "execute" : "drive-backup" ,
  "arguments" : { "device" : "drive-virtio-disk0" , "sync" : "top" ,
  "target" : "iscsi://192.168.1.100:3260/iqn.2019-01.com.iaas/0" } }'

  When the drive-backup is running, I manually crash the iscsi server(or
  interrupt network, eg. iptables -j DROP).

  Then after less than 1 minute:
  virsh qemu-monitor-command ${DOMAIN} --pretty '{ "execute": "query-block" }' 
will block with no response until it times out. This is still excusable.
  But the disk (drive-virtio-disk0) runs into the same situation: in the guest OS, the
disk blocks with no response.

  In other words, when qemu and the iscsi server lose contact, the VM
  becomes unusable.

  ---
  Host: centos 7.5
  qemu version: ovirt-4.2(qemu-2.12.0)
  qemu command line: qemu-system-x86_64 -name guest=test,debug-threads=on -S 
-object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-190-test./master-key.aes
 -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -m 
1024 -mem-prealloc -mem-path /dev/hugepages1G/libvirt/qemu/190-test -realtime 
mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 
1c8611c2-a18a-4b1c-b40b-9d82040eafa4 -smbios type=1,manufacturer=IaaS 
-no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive 
file=/opt/vol/sas/fb0c7c37-13e7-41fe-b3f8-f0fbaaeec7ce,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
 -drive 
file=/opt/vol/sas/bde66671-536d-49cd-8b46-a4f1ea7be513,format=qcow2,if=none,id=drive-virtio-disk1,cache=writeback
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,write-cache=on
 -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:85:45:3e:d4:3a,bus=pci.0,addr=0x6 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-chardev socket,id=charchannel0,fd=35,server,nowait -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,password -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on

  iscsi:
  yum -y install targetcli python-rtslib
  systemctl start target
  systemctl enable target

  targetcli /iscsi create iqn.2019-01.com.iaas

  targetcli /iscsi/iqn.2019-01.com.iaas/tpg1 set attribute
  authentication=0 demo_mode_write_protect=0 generate_node_acls=1

  targetcli /iscsi/iqn.2019-01.com.iaas/tpg1/portals create 192.168.1.100 3260
  targetcli /backstores/fileio create testfile1 /backup/file1 2G
  targetcli /iscsi/iqn.2019-01.com.iaas/tpg1/luns create 
/backstores/fileio/testfile1

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1815993/+subscriptions



[Bug 1799766] Re: -device does not work as -drive do

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1799766

Title:
  -device does not work as -drive do

Status in QEMU:
  Incomplete

Bug description:
  Copy/paste of https://stackoverflow.com/questions/52929723/qemu-eject-
  complains-device-is-not-found-while-it-is-there , since I found this
  bug trying to find an answer to an own question on Stack Overflow.

  Below, what was my question the answer I wrote, all exposes the bug.

  

  
  I need to eject a floppy from QEmu 3.0 monitor, but the command surprisingly 
fails complaining the device is not found, while it is really there.

  Listing of devices:

  (qemu) info block
  fda: dos-6-22/Dos622-1.img (raw)
  Attached to:  /machine/unattached/device[11]
  Removable device: not locked, tray closed
  Cache mode:   writeback

  hda: hda.img (raw)
  Attached to:  /machine/peripheral-anon/device[1]
  Cache mode:   writeback

  Eject command result:

  (qemu) eject fda
  Device 'fda' not found

  This is so although this documentation says this is how I have to do it:
  https://www.linux-kvm.org/page/Change_cdrom (just that I want to eject
  the floppy instead of the CD‑ROM).

  The `change` command complains the same:

  (qemu) change fda dos-6-22/Dos622-2.img raw
  Device 'fda' not found

  Is this a bug or me doing something wrong?

  I tried using different node names, with always the same result.

  

  I’m posting as an answer, but I’m not strictly sure. I can just say,
  if I understand correctly, this is a bug.

  The answer comes in two parts.

  First part, is a stripped down failing invocation:

  qemu-system-i386 \
 -monitor stdio \
 -machine type=isapc,vmport=off \
 -blockdev driver=file,node-name=fda-img,filename=fda.img \
 -blockdev driver=raw,node-name=fda,file=fda-img \
 -global isa-fdc.driveA=fda

  (qemu) info block
  ide1-cd0: [not inserted]
  Attached to:  /machine/unattached/device[19]
  Removable device: not locked, tray closed

  sd0: [not inserted]
  Removable device: not locked, tray closed

  fda: fda.img (raw)
  Attached to:  /machine/unattached/device[13]
  Removable device: not locked, tray closed
  Cache mode:   writeback
  (qemu) eject fda
  Device 'fda' not found

  Second part, is the same without the last argument `-global isa-
  fdc.driveA=fda`:

  qemu-system-i386 \
 -monitor stdio \
 -machine type=isapc,vmport=off \
 -blockdev driver=file,node-name=fda-img,filename=fda.img \
 -blockdev driver=raw,node-name=fda,file=fda-img

  (qemu) info block
  ide1-cd0: [not inserted]
  Attached to:  /machine/unattached/device[19]
  Removable device: not locked, tray closed

  floppy0: [not inserted]
  Attached to:  /machine/unattached/device[13]
  Removable device: not locked, tray closed

  sd0: [not inserted]
  Removable device: not locked, tray closed
  (qemu) eject floppy0

  There is no more error when `-global isa-fdc.driveA=fda` is removed.
  However, the documentation says:

  > -global driver=driver,property=property,value=value
  > Set default value of driver’s property prop to value, e.g.:

  > qemu-system-i386 -global ide-hd.physical_block_size=4096 disk-image.img
  > In particular, you can **use this to set driver properties for devices 
which are created automatically by the machine model**. To create a device 
which is not created automatically and set properties on it, use -device.

  > -global driver.prop=value is shorthand for -global
  driver=driver,property=prop,value=value. The longhand syntax works
  even when driver contains a dot.

  What I put stress on in the quote suggests I'm not misusing
  `-global`, and that's most probably a bug.

  **Update for more details:**

  It seems that using `-drive` instead of `-device` and a `driveA` assignment,
  the result is not the same, although RedHat documentation recommends
  using `-device` instead of `-drive` and QEmu 3.0 documentation says
  `-drive` is ess

[Bug 1806040] Re: Nested VMX virtualization error on last Qemu versions

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1806040

Title:
  Nested VMX virtualization error on last Qemu versions

Status in QEMU:
  Incomplete

Bug description:
  Recently updated Qemu on a Sony VAIO sve14ag18m with Ubuntu Bionic
  4.15.0-38 from Git

  After launching a few VMs, I noticed that I could not create a snapshot due to
this error:
  "Nested VMX virtualization does not support live migration yet"

  I've created a new Windows 7 X64 machine with this compilation of Qemu
  and the problem persisted, so it's not because of the old machines.

  I launch Qemu with these params (I use them for malware analysis and re...):
  qemu-system-x86_64 -monitor stdio -display none -m 4096 -smp cpus=4 
-usbdevice tablet -drive 
file=VM.img,index=0,media=disk,format=qcow2,cache=unsafe -net 
nic,macaddr="" -net bridge,br=br0 -cpu host,-hypervisor,kvm=off -vnc 
127.0.0.1:0 -enable-kvm 

  
  Deleting the changes made in this commit solved the problem, but I don't have
any idea what this is for, so... xDD 
  https://github.com/qemu/qemu/commit/d98f26073bebddcd3da0ba1b86c3a34e840c0fb8

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1806040/+subscriptions



[Bug 1813398] Re: qemu user calls malloc after fork in multi-threaded process

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1813398

Title:
  qemu user calls malloc after fork in multi-threaded process

Status in QEMU:
  Incomplete

Bug description:
  qemu user may hang in malloc on a musl based system because
  it calls malloc after fork (in a pthread_atfork handler)
  in the child process.

  this is undefined behaviour since the parent process is
  multi-threaded and only as-safe functions may be called
  in the child then. (if malloc/free is called concurrently
  with fork the malloc state will be corrupted in the child,
  it works on glibc because glibc takes the malloc locks
  before the fork syscall, but that breaks the as-safety of
  fork and thus non-conforming to posix)
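
  A minimal stand-alone illustration of that hazard (not taken from qemu;
  whether it actually hangs depends on timing and on the libc; build with
  -pthread):

  #include <pthread.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  /* Child-side atfork handler: calling malloc() here is the undefined
   * behaviour described above, because the child inherits only the forking
   * thread while another thread may have held the allocator lock at the
   * moment of fork(). */
  static void child_handler(void)
  {
      free(malloc(16));   /* may deadlock or corrupt the heap */
  }

  /* Worker thread hammering the allocator so fork() is likely to happen
   * while the allocator lock is held. */
  static void *hammer(void *arg)
  {
      (void)arg;
      for (;;) {
          free(malloc(32));
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t;

      pthread_atfork(NULL, NULL, child_handler);
      pthread_create(&t, NULL, hammer, NULL);
      for (int i = 0; i < 10000; i++) {
          pid_t pid = fork();
          if (pid == 0) {
              _exit(0);   /* the child handler above has already run */
          }
          waitpid(pid, NULL, 0);
      }
      return 0;
  }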

  discussed at
  https://www.openwall.com/lists/musl/2019/01/26/1

  the bug is hard to reproduce (requires the call_rcu thread
  to call free concurrently with do_fork in the main thread),
  this one is observed with qemu-arm 3.1.0 running on x86_64
  executing an arm busybox sh:

  (gdb) bt
  #0  malloc (n=, n@entry=9) at src/malloc/malloc.c:306
  #1  0x60184ad3 in g_malloc (n_bytes=n_bytes@entry=9) at gmem.c:99
  #2  0x6018bcab in g_strdup (str=, str@entry=0x60200abf 
"call_rcu") at gstrfuncs.c:363
  #3  0x6016e31d in qemu_thread_create 
(thread=thread@entry=0x7ffe367d1870, name=name@entry=0x60200abf "call_rcu", 
  start_routine=start_routine@entry=0x60174c00 , 
arg=arg@entry=0x0, mode=mode@entry=1)
  at /home/pmos/build/src/qemu-3.1.0/util/qemu-thread-posix.c:526
  #4  0x60174b99 in rcu_init_complete () at 
/home/pmos/build/src/qemu-3.1.0/util/rcu.c:327
  #5  0x601c4fac in __fork_handler (who=1) at 
src/thread/pthread_atfork.c:26
  #6  0x601be8db in fork () at src/process/fork.c:33
  #7  0x6009d191 in do_fork (env=0x627aaed0, flags=flags@entry=17, 
newsp=newsp@entry=0, parent_tidptr=parent_tidptr@entry=0, 
  newtls=newtls@entry=0, child_tidptr=child_tidptr@entry=0) at 
/home/pmos/build/src/qemu-3.1.0/linux-user/syscall.c:5528
  #8  0x600af894 in do_syscall1 (cpu_env=cpu_env@entry=0x627aaed0, 
num=num@entry=2, arg1=arg1@entry=0, arg2=arg2@entry=-8700192, 
  arg3=, arg4=8, arg5=1015744, arg6=-74144, arg7=0, arg8=0) 
at /home/pmos/build/src/qemu-3.1.0/linux-user/syscall.c:7042
  #9  0x600a835c in do_syscall (cpu_env=cpu_env@entry=0x627aaed0, 
num=2, arg1=0, arg2=-8700192, arg3=, 
  arg4=, arg5=1015744, arg6=-74144, arg7=0, arg8=0) at 
/home/pmos/build/src/qemu-3.1.0/linux-user/syscall.c:11533
  #10 0x600c265f in cpu_loop (env=env@entry=0x627aaed0) at 
/home/pmos/build/src/qemu-3.1.0/linux-user/arm/cpu_loop.c:360
  #11 0x600417a2 in main (argc=, argv=0x7ffe367d57b8, 
envp=)
  at /home/pmos/build/src/qemu-3.1.0/linux-user/main.c:819

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1813398/+subscriptions



[Bug 1811888] Re: Qemu refuses to multiboot Elf64 kernels

2021-04-20 Thread Thomas Huth
** Changed in: qemu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1811888

Title:
  Qemu refuses to multiboot Elf64 kernels

Status in QEMU:
  New

Bug description:
  Qemu does not multiboot 64-bit ELF kernels when emulating x86_64
  systems. This is unfortunate because it renders the `-kernel` option
  quite useless. It's true that a multiboot compatible bootloader puts
  you in protected mode by default, and you have to set up the long mode
  yourself. While it is easy to put such 32-bit bootstrap code in a 64
  bit binary, it is not possible to compile a 64 bit kernel into a 32
  bit binary.

  After quick search, it turned out that loading 64 bit elf binaries has
  been disabled to be compatible with GRUB. However, since that time,
  both GRUB and Syslinux load 64 bit ELF kernels just fine, which makes
  qemu incompatible with them. Furthermore, it seems that this feature
  does and has always worked fine and that people have since submitted
  patches to re-enable it.

  https://patchwork.ozlabs.org/patch/62142/
  https://patchwork.kernel.org/patch/9770523/

  Please consider applying the attached patch.

  Best Regards,
  Lukasz Janyst

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1811888/+subscriptions



Re: [PATCH] migration: Deprecate redundant query-migrate result @blocked

2021-04-20 Thread Daniel P . Berrangé
On Tue, Apr 20, 2021 at 07:19:06AM +0200, Markus Armbruster wrote:
> Result @blocked is true when and only when result @blocked-reasons is
> present.  It's always non-empty when present.  @blocked is redundant.
> It was introduced in commit 3af8554bd0 "migration: Add blocker
> information", and has not been released.  This gives us a chance to
> fix the interface with minimal fuss.
> 
> Unfortunately, we're already too close to the release to risk dropping
> it.  Deprecate it instead.
> 
> Signed-off-by: Markus Armbruster 
> ---
> This is alternative to "[PATCH v2] migration: Drop redundant
> query-migrate result @blocked".
> 
>  qapi/migration.json | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: Live migration using a specified networking adapter

2021-04-20 Thread Daniel P . Berrangé
On Mon, Apr 19, 2021 at 08:06:23PM +0100, Dr. David Alan Gilbert wrote:
> * Jing-Wei Su (jwsu1...@gmail.com) wrote:
> > Hello experts,
> > 
> > 
> > I have a network topology like this diagram.
> > 
> > When start live migration moving a VM from Host A to B,
> > 
> > the migration process uses either 10GbE (10.0.0.1) or 1 GbE (10.0.0.2),
> > 
> > but the user cannot specify the source NIC by current migrate command.
> > 
> > 
> > To solve the problem, my rough idea is to add a source ipv4:port argument,
> > 
> > the migration command seems like
> > 
> > ```
> > 
> > migrate -b tcp:10.0.0.1: -d tcp:10.0.0.3:.
> 
> I'm not sure what the OS lets us do, but if it lets us pick the IP and
> port I think that would work; I don't think you need another tcp:
> since we already know which protocol we're using.
> 
> > 
> > ```
> > 
> > Is it an available solution? Or, is there any concern or suggestion?
> > 
> > Besides the idea, is there any good way to handle this issue?
> 
> It's unusual; I don't think I've seen anyone ask for it before.
> I assume there's a way to get the host network stack to prefer
> the 10GbE interface.
> Or to use separate subnets; remember that each interface
> can have multiple IP addresses.

Yes, the recommended way is to have separate subnets for the
two NICs, and then use the IP address associated with the
fast NIC. Network routing will ensure migration traffic flows
over the main NIC. This is known to work well as OpenStack
uses this approach with QEMU/libvirt already.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[Bug 1811244] Re: qemu 3.1/i386 crashes/guest hangs when MTTCG is enabled

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1811244

Title:
  qemu 3.1/i386 crashes/guest hangs when MTTCG is enabled

Status in QEMU:
  Incomplete

Bug description:
  When MTTCG is enabled, QEMU 3.1.0 sometimes crashes when running the
  following command line:

  qemu-system-i386 -kernel
  /home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/bootstrap
  -append bootstrap -initrd
  "/home/jermar/work/software/l4/fiasco/.build-i386/fiasco
  
-serial_esc,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f/sigma0
  ,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f/moe
  
rom/ahci.cfg,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f/ned
  ,test_env.lua ,/home/jermar/Kernkonzept/software/l4/pkg/ahci-
  driver/examples/md5sum/ahci.cfg
  ,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f/l4re
  ,/home/jermar/Kernkonzept/software/l4/pkg/ahci-
  driver/examples/md5sum/ahci.io
  ,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f/io
  ,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f
  /ahci-drv
  ,/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x86_gen/l4f
  /ahci-md5-sync" -smp 4 -accel tcg,thread=multi -device ahci,id=ahci0
  -drive
  if=none,file=/home/jermar/Kernkonzept/software/l4/.build-i386/pkg
  /ahci-driver/test/examples/test_ahci.img,format=raw,id=drive-sata0-0-0
  -device ide-drive,bus=ahci0.0,drive=drive-sata0-0-0,id=sata0-0-0
  -serial stdio -nographic -monitor none

  The host is x86_64.

  The stack at the time of the crash (core dump and debug binary
  attached to the bug):

  Core was generated by `qemu-system-i386 -kernel 
/home/jermar/Kernkonzept/software/l4/.build-i386/bin/x'.
  Program terminated with signal SIGSEGV, Segmentation fault.
  #0  io_writex (env=env@entry=0x565355ca0140, 
iotlbentry=iotlbentry@entry=0x565355ca9120, mmu_idx=2, val=val@entry=0, 
addr=addr@entry=3938451632, retaddr=retaddr@entry=140487132809203, 
recheck=false, size=4)
  at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/cputlb.c:791
  791   if (mr->global_locking && !qemu_mutex_iothread_locked()) {
  [Current thread is 1 (Thread 0x7fc5af7fe700 (LWP 3625719))]
  Missing separate debuginfos, use: dnf debuginfo-install 
SDL2-2.0.9-1.fc29.x86_64 at-spi2-atk-2.30.0-1.fc29.x86_64 
at-spi2-core-2.30.0-2.fc29.x86_64 atk-2.30.0-1.fc29.x86_64 
bzip2-libs-1.0.6-28.fc29.x86_64 cairo4
  (gdb) bt
  #0  0x565354f5f365 in io_writex
  (env=env@entry=0x565355ca0140, 
iotlbentry=iotlbentry@entry=0x565355ca9120, mmu_idx=2, val=val@entry=0, 
addr=addr@entry=3938451632, retaddr=retaddr@entry=140487132809203, 
recheck=false, size=4)
  at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/cputlb.c:791
  #1  0x565354f621b2 in io_writel (recheck=, 
retaddr=140487132809203, addr=3938451632, val=0, index=0, mmu_idx=2, 
env=0x565355ca0140)
  at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/softmmu_template.h:310
  #2  0x565354f621b2 in helper_le_stl_mmu (env=0x565355ca0140, 
addr=, val=0, oi=34, retaddr=140487132809203)
  at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/softmmu_template.h:310
  #3  0x7fc5b5a587f3 in code_gen_buffer ()
  #4  0x565354f75fd0 in cpu_tb_exec (itb=, 
cpu=0x7fc5b5a5aa40 ) at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/cpu-exec.c:171
  #5  0x565354f75fd0 in cpu_loop_exec_tb (tb_exit=, 
last_tb=, tb=, cpu=0x7fc5b5a5aa40 
)
  at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/cpu-exec.c:615
  #6  0x565354f75fd0 in cpu_exec (cpu=cpu@entry=0x565355c97e90) at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/accel/tcg/cpu-exec.c:725
  #7  0x565354f33b1f in tcg_cpu_exec (cpu=0x565355c97e90) at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/cpus.c:1429
  #8  0x565354f35e83 in qemu_tcg_cpu_thread_fn (arg=0x565355c97e90) at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/cpus.c:1733
  #9  0x565354f35e83 in qemu_tcg_cpu_thread_fn 
(arg=arg@entry=0x565355c97e90) at 
/home/jermar/software/HelenOS/helenos.git/contrib/qemu/qemu-3.1.0/cpus.c:1707
  #10 0x

[Bug 1809684] Re: amdgpu passthrough on POWER9 (ppc64el) not working

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1809684

Title:
  amdgpu passthrough on POWER9 (ppc64el) not working

Status in QEMU:
  Incomplete

Bug description:
  When attempting to pass a Vega 56 GPU to a virtualized guest using
  QEMU 3.1 on ppc64el (POWER9), the guest is unable to initialize the
  GPU.  Further digging reveals the driver attempting to allocate a
  large BAR, which then fails:

  [6.058544] [drm] PCI I/O BAR is not found.
  
  [6.679193] amdgpu :00:03.0: BAR 2: releasing [mem 
0x2104-0x2104001f pref]
  [6.679306] amdgpu :00:03.0: BAR 0: releasing [mem 
0x2102-0x2103 pref]
  [6.679361] amdgpu :00:03.0: BAR 0: no space for [mem size 0x2 
pref]
  [6.679403] amdgpu :00:03.0: BAR 0: failed to assign [mem size 
0x2 pref]
  [6.679444] amdgpu :00:03.0: BAR 2: assigned [mem 
0x20008020-0x2000803f pref]
  [6.681420] amdgpu :00:03.0: VRAM: 8176M 0x00F4 - 
0x00F5FEFF (8176M used)
  [6.681507] amdgpu :00:03.0: GART: 512M 0x - 
0x1FFF
  [6.681594] amdgpu :00:03.0: AGP: 267419648M 0x00F8 - 
0x
  [6.681653] [drm] Detected VRAM RAM=8176M, BAR=0M
  [6.681688] [drm] RAM width 2048bits HBM
  [6.681885] [TTM] Zone  kernel: Available graphics memory: 4176288 kiB
  [6.681978] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
  [6.682043] [TTM] Initializing pool allocator
  [6.682159] [drm] amdgpu: 8176M of VRAM memory ready
  [6.682249] [drm] amdgpu: 6117M of GTT memory ready.
  [6.682387] [drm] GART: num cpu pages 8192, num gpu pages 131072
  [6.682466] amdgpu :00:03.0: (-22) kernel bo map failed
  [6.682539] [drm:amdgpu_device_init [amdgpu]] *ERROR* 
amdgpu_vram_scratch_init failed -22
  [6.682592] amdgpu :00:03.0: amdgpu_device_ip_init failed
  [6.682636] amdgpu :00:03.0: Fatal error during GPU init

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1809684/+subscriptions



[Bug 1813045] Re: qemu-ga fsfreeze crashes the kernel

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1813045

Title:
  qemu-ga fsfreeze crashes the kernel

Status in QEMU:
  Incomplete

Bug description:
  We use mainly Cloudlinux, Debian and Centos.
  We experienced many crashes on our qemu instances based on Cloudlinux during 
a snapshot.
  The issue is not related to CloudLinux directly, but to Qemu agent, which 
does not freeze the file system(s) correctly. What is actually happening:

  When VM backup is invoked, the Qemu agent freezes the file systems, so no
change will be made during the backup. But the Qemu agent does not respect the
loop* devices in the freezing order (we have checked its sources), which leads to
the following situation: 
  1) freeze loopback fs
---> send async reqs to loopback thread
  2) freeze main fs
  3) the loopback thread wakes up and tries to write data to the main fs, which is
still frozen, and this finally leads to a hung task and a kernel crash.

  I believe this is the culprit:

  /dev/loop0 /tmp ext3 rw,nosuid,noexec,relatime,data=ordered 0 0
  /dev/loop0 /var/tmp ext3 rw,nosuid,noexec,relatime,data=ordered 0 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1813045/+subscriptions



[Bug 1808563] Re: Listing the contents of / lists QEMU_LD_PREFIX instead

2021-04-20 Thread Thomas Huth
This might also be a duplicate of bug
https://bugs.launchpad.net/qemu/+bug/1813307

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808563

Title:
  Listing the contents of / lists QEMU_LD_PREFIX instead

Status in QEMU:
  Incomplete

Bug description:
  Seeing this in qemu-user version 3.1.0

  Demo:
  $ QEMU_LD_PREFIX=$(pwd)/usr/armv7a-cros-linux-gnueabi ../run/qemu-arm 
/tmp/coreutils --coreutils-prog=ls / 
  etc  lib  usr
  $ ls /
  boot  etc   lib lib64   lost+found  mntroot  sbin  sys  usr
  bin   dev   export  homelib32   netproc  run   tmp  var
  $ ls usr/armv7a-cros-linux-gnueabi
  etc  lib  usr

  In strace, the openat for "/" is remapped to the directory specified in 
QEMU_LD_PREFIX:
  [pid  5302] openat(AT_FDCWD, "/tmp/qemu/usr/armv7a-cros-linux-gnueabi", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3

  As an aside, if I change the code to do chdir("/"); opendir("."); it
  works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808563/+subscriptions



[Bug 1818122] Re: QEMU 3.1 makes libxslt to crash on ppc64

2021-04-20 Thread Thomas Huth
QEMU, like most open source projects, relies on contributors who have
motivation, skills and available time to work on implementing particular
features. They naturally tend to focus on features that result in the
greatest benefit to their own use cases. Thus simply declaring that an
open source project must support something won't magically make it
happen.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1818122

Title:
  QEMU 3.1 makes libxslt to crash on ppc64

Status in QEMU:
  Incomplete

Bug description:
  Host: clean Ubuntu Disco with QEMU 3.1

  Guest: Alpine Linux edge with xmlto

  Steps to set up guest:
  curl -O 
http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/vmlinuz-vanilla
  curl -O 
http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/initramfs-vanilla
  qemu-system-ppc64 -m 1G -kernel vmlinuz-vanilla -initrd initramfs-vanilla 
-append "console=hvc0 ip=dhcp 
alpine_repo=http://dl-cdn.alpinelinux.org/alpine/edge/main/ 
modloop=http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/modloop-vanilla";
 -device virtio-rng-pci -nographic
  This brings up a VM with an in-memory Alpine Linux.

  Steps to reproduce:
  Login as root and execute the following commands.
  apk add xmlto
  ntpd -nqp time.google.com // For TLS OCSP
  wget https://ddosolitary.org/manpage-base.xsl
  wget https://ddosolitary.org/shadowsocks-libev.xml
  xmlto -m manpage-base.xsl man shadowsocks-libev.xml
  The downloaded files are from this project: 
https://github.com/shadowsocks/shadowsocks-libev The former is directly taken 
from the "doc" directory and the latter is an intermediate build output 
generated by asciidoc from doc/shadowsocks-libev.asciidoc

  Expected behavior: The command silently succeeds producing
  shadowsocks-libev.8

  Actual behavior: 
  runtime error: file 
file:///usr/share/xml/docbook/xsl-stylesheets-1.79.1/manpages/tbl.xsl line 450 
element text
  xsltApplySequenceConstructor: A potential infinite template recursion was 
detected.
  You can adjust xsltMaxDepth (--maxdepth) in order to raise the maximum number 
of nested template calls and variables/params (currently set to 3000).
  Templates:
  #0 name process.colspan
  #1 name process.colspan
  #2 name process.colspan
  #3 name process.colspan
  #4 name process.colspan
  #5 name process.colspan
  #6 name process.colspan
  #7 name process.colspan
  #8 name process.colspan
  #9 name process.colspan
  #10 name process.colspan
  #11 name process.colspan
  #12 name process.colspan
  #13 name process.colspan
  #14 name process.colspan
  Variables:
  #0
  type
  colspan
  #1
  colspan
  #2
  type
  colspan
  #3
  colspan
  #4
  type
  colspan
  #5
  colspan
  #6
  type
  colspan
  #7
  colspan
  #8
  type
  colspan
  #9
  colspan
  #10
  type
  colspan
  #11
  colspan
  #12
  type
  colspan
  #13
  colspan
  #14
  type
  colspan
  error: file /root/shadowsocks-libev.xml
  xsltRunStylesheet : run failed

  Note:
  I tried increasing --maxdepth as suggested in the error output but that will 
result in a segfault.
  This error doesn't occur with an older QEMU (I tested QEMU 2.12 on Ubuntu 
Cosmic) or different architectures on QEMU 3.1 (I tested x86, x86_64, arm, 
aarch64, s390x). Also it didn't help to use an older Alpine Linux (I tested 
v3.8). So I think it is caused by a bug in QEMU rather than the distro/package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1818122/+subscriptions



[Bug 1818122] Re: QEMU 3.1 makes libxslt to crash on ppc64

2021-04-20 Thread Thomas Huth
sorry for the previous post, posted the wrong text into this bug :-(

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1818122

Title:
  QEMU 3.1 makes libxslt to crash on ppc64

Status in QEMU:
  Incomplete


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1818122/+subscriptions



[Bug 1809144] Re: SVM instructions fail with SVME bit enabled

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1809144

Title:
  SVM instructions fail with SVME bit enabled

Status in QEMU:
  Incomplete

Bug description:
  I was trying to use QEMU/TCG to emulate some stuff that uses SVM.
  I know SVM is only partially implemented but I gave it a try anyway.

  I found that if SVM is enabled in the same basic block in which there's a 
call to VMSAVE/etc,
  the call fails as an illegal op because the flags don't get updated correctly.

  The pseudocode for the asm I'm running is:

  ```
  EFER |= SVME; set the appropriate bit with wrmsr
  vmsave
  ```

  This is an example of the relevant translate.c code:

  ```
  if (!(s->flags & HF_SVME_MASK) || !s->pe) {
  goto illegal_op;
  }
  if (s->cpl != 0) {
  gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
  break;
  }
  ```

  s->flags doesn't get updated after the wrmsr instruction and so QEMU
  raises an illegal opcode interrupt.

  A quick fix is to make the tb end after `wrmsr` instructions, but it's a
hack afaik.
  I'm not too comfortable with QEMU's code, so I don't know what a proper fix 
would be.

  Cheers,

  thebabush

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1809144/+subscriptions



[Bug 1805913] Re: readdir() returns NULL (errno=EOVERFLOW) for 32-bit user-static qemu on 64-bit host

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1805913

Title:
  readdir() returns NULL (errno=EOVERFLOW) for 32-bit user-static qemu
  on 64-bit host

Status in QEMU:
  Incomplete

Bug description:
  This can be simply reproduced by compiling and running the attached C
  code (readdir-bug.c) under 32-bit user-static qemu, such as qemu-arm-
  static:

  # Setup docker for user-static binfmt
  docker run --rm --privileged multiarch/qemu-user-static:register --reset
  # Compile the code and run (readdir for / is fine, so create a new directory 
/test).
  docker run -v /path/to/qemu-arm-static:/usr/bin/qemu-arm-static -v 
/path/to/readdir-bug.c:/tmp/readdir-bug.c -it --rm arm32v7/ubuntu:18.10 bash -c 
'{ apt update && apt install -y gcc; } >&/dev/null && mkdir -p /test && cd 
/test && gcc /tmp/readdir-bug.c && ./a.out'
  dir=0xff5b4150
  readdir(dir)=(nil)
  errno=75: Value too large for defined data type

  Do remember to replace the /path/to/qemu-arm-static and /path/to
  /readdir-bug.c to the actual paths of the files.
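
  The readdir-bug.c attachment is not reproduced here; a minimal sketch
  matching the output above could look like this (assumed reconstruction,
  not the original file):

  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <dirent.h>

  int main(void)
  {
      DIR *dir = opendir(".");
      struct dirent *ent;

      printf("dir=%p\n", (void *)dir);
      if (dir == NULL) {
          return 1;
      }
      errno = 0;
      ent = readdir(dir);
      printf("readdir(dir)=%p\n", (void *)ent);
      if (ent == NULL && errno != 0) {
          printf("errno=%d: %s\n", errno, strerror(errno));
      }
      closedir(dir);
      return 0;
  }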

  The root cause is in glibc:
  
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getdents.c;h=6d09a5be7057e2792be9150d3a2c7b293cf6fc34;hb=a5275ba5378c9256d18e582572b4315e8edfcbfb#l87

  Per the POSIX standard, readdir() returns a struct dirent *, in which the
  inode number and offset are 32-bit integers, therefore glibc calls
  getdents64() and checks whether the inode number and offset fit the 32-bit
  range, and reports EOVERFLOW if not.

  The problem here is that for 32-bit user-static qemu running on a 64-bit
  host, getdents64 simply passes through the inode number and offset
  from the underlying getdents64 syscall (from the 64-bit kernel), which are
  very likely not to fit into the 32-bit range. On real hardware, the 32-bit
  kernel creates 32-bit inode numbers, so it works properly.

  It makes sense for the glibc code to do the check to be conformant with the
  standard, therefore ideally this should be fixed on the qemu side. I admit
  this is difficult because qemu would have to maintain a mapping between
  underlying 64-bit inode numbers and 32-bit inode numbers, which would
  severely hurt performance. I don't expect this to be fixed
  anytime soon (or even that there will be a fix), but it is
  worthwhile to surface this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1805913/+subscriptions



[Bug 1818122] Re: QEMU 3.1 makes libxslt to crash on ppc64

2021-04-20 Thread Thomas Huth
I meant to say:
The QEMU project is currently considering to move its bug tracking to another 
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1818122

Title:
  QEMU 3.1 makes libxslt to crash on ppc64

Status in QEMU:
  Incomplete


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1818122/+subscriptions



[Bug 1808928] Re: Bitmap Extra data is not supported

2021-04-20 Thread Thomas Huth
John, did you find some spare time to work on those patches that you've
mentioned? I.e. has this been addressed already?

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808928

Title:
  Bitmap Extra data is not supported

Status in QEMU:
  Incomplete

Bug description:
  I am using dirty bitmaps and drive-backup. It works as expected.

  Lately, I encountered a disastrous error. There is no information
  about the situation, and I cannot reach/open/attach/info or do
  anything with the qcow2 file.

  virsh version
  Compiled against library: libvirt 4.10.0
  Using library: libvirt 4.10.0
  Using API: QEMU 4.10.0
  Running hypervisor: QEMU 2.12.0

  "qemu-img: Could not open '/var/lib/libvirt/images/test.qcow2': Bitmap
  extra data is not supported"

  What does that mean? What should I do?
  I cannot remove the bitmap. I cannot open or query the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808928/+subscriptions



[Bug 1821595] Re: Failed to emulate MMIO access with EmulatorReturnStatus: 2

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering moving its bug tracking to another
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1821595

Title:
  Failed to emulate MMIO access with EmulatorReturnStatus: 2

Status in QEMU:
  Incomplete

Bug description:
  I have compiled qemu with the enable-whpx parameter for the Hyper-V Platform
API in Mingw64. When I tried to run it with a Windows 7 ISO file, I faced the
following problem:
  qemu-system-x86_64.exe: WHPX: Failed to emulate MMIO access with 
EmulatorReturnStatus: 2
  qemu-system-x86_64.exe: WHPX: Failed to exec a virtual processor

  
  configuration directives:

  ../configure --target-list=x86_64-softmmu,i386-softmmu --enable-lzo\
   --enable-bzip2 --enable-tools --enable-sdl --enable-gtk --enable-hax\
   --enable-vdi --enable-qcow1 --enable-whpx --disable-capstone\
   --disable-werror --disable-stack-protector --prefix="../../QEMU-bin"

  
  Qemu command line:
  qemu-system-x86_64.exe -m 1024 -cdrom 
"C:\Users\vmcs\Documents\en_windows_7_home_premium_with_sp1_x86_dvd_u_676701.iso"
 -display sdl -machine q35 -accel whpx

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1821595/+subscriptions



[Bug 1821131] Re: VM running under latest Qemu receives 2, 3, 8, and = when sent the keysyms for @, #, *, and + respectively

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering moving its bug tracking to another
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1821131

Title:
  VM running under latest Qemu receives 2, 3, 8, and = when sent the
  keysyms for @, #, *, and + respectively

Status in QEMU:
  Incomplete

Bug description:
  Git commit hash where bug was reproduced:
  62a172e6a77d9072bb1a18f295ce0fcf4b90a4f2

  A user of my application bVNC reported that when he connects to his
  VMs running under Qemu, he cannot send @, #, and *. Instead, 2, 3, and
  8 appear in the VM respectively. I built the latest Qemu from source,
  and reproduced the issue.

  bVNC converts keycodes or unicode characters that the Android keyboard
  sends it to corresponding keysyms. For example, it sends keysym 64 for
  @ rather than sending SHIFT+2.

  A debug log of the application sending the keysyms shows metaState ==
  0, indicating lack of modifier keys.

  When 2 appears in place of @:

  03-21 00:11:21.761  8864  8864 I RemoteKeyboard: Sending key. Down: true, 
key: 64. keysym:64, metaState: 0
  03-21 00:11:21.763  8864  8864 I RemoteKeyboard: Sending key. Down: false, 
key: 64. keysym:64, metaState: 0

  When 3 appears in place of #:

  03-21 00:11:08.947  8864  8864 I RemoteKeyboard: Sending key. Down: true, 
key: 35. keysym:35, metaState: 0
  03-21 00:11:08.950  8864  8864 I RemoteKeyboard: Sending key. Down: false, 
key: 35. keysym:35, metaState: 0

  When 8 appears instead of *:

  03-21 00:11:28.586  8864  8864 I RemoteKeyboard: Sending key. Down: true, 
key: 42. keysym:42, metaState: 0
  03-21 00:11:28.588  8864  8864 I RemoteKeyboard: Sending key. Down: false, 
key: 42. keysym:42, metaState: 0

  When = appears instead of +:
  03-21 01:05:40.021 10061 10061 I RemoteKeyboard: Sending key. Down: true, 
key: 43. keysym:43, metaState: 0
  03-21 01:05:40.022 10061 10061 I RemoteKeyboard: Sending key. Down: false, 
key: 43. keysym:43, metaState: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1821131/+subscriptions



Re: [PATCH v2] migration: Drop redundant query-migrate result @blocked

2021-04-20 Thread Daniel P . Berrangé
On Tue, Apr 20, 2021 at 07:19:07AM +0200, Markus Armbruster wrote:
> Result @blocked is true when and only when result @blocked-reasons is
> present.  It's always non-empty when present.  @blocked is redundant.
> It was introduced in commit 3af8554bd0 "migration: Add blocker
> information", and has not been released.  This gives us a chance to
> fix the interface with minimal fuss: drop @blocked.
> 
> Signed-off-by: Markus Armbruster 
> ---
> This is alternative to "[PATCH] migration: Deprecate redundant
> query-migrate result @blocked".
> 
>  qapi/migration.json   |  7 +++
>  migration/migration.c | 29 +
>  monitor/hmp-cmds.c|  2 +-
>  3 files changed, 17 insertions(+), 21 deletions(-)
> 
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 9bf0bc4d25..7a5bdf9a0d 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -224,9 +224,9 @@
>  #only returned if VFIO device is present, migration is supported by 
> all
>  #VFIO devices and status is 'active' or 'completed' (since 5.2)
>  #
> -# @blocked: True if outgoing migration is blocked (since 6.0)
> -#
> -# @blocked-reasons: A list of reasons an outgoing migration is blocked 
> (since 6.0)
> +# @blocked-reasons: A list of reasons an outgoing migration is blocked.
> +#   Present and non-empty when migration is blocked.
> +#   (since 6.0)
>  #
>  # Since: 0.14
>  ##
> @@ -241,7 +241,6 @@
> '*setup-time': 'int',
> '*cpu-throttle-percentage': 'int',
> '*error-desc': 'str',
> -   'blocked': 'bool',
> '*blocked-reasons': ['str'],
> '*postcopy-blocktime' : 'uint32',
> '*postcopy-vcpu-blocktime': ['uint32'],
> diff --git a/migration/migration.c b/migration/migration.c
> index 8ca034136b..fdadee290e 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1073,27 +1073,24 @@ static void populate_vfio_info(MigrationInfo *info)
>  static void fill_source_migration_info(MigrationInfo *info)
>  {
>  MigrationState *s = migrate_get_current();
> +GSList *cur_blocker = migration_blockers;
>  
> -info->blocked = migration_is_blocked(NULL);

So migration_is_blocked() method returns true if either of the
following conditions is met

 - qemu_savevm_state_blocked  returns true - this happens
   if a savevm state handler has 'unmigratable' flag set

 - if any migration blockers are registered.

We're then calling qemu_savevm_non_migratable_list() to populate
blocked_reasons based on the "unmigratable" flag, and then further
populating blocked_reasons based on the migration blockers.

So I agree the old and new code produce functionally the same result.


> -info->has_blocked_reasons = info->blocked;
>  info->blocked_reasons = NULL;
> -if (info->blocked) {
> -GSList *cur_blocker = migration_blockers;
>  
> -/*
> - * There are two types of reasons a migration might be blocked;
> - * a) devices marked in VMState as non-migratable, and
> - * b) Explicit migration blockers
> - * We need to add both of them here.
> - */
> -qemu_savevm_non_migratable_list(&info->blocked_reasons);
> +/*
> + * There are two types of reasons a migration might be blocked;
> + * a) devices marked in VMState as non-migratable, and
> + * b) Explicit migration blockers
> + * We need to add both of them here.
> + */
> +qemu_savevm_non_migratable_list(&info->blocked_reasons);
>  
> -while (cur_blocker) {
> -QAPI_LIST_PREPEND(info->blocked_reasons,
> -  g_strdup(error_get_pretty(cur_blocker->data)));
> -cur_blocker = g_slist_next(cur_blocker);
> -}
> +while (cur_blocker) {
> +QAPI_LIST_PREPEND(info->blocked_reasons,
> +  g_strdup(error_get_pretty(cur_blocker->data)));
> +cur_blocker = g_slist_next(cur_blocker);
>  }
> +info->has_blocked_reasons = info->blocked_reasons != NULL;
>  
>  switch (s->state) {
>  case MIGRATION_STATUS_NONE:


Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[Bug 1819908] Re: slight screen corruption when maximizing window

2021-04-20 Thread Thomas Huth
Looking through old bug tickets... can you still reproduce this issue
with the latest version of QEMU? Or could we close this ticket nowadays?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1819908

Title:
  slight screen corruption when maximizing window

Status in QEMU:
  New

Bug description:
  Host: Ubuntu disco
  qemu-kvm: 1:3.1+dfsg-2ubuntu2
  libvirt: 5.0.0-1ubuntu2

  Guest: ubuntu bionic
  guest is using cirrus video, with the extra modules kernel package installed 
and the cirrus kernel module loaded. I also tried QXL, but got the same problem.

  A non-maximized terminal window works just fine. As an example, I run
  "lsmod". It fills the screen, which then scrolls a bit.

  The moment I maximize that window, though, the rendering breaks. I can
  see the commands I type, but not their output. See attached video.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1819908/+subscriptions



[Bug 1817239] Re: add '--targets' option to qemu-binfmt-conf.sh

2021-04-20 Thread Thomas Huth
The QEMU project is currently considering moving its bug tracking to another
system. For this we need to know which bugs are still valid and which could be 
closed already. Thus we are setting older bugs to "Incomplete" now.
If you still think this bug report here is valid, then please switch the state 
back to "New" within the next 60 days, otherwise this report will be marked as 
"Expired". Or mark it as "Fix Released" if the problem has been solved with a 
newer version of QEMU already. Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1817239

Title:
  add '--targets' option to qemu-binfmt-conf.sh

Status in QEMU:
  Incomplete

Bug description:
  I'd like to ask for the addition of option '--targets' to scripts
  /qemu-binfmt-conf.sh, in order to allow registering the interpreters
  for the given list of architectures only, instead of using all of the
  ones defined in qemu_target_list. The following is a possible patch
  that implements it:

   qemu-binfmt-conf.sh | 9 -
   1 file changed, 8 insertions(+), 1 deletion(-)

  diff --git a/qemu-binfmt-conf.sh b/qemu-binfmt-conf.sh
  index b5a1674..be4a19b 100644
  --- a/qemu-binfmt-conf.sh
  +++ b/qemu-binfmt-conf.sh
  @@ -170,6 +170,7 @@ usage() {
   Usage: qemu-binfmt-conf.sh [--qemu-path PATH][--debian][--systemd CPU]
  [--help][--credential yes|no][--exportdir PATH]
  [--persistent yes|no][--qemu-suffix SUFFIX]
  +   [--targets TARGETS]

  Configure binfmt_misc to use qemu interpreter

  @@ -189,6 +190,8 @@ Usage: qemu-binfmt-conf.sh [--qemu-path 
PATH][--debian][--systemd CPU]
  --persistent:  if yes, the interpreter is loaded when binfmt is
 configured and remains in memory. All future uses
 are cloned from the open file.
  +   --targets: comma-separated list of targets. If provided, only
  +  the targets in the list are registered.

   To import templates with update-binfmts, use :

  @@ -324,7 +327,7 @@ CREDENTIAL=no
   PERSISTENT=no
   QEMU_SUFFIX=""

  -options=$(getopt -o ds:Q:S:e:hc:p: -l 
debian,systemd:,qemu-path:,qemu-suffix:,exportdir:,help,credential:,persistent: 
-- "$@")
  +options=$(getopt -o ds:Q:S:e:hc:p:t: -l 
debian,systemd:,qemu-path:,qemu-suffix:,exportdir:,help,credential:,persistent:,targets:
 -- "$@")
   eval set -- "$options"

   while true ; do
  @@ -380,6 +383,10 @@ while true ; do
   shift
   PERSISTENT="$1"
   ;;
  +-t|--targets)
  +shift
  +qemu_target_list="$(echo "$1" | tr ',' ' ')"
  +;;
   *)
   break
   ;;
  --
  2.20.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1817239/+subscriptions



Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Roger Pau Monné via
On Tue, Apr 20, 2021 at 04:35:02AM +0100, Igor Druzhinin wrote:
> When we're replacing the existing mapping there is a possibility of a race
> on the memory map with other threads doing mmap operations - the address being
> unmapped/re-mapped could be occupied by another thread in between.
> 
> Linux mmap man page recommends keeping the existing mappings in place to
> reserve the place and instead utilize the fact that the next mmap operation
> with MAP_FIXED flag passed will implicitly destroy the existing mappings
> behind the chosen address. This behavior is guaranteed by POSIX / BSD and
> therefore is portable.
> 
> Note that it wouldn't make the replacement atomic for parallel accesses to
> the replaced region - those might still fail with SIGBUS due to
> xenforeignmemory_map not being atomic. So we're still not expecting those.
> 
> Tested-by: Anthony PERARD 
> Signed-off-by: Igor Druzhinin 

Should we add a 'Fixes: 759235653de ...' or similar tag here?

> ---
>  hw/i386/xen/xen-mapcache.c | 15 ++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
> index 5b120ed..e82b7dc 100644
> --- a/hw/i386/xen/xen-mapcache.c
> +++ b/hw/i386/xen/xen-mapcache.c
> @@ -171,7 +171,20 @@ static void xen_remap_bucket(MapCacheEntry *entry,
>  if (!(entry->flags & XEN_MAPCACHE_ENTRY_DUMMY)) {
>  ram_block_notify_remove(entry->vaddr_base, entry->size);
>  }
> -if (munmap(entry->vaddr_base, entry->size) != 0) {
> +
> +/*
> + * If an entry is being replaced by another mapping and we're using
> + * MAP_FIXED flag for it - there is possibility of a race for vaddr
> + * address with another thread doing an mmap call itself
> + * (see man 2 mmap). To avoid that we skip explicit unmapping here
> + * and allow the kernel to destroy the previous mappings by replacing
> + * them in mmap call later.
> + *
> + * Non-identical replacements are not allowed therefore.
> + */
> +assert(!vaddr || (entry->vaddr_base == vaddr && entry->size == 
> size));
> +
> +if (!vaddr && munmap(entry->vaddr_base, entry->size) != 0) {

Oh, so remappings of foreign addresses must always use the same
virtual address space, I somehow assumed that you could remap an
existing foreign map (or dummy mapping) into a different virtual
address if you so wanted.

Thanks, Roger.
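
A minimal sketch of the mmap(MAP_FIXED) replacement pattern the commit
message above describes (illustrative only; the helper name, protection
flags and error handling are assumptions, not the QEMU code):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Replace an existing mapping at vaddr without ever unmapping it:
     * a single mmap() with MAP_FIXED discards whatever was mapped there
     * before, so no other thread can grab the address range in between.
     */
    static void *remap_in_place(void *vaddr, size_t size, int fd)
    {
        void *p = mmap(vaddr, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_FIXED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_FIXED)");
            return NULL;
        }
        return p; /* equal to vaddr on success */
    }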



[Bug 1818122] Re: QEMU 3.1 makes libxslt to crash on ppc64

2021-04-20 Thread DDoSolitary
I just checked it out with QEMU 5.2.0 and it seems that the bug has been
fixed.

** Changed in: qemu
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1818122

Title:
  QEMU 3.1 makes libxslt to crash on ppc64

Status in QEMU:
  Fix Released

Bug description:
  Host: clean Ubuntu Disco with QEMU 3.1

  Guest: Alpine Linux edge with xmlto

  Steps to set up guest:
  curl -O 
http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/vmlinuz-vanilla
  curl -O 
http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/initramfs-vanilla
  qemu-system-ppc64 -m 1G -kernel vmlinuz-vanilla -initrd initramfs-vanilla 
-append "console=hvc0 ip=dhcp 
alpine_repo=http://dl-cdn.alpinelinux.org/alpine/edge/main/ 
modloop=http://dl-cdn.alpinelinux.org/alpine/edge/releases/ppc64le/netboot/modloop-vanilla";
 -device virtio-rng-pci -nographic
  This brings up a VM with an in-memory Alpine Linux.

  Steps to reproduce:
  Login as root and execute the following commands.
  apk add xmlto
  ntpd -nqp time.google.com // For TLS OCSP
  wget https://ddosolitary.org/manpage-base.xsl
  wget https://ddosolitary.org/shadowsocks-libev.xml
  xmlto -m manpage-base.xsl man shadowsocks-libev.xml
  The downloaded files are from this project: 
https://github.com/shadowsocks/shadowsocks-libev The former is directly taken 
from the "doc" directory and the latter is an intermediate build output 
generated by asciidoc from doc/shadowsocks-libev.asciidoc

  Expected behavior: The command silently succeeds producing
  shadowsocks-libev.8

  Actual behavior: 
  runtime error: file 
file:///usr/share/xml/docbook/xsl-stylesheets-1.79.1/manpages/tbl.xsl line 450 
element text
  xsltApplySequenceConstructor: A potential infinite template recursion was 
detected.
  You can adjust xsltMaxDepth (--maxdepth) in order to raise the maximum number 
of nested template calls and variables/params (currently set to 3000).
  Templates:
  #0 name process.colspan
  #1 name process.colspan
  #2 name process.colspan
  #3 name process.colspan
  #4 name process.colspan
  #5 name process.colspan
  #6 name process.colspan
  #7 name process.colspan
  #8 name process.colspan
  #9 name process.colspan
  #10 name process.colspan
  #11 name process.colspan
  #12 name process.colspan
  #13 name process.colspan
  #14 name process.colspan
  Variables:
  #0
  type
  colspan
  #1
  colspan
  #2
  type
  colspan
  #3
  colspan
  #4
  type
  colspan
  #5
  colspan
  #6
  type
  colspan
  #7
  colspan
  #8
  type
  colspan
  #9
  colspan
  #10
  type
  colspan
  #11
  colspan
  #12
  type
  colspan
  #13
  colspan
  #14
  type
  colspan
  error: file /root/shadowsocks-libev.xml
  xsltRunStylesheet : run failed

  Note:
  I tried increasing --maxdepth as suggested in the error output but that will 
result in a segfault.
  This error doesn't occur with an older QEMU (I tested QEMU 2.12 on Ubuntu 
Cosmic) or different architectures on QEMU 3.1 (I tested x86, x86_64, arm, 
aarch64, s390x). Also it didn't help to use an older Alpine Linux (I tested 
v3.8). So I think it is caused by a bug in QEMU rather than the distro/package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1818122/+subscriptions



Re: [PATCH v5 07/31] target/arm: Use cpu_abort in assert_hflags_rebuild_correctly

2021-04-20 Thread Peter Maydell
On Mon, 19 Apr 2021 at 21:36, Richard Henderson
 wrote:
>
> Using cpu_abort takes care of things like unregistering a
> SIGABRT handler for user-only.

I would find this argument more persuasive if we didn't have a
ton of other places where we call abort() or assert() or
g_assert_not_reached(). Either we should have a consistent
mechanism for assertions in linux-user turning off the SIGABRT
handler, or we should just not worry about it on the basis that it's
a "can't happen" error anyway...

thanks
-- PMM



Re: [PATCH v3] memory: Directly dispatch alias accesses on origin memory region

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/20/21 9:00 AM, Mark Cave-Ayland wrote:
> On 19/04/2021 21:58, Philippe Mathieu-Daudé wrote:
> 
>> Hi Mark,
>>
>> On 4/19/21 10:13 PM, Mark Cave-Ayland wrote:
>>> On 17/04/2021 15:02, Philippe Mathieu-Daudé wrote:
>>>
 Since commit 2cdfcf272d ("memory: assign MemoryRegionOps to all
 regions"), all newly created regions are assigned with
 unassigned_mem_ops (which might be then overwritten).

 When using aliased container regions, and there is no region mapped
 at address 0 in the container, the memory_region_dispatch_read()
 and memory_region_dispatch_write() calls incorrectly return the
 container unassigned_mem_ops, because the alias offset is not used.

 The memory_region_init_alias() flow is:

     memory_region_init_alias()
     -> memory_region_init()
    -> object_initialize(TYPE_MEMORY_REGION)
   -> memory_region_initfn()
  -> mr->ops = &unassigned_mem_ops;

 Later when accessing the alias, the memory_region_dispatch_read()
 flow is:

     memory_region_dispatch_read(offset)
     -> memory_region_access_valid(mr)   <- offset is ignored
    -> mr->ops->valid.accepts()
   -> unassigned_mem_accepts()
   <- false
    <- false
  <- MEMTX_DECODE_ERROR

 The caller gets a MEMTX_DECODE_ERROR while the access is OK.

 Fix by dispatching aliases recursively, accessing its origin region
 after adding the alias offset.

 Signed-off-by: Philippe Mathieu-Daudé 
 ---
 v3:
 - reworded, mentioning the "alias to container" case
 - use recursive call instead of while(), because easier when debugging
     therefore reset Richard R-b tag.
 v2:
 - use while()
 ---
    softmmu/memory.c | 10 ++
    1 file changed, 10 insertions(+)

 diff --git a/softmmu/memory.c b/softmmu/memory.c
 index d4493ef9e43..23bdbfac079 100644
 --- a/softmmu/memory.c
 +++ b/softmmu/memory.c
 @@ -1442,6 +1442,11 @@ MemTxResult
 memory_region_dispatch_read(MemoryRegion *mr,
    unsigned size = memop_size(op);
    MemTxResult r;
    +    if (mr->alias) {
 +    return memory_region_dispatch_read(mr->alias,
 +   addr + mr->alias_offset,
 +   pval, op, attrs);
 +    }
    if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
    *pval = unassigned_mem_read(mr, addr, size);
    return MEMTX_DECODE_ERROR;
 @@ -1486,6 +1491,11 @@ MemTxResult
 memory_region_dispatch_write(MemoryRegion *mr,
    {
    unsigned size = memop_size(op);
    +    if (mr->alias) {
 +    return memory_region_dispatch_write(mr->alias,
 +    addr + mr->alias_offset,
 +    data, op, attrs);
 +    }
    if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
    unassigned_mem_write(mr, addr, data, size);
    return MEMTX_DECODE_ERROR;
>>>
>>> Whilst working on my q800 patches I realised that this was a similar
>>> problem to the one I had with my macio.alias implementation at [1]:
>>> except that in my case the unassigned_mem_ops mr->ops->valid.accepts()
>>> function was being invoked on a ROM memory region instead of an alias. I
>>> think this is exactly the same issue that you are attempting to fix with
>>> your related patch at
>>> https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg03190.html
>>> ("memory: Initialize MemoryRegionOps for RAM memory regions").
>>
>> So if 2 contributors hit similar issues, there is something wrong with
>> the API. I don't see your use case or mine as forbidded by the
>> documentation in "exec/memory.h".
>>
>> My patch might not be the proper fix, but we need to figure out how
>> to avoid others to hit the same problem, as it is very hard to debug.
> 
> I agree with this sentiment: it has taken me a while to figure out what
> was happening, and that was only because I spotted accesses being
> rejected with -d guest_errors.
> 
> From my perspective the names memory_region_dispatch_read() and
> memory_region_dispatch_write() were the misleading part, although I
> remember thinking it odd whilst trying to use them that I had to start
> delving into sections etc. just to recurse a memory access.
> 
>> At least an assertion and a comment.
>>
>>> I eventually realised that I needed functions that could dispatch
>>> reads/writes to both IO memory regions and ROM memory regions, and that
>>> functionality is covered by the address_space_*() access functions.
>>> Using the address_space_*() functions I was then able to come up with
>>> the working implementation at [2] that handles accesses to both IO
>>> memory regions and ROM memory regions correctly.
>>>
>>> The reaso
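
For reference, a minimal sketch of the address_space_*() approach Mark
describes above (the wrapper name and the per-access address-space setup
are illustrative assumptions, not the actual q800 code; a real user would
keep the AddressSpace around instead of creating one per access):

    /* Assumes the usual QEMU softmmu headers ("exec/memory.h" etc.).
     * Dispatch a read through an address space rooted at a container
     * region, so aliases and ROM regions are resolved by the flatview
     * rather than by the caller.
     */
    static MemTxResult example_read_via_as(MemoryRegion *root, hwaddr addr,
                                           void *buf, hwaddr len)
    {
        AddressSpace as;
        MemTxResult r;

        address_space_init(&as, root, "example-as");
        r = address_space_read(&as, addr, MEMTXATTRS_UNSPECIFIED, buf, len);
        address_space_destroy(&as);
        return r;
    }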

Re: [PATCH qemu v18] spapr: Implement Open Firmware client interface

2021-04-20 Thread Alexey Kardashevskiy




On 20/04/2021 13:14, David Gibson wrote:


Overall, looking good.  I'm pretty much happy to take it into 6.1.  I
do have quite a few comments below, but they're basically all just
polish.

On Wed, Mar 31, 2021 at 01:53:08PM +1100, Alexey Kardashevskiy wrote:

The PAPR platform which describes an OS environment that's presented by


Nit: remove "which" and this will become a sentence.


a combination of a hypervisor and firmware. The features it specifies
require collaboration between the firmware and the hypervisor.

Since the beginning, the runtime component of the firmware (RTAS) has
been implemented as a 20 byte shim which simply forwards it to
a hypercall implemented in qemu. The boot time firmware component is
SLOF - but a build that's specific to qemu, and has always needed to be
updated in sync with it. Even though we've managed to limit the amount
of runtime communication we need between qemu and SLOF, there's some,
and it has become increasingly awkward to handle as we've implemented
new features.

This implements a boot time OF client interface (CI) which is
enabled by a new "x-vof" pseries machine option (stands for "Virtual Open
Firmware). When enabled, QEMU implements the custom H_OF_CLIENT hcall
which implements Open Firmware Client Interface (OF CI). This allows
using a smaller stateless firmware which does not have to manage
the device tree.


The above is a really good description of the rationale, thanks.


The new "vof.bin" firmware image is included with source code under
pc-bios/. It also includes RTAS blob.

This implements a handful of CI methods just to get -kernel/-initrd
working. In particular, this implements the device tree fetching and
simple memory allocator - "claim" (an OF CI memory allocator) and updates
"/memory@0/available" to report the client about available memory.

This implements changing some device tree properties which we know how
to deal with, the rest is ignored. To allow changes, this skips
fdt_pack() when x-vof=on as not packing the blob leaves some room for
appending.

In absence of SLOF, this assigns phandles to device tree nodes to make
device tree traversing work.

When x-vof=on, this adds "/chosen" every time QEMU (re)builds a tree.

This adds basic instances support which are managed by a hash map
ihandle -> [phandle].

Before the guest started, the used memory is:
0..e60 - the initial firmware
8000..1 - stack
40.. - kernel
3ea.. - initramdisk


This memory map info would probably be more useful in a comment
somewhere in the code than in the commit message.


This OF CI does not implement "interpret".

Unlike SLOF, this does not format uninitialized nvram. Instead, this
includes a disk image with pre-formatted nvram.

With this basic support, this can only boot into a kernel directly.
However this is just enough for the petitboot kernel and initramdisk to
boot from any possible source. Note this requires a reasonably recent guest
kernel with:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=df5be5be8735

The immediate benefit is a much faster boot time, which is especially
crucial with fully emulated early CPU bring-up environments. Also this
may come in handy when/if GRUB-in-the-userspace sees the light of day.

This separates VOF and sPAPR in a hope that VOF bits may be reused by
other POWERPC boards which do not support pSeries.

This is coded in the assumption that later on we might be adding support for
booting from QEMU backends (blockdev is the first candidate) without
devices/drivers in between, as OF1275 does not require that and
it is quite easy to do so.

Signed-off-by: Alexey Kardashevskiy 
---

The example command line is:

/home/aik/pbuild/qemu-killslof-localhost-ppc64/qemu-system-ppc64 \
-nodefaults \
-chardev stdio,id=STDIO0,signal=off,mux=on \
-device spapr-vty,id=svty0,reg=0x71000110,chardev=STDIO0 \
-mon id=MON0,chardev=STDIO0,mode=readline \
-nographic \
-vga none \
-enable-kvm \
-m 8G \
-machine 
pseries,x-vof=on,cap-cfpc=broken,cap-sbbc=broken,cap-ibs=broken,cap-ccf-assist=off
 \
-kernel pbuild/kernel-le-guest/vmlinux \
-initrd pb/rootfs.cpio.xz \
-drive 
id=DRIVE0,if=none,file=./p/qemu-killslof/pc-bios/vof-nvram.bin,format=raw \
-global spapr-nvram.drive=DRIVE0 \
-snapshot \
-smp 8,threads=8 \
-L /home/aik/t/qemu-ppc64-bios/ \
-trace events=qemu_trace_events \
-d guest_errors \
-chardev socket,id=SOCKET0,server,nowait,path=qemu.mon.tmux26 \
-mon chardev=SOCKET0,mode=control

---
Changes:
v18:
* fixed top addr (max address for "claim") on radix - it equals to ram_size
and vof->top_addr was uint32_t
* fixed "available" property which got broken in v14 but it is only visible
to clients which care (== grub)
* reshuffled vof_dt_memory_available() calls, added vof_init() to allow
vof_claim() before rendering the FDT

v17:
* mv hw/ppc/vof.h include/hw/ppc/vof.h
* VofMachineIfClass -> VofMachineClass; it is not VofMachineInterface as
nobody used this scheme, usually "Interface" is dropped, a couple of times
it i

Re: [PATCH] migration: Deprecate redundant query-migrate result @blocked

2021-04-20 Thread Dr. David Alan Gilbert
* Daniel P. Berrangé (berra...@redhat.com) wrote:
> On Tue, Apr 20, 2021 at 07:19:06AM +0200, Markus Armbruster wrote:
> > Result @blocked is true when and only when result @blocked-reasons is
> > present.  It's always non-empty when present.  @blocked is redundant.
> > It was introduced in commit 3af8554bd0 "migration: Add blocker
> > information", and has not been released.  This gives us a chance to
> > fix the interface with minimal fuss.
> > 
> > Unfortunately, we're already too close to the release to risk dropping
> > it.  Deprecate it instead.
> > 
> > Signed-off-by: Markus Armbruster 
> > ---
> > This is alternative to "[PATCH v2] migration: Drop redundant
> > query-migrate result @blocked".
> > 
> >  qapi/migration.json | 9 +++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> Reviewed-by: Daniel P. Berrangé 

Yes,


Reviewed-by: Dr. David Alan Gilbert 

Peter: Do you want to pick that up directly ?

Dave

> 
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [RFC PATCH-for-6.1 8/9] hw/clock: Declare clock_new() internally

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/19/21 4:26 PM, Peter Maydell wrote:
> On Fri, 9 Apr 2021 at 07:24, Philippe Mathieu-Daudé  wrote:
>>
>> To enforce correct API usage, restrict the clock creation to
>> hw/core/. The only possible ways to create a clock are:
>>
>> - Constant clock at the board level
>>   Using machine_create_constant_clock() in machine_init()
>>
>> - Propagated clock in QDev
>>   Using qdev_init_clock_in() or qdev_init_clock_out() in
>>   TYPE_DEVICE instance_init().
> 
> Why isn't it OK to have a constant clock inside a device ?

I'm not an electronics engineer, so I guessed, because I have never used
a component which generates a clock source without being externally
excited by a crystal or oscillator. Such exciters are components
on the board. I might be wrong.

Using clock sources outside of qdev is giving us headaches... So I'm
trying to enforce that all clocks are used via qdev. Looking at the
remaining cases and thinking about hardware, my understanding is that
what's left belongs to the "(constant) clock source on the board" case,
so I added this machine_create_constant_clock() method to complete the
enforced API.

Maybe what I'm trying to fix is a side-effect of the non-qdev reset
problem, and if we get a QOM tree reset, then a clock on a non-qdev
object would properly propagate its constant value to its children.



[Bug 1805913] Re: readdir() returns NULL (errno=EOVERFLOW) for 32-bit user-static qemu on 64-bit host

2021-04-20 Thread Peter Maydell
This is still a bug, and still blocked on the kernel providing APIs to QEMU to 
request 32-bit directory entries. Linus Walleij proposed a kernel patch to add 
a suitable fcntl flag but as far as I'm aware it didn't get in so far:
 
https://lore.kernel.org/qemu-devel/20201117233928.255671-1-linus.wall...@linaro.org/


** Changed in: qemu
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1805913

Title:
  readdir() returns NULL (errno=EOVERFLOW) for 32-bit user-static qemu
  on 64-bit host

Status in QEMU:
  Confirmed

Bug description:
  This can be simply reproduced by compiling and running the attached C
  code (readdir-bug.c) under 32-bit user-static qemu, such as qemu-arm-
  static:

  # Setup docker for user-static binfmt
  docker run --rm --privileged multiarch/qemu-user-static:register --reset
  # Compile the code and run (readdir for / is fine, so create a new directory 
/test).
  docker run -v /path/to/qemu-arm-static:/usr/bin/qemu-arm-static -v 
/path/to/readdir-bug.c:/tmp/readdir-bug.c -it --rm arm32v7/ubuntu:18.10 bash -c 
'{ apt update && apt install -y gcc; } >&/dev/null && mkdir -p /test && cd 
/test && gcc /tmp/readdir-bug.c && ./a.out'
  dir=0xff5b4150
  readdir(dir)=(nil)
  errno=75: Value too large for defined data type

  Do remember to replace /path/to/qemu-arm-static and
  /path/to/readdir-bug.c with the actual paths of the files.

  The root cause is in glibc:
  
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getdents.c;h=6d09a5be7057e2792be9150d3a2c7b293cf6fc34;hb=a5275ba5378c9256d18e582572b4315e8edfcbfb#l87

  By the C standard, readdir() returns a struct dirent *, in which the
  inode number and offset are 32-bit integers on 32-bit platforms.
  glibc therefore calls getdents64() and checks whether the inode number
  and offset fit the 32-bit range, reporting EOVERFLOW if not.

  The problem here is that for a 32-bit user-static qemu running on a
  64-bit host, qemu's getdents64 simply passes through the inode number
  and offset from the underlying getdents64 syscall (from the 64-bit
  kernel), which are very likely not to fit into the 32-bit range. On
  real hardware, a 32-bit kernel creates 32-bit inode numbers and
  therefore works properly.

  The glibc check makes sense for conformance with the C standard, so
  ideally this should be fixed on the qemu side. I admit this is
  difficult because qemu would have to maintain a mapping between the
  underlying 64-bit inode numbers and 32-bit inode numbers, which would
  severely hurt performance. I don't expect this to be fixed anytime
  soon (or even that there will be a fix), but it would be
  worthwhile to surface this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1805913/+subscriptions



[Bug 1808565] Re: Reading /proc/self/task/<tid>/maps is not remapped to the target

2021-04-20 Thread Peter Maydell
Code inspection shows we still don't handle /proc/self/task.


** Changed in: qemu
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1808565

Title:
  Reading /proc/self/task/<tid>/maps is not remapped to the target

Status in QEMU:
  Confirmed

Bug description:
  Seeing this in qemu-user 3.1.0

  The code in is_proc_myself which supports remapping of /proc/self/maps
  and /proc/<pid>/maps does not support remapping of
  /proc/self/task/<tid>/maps or /proc/<pid>/task/<tid>/maps. Extending
  is_proc_myself to cover these cases causes the maps to be rewritten
  correctly.

  These are useful in multithreaded programs to avoid freezing the
  entire program to capture the maps for a single tid.
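
  A minimal sketch of the extra path matching the report asks for (the
  helper name and structure are illustrative guesses, not the actual
  linux-user code):

      #include <stdbool.h>
      #include <stdlib.h>
      #include <string.h>

      /* Accept "/proc/self/task/<tid>/maps" (for any entry name) in
       * addition to the "/proc/self/maps" form already handled.
       */
      static bool is_proc_self_task(const char *path, const char *entry)
      {
          char *end;

          if (strncmp(path, "/proc/self/task/", 16) != 0) {
              return false;
          }
          path += 16;
          strtoul(path, &end, 10);            /* skip the tid */
          if (end == path || *end != '/') {
              return false;
          }
          return strcmp(end + 1, entry) == 0; /* e.g. entry == "maps" */
      }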

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1808565/+subscriptions



Re: [RFC v14 52/80] tests: device-introspect-test: cope with ARM TCG-only devices

2021-04-20 Thread Alex Bennée


Claudio Fontana  writes:

> On 4/19/21 12:29 PM, Thomas Huth wrote:
>> On 19/04/2021 12.24, Claudio Fontana wrote:
>>> On 4/19/21 12:22 PM, Thomas Huth wrote:
 On 16/04/2021 18.27, Claudio Fontana wrote:
> Skip the test_device_intro_concrete for now for ARM KVM-only build,
> as on ARM we currently build devices for ARM that are not
> compatible with a KVM-only build.
>
> We can remove this workaround when we fix this in KConfig etc,
> and we only list and build machines that are compatible with KVM
> for KVM-only builds.
>
> Signed-off-by: Claudio Fontana 
> Cc: Philippe Mathieu-Daudé 
> ---
>tests/qtest/device-introspect-test.c | 18 ++
>1 file changed, 18 insertions(+)
>
> diff --git a/tests/qtest/device-introspect-test.c 
> b/tests/qtest/device-introspect-test.c
> index bbec166dbc..1ff15e2247 100644
> --- a/tests/qtest/device-introspect-test.c
> +++ b/tests/qtest/device-introspect-test.c
> @@ -329,12 +329,30 @@ int main(int argc, char **argv)
>qtest_add_func("device/introspect/none", test_device_intro_none);
>qtest_add_func("device/introspect/abstract", 
> test_device_intro_abstract);
>qtest_add_func("device/introspect/abstract-interfaces", 
> test_abstract_interfaces);
> +
> +/*
> + * XXX currently we build also boards for ARM that are incompatible 
> with KVM.
> + * We therefore need to check this explicitly, and only test virt 
> for kvm-only
> + * arm builds.
> + * After we do the work of Kconfig etc to ensure that only 
> KVM-compatible boards
> + * are built for the kvm-only build, we could remove this.
> + */
> +#ifndef CONFIG_TCG
> +{
> +const char *arch = qtest_get_arch();
> +if (strcmp(arch, "arm") == 0 || strcmp(arch, "aarch64") == 0) {
> +goto add_machine_test_done;
> +}
> +}
> +#endif /* !CONFIG_TCG */
>if (g_test_quick()) {
>qtest_add_data_func("device/introspect/concrete/defaults/none",
>g_strdup(common_args), 
> test_device_intro_concrete);
>} else {
>qtest_cb_for_every_machine(add_machine_test_case, true);
>}
> +goto add_machine_test_done;

 That last goto does not make much sense, since the label is right below.
>>>
>>> It is necessary to remove the warning that is otherwise produced about the 
>>> unused label IIRC.
>> 
>> Ah, ok.
>> 
>> Alternatively, you could maybe also drop the label completely and simply 
>> write the #ifndef block above like this (note the "else"):
>> 
>> #ifndef CONFIG_TCG
>>  const char *arch = qtest_get_arch();
>>  if (!strcmp(arch, "arm") || !strcmp(arch, "aarch64")) {
>>  /* Do nothing */
>>  }
>>  else
>> #endif /* !CONFIG_TCG */
>> 
>> ... not sure what's nicer, though.
>> 
>>   Thomas
>> 
>
> Indeed, I tried a couple of combinations; in the end the least confusing
> to me was the goto one.
> The #ifdef containing an open else is in my opinion worse and more
> error-prone, but I am open to additional comments/ideas.

Surely a helper makes intent clearer?

  /*
   * XXX currently we build also boards for ARM that are incompatible with KVM.
   * We therefore need to check this explicitly, and only test virt for kvm-only
   * arm builds.
   * After we do the work of Kconfig etc to ensure that only KVM-compatible 
boards
   * are built for the kvm-only build, we could remove this.
   */
  static bool skip_machine_tests(void)
  {
  #ifndef CONFIG_TCG
  const char *arch = qtest_get_arch();
  if (strcmp(arch, "arm") == 0 || strcmp(arch, "aarch64") == 0) {
  return true;
  }
  #endif /* !CONFIG_TCG */
  return false;
  }


  int main(int argc, char **argv)
  {
  g_test_init(&argc, &argv, NULL);

  qtest_add_func("device/introspect/list", test_device_intro_list);
  qtest_add_func("device/introspect/list-fields", test_qom_list_fields);
  qtest_add_func("device/introspect/none", test_device_intro_none);
  qtest_add_func("device/introspect/abstract", test_device_intro_abstract);
  qtest_add_func("device/introspect/abstract-interfaces", 
test_abstract_interfaces);

  if (!skip_machine_tests()) {
  if (g_test_quick()) {
  qtest_add_data_func("device/introspect/concrete/defaults/none",
  g_strdup(common_args), 
test_device_intro_concrete);
  } else {
  qtest_cb_for_every_machine(add_machine_test_case, true);
  }
  }

  return g_test_run();
  }


>
> Thanks,
>
> Claudio


-- 
Alex Bennée



[PATCH v2] i386: Add ratelimit for bus locks acquired in guest

2021-04-20 Thread Chenyi Qiang
Virtual Machines can exploit bus locks to degrade the performance of the
system. To address this kind of performance DoS attack, bus lock VM exit
is introduced in KVM and it will report the bus locks detected in the
guest, which can help userspace enforce throttling policies.

The availability of bus lock VM exit can be detected through the
KVM_CAP_X86_BUS_LOCK_EXIT. The returned bitmap contains the potential
policies supported by KVM. The field KVM_BUS_LOCK_DETECTION_EXIT in
bitmap is the only supported strategy at present. It indicates that KVM
will exit to userspace to handle the bus locks.

This patch adds a ratelimit on the bus locks acquired in guest as a
mitigation policy.

Introduce a new field "bld" to record the limited speed of bus locks in
target VM. The user can specify it through the "bus-lock-detection"
as a machine property. In current implementation, the default value of
the speed is 0 per second, which means no restriction on the bus locks.

Ratelimit enforced in data transmission uses a time slice of 100ms to
get smooth output during regular operations in block jobs. As for
ratelimit on bus lock detection, simply set the ratelimit interval to 1s
and restrict the quota of bus lock occurrence to the value of "bld". A
potential alternative is to introduce the time slice as a property
which can help the user achieve more precise control.

The detail of Bus lock VM exit can be found in spec:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Signed-off-by: Chenyi Qiang 

---
Changes from RFC v1:
  - Remove the rip info output, as the rip can't reflect the bus lock
position correctly.
  - RFC v1: 
https://lore.kernel.org/qemu-devel/20210317084709.15605-1-chenyi.qi...@intel.com/
---
 hw/i386/x86.c |  6 ++
 include/hw/i386/x86.h |  7 +++
 target/i386/kvm/kvm.c | 42 ++
 3 files changed, 55 insertions(+)

diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index ed796fe6ba..42d10857a6 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -1256,6 +1256,12 @@ static void x86_machine_initfn(Object *obj)
 x86ms->pci_irq_mask = ACPI_BUILD_PCI_IRQS;
 x86ms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
 x86ms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
+x86ms->bld = 0;
+
+object_property_add_uint64_ptr(obj, "bus-lock-detection",
+   &x86ms->bld, OBJ_PROP_FLAG_READWRITE);
+object_property_set_description(obj, "bus-lock-detection",
+"Bus lock detection ratelimit");
 }
 
 static void x86_machine_class_init(ObjectClass *oc, void *data)
diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
index c09b648dff..d6e198b228 100644
--- a/include/hw/i386/x86.h
+++ b/include/hw/i386/x86.h
@@ -74,6 +74,13 @@ struct X86MachineState {
  * will be translated to MSI messages in the address space.
  */
 AddressSpace *ioapic_as;
+
+/*
+ * ratelimit enforced on detected bus locks, the default value
+ * is 0 per second
+ */
+uint64_t bld;
+RateLimit bld_limit;
 };
 
 #define X86_MACHINE_SMM  "smm"
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 7fe9f52710..a75fac0404 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -130,6 +130,8 @@ static bool has_msr_mcg_ext_ctl;
 static struct kvm_cpuid2 *cpuid_cache;
 static struct kvm_msr_list *kvm_feature_msrs;
 
+#define SLICE_TIME 1000000000ULL /* ns */
+
 int kvm_has_pit_state2(void)
 {
 return has_pit_state2;
@@ -2267,6 +2269,27 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
 }
 }
 
+if (object_dynamic_cast(OBJECT(ms), TYPE_X86_MACHINE)) {
+X86MachineState *x86ms = X86_MACHINE(ms);
+
+if (x86ms->bld > 0) {
+ret = kvm_check_extension(s, KVM_CAP_X86_BUS_LOCK_EXIT);
+if (!(ret & KVM_BUS_LOCK_DETECTION_EXIT)) {
+error_report("kvm: bus lock detection unsupported");
+return -ENOTSUP;
+}
+ret = kvm_vm_enable_cap(s, KVM_CAP_X86_BUS_LOCK_EXIT, 0,
+KVM_BUS_LOCK_DETECTION_EXIT);
+if (ret < 0) {
+error_report("kvm: Failed to enable bus lock detection cap: 
%s",
+ strerror(-ret));
+return ret;
+}
+
+ratelimit_set_speed(&x86ms->bld_limit, x86ms->bld, SLICE_TIME);
+}
+}
+
 return 0;
 }
 
@@ -4221,6 +4244,18 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
 }
 }
 
+static void kvm_rate_limit_on_bus_lock(void)
+{
+MachineState *ms = MACHINE(qdev_get_machine());
+X86MachineState *x86ms = X86_MACHINE(ms);
+
+uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bld_limit, 1);
+
+if (delay_ns) {
+g_usleep(delay_ns / SCALE_US);
+}
+}
+
 MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
 {
 X86CPU 

Re: [RFC-PATCH] ppc/spapr: Add support for H_SCM_PERFORMANCE_STATS hcall

2021-04-20 Thread Vaibhav Jain
Thanks for looking into this patch, David.

David Gibson  writes:

> On Thu, Apr 15, 2021 at 01:23:43PM +0530, Vaibhav Jain wrote:
>> Add support for H_SCM_PERFORMANCE_STATS described at [1] for
>> spapr nvdimms. This enables guest to fetch performance stats[2] like
>> expected life of an nvdimm ('MemLife ') etc and display them to the
>> user. Linux kernel support for fetching these performance stats and
>> exposing them to the user-space was done via [3].
>> 
>> The hcall semantics mandate that each nvdimm performance stat is
>> uniquely identified by an 8-byte ascii string (e.g. 'MemLife ') and that
>> its value be an 8-byte integer. These performance stats are exchanged with
>> the guest via a guest-allocated buffer called
>> 'requestAndResultBuffer', or rr-buffer for short. This buffer contains
>> a header described by 'struct papr_scm_perf_stats' followed by an array
>> of performance stats described by 'struct papr_scm_perf_stat'. The
>> hypervisor is expected to validate the rr-buffer header and then, based
>> on the request, copy the needed performance stats to the array of
>> 'struct papr_scm_perf_stat' following the header.
>> 
>> The patch proposes a new function h_scm_performance_stats() that
>> services the H_SCM_PERFORMANCE_STATS hcall. After verifying the
>> validity of the rr-buffer header via scm_perf_check_rr_buffer(), it
>> proceeds to fill the rr-buffer with the requested performance stats. The
>> value of each stat is retrieved from the individual accessor
>> function for that stat; the accessors are indexed in the array
>> 'nvdimm_perf_stats'.
>> 
>> References:
>> [1] "Hypercall Op-codes (hcalls)"
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/powerpc/papr_hcalls.rst#n269
>> [2] Sysfs attribute documentation for papr_scm
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/testing/sysfs-bus-papr-pmem#n36
>> [3] "powerpc/papr_scm: Fetch nvdimm performance stats from PHYP"
>> https://lore.kernel.org/r/20200731064153.182203-2-vaib...@linux.ibm.com
>> 
>> Signed-off-by: Vaibhav Jain 
>> ---
>>  hw/ppc/spapr_nvdimm.c  | 243 +
>>  include/hw/ppc/spapr.h |  19 +++-
>>  2 files changed, 261 insertions(+), 1 deletion(-)
>> 
>> diff --git a/hw/ppc/spapr_nvdimm.c b/hw/ppc/spapr_nvdimm.c
>> index 252204e25f..4830eae4a4 100644
>> --- a/hw/ppc/spapr_nvdimm.c
>> +++ b/hw/ppc/spapr_nvdimm.c
>> @@ -35,6 +35,11 @@
>>  /* SCM device is unable to persist memory contents */
>>  #define PAPR_PMEM_UNARMED PPC_BIT(0)
>>  
>> +/* Maximum output buffer size needed to return all nvdimm_perf_stats */
>> +#define SCM_STATS_MAX_OUTPUT_BUFFER  (sizeof(struct papr_scm_perf_stats) + \
>> +  sizeof(struct papr_scm_perf_stat) * \
>> +  ARRAY_SIZE(nvdimm_perf_stats))
>> +
>>  bool spapr_nvdimm_validate(HotplugHandler *hotplug_dev, NVDIMMDevice 
>> *nvdimm,
>> uint64_t size, Error **errp)
>>  {
>> @@ -502,6 +507,243 @@ static target_ulong h_scm_health(PowerPCCPU *cpu, 
>> SpaprMachineState *spapr,
>>  return H_SUCCESS;
>>  }
>>  
>> +static int perf_stat_noop(SpaprDrc *drc, uint8_t unused[8], uint64_t *val)
>> +{
>> +*val = 0;
>> +return H_SUCCESS;
>> +}
>> +
>> +static int perf_stat_memlife(SpaprDrc *drc, uint8_t unused[8], uint64_t 
>> *val)
>> +{
>> +/* Assume full life available of an NVDIMM right now */
>> +*val = 100;
>
> AFAICT the reporting mechanism makes basically all the stats
> optional.  Doesn't it make more sense to omit stats, rather than use
> dummy values in this case?  Or is this just an example for the RFC?
>
Yes, this was just an RFC example to illustrate adding support for a new
stat.

>> +return H_SUCCESS;
>> +}
>> +
>> +/*
>> + * Holds all supported performance stats accessors. Each 
>> performance-statistic
>> + * is uniquely identified by a 8-byte ascii string for example: 'MemLife '
>> + * which indicate in percentage how much usage life of an nvdimm is 
>> remaining.
>> + * 'NoopStat' which is primarily used to test support for retriving 
>> performance
>> + * stats and also to replace unknown stats present in the rr-buffer.
>> + *
>> + */
>> +static const struct {
>> +char stat_id[8];
>> +int  (*stat_getval)(SpaprDrc *drc, uint8_t id[8],  uint64_t *val);
>> +} nvdimm_perf_stats[] = {
>> +{ "NoopStat", perf_stat_noop},
>> +{ "MemLife ", perf_stat_memlife},
>> +};
>> +
>> +/*
>> + * Given a nvdimm drc and stat-name return its value. In case given 
>> stat-name
>> + * isnt supported then return H_PARTIAL.
>> + */
>> +static int nvdimm_stat_getval(SpaprDrc *drc, uint8_t id[8], uint64_t *val)
>> +{
>> +int index;
>> +
>> +/* Lookup the stats-id in the nvdimm_perf_stats table */
>> +for (index = 0; index < ARRAY_SIZE(nvdimm_perf_stats); ++index) {
>> +
>
> No blank line here.
>
Sure, will fix the blank line from this and other places you reported.
>> +
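
For orientation, an illustrative C layout of the rr-buffer described in
the commit message (the field names and widths here are assumptions
inferred from the description, not the PAPR-defined structures):

    #include <stdint.h>

    /* One performance statistic: an 8-byte ASCII id plus an 8-byte value. */
    struct example_scm_perf_stat {
        uint8_t  stat_id[8];   /* e.g. "MemLife " or "NoopStat" */
        uint64_t stat_val;     /* big-endian on the wire */
    };

    /* Guest-allocated request-and-result ("rr") buffer: a header that the
     * hypervisor validates, followed by an array of statistics.
     */
    struct example_scm_perf_stats {
        uint8_t  eye_catcher[8];
        uint32_t stats_version;
        uint32_t num_statistics;
        struct example_scm_perf_stat stats[];
    };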

Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Igor Druzhinin via

On 20/04/2021 09:53, Roger Pau Monné wrote:

On Tue, Apr 20, 2021 at 04:35:02AM +0100, Igor Druzhinin wrote:

When we're replacing the existing mapping there is a possibility of a race
on the memory map with other threads doing mmap operations - the address being
unmapped/re-mapped could be occupied by another thread in between.

Linux mmap man page recommends keeping the existing mappings in place to
reserve the place and instead utilize the fact that the next mmap operation
with MAP_FIXED flag passed will implicitly destroy the existing mappings
behind the chosen address. This behavior is guaranteed by POSIX / BSD and
therefore is portable.

Note that it wouldn't make the replacement atomic for parallel accesses to
the replaced region - those might still fail with SIGBUS due to
xenforeignmemory_map not being atomic. So we're still not expecting those.

Tested-by: Anthony PERARD 
Signed-off-by: Igor Druzhinin 


Should we add a 'Fixes: 759235653de ...' or similar tag here?


I was thinking about it and decided not to. There wasn't a bug here until
QEMU introduced the use of iothreads in the state loading phase.
Originally this process was totally single-threaded. And it's hard to
pinpoint the exact moment when that happened, which is also not directly
related to the change here.


Igor



Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Igor Druzhinin via

On 20/04/2021 04:39, no-re...@patchew.org wrote:

Patchew URL: 
https://patchew.org/QEMU/1618889702-13104-1-git-send-email-igor.druzhi...@citrix.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 1618889702-13104-1-git-send-email-igor.druzhi...@citrix.com
Subject: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
 From https://github.com/patchew-project/qemu
  * [new tag] 
patchew/1618889702-13104-1-git-send-email-igor.druzhi...@citrix.com -> 
patchew/1618889702-13104-1-git-send-email-igor.druzhi...@citrix.com
Switched to a new branch 'test'
3102519 xen-mapcache: avoid a race on memory map while using MAP_FIXED

=== OUTPUT BEGIN ===
ERROR: Author email address is mangled by the mailing list
#2:
Author: Igor Druzhinin via 

total: 1 errors, 0 warnings, 21 lines checked



Anthony,

Is there any action required from me here? I don't completely understand 
how that happened. But I found this:


"In some cases the Author: email address in patches submitted to the
list gets mangled such that it says

John Doe via Qemu-devel 

This change is a result of workarounds for DMARC policies.

Subsystem maintainers accepting patches need to catch these and fix
them before sending pull requests, so a checkpatch.pl test is highly
desirable."

Igor



Re: [PATCH v5 04/14] softmmu/memory: Pass ram_flags to qemu_ram_alloc_from_fd()

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's pass in ram flags just like we do with qemu_ram_alloc_from_file(),
> to clean up and prepare for more flags.
> 
> Simplify the documentation of passed ram flags: Looking at our
> documentation of RAM_SHARED and RAM_PMEM is sufficient, no need to be
> repetitive.
> 
> Reviewed-by: Peter Xu 
> Signed-off-by: David Hildenbrand 
> ---
>  backends/hostmem-memfd.c | 7 ---
>  hw/misc/ivshmem.c| 5 ++---
>  include/exec/memory.h| 9 +++--
>  include/exec/ram_addr.h  | 6 +-
>  softmmu/memory.c | 7 +++
>  5 files changed, 13 insertions(+), 21 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 




Re: [RFC v14 04/80] target/arm: tcg: add sysemu and user subdirs

2021-04-20 Thread Alex Bennée


Claudio Fontana  writes:

> Signed-off-by: Claudio Fontana 
> Reviewed-by: Richard Henderson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée



Re: any remaining for-6.0 issues?

2021-04-20 Thread Cornelia Huck
On Mon, 19 Apr 2021 20:38:05 +0100
Mark Cave-Ayland  wrote:

> On 19/04/2021 18:02, Cornelia Huck wrote:
> 
> >> That patch seems to be our best candidate so far, but the intermittent
> >> nature of the failures make it hard to pin down... I don't see anything
> >> obviously wrong with the patch, maybe some linux-user experts have a
> >> better idea?  
> > 
> > FWIW, I tried reproducing the issue on some local systems (no luck),
> > and on code pushed out to gitlab (where it works most of the time, and
> > the user builds where it fails are unpredictable.)
> > 
> > I fear the best we can do right now is stare at the code and try to
> > figure out what might be wrong :(  
> 
> Is there any particular reason why the unsigned long cast was removed from 
> the front? 
> Could that have an effect?

Indeed, that looks strange.

Will give it a try with the cast re-added, but I'm still unable to
reproduce the error reliably...




[RFC PATCH 0/3] block-copy: lock tasks and calls list

2021-04-20 Thread Emanuele Giuseppe Esposito
This series of patches continues Paolo's series on making the
block layer thread safe. Add a CoMutex lock for both the tasks and
calls lists present in block/block-copy.c.

Signed-off-by: Emanuele Giuseppe Esposito 

Emanuele Giuseppe Esposito (3):
  block-copy: improve documentation for BlockCopyTask type and functions
  block-copy: add a CoMutex to the BlockCopyTask list
  block-copy: add CoMutex lock for BlockCopyCallState list

 block/block-copy.c | 41 +
 1 file changed, 33 insertions(+), 8 deletions(-)

-- 
2.30.2




[RFC PATCH 2/3] block-copy: add a CoMutex to the BlockCopyTask list

2021-04-20 Thread Emanuele Giuseppe Esposito
Because the list of tasks is only modified by coroutine
functions, add a CoMutex in order to protect the list of tasks.

Signed-off-by: Emanuele Giuseppe Esposito 
---
 block/block-copy.c | 23 +++
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 03df50a354..e785ac57e0 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -65,7 +65,9 @@ typedef struct BlockCopyTask {
 BlockCopyState *s;
 BlockCopyCallState *call_state;
 
-/* State */
+/* State. These fields are protected by the tasks_lock
+ * in BlockCopyState
+ */
 int64_t offset;
 int64_t bytes;
 bool zeroes;
@@ -95,6 +97,8 @@ typedef struct BlockCopyState {
 bool use_copy_range;
 int64_t copy_size;
 uint64_t len;
+
+CoMutex tasks_lock;
 QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls 
*/
 QLIST_HEAD(, BlockCopyCallState) calls;
 
@@ -124,6 +128,7 @@ typedef struct BlockCopyState {
 RateLimit rate_limit;
 } BlockCopyState;
 
+/* Called with lock held */
 static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
 int64_t offset, int64_t bytes)
 {
@@ -145,13 +150,19 @@ static BlockCopyTask 
*find_conflicting_task(BlockCopyState *s,
 static bool coroutine_fn block_copy_wait_one(BlockCopyState *s, int64_t offset,
  int64_t bytes)
 {
-BlockCopyTask *task = find_conflicting_task(s, offset, bytes);
+BlockCopyTask *task;
+
+qemu_co_mutex_lock(&s->tasks_lock);
+task = find_conflicting_task(s, offset, bytes);
 
 if (!task) {
+qemu_co_mutex_unlock(&s->tasks_lock);
 return false;
 }
 
-qemu_co_queue_wait(&task->wait_queue, NULL);
+qemu_co_queue_wait(&task->wait_queue, &s->tasks_lock);
+
+qemu_co_mutex_unlock(&s->tasks_lock);
 
 return true;
 }
@@ -178,7 +189,9 @@ static BlockCopyTask *coroutine_fn 
block_copy_task_create(BlockCopyState *s,
 bytes = QEMU_ALIGN_UP(bytes, s->cluster_size);
 
 /* region is dirty, so no existent tasks possible in it */
+qemu_co_mutex_lock(&s->tasks_lock);
 assert(!find_conflicting_task(s, offset, bytes));
+qemu_co_mutex_unlock(&s->tasks_lock);
 
 bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
 s->in_flight_bytes += bytes;
@@ -192,8 +205,9 @@ static BlockCopyTask *coroutine_fn 
block_copy_task_create(BlockCopyState *s,
 .bytes = bytes,
 };
 qemu_co_queue_init(&task->wait_queue);
+qemu_co_mutex_lock(&s->tasks_lock);
 QLIST_INSERT_HEAD(&s->tasks, task, list);
-
+qemu_co_mutex_unlock(&s->tasks_lock);
 return task;
 }
 
@@ -297,6 +311,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, 
BdrvChild *target,
 }
 
 QLIST_INIT(&s->tasks);
+qemu_co_mutex_init(&s->tasks_lock);
 QLIST_INIT(&s->calls);
 
 return s;
-- 
2.30.2




[RFC PATCH 3/3] block-copy: add CoMutex lock for BlockCopyCallState list

2021-04-20 Thread Emanuele Giuseppe Esposito
Use the same tasks_lock, since the calls list is just used
for debug purposes.

Signed-off-by: Emanuele Giuseppe Esposito 
---
 block/block-copy.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index e785ac57e0..555f5fb747 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -310,10 +310,9 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, 
BdrvChild *target,
 s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
 }
 
-QLIST_INIT(&s->tasks);
 qemu_co_mutex_init(&s->tasks_lock);
+QLIST_INIT(&s->tasks);
 QLIST_INIT(&s->calls);
-
 return s;
 }
 
@@ -712,7 +711,9 @@ static int coroutine_fn 
block_copy_common(BlockCopyCallState *call_state)
 {
 int ret;
 
+qemu_co_mutex_lock(&call_state->s->tasks_lock);
 QLIST_INSERT_HEAD(&call_state->s->calls, call_state, list);
+qemu_co_mutex_unlock(&call_state->s->tasks_lock);
 
 do {
 ret = block_copy_dirty_clusters(call_state);
@@ -739,7 +740,9 @@ static int coroutine_fn 
block_copy_common(BlockCopyCallState *call_state)
 call_state->cb(call_state->cb_opaque);
 }
 
+qemu_co_mutex_lock(&call_state->s->tasks_lock);
 QLIST_REMOVE(call_state, list);
+qemu_co_mutex_unlock(&call_state->s->tasks_lock);
 
 return ret;
 }
-- 
2.30.2




[PATCH-for-6.0 v2] target/mips/rel6_translate: Change license to GNU LGPL v2.1 (or later)

2021-04-20 Thread Philippe Mathieu-Daudé
When adding this file and its new content in commit 3f7a927847a
("target/mips: LSA/DLSA R6 decodetree helpers") I did 2 mistakes:

1: Listed authors who haven't been involved in its development,
2: Used an incorrect GNU GPLv2 license text (using 'and' instead
   of 'or').

Instead of correcting the GNU GPLv2 license text, replace the license
by the 'GNU LGPL v2.1 or later' one, to be coherent with the other
translation files in the target/mips/ folder.

Suggested-by: Peter Maydell 
Signed-off-by: Philippe Mathieu-Daudé 
---
Commit introduced after 5.2 release, during the 6.0 cycle.
Harmless and useful for 6.0-rc4 IMHO.
---
 target/mips/rel6_translate.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/mips/rel6_translate.c b/target/mips/rel6_translate.c
index 139a7524eea..0354370927d 100644
--- a/target/mips/rel6_translate.c
+++ b/target/mips/rel6_translate.c
@@ -1,12 +1,11 @@
 /*
- *  MIPS emulation for QEMU - # Release 6 translation routines
+ *  MIPS emulation for QEMU - Release 6 translation routines
  *
- *  Copyright (c) 2004-2005 Jocelyn Mayer
- *  Copyright (c) 2006 Marius Groeger (FPU operations)
- *  Copyright (c) 2006 Thiemo Seufer (MIPS32R2 support)
  *  Copyright (c) 2020 Philippe Mathieu-Daudé
  *
- * This code is licensed under the GNU GPLv2 and later.
+ * SPDX-License-Identifier: LGPL-2.1-or-later
+ *
+ * This code is licensed under the LGPL v2.1 or later.
  */
 
 #include "qemu/osdep.h"
-- 
2.26.3




[RFC PATCH 1/3] block-copy: improve documentation for BlockCopyTask type and functions

2021-04-20 Thread Emanuele Giuseppe Esposito
As done in BlockCopyCallState, categorize the BlockCopyTask
fields into IN, State and OUT. This is just to understand
which fields have to be protected with a lock.

Also add the coroutine_fn attribute to block_copy_task_create,
because it is only used in block_copy_dirty_clusters, a coroutine
function itself.

Signed-off-by: Emanuele Giuseppe Esposito 
---
 block/block-copy.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 39ae481c8b..03df50a354 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -48,25 +48,32 @@ typedef struct BlockCopyCallState {
 QLIST_ENTRY(BlockCopyCallState) list;
 
 /* State */
-int ret;
 bool finished;
 QemuCoSleepState *sleep_state;
 bool cancelled;
 
 /* OUT parameters */
+int ret;
 bool error_is_read;
 } BlockCopyCallState;
 
 typedef struct BlockCopyTask {
+/* IN parameters. Initialized in block_copy_task_create()
+ * and never changed.
+ */
 AioTask task;
-
 BlockCopyState *s;
 BlockCopyCallState *call_state;
+
+/* State */
 int64_t offset;
 int64_t bytes;
 bool zeroes;
-QLIST_ENTRY(BlockCopyTask) list;
 CoQueue wait_queue; /* coroutines blocked on this task */
+
+/* To reference all call states from BlockCopyTask */
+QLIST_ENTRY(BlockCopyTask) list;
+
 } BlockCopyTask;
 
 static int64_t task_end(BlockCopyTask *task)
@@ -153,7 +160,7 @@ static bool coroutine_fn block_copy_wait_one(BlockCopyState 
*s, int64_t offset,
  * Search for the first dirty area in offset/bytes range and create task at
  * the beginning of it.
  */
-static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
+static BlockCopyTask *coroutine_fn block_copy_task_create(BlockCopyState *s,
  BlockCopyCallState *call_state,
  int64_t offset, int64_t bytes)
 {
-- 
2.30.2




Re: [PATCH-for-6.0 v2] target/mips/rel6_translate: Change license to GNU LGPL v2.1 (or later)

2021-04-20 Thread Peter Maydell
On Tue, 20 Apr 2021 at 11:06, Philippe Mathieu-Daudé  wrote:
>
> When adding this file and its new content in commit 3f7a927847a
> ("target/mips: LSA/DLSA R6 decodetree helpers") I did 2 mistakes:
>
> 1: Listed authors who haven't been involved in its development,
> 2: Used an incorrect GNU GPLv2 license text (using 'and' instead
>of 'or').
>
> Instead of correcting the GNU GPLv2 license text, replace the license
> by the 'GNU LGPL v2.1 or later' one, to be coherent with the other
> translation files in the target/mips/ folder.
>
> Suggested-by: Peter Maydell 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
> Commit introduced after 5.2 release, during the 6.0 cycle.
> Harmless and useful for 6.0-rc4 IMHO.
> ---
>  target/mips/rel6_translate.c | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/target/mips/rel6_translate.c b/target/mips/rel6_translate.c
> index 139a7524eea..0354370927d 100644
> --- a/target/mips/rel6_translate.c
> +++ b/target/mips/rel6_translate.c
> @@ -1,12 +1,11 @@
>  /*
> - *  MIPS emulation for QEMU - # Release 6 translation routines
> + *  MIPS emulation for QEMU - Release 6 translation routines
>   *
> - *  Copyright (c) 2004-2005 Jocelyn Mayer
> - *  Copyright (c) 2006 Marius Groeger (FPU operations)
> - *  Copyright (c) 2006 Thiemo Seufer (MIPS32R2 support)
>   *  Copyright (c) 2020 Philippe Mathieu-Daudé
>   *
> - * This code is licensed under the GNU GPLv2 and later.
> + * SPDX-License-Identifier: LGPL-2.1-or-later
> + *
> + * This code is licensed under the LGPL v2.1 or later.
>   */
>
>  #include "qemu/osdep.h"

Reviewed-by: Peter Maydell 

thanks
-- PMM



Re: [PATCH v5 04/14] softmmu/memory: Pass ram_flags to qemu_ram_alloc_from_fd()

2021-04-20 Thread Philippe Mathieu-Daudé
Hi David,

On 4/20/21 11:52 AM, Philippe Mathieu-Daudé wrote:
> On 4/13/21 11:14 AM, David Hildenbrand wrote:
>> Let's pass in ram flags just like we do with qemu_ram_alloc_from_file(),
>> to clean up and prepare for more flags.
>>
>> Simplify the documentation of passed ram flags: Looking at our
>> documentation of RAM_SHARED and RAM_PMEM is sufficient, no need to be
>> repetitive.
>>
>> Reviewed-by: Peter Xu 
>> Signed-off-by: David Hildenbrand 
>> ---
>>  backends/hostmem-memfd.c | 7 ---
>>  hw/misc/ivshmem.c| 5 ++---
>>  include/exec/memory.h| 9 +++--
>>  include/exec/ram_addr.h  | 6 +-
>>  softmmu/memory.c | 7 +++
>>  5 files changed, 13 insertions(+), 21 deletions(-)
> 
> Reviewed-by: Philippe Mathieu-Daudé 
> 

Actually it would be clearer to define the 0 value, maybe:

#define RAM_NOFLAG (0 << 0)




Re: [PATCH v5 05/14] softmmu/memory: Pass ram_flags to memory_region_init_ram_shared_nomigrate()

2021-04-20 Thread Philippe Mathieu-Daudé
Hi David,

On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's forward ram_flags instead, renaming
> memory_region_init_ram_shared_nomigrate() into
> memory_region_init_ram_flags_nomigrate(). Forward flags to
> qemu_ram_alloc() and qemu_ram_alloc_internal().
> 
> Reviewed-by: Peter Xu 
> Signed-off-by: David Hildenbrand 
> ---
>  backends/hostmem-ram.c|  6 +++--
>  hw/m68k/next-cube.c   |  4 ++--
>  include/exec/memory.h | 24 +--
>  include/exec/ram_addr.h   |  2 +-
>  .../memory-region-housekeeping.cocci  |  8 +++
>  softmmu/memory.c  | 20 

OK up to here, but the qemu_ram_alloc_internal() changes
in softmmu/physmem.c belong to a different patch (except
the line adding "new_block->flags = ram_flags").
Do you mind splitting it?

> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index cc59f05593..fdcd38ba61 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2108,12 +2108,14 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, 
> ram_addr_t max_size,
>void (*resized)(const char*,
>uint64_t length,
>void *host),
> -  void *host, bool resizeable, bool share,
> +  void *host, uint32_t ram_flags,
>MemoryRegion *mr, Error **errp)
>  {
>  RAMBlock *new_block;
>  Error *local_err = NULL;
>  
> +assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE)) == 0);
> +
>  size = HOST_PAGE_ALIGN(size);
>  max_size = HOST_PAGE_ALIGN(max_size);
>  new_block = g_malloc0(sizeof(*new_block));
> @@ -2125,15 +2127,10 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, 
> ram_addr_t max_size,
>  new_block->fd = -1;
>  new_block->page_size = qemu_real_host_page_size;
>  new_block->host = host;
> +new_block->flags = ram_flags;
>  if (host) {
>  new_block->flags |= RAM_PREALLOC;
>  }
> -if (share) {
> -new_block->flags |= RAM_SHARED;
> -}
> -if (resizeable) {
> -new_block->flags |= RAM_RESIZEABLE;
> -}
>  ram_block_add(new_block, &local_err);
>  if (local_err) {
>  g_free(new_block);
> @@ -2146,15 +2143,14 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, 
> ram_addr_t max_size,
>  RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
> MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, size, NULL, host, false,
> -   false, mr, errp);
> +return qemu_ram_alloc_internal(size, size, NULL, host, 0, mr, errp);
>  }
>  
> -RAMBlock *qemu_ram_alloc(ram_addr_t size, bool share,
> +RAMBlock *qemu_ram_alloc(ram_addr_t size, uint32_t ram_flags,
>   MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, size, NULL, NULL, false,
> -   share, mr, errp);
> +assert((ram_flags & ~RAM_SHARED) == 0);
> +return qemu_ram_alloc_internal(size, size, NULL, NULL, ram_flags, mr, 
> errp);
>  }
>  
>  RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
> @@ -2163,8 +2159,8 @@ RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, 
> ram_addr_t maxsz,
>   void *host),
>   MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, maxsz, resized, NULL, true,
> -   false, mr, errp);
> +return qemu_ram_alloc_internal(size, maxsz, resized, NULL,
> +   RAM_RESIZEABLE, mr, errp);
>  }
>  
>  static void reclaim_ramblock(RAMBlock *block)
> 




Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows

2021-04-20 Thread Michael S. Tsirkin
On Thu, Mar 25, 2021 at 04:33:15PM +, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
> 
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes.  In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
> 
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
> 
> Reported-by: Dmitry Vyukov 
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell 


Looks ok superficially

Acked-by: Michael S. Tsirkin 

Peter pls merge if appropriate.

> ---
> v1->v2 changes: put in the hw_compat machinery.
> 
> Still not sure if I want to put this in 6.0 or not.
> 
>  include/hw/pci-host/gpex.h |  4 +++
>  hw/core/machine.c  |  1 +
>  hw/pci-host/gpex.c | 56 --
>  3 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> index d48a020a952..fcf8b638200 100644
> --- a/include/hw/pci-host/gpex.h
> +++ b/include/hw/pci-host/gpex.h
> @@ -49,8 +49,12 @@ struct GPEXHost {
>  
>  MemoryRegion io_ioport;
>  MemoryRegion io_mmio;
> +MemoryRegion io_ioport_window;
> +MemoryRegion io_mmio_window;
>  qemu_irq irq[GPEX_NUM_IRQS];
>  int irq_num[GPEX_NUM_IRQS];
> +
> +bool allow_unmapped_accesses;
>  };
>  
>  struct GPEXConfig {
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 257a664ea2e..9750fad7435 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>  { "PIIX4_PM", "smm-compat", "on"},
>  { "virtio-blk-device", "report-discard-granularity", "off" },
>  { "virtio-net-pci", "vectors", "3"},
> +{ "gpex-pcihost", "allow-unmapped-accesses", "false" },
>  };
>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>  
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> index 2bdbe7b4561..a6752fac5e8 100644
> --- a/hw/pci-host/gpex.c
> +++ b/hw/pci-host/gpex.c
> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error 
> **errp)
>  int i;
>  
>  pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> +sysbus_init_mmio(sbd, &pex->mmio);
> +
> +/*
> + * Note that the MemoryRegions io_mmio and io_ioport that we pass
> + * to pci_register_root_bus() are not the same as the
> + * MemoryRegions io_mmio_window and io_ioport_window that we
> + * expose as SysBus MRs. The difference is in the behaviour of
> + * accesses to addresses where no PCI device has been mapped.
> + *
> + * io_mmio and io_ioport are the underlying PCI view of the PCI
> + * address space, and when a PCI device does a bus master access
> + * to a bad address this is reported back to it as a transaction
> + * failure.
> + *
> + * io_mmio_window and io_ioport_window implement "unmapped
> + * addresses read as -1 and ignore writes"; this is traditional
> + * x86 PC behaviour, which is not mandated by the PCI spec proper
> + * but expected by much PCI-using guest software, including Linux.
> + *
> + * In the interests of not being unnecessarily surprising, we
> + * implement it in the gpex PCI host controller, by providing the
> + * _window MRs, which are containers with io ops that implement
> + * the 'background' behaviour and which hold the real PCI MRs as
> + * subregions.
> + */
>  memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>  memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>  
> -sysbus_init_mmio(sbd, &pex->mmio);
> -sysbus_init_mmio(sbd, &s->io_mmio);
> -sysbus_init_mmio(sbd, &s->io_ioport);
> +if (s->allow_unmapped_accesses) {
> +memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> +  &unassigned_io_ops, OBJECT(s),
> +  "gpex_mmio_window", UINT64_MAX);
> +memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> +  &unassigned_io_ops, OBJECT(s),
> +  "gpex_ioport_window", 64 * 1024);
> +
> +memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
> +memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
> +sysbus_init_mmio(sbd, &s->io_mmio_window);
> + 

Re: [PATCH v5 07/14] memory: Introduce RAM_NORESERVE and wire it up in qemu_ram_mmap()

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's introduce RAM_NORESERVE, allowing mmap'ing with MAP_NORESERVE. The
> new flag has the following semantics:
> 
> "
> RAM is mmap-ed with MAP_NORESERVE. When set, reserving swap space (or huge
> pages if applicable) is skipped: will bail out if not supported. When not
> set, the OS will do the reservation, if supported for the memory type.
> "
> 
> Allow passing it into:
> - memory_region_init_ram_nomigrate()
> - memory_region_init_resizeable_ram()
> - memory_region_init_ram_from_file()
> 
> ... and teach qemu_ram_mmap() and qemu_anon_ram_alloc() about the flag.
> Bail out if the flag is not supported, which is the case right now for
> both, POSIX and win32. We will add Linux support next and allow specifying
> RAM_NORESERVE via memory backends.
> 
> The target use case is virtio-mem, which dynamically exposes memory
> inside a large, sparse memory area to the VM.
> 
> Reviewed-by: Peter Xu 
> Signed-off-by: David Hildenbrand 
> ---
>  include/exec/cpu-common.h |  1 +
>  include/exec/memory.h | 15 ---
>  include/exec/ram_addr.h   |  3 ++-
>  include/qemu/osdep.h  |  9 -
>  migration/ram.c   |  3 +--
>  softmmu/physmem.c | 15 +++
>  util/mmap-alloc.c |  7 +++
>  util/oslib-posix.c|  6 --
>  util/oslib-win32.c| 13 -
>  9 files changed, 58 insertions(+), 14 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 
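
As background (not part of the patch), a minimal sketch of what an mmap with
MAP_NORESERVE looks like on Linux; the size is a hypothetical placeholder and
error handling is simplified:

#include <stddef.h>
#include <sys/mman.h>

/* Map an anonymous region without reserving swap space up front. */
static void *alloc_sparse(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}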




Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Roger Pau Monné via
On Tue, Apr 20, 2021 at 10:45:03AM +0100, Igor Druzhinin wrote:
> On 20/04/2021 09:53, Roger Pau Monné wrote:
> > On Tue, Apr 20, 2021 at 04:35:02AM +0100, Igor Druzhinin wrote:
> > > When we're replacing the existing mapping there is possibility of a race
> > > on memory map with other threads doing mmap operations - the address being
> > > unmapped/re-mapped could be occupied by another thread in between.
> > > 
> > > Linux mmap man page recommends keeping the existing mappings in place to
> > > reserve the place and instead utilize the fact that the next mmap 
> > > operation
> > > with MAP_FIXED flag passed will implicitly destroy the existing mappings
> > > behind the chosen address. This behavior is guaranteed by POSIX / BSD and
> > > therefore is portable.
> > > 
> > > Note that it wouldn't make the replacement atomic for parallel accesses to
> > > the replaced region - those might still fail with SIGBUS due to
> > > xenforeignmemory_map not being atomic. So we're still not expecting those.
> > > 
> > > Tested-by: Anthony PERARD 
> > > Signed-off-by: Igor Druzhinin 
> > 
> > Should we add a 'Fixes: 759235653de ...' or similar tag here?
> 
> I was thinking about it and decided - no. There wasn't a bug here until QEMU
> introduced usage of iothreads at the state loading phase. Originally this
> process was totally single-threaded. And it's hard to pinpoint the exact
> moment when it happened which is also not directly related to the change
> here.

Right, it might be worth mentioning in the commit message then that the
code was designed to be used without any parallelism, and that the
addition of iothreads is what actually triggered the bug here.

Thanks, Roger.



Re: [PATCH v5 11/14] qmp: Include "share" property of memory backends

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's include the property, which can be helpful when debugging,
> for example, to spot misuse of MAP_PRIVATE which can result in some ugly
> corner cases (e.g., double-memory consumption on shmem).
> 
> Use the same description we also use for describing the property.
> 
> Cc: Eric Blake 
> Cc: Markus Armbruster 
> Cc: Igor Mammedov 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/core/machine-qmp-cmds.c | 1 +
>  qapi/machine.json  | 3 +++
>  2 files changed, 4 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 10/14] qmp: Clarify memory backend properties returned via query-memdev

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> We return information on the currently configured memory backends and
> don't configure them, so describe what the currently set properties
> express.
> 
> Suggested-by: Markus Armbruster 
> Cc: Eric Blake 
> Cc: Markus Armbruster 
> Cc: Igor Mammedov 
> Signed-off-by: David Hildenbrand 
> ---
>  qapi/machine.json | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 05/14] softmmu/memory: Pass ram_flags to memory_region_init_ram_shared_nomigrate()

2021-04-20 Thread David Hildenbrand

On 20.04.21 12:20, Philippe Mathieu-Daudé wrote:

Hi David,

On 4/13/21 11:14 AM, David Hildenbrand wrote:

Let's forward ram_flags instead, renaming
memory_region_init_ram_shared_nomigrate() into
memory_region_init_ram_flags_nomigrate(). Forward flags to
qemu_ram_alloc() and qemu_ram_alloc_internal().

Reviewed-by: Peter Xu 
Signed-off-by: David Hildenbrand 
---
  backends/hostmem-ram.c|  6 +++--
  hw/m68k/next-cube.c   |  4 ++--
  include/exec/memory.h | 24 +--
  include/exec/ram_addr.h   |  2 +-
  .../memory-region-housekeeping.cocci  |  8 +++
  softmmu/memory.c  | 20 


OK up to here, but the qemu_ram_alloc_internal() changes
in softmmu/physmem.c belong to a different patch (except
the line adding "new_block->flags = ram_flags").
Do you mind splitting it?



Can you elaborate? Temporarily passing both "ram_flags" and "bool 
resizeable, bool share" to qemu_ram_alloc_internal()?


I don't see a big benefit in doing that besides even more effective 
changes in two individual patches. But maybe if you elaborate, I can see
what you would like to see :)


Thanks!


--
Thanks,

David / dhildenb




Re: [PATCH v5 00/31] target/arm: enforce alignment

2021-04-20 Thread Peter Maydell
On Mon, 19 Apr 2021 at 21:24, Richard Henderson
 wrote:
>
> Based-on: 20210416183106.1516563-1-richard.hender...@linaro.org
> ("[PATCH v5 for-6.1 0/9] target/arm mte fixes")
>
> Changes for v5:
>   * Address review issues.
>   * Use cpu_abort in assert_hflags_rebuild_correctly
>
> The only patch lacking review is the new one:
> 07-target-arm-Use-cpu_abort-in-assert_hflags_rebuild.patch

I've applied all except for patch 7 to target-arm.next for 6.1.

thanks
-- PMM



Re: [RFC v14 11/80] target/arm: tcg: fix comment style before move to cpu-mmu

2021-04-20 Thread Alex Bennée


Claudio Fontana  writes:

> before exporting some functionality from helper.c into a new module,
> fix the comment style of those functions.
>
> Signed-off-by: Claudio Fontana 
> Reviewed-by: Richard Henderson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée



Re: [RFC v14 09/80] target/arm: only build psci for TCG

2021-04-20 Thread Alex Bennée


Claudio Fontana  writes:

> We do not move psci.c to tcg/ because we expect other
> hypervisors to use it (waiting for HVF enablement).
>
> Signed-off-by: Claudio Fontana 
> Cc: Alexander Graf 
> Reviewed-by: Richard Henderson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée



Re: [RFC v14 13/80] target/arm: fix style in preparation of new cpregs module

2021-04-20 Thread Alex Bennée


Claudio Fontana  writes:

> in preparation of the creation of a new cpregs module,
> fix the style for the to-be-exported code.
>
> Signed-off-by: Claudio Fontana 
> Reviewed-by: Richard Henderson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée



Re: [PATCH v5 04/14] softmmu/memory: Pass ram_flags to qemu_ram_alloc_from_fd()

2021-04-20 Thread David Hildenbrand

On 20.04.21 12:18, Philippe Mathieu-Daudé wrote:

Hi David,

On 4/20/21 11:52 AM, Philippe Mathieu-Daudé wrote:

On 4/13/21 11:14 AM, David Hildenbrand wrote:

Let's pass in ram flags just like we do with qemu_ram_alloc_from_file(),
to clean up and prepare for more flags.

Simplify the documentation of passed ram flags: Looking at our
documentation of RAM_SHARED and RAM_PMEM is sufficient, no need to be
repetitive.

Reviewed-by: Peter Xu 
Signed-off-by: David Hildenbrand 
---
  backends/hostmem-memfd.c | 7 ---
  hw/misc/ivshmem.c| 5 ++---
  include/exec/memory.h| 9 +++--
  include/exec/ram_addr.h  | 6 +-
  softmmu/memory.c | 7 +++
  5 files changed, 13 insertions(+), 21 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 



Actually it would be clearer to define the 0 value, maybe:

#define RAM_NOFLAG (0 << 0)



This will also turn some code into

ram_flags = backend->share ? RAM_SHARED : RAM_NOFLAG;
ram_flags |= backend->reserve ? RAM_NOFLAG : RAM_NORESERVE;


Looking at other flag users, I barely see any such usage. 
XKB_CONTEXT_NO_FLAGS, ALLOC_NO_FLAGS, and MEM_AFFINITY_NOFLAGS seem to 
be the only ones. That's why I tend to not do it unless there are strong 
opinions.


Thanks!

--
Thanks,

David / dhildenb




[RFC v2 04/13] target/s390x: remove tcg-stub.c

2021-04-20 Thread Claudio Fontana
Now that we protect all calls to the TCG-specific functions
with if (tcg_enabled()), we do not need the TCG stub anymore.

Signed-off-by: Claudio Fontana 
---
 target/s390x/tcg-stub.c  | 30 --
 target/s390x/meson.build |  2 +-
 2 files changed, 1 insertion(+), 31 deletions(-)
 delete mode 100644 target/s390x/tcg-stub.c

diff --git a/target/s390x/tcg-stub.c b/target/s390x/tcg-stub.c
deleted file mode 100644
index d22c898802..00
--- a/target/s390x/tcg-stub.c
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * QEMU TCG support -- s390x specific function stubs.
- *
- * Copyright (C) 2018 Red Hat Inc
- *
- * Authors:
- *   David Hildenbrand 
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#include "qemu/osdep.h"
-#include "qemu-common.h"
-#include "cpu.h"
-#include "tcg_s390x.h"
-
-void tcg_s390_tod_updated(CPUState *cs, run_on_cpu_data opaque)
-{
-}
-void QEMU_NORETURN tcg_s390_program_interrupt(CPUS390XState *env,
-  uint32_t code, uintptr_t ra)
-{
-g_assert_not_reached();
-}
-void QEMU_NORETURN tcg_s390_data_exception(CPUS390XState *env, uint32_t dxc,
-   uintptr_t ra)
-{
-g_assert_not_reached();
-}
diff --git a/target/s390x/meson.build b/target/s390x/meson.build
index 1219f64112..a5e1ded93f 100644
--- a/target/s390x/meson.build
+++ b/target/s390x/meson.build
@@ -21,7 +21,7 @@ s390x_ss.add(when: 'CONFIG_TCG', if_true: files(
   'vec_helper.c',
   'vec_int_helper.c',
   'vec_string_helper.c',
-), if_false: files('tcg-stub.c'))
+))
 
 s390x_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: 
files('kvm-stub.c'))
 
-- 
2.26.2




[RFC v2 01/13] hw/s390x: only build tod-qemu from the CONFIG_TCG build

2021-04-20 Thread Claudio Fontana
This will allow later patches to remove unneeded stubs
in target/s390x.

Signed-off-by: Claudio Fontana 
---
 hw/s390x/meson.build | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/s390x/meson.build b/hw/s390x/meson.build
index 327e9c93af..a4d355b4db 100644
--- a/hw/s390x/meson.build
+++ b/hw/s390x/meson.build
@@ -16,7 +16,6 @@ s390x_ss.add(files(
   'sclp.c',
   'sclpcpu.c',
   'sclpquiesce.c',
-  'tod-qemu.c',
   'tod.c',
 ))
 s390x_ss.add(when: 'CONFIG_KVM', if_true: files(
@@ -25,6 +24,10 @@ s390x_ss.add(when: 'CONFIG_KVM', if_true: files(
   's390-stattrib-kvm.c',
   'pv.c',
 ))
+s390x_ss.add(when: 'CONFIG_TCG', if_true: files(
+  'tod-qemu.c',
+))
+
 s390x_ss.add(when: 'CONFIG_S390_CCW_VIRTIO', if_true: 
files('s390-virtio-ccw.c'))
 s390x_ss.add(when: 'CONFIG_TERMINAL3270', if_true: files('3270-ccw.c'))
 s390x_ss.add(when: 'CONFIG_VFIO', if_true: files('s390-pci-vfio.c'))
-- 
2.26.2




[RFC v2 00/13] s390x cleanup

2021-04-20 Thread Claudio Fontana
Hi,

this is the next version of a cleanup series for s390x.

v1 -> v2: split more, stubs removal for KVM, kvm/ move, sysemu cpu models

* "hw/s390x: rename tod-qemu.c to tod-tcg.c"
  - new patch (Cornelia)

* "hw/s390x: tod: make explicit checks for accelerators when initializing"
  - now error out and abort() for an unknown accelerator. (Cornelia)

* "target/s390x: remove tcg-stub.c" : new patch split from
  "target/s390x: start moving TCG-only code to tcg/" (Cornelia)

* "target/s390x: use kvm_enabled() to wrap call to kvm_s390_get_hpage_1m"
  - new patch, allows the removal of kvm stubs

* "target/s390x: remove kvm-stub.c"
  - new patch, we do not need stubs, as all calls are wrapped by
kvm_enabled(), and all prototypes are visible.

* "target/s390x: move kvm files into kvm/"
  - new patch

* "target/s390x: split sysemu part of cpu models"
  - new patch

* "MAINTAINERS: update s390x directories"
  - new patch


Pre-requisite series (not really needed for now, only in further work down the 
line):

https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg07461.html

Motivation and higher level steps:

https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04628.html

Comments welcome, thanks,

Claudio

Claudio Fontana (13):
  hw/s390x: only build tod-qemu from the CONFIG_TCG build
  hw/s390x: rename tod-qemu.c to tod-tcg.c
  hw/s390x: tod: make explicit checks for accelerators when initializing
  target/s390x: remove tcg-stub.c
  target/s390x: start moving TCG-only code to tcg/
  target/s390x: move sysemu-only code out to cpu-sysemu.c
  target/s390x: split cpu-dump from helper.c
  target/s390x: make helper.c sysemu-only
  target/s390x: use kvm_enabled() to wrap call to kvm_s390_get_hpage_1m
  target/s390x: remove kvm-stub.c
  target/s390x: move kvm files into kvm/
  target/s390x: split sysemu part of cpu models
  MAINTAINERS: update s390x directories

 include/hw/s390x/tod.h|   2 +-
 target/s390x/{ => kvm}/kvm_s390x.h|   0
 target/s390x/{internal.h => s390x-internal.h} |   8 +
 target/s390x/{ => tcg}/s390-tod.h |   0
 target/s390x/{ => tcg}/tcg_s390x.h|   0
 target/s390x/{ => tcg}/vec.h  |   0
 hw/intc/s390_flic_kvm.c   |   2 +-
 hw/s390x/s390-stattrib-kvm.c  |   2 +-
 hw/s390x/tod-kvm.c|   2 +-
 hw/s390x/{tod-qemu.c => tod-tcg.c}|   2 +-
 hw/s390x/tod.c|   9 +-
 hw/vfio/ap.c  |   2 +-
 target/s390x/arch_dump.c  |   2 +-
 target/s390x/cpu-dump.c   | 131 ++
 target/s390x/cpu-sysemu.c | 304 +
 target/s390x/cpu.c| 287 +---
 target/s390x/cpu_models.c | 421 +
 target/s390x/cpu_models_sysemu.c  | 426 ++
 target/s390x/cpu_models_user.c|  20 +
 target/s390x/diag.c   |   7 +-
 target/s390x/gdbstub.c|   2 +-
 target/s390x/helper.c | 113 +
 target/s390x/interrupt.c  |   6 +-
 target/s390x/ioinst.c |   2 +-
 target/s390x/kvm-stub.c   | 126 --
 target/s390x/{ => kvm}/kvm.c  |   4 +-
 target/s390x/machine.c|   6 +-
 target/s390x/mmu_helper.c |   4 +-
 target/s390x/sigp.c   |   2 +-
 target/s390x/tcg-stub.c   |  30 --
 target/s390x/{ => tcg}/cc_helper.c|   2 +-
 target/s390x/{ => tcg}/crypto_helper.c|   2 +-
 target/s390x/{ => tcg}/excp_helper.c  |   2 +-
 target/s390x/{ => tcg}/fpu_helper.c   |   2 +-
 target/s390x/{ => tcg}/int_helper.c   |   2 +-
 target/s390x/{ => tcg}/mem_helper.c   |   2 +-
 target/s390x/{ => tcg}/misc_helper.c  |   2 +-
 target/s390x/{ => tcg}/translate.c|   2 +-
 target/s390x/{ => tcg}/vec_fpu_helper.c   |   2 +-
 target/s390x/{ => tcg}/vec_helper.c   |   2 +-
 target/s390x/{ => tcg}/vec_int_helper.c   |   0
 target/s390x/{ => tcg}/vec_string_helper.c|   2 +-
 target/s390x/{ => tcg}/translate_vx.c.inc |   0
 MAINTAINERS   |   8 +-
 hw/s390x/meson.build  |   5 +-
 target/s390x/kvm/meson.build  |  17 +
 target/s390x/meson.build  |  41 +-
 target/s390x/{ => tcg}/insn-data.def  |   0
 target/s390x/{ => tcg}/insn-format.def|   0
 target/s390x/tcg/meson.build  |  14 +
 target/s390x/trace-events |   2 +-
 51 files changed, 999 insertions(+), 1032 deletions(-)
 rename target/s390x/{ => kvm}/kvm_s390x.h (100%)
 rename target/s390x/{internal.h => s390x-internal.h} (97%)
 rename target/s390x/{ => tcg}/s390-tod.h (100%)

[RFC v2 03/13] hw/s390x: tod: make explicit checks for accelerators when initializing

2021-04-20 Thread Claudio Fontana
replace general "else" with specific checks for each possible accelerator.

Handle qtest as a NOP,
and error out for an unknown accelerator used in combination with tod.

Signed-off-by: Claudio Fontana 
---
 hw/s390x/tod.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hw/s390x/tod.c b/hw/s390x/tod.c
index 3c2979175e..fd5a36bf24 100644
--- a/hw/s390x/tod.c
+++ b/hw/s390x/tod.c
@@ -14,6 +14,8 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "sysemu/kvm.h"
+#include "sysemu/tcg.h"
+#include "sysemu/qtest.h"
 #include "migration/qemu-file-types.h"
 #include "migration/register.h"
 
@@ -23,8 +25,13 @@ void s390_init_tod(void)
 
 if (kvm_enabled()) {
 obj = object_new(TYPE_KVM_S390_TOD);
-} else {
+} else if (tcg_enabled()) {
 obj = object_new(TYPE_QEMU_S390_TOD);
+} else if (qtest_enabled()) {
+return;
+} else {
+error_report("current accelerator not handled in s390_init_tod!");
+abort();
 }
 object_property_add_child(qdev_get_machine(), TYPE_S390_TOD, obj);
 object_unref(obj);
-- 
2.26.2




[RFC v2 07/13] target/s390x: split cpu-dump from helper.c

2021-04-20 Thread Claudio Fontana
Signed-off-by: Claudio Fontana 
---
 target/s390x/cpu-dump.c  | 131 +++
 target/s390x/helper.c| 107 
 target/s390x/meson.build |   1 +
 3 files changed, 132 insertions(+), 107 deletions(-)
 create mode 100644 target/s390x/cpu-dump.c

diff --git a/target/s390x/cpu-dump.c b/target/s390x/cpu-dump.c
new file mode 100644
index 00..4170dec01a
--- /dev/null
+++ b/target/s390x/cpu-dump.c
@@ -0,0 +1,131 @@
+/*
+ * S/390 CPU dump to FILE
+ *
+ *  Copyright (c) 2009 Ulrich Hecht
+ *  Copyright (c) 2011 Alexander Graf
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see .
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "s390x-internal.h"
+#include "qemu/qemu-print.h"
+
+void s390_cpu_dump_state(CPUState *cs, FILE *f, int flags)
+{
+S390CPU *cpu = S390_CPU(cs);
+CPUS390XState *env = &cpu->env;
+int i;
+
+if (env->cc_op > 3) {
+qemu_fprintf(f, "PSW=mask %016" PRIx64 " addr %016" PRIx64 " cc 
%15s\n",
+ env->psw.mask, env->psw.addr, cc_name(env->cc_op));
+} else {
+qemu_fprintf(f, "PSW=mask %016" PRIx64 " addr %016" PRIx64 " cc 
%02x\n",
+ env->psw.mask, env->psw.addr, env->cc_op);
+}
+
+for (i = 0; i < 16; i++) {
+qemu_fprintf(f, "R%02d=%016" PRIx64, i, env->regs[i]);
+if ((i % 4) == 3) {
+qemu_fprintf(f, "\n");
+} else {
+qemu_fprintf(f, " ");
+}
+}
+
+if (flags & CPU_DUMP_FPU) {
+if (s390_has_feat(S390_FEAT_VECTOR)) {
+for (i = 0; i < 32; i++) {
+qemu_fprintf(f, "V%02d=%016" PRIx64 "%016" PRIx64 "%c",
+ i, env->vregs[i][0], env->vregs[i][1],
+ i % 2 ? '\n' : ' ');
+}
+} else {
+for (i = 0; i < 16; i++) {
+qemu_fprintf(f, "F%02d=%016" PRIx64 "%c",
+ i, *get_freg(env, i),
+ (i % 4) == 3 ? '\n' : ' ');
+}
+}
+}
+
+#ifndef CONFIG_USER_ONLY
+for (i = 0; i < 16; i++) {
+qemu_fprintf(f, "C%02d=%016" PRIx64, i, env->cregs[i]);
+if ((i % 4) == 3) {
+qemu_fprintf(f, "\n");
+} else {
+qemu_fprintf(f, " ");
+}
+}
+#endif
+
+#ifdef DEBUG_INLINE_BRANCHES
+for (i = 0; i < CC_OP_MAX; i++) {
+qemu_fprintf(f, "  %15s = %10ld\t%10ld\n", cc_name(i),
+ inline_branch_miss[i], inline_branch_hit[i]);
+}
+#endif
+
+qemu_fprintf(f, "\n");
+}
+
+const char *cc_name(enum cc_op cc_op)
+{
+static const char * const cc_names[] = {
+[CC_OP_CONST0]= "CC_OP_CONST0",
+[CC_OP_CONST1]= "CC_OP_CONST1",
+[CC_OP_CONST2]= "CC_OP_CONST2",
+[CC_OP_CONST3]= "CC_OP_CONST3",
+[CC_OP_DYNAMIC]   = "CC_OP_DYNAMIC",
+[CC_OP_STATIC]= "CC_OP_STATIC",
+[CC_OP_NZ]= "CC_OP_NZ",
+[CC_OP_ADDU]  = "CC_OP_ADDU",
+[CC_OP_SUBU]  = "CC_OP_SUBU",
+[CC_OP_LTGT_32]   = "CC_OP_LTGT_32",
+[CC_OP_LTGT_64]   = "CC_OP_LTGT_64",
+[CC_OP_LTUGTU_32] = "CC_OP_LTUGTU_32",
+[CC_OP_LTUGTU_64] = "CC_OP_LTUGTU_64",
+[CC_OP_LTGT0_32]  = "CC_OP_LTGT0_32",
+[CC_OP_LTGT0_64]  = "CC_OP_LTGT0_64",
+[CC_OP_ADD_64]= "CC_OP_ADD_64",
+[CC_OP_SUB_64]= "CC_OP_SUB_64",
+[CC_OP_ABS_64]= "CC_OP_ABS_64",
+[CC_OP_NABS_64]   = "CC_OP_NABS_64",
+[CC_OP_ADD_32]= "CC_OP_ADD_32",
+[CC_OP_SUB_32]= "CC_OP_SUB_32",
+[CC_OP_ABS_32]= "CC_OP_ABS_32",
+[CC_OP_NABS_32]   = "CC_OP_NABS_32",
+[CC_OP_COMP_32]   = "CC_OP_COMP_32",
+[CC_OP_COMP_64]   = "CC_OP_COMP_64",
+[CC_OP_TM_32] = "CC_OP_TM_32",
+[CC_OP_TM_64] = "CC_OP_TM_64",
+[CC_OP_NZ_F32]= "CC_OP_NZ_F32",
+[CC_OP_NZ_F64]= "CC_OP_NZ_F64",
+[CC_OP_NZ_F128]   = "CC_OP_NZ_F128",
+[CC_OP_ICM]   = "CC_OP_ICM",
+[CC_OP_SLA_32]= "CC_OP_SLA_32",
+[CC_OP_SLA_64]= "CC_OP_SLA_64",
+[CC_OP_FLOGR] = "CC_OP_FLOGR",
+[CC_OP_LCBB]  = "CC_OP_LCBB",
+[CC_OP_VC]= "CC_OP_VC",
+[CC_OP_MULS_32]   = "CC_OP_MULS_32",
+   

[RFC v2 09/13] target/s390x: use kvm_enabled() to wrap call to kvm_s390_get_hpage_1m

2021-04-20 Thread Claudio Fontana
This will allow the removal of the KVM stubs.

Signed-off-by: Claudio Fontana 
---
 target/s390x/diag.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index 86b7032b5b..311e22b4ea 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -21,6 +21,7 @@
 #include "hw/s390x/ipl.h"
 #include "hw/s390x/s390-virtio-ccw.h"
 #include "hw/s390x/pv.h"
+#include "sysemu/kvm.h"
 #include "kvm_s390x.h"
 
 int handle_diag_288(CPUS390XState *env, uint64_t r1, uint64_t r3)
@@ -169,7 +170,7 @@ out:
 return;
 }
 
-if (kvm_s390_get_hpage_1m()) {
+if (kvm_enabled() && kvm_s390_get_hpage_1m()) {
 error_report("Protected VMs can currently not be backed with "
  "huge pages");
 env->regs[r1 + 1] = DIAG_308_RC_INVAL_FOR_PV;
-- 
2.26.2




[RFC v2 02/13] hw/s390x: rename tod-qemu.c to tod-tcg.c

2021-04-20 Thread Claudio Fontana
We stop short of renaming the actual QOM object though,
so the type remains TYPE_QEMU_S390_TOD, i.e. "s390-tod-qemu".

Signed-off-by: Claudio Fontana 
---
 hw/s390x/{tod-qemu.c => tod-tcg.c} | 0
 hw/s390x/meson.build   | 2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename hw/s390x/{tod-qemu.c => tod-tcg.c} (100%)

diff --git a/hw/s390x/tod-qemu.c b/hw/s390x/tod-tcg.c
similarity index 100%
rename from hw/s390x/tod-qemu.c
rename to hw/s390x/tod-tcg.c
diff --git a/hw/s390x/meson.build b/hw/s390x/meson.build
index a4d355b4db..72f48635cb 100644
--- a/hw/s390x/meson.build
+++ b/hw/s390x/meson.build
@@ -25,7 +25,7 @@ s390x_ss.add(when: 'CONFIG_KVM', if_true: files(
   'pv.c',
 ))
 s390x_ss.add(when: 'CONFIG_TCG', if_true: files(
-  'tod-qemu.c',
+  'tod-tcg.c',
 ))
 
 s390x_ss.add(when: 'CONFIG_S390_CCW_VIRTIO', if_true: 
files('s390-virtio-ccw.c'))
-- 
2.26.2




[RFC v2 05/13] target/s390x: start moving TCG-only code to tcg/

2021-04-20 Thread Claudio Fontana
Move everything related to translation, as well as the HELPER code, into tcg/.

mmu_helper.c stays put for now, as it contains both TCG and KVM code.

The internal.h file is renamed to s390x-internal.h, because of the
risk of collision with other files with the same name.

Signed-off-by: Claudio Fontana 
---
 include/hw/s390x/tod.h|  2 +-
 target/s390x/{internal.h => s390x-internal.h} |  0
 target/s390x/{ => tcg}/s390-tod.h |  0
 target/s390x/{ => tcg}/tcg_s390x.h|  0
 target/s390x/{ => tcg}/vec.h  |  0
 hw/s390x/tod-tcg.c|  2 +-
 target/s390x/arch_dump.c  |  2 +-
 target/s390x/cpu.c|  2 +-
 target/s390x/cpu_models.c |  2 +-
 target/s390x/diag.c   |  2 +-
 target/s390x/gdbstub.c|  2 +-
 target/s390x/helper.c |  2 +-
 target/s390x/interrupt.c  |  4 ++--
 target/s390x/ioinst.c |  2 +-
 target/s390x/kvm.c|  2 +-
 target/s390x/machine.c|  4 ++--
 target/s390x/mmu_helper.c |  2 +-
 target/s390x/sigp.c   |  2 +-
 target/s390x/{ => tcg}/cc_helper.c|  2 +-
 target/s390x/{ => tcg}/crypto_helper.c|  2 +-
 target/s390x/{ => tcg}/excp_helper.c  |  2 +-
 target/s390x/{ => tcg}/fpu_helper.c   |  2 +-
 target/s390x/{ => tcg}/int_helper.c   |  2 +-
 target/s390x/{ => tcg}/mem_helper.c   |  2 +-
 target/s390x/{ => tcg}/misc_helper.c  |  2 +-
 target/s390x/{ => tcg}/translate.c|  2 +-
 target/s390x/{ => tcg}/vec_fpu_helper.c   |  2 +-
 target/s390x/{ => tcg}/vec_helper.c   |  2 +-
 target/s390x/{ => tcg}/vec_int_helper.c   |  0
 target/s390x/{ => tcg}/vec_string_helper.c|  2 +-
 target/s390x/{ => tcg}/translate_vx.c.inc |  0
 target/s390x/meson.build  | 17 ++---
 target/s390x/{ => tcg}/insn-data.def  |  0
 target/s390x/{ => tcg}/insn-format.def|  0
 target/s390x/tcg/meson.build  | 14 ++
 35 files changed, 43 insertions(+), 42 deletions(-)
 rename target/s390x/{internal.h => s390x-internal.h} (100%)
 rename target/s390x/{ => tcg}/s390-tod.h (100%)
 rename target/s390x/{ => tcg}/tcg_s390x.h (100%)
 rename target/s390x/{ => tcg}/vec.h (100%)
 rename target/s390x/{ => tcg}/cc_helper.c (99%)
 rename target/s390x/{ => tcg}/crypto_helper.c (98%)
 rename target/s390x/{ => tcg}/excp_helper.c (99%)
 rename target/s390x/{ => tcg}/fpu_helper.c (99%)
 rename target/s390x/{ => tcg}/int_helper.c (99%)
 rename target/s390x/{ => tcg}/mem_helper.c (99%)
 rename target/s390x/{ => tcg}/misc_helper.c (99%)
 rename target/s390x/{ => tcg}/translate.c (99%)
 rename target/s390x/{ => tcg}/vec_fpu_helper.c (99%)
 rename target/s390x/{ => tcg}/vec_helper.c (99%)
 rename target/s390x/{ => tcg}/vec_int_helper.c (100%)
 rename target/s390x/{ => tcg}/vec_string_helper.c (99%)
 rename target/s390x/{ => tcg}/translate_vx.c.inc (100%)
 rename target/s390x/{ => tcg}/insn-data.def (100%)
 rename target/s390x/{ => tcg}/insn-format.def (100%)
 create mode 100644 target/s390x/tcg/meson.build

diff --git a/include/hw/s390x/tod.h b/include/hw/s390x/tod.h
index ff3195a4bf..0935e85089 100644
--- a/include/hw/s390x/tod.h
+++ b/include/hw/s390x/tod.h
@@ -12,7 +12,7 @@
 #define HW_S390_TOD_H
 
 #include "hw/qdev-core.h"
-#include "target/s390x/s390-tod.h"
+#include "tcg/s390-tod.h"
 #include "qom/object.h"
 
 typedef struct S390TOD {
diff --git a/target/s390x/internal.h b/target/s390x/s390x-internal.h
similarity index 100%
rename from target/s390x/internal.h
rename to target/s390x/s390x-internal.h
diff --git a/target/s390x/s390-tod.h b/target/s390x/tcg/s390-tod.h
similarity index 100%
rename from target/s390x/s390-tod.h
rename to target/s390x/tcg/s390-tod.h
diff --git a/target/s390x/tcg_s390x.h b/target/s390x/tcg/tcg_s390x.h
similarity index 100%
rename from target/s390x/tcg_s390x.h
rename to target/s390x/tcg/tcg_s390x.h
diff --git a/target/s390x/vec.h b/target/s390x/tcg/vec.h
similarity index 100%
rename from target/s390x/vec.h
rename to target/s390x/tcg/vec.h
diff --git a/hw/s390x/tod-tcg.c b/hw/s390x/tod-tcg.c
index e91b9590f5..4b3e65050a 100644
--- a/hw/s390x/tod-tcg.c
+++ b/hw/s390x/tod-tcg.c
@@ -16,7 +16,7 @@
 #include "qemu/cutils.h"
 #include "qemu/module.h"
 #include "cpu.h"
-#include "tcg_s390x.h"
+#include "tcg/tcg_s390x.h"
 
 static void qemu_s390_tod_get(const S390TODState *td, S390TOD *tod,
   Error **errp)
diff --git a/target/s390x/arch_dump.c b/target/s390x/arch_dump.c
index cc1330876b..08daf93ae1 100644
--- a/target/s390x/arch_dump.c
+++ b/target/s390x/arch_dump.c
@@ -13,7 +13,7 @@
 
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "internal.h"
+#include "s390x-internal.h"
 #include "elf.h"
 #incl

[RFC v2 11/13] target/s390x: move kvm files into kvm/

2021-04-20 Thread Claudio Fontana
Signed-off-by: Claudio Fontana 
---
 target/s390x/{ => kvm}/kvm_s390x.h |  0
 hw/intc/s390_flic_kvm.c|  2 +-
 hw/s390x/s390-stattrib-kvm.c   |  2 +-
 hw/s390x/tod-kvm.c |  2 +-
 hw/vfio/ap.c   |  2 +-
 target/s390x/cpu-sysemu.c  |  2 +-
 target/s390x/cpu.c |  2 +-
 target/s390x/cpu_models.c  |  2 +-
 target/s390x/diag.c|  2 +-
 target/s390x/interrupt.c   |  2 +-
 target/s390x/{ => kvm}/kvm.c   |  2 +-
 target/s390x/machine.c |  2 +-
 target/s390x/mmu_helper.c  |  2 +-
 target/s390x/kvm/meson.build   | 17 +
 target/s390x/meson.build   | 16 +---
 15 files changed, 30 insertions(+), 27 deletions(-)
 rename target/s390x/{ => kvm}/kvm_s390x.h (100%)
 rename target/s390x/{ => kvm}/kvm.c (99%)
 create mode 100644 target/s390x/kvm/meson.build

diff --git a/target/s390x/kvm_s390x.h b/target/s390x/kvm/kvm_s390x.h
similarity index 100%
rename from target/s390x/kvm_s390x.h
rename to target/s390x/kvm/kvm_s390x.h
diff --git a/hw/intc/s390_flic_kvm.c b/hw/intc/s390_flic_kvm.c
index b3fb9f8395..91987b0951 100644
--- a/hw/intc/s390_flic_kvm.c
+++ b/hw/intc/s390_flic_kvm.c
@@ -12,7 +12,7 @@
 
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include 
 #include "qemu/error-report.h"
 #include "qemu/module.h"
diff --git a/hw/s390x/s390-stattrib-kvm.c b/hw/s390x/s390-stattrib-kvm.c
index f89d8d9d16..6664345fb1 100644
--- a/hw/s390x/s390-stattrib-kvm.c
+++ b/hw/s390x/s390-stattrib-kvm.c
@@ -17,7 +17,7 @@
 #include "sysemu/kvm.h"
 #include "exec/ram_addr.h"
 #include "cpu.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 
 Object *kvm_s390_stattrib_create(void)
 {
diff --git a/hw/s390x/tod-kvm.c b/hw/s390x/tod-kvm.c
index 0b94477486..ec855811ae 100644
--- a/hw/s390x/tod-kvm.c
+++ b/hw/s390x/tod-kvm.c
@@ -13,7 +13,7 @@
 #include "qemu/module.h"
 #include "sysemu/runstate.h"
 #include "hw/s390x/tod.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 
 static void kvm_s390_get_tod_raw(S390TOD *tod, Error **errp)
 {
diff --git a/hw/vfio/ap.c b/hw/vfio/ap.c
index 9571c2f91f..56a33b1277 100644
--- a/hw/vfio/ap.c
+++ b/hw/vfio/ap.c
@@ -23,7 +23,7 @@
 #include "qemu/option.h"
 #include "qemu/config-file.h"
 #include "cpu.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "migration/vmstate.h"
 #include "hw/qdev-properties.h"
 #include "hw/s390x/ap-bridge.h"
diff --git a/target/s390x/cpu-sysemu.c b/target/s390x/cpu-sysemu.c
index 6081b7ef32..f3c1b4845a 100644
--- a/target/s390x/cpu-sysemu.c
+++ b/target/s390x/cpu-sysemu.c
@@ -24,7 +24,7 @@
 #include "qapi/error.h"
 #include "cpu.h"
 #include "s390x-internal.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "sysemu/kvm.h"
 #include "sysemu/reset.h"
 #include "qemu/timer.h"
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 59efe48bcd..6e82ba73cc 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -24,7 +24,7 @@
 #include "qapi/error.h"
 #include "cpu.h"
 #include "s390x-internal.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "sysemu/kvm.h"
 #include "sysemu/reset.h"
 #include "qemu/module.h"
diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 4ff8cba7e5..0ed1c23774 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -13,7 +13,7 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
 #include "s390x-internal.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "sysemu/kvm.h"
 #include "sysemu/tcg.h"
 #include "qapi/error.h"
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index 311e22b4ea..5b75853a7e 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -22,7 +22,7 @@
 #include "hw/s390x/s390-virtio-ccw.h"
 #include "hw/s390x/pv.h"
 #include "sysemu/kvm.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 
 int handle_diag_288(CPUS390XState *env, uint64_t r1, uint64_t r3)
 {
diff --git a/target/s390x/interrupt.c b/target/s390x/interrupt.c
index d0e58d6e8d..4e64ee705f 100644
--- a/target/s390x/interrupt.c
+++ b/target/s390x/interrupt.c
@@ -10,7 +10,7 @@
 #include "qemu/osdep.h"
 #include "qemu/log.h"
 #include "cpu.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "s390x-internal.h"
 #include "exec/exec-all.h"
 #include "sysemu/kvm.h"
diff --git a/target/s390x/kvm.c b/target/s390x/kvm/kvm.c
similarity index 99%
rename from target/s390x/kvm.c
rename to target/s390x/kvm/kvm.c
index 2a22cc69f6..4e47563faf 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm/kvm.c
@@ -27,7 +27,7 @@
 #include "qemu-common.h"
 #include "cpu.h"
 #include "s390x-internal.h"
-#include "kvm_s390x.h"
+#include "kvm/kvm_s390x.h"
 #include "sysemu/kvm_int.h"
 #include "qemu/cutils.h"
 #include "qapi/error.h"
diff --git a/target/s390x/machine.c b/target/s390x/machine.c
index 81a8a7ff99..37a076858c 100644
--- a/target/s390x/machine.c
+++ b/target/s390x/mach

[RFC v2 06/13] target/s390x: move sysemu-only code out to cpu-sysemu.c

2021-04-20 Thread Claudio Fontana
Signed-off-by: Claudio Fontana 
---
 target/s390x/s390x-internal.h |   6 +
 target/s390x/cpu-sysemu.c | 304 ++
 target/s390x/cpu.c| 283 ++-
 target/s390x/meson.build  |   1 +
 target/s390x/trace-events |   2 +-
 5 files changed, 324 insertions(+), 272 deletions(-)
 create mode 100644 target/s390x/cpu-sysemu.c

diff --git a/target/s390x/s390x-internal.h b/target/s390x/s390x-internal.h
index 11515bb617..171ecd59fb 100644
--- a/target/s390x/s390x-internal.h
+++ b/target/s390x/s390x-internal.h
@@ -244,6 +244,12 @@ void load_psw(CPUS390XState *env, uint64_t mask, uint64_t 
addr);
 #ifndef CONFIG_USER_ONLY
 unsigned int s390_cpu_halt(S390CPU *cpu);
 void s390_cpu_unhalt(S390CPU *cpu);
+void s390_cpu_init_sysemu(Object *obj);
+bool s390_cpu_realize_sysemu(DeviceState *dev, Error **errp);
+void s390_cpu_finalize(Object *obj);
+void s390_cpu_class_init_sysemu(CPUClass *cc);
+void s390_cpu_machine_reset_cb(void *opaque);
+
 #else
 static inline unsigned int s390_cpu_halt(S390CPU *cpu)
 {
diff --git a/target/s390x/cpu-sysemu.c b/target/s390x/cpu-sysemu.c
new file mode 100644
index 00..6081b7ef32
--- /dev/null
+++ b/target/s390x/cpu-sysemu.c
@@ -0,0 +1,304 @@
+/*
+ * QEMU S/390 CPU - System Emulation-only code
+ *
+ * Copyright (c) 2009 Ulrich Hecht
+ * Copyright (c) 2011 Alexander Graf
+ * Copyright (c) 2012 SUSE LINUX Products GmbH
+ * Copyright (c) 2012 IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see .
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "cpu.h"
+#include "s390x-internal.h"
+#include "kvm_s390x.h"
+#include "sysemu/kvm.h"
+#include "sysemu/reset.h"
+#include "qemu/timer.h"
+#include "trace.h"
+#include "qapi/qapi-visit-run-state.h"
+#include "sysemu/hw_accel.h"
+
+#include "hw/s390x/pv.h"
+#include "hw/boards.h"
+#include "sysemu/arch_init.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/tcg.h"
+
+/* S390CPUClass::load_normal() */
+static void s390_cpu_load_normal(CPUState *s)
+{
+S390CPU *cpu = S390_CPU(s);
+uint64_t spsw;
+
+if (!s390_is_pv()) {
+spsw = ldq_phys(s->as, 0);
+cpu->env.psw.mask = spsw & PSW_MASK_SHORT_CTRL;
+/*
+ * Invert short psw indication, so SIE will report a specification
+ * exception if it was not set.
+ */
+cpu->env.psw.mask ^= PSW_MASK_SHORTPSW;
+cpu->env.psw.addr = spsw & PSW_MASK_SHORT_ADDR;
+} else {
+/*
+ * Firmware requires us to set the load state before we set
+ * the cpu to operating on protected guests.
+ */
+s390_cpu_set_state(S390_CPU_STATE_LOAD, cpu);
+}
+s390_cpu_set_state(S390_CPU_STATE_OPERATING, cpu);
+}
+
+void s390_cpu_machine_reset_cb(void *opaque)
+{
+S390CPU *cpu = opaque;
+
+run_on_cpu(CPU(cpu), s390_do_cpu_full_reset, RUN_ON_CPU_NULL);
+}
+
+static GuestPanicInformation *s390_cpu_get_crash_info(CPUState *cs)
+{
+GuestPanicInformation *panic_info;
+S390CPU *cpu = S390_CPU(cs);
+
+cpu_synchronize_state(cs);
+panic_info = g_malloc0(sizeof(GuestPanicInformation));
+
+panic_info->type = GUEST_PANIC_INFORMATION_TYPE_S390;
+panic_info->u.s390.core = cpu->env.core_id;
+panic_info->u.s390.psw_mask = cpu->env.psw.mask;
+panic_info->u.s390.psw_addr = cpu->env.psw.addr;
+panic_info->u.s390.reason = cpu->env.crash_reason;
+
+return panic_info;
+}
+
+static void s390_cpu_get_crash_info_qom(Object *obj, Visitor *v,
+const char *name, void *opaque,
+Error **errp)
+{
+CPUState *cs = CPU(obj);
+GuestPanicInformation *panic_info;
+
+if (!cs->crash_occurred) {
+error_setg(errp, "No crash occurred");
+return;
+}
+
+panic_info = s390_cpu_get_crash_info(cs);
+
+visit_type_GuestPanicInformation(v, "crash-information", &panic_info,
+ errp);
+qapi_free_GuestPanicInformation(panic_info);
+}
+
+void s390_cpu_init_sysemu(Object *obj)
+{
+CPUState *cs = CPU(obj);
+S390CPU *cpu = S390_CPU(obj);
+
+cs->start_powered_off = true;
+object_property_add(obj, "crash-information", "GuestPanicInformation",
+s390_cpu_get_crash_info_qom, NULL, NULL, NULL);
+cpu->env.tod_timer =
+timer_new_ns(QEMU_C

[RFC v2 08/13] target/s390x: make helper.c sysemu-only

2021-04-20 Thread Claudio Fontana
Now that we have moved cpu-dump functionality out of helper.c,
we can make the module sysemu-only.

Signed-off-by: Claudio Fontana 
---
 target/s390x/helper.c| 4 
 target/s390x/meson.build | 2 +-
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/target/s390x/helper.c b/target/s390x/helper.c
index 41ccc83d11..f246bec353 100644
--- a/target/s390x/helper.c
+++ b/target/s390x/helper.c
@@ -27,11 +27,8 @@
 #include "hw/s390x/pv.h"
 #include "sysemu/hw_accel.h"
 #include "sysemu/runstate.h"
-#ifndef CONFIG_USER_ONLY
 #include "sysemu/tcg.h"
-#endif
 
-#ifndef CONFIG_USER_ONLY
 void s390x_tod_timer(void *opaque)
 {
 cpu_inject_clock_comparator((S390CPU *) opaque);
@@ -322,4 +319,3 @@ int s390_store_adtl_status(S390CPU *cpu, hwaddr addr, hwaddr len)
 cpu_physical_memory_unmap(sa, len, 1, len);
 return 0;
 }
-#endif /* CONFIG_USER_ONLY */
diff --git a/target/s390x/meson.build b/target/s390x/meson.build
index 6e1aa3b0cd..bbcaede384 100644
--- a/target/s390x/meson.build
+++ b/target/s390x/meson.build
@@ -4,7 +4,6 @@ s390x_ss.add(files(
   'cpu_features.c',
   'cpu_models.c',
   'gdbstub.c',
-  'helper.c',
   'interrupt.c',
   'cpu-dump.c',
 ))
@@ -23,6 +22,7 @@ s390x_ss.add(gen_features_h)
 
 s390x_softmmu_ss = ss.source_set()
 s390x_softmmu_ss.add(files(
+  'helper.c',
   'arch_dump.c',
   'diag.c',
   'ioinst.c',
-- 
2.26.2




[RFC v2 13/13] MAINTAINERS: update s390x directories

2021-04-20 Thread Claudio Fontana
After the reshuffling, update MAINTAINERS accordingly.
Make use of the new directories:

target/s390x/kvm/
target/s390x/tcg/

Signed-off-by: Claudio Fontana 
---
 MAINTAINERS | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7de873c9f5..3d738bd4fc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -315,7 +315,7 @@ S390 TCG CPUs
 M: Richard Henderson 
 M: David Hildenbrand 
 S: Maintained
-F: target/s390x/
+F: target/s390x/tcg
 F: hw/s390x/
 F: disas/s390.c
 F: tests/tcg/s390x/
@@ -417,14 +417,12 @@ M: Halil Pasic 
 M: Cornelia Huck 
 M: Christian Borntraeger 
 S: Supported
-F: target/s390x/kvm.c
-F: target/s390x/kvm_s390x.h
-F: target/s390x/kvm-stub.c
+F: target/s390x/kvm/
 F: target/s390x/ioinst.[ch]
 F: target/s390x/machine.c
 F: target/s390x/sigp.c
 F: target/s390x/cpu_features*.[ch]
-F: target/s390x/cpu_models.[ch]
+F: target/s390x/cpu_models*.[ch]
 F: hw/s390x/pv.c
 F: include/hw/s390x/pv.h
 F: hw/intc/s390_flic.c
-- 
2.26.2




[RFC v2 12/13] target/s390x: split sysemu part of cpu models

2021-04-20 Thread Claudio Fontana
Also create a tiny cpu_models_user.c with just the (at least for now) empty
implementation of apply_cpu_model().

Signed-off-by: Claudio Fontana 
---
 target/s390x/s390x-internal.h|   2 +
 target/s390x/cpu_models.c| 417 +-
 target/s390x/cpu_models_sysemu.c | 426 +++
 target/s390x/cpu_models_user.c   |  20 ++
 target/s390x/meson.build |   4 +
 5 files changed, 453 insertions(+), 416 deletions(-)
 create mode 100644 target/s390x/cpu_models_sysemu.c
 create mode 100644 target/s390x/cpu_models_user.c

diff --git a/target/s390x/s390x-internal.h b/target/s390x/s390x-internal.h
index 171ecd59fb..e4ec5e55bb 100644
--- a/target/s390x/s390x-internal.h
+++ b/target/s390x/s390x-internal.h
@@ -265,6 +265,8 @@ static inline void s390_cpu_unhalt(S390CPU *cpu)
 /* cpu_models.c */
 void s390_cpu_model_class_register_props(ObjectClass *oc);
 void s390_realize_cpu_model(CPUState *cs, Error **errp);
+S390CPUModel *get_max_cpu_model(Error **errp);
+void apply_cpu_model(const S390CPUModel *model, Error **errp);
 ObjectClass *s390_cpu_class_by_name(const char *name);
 
 
diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 0ed1c23774..30a192590d 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -18,18 +18,11 @@
 #include "sysemu/tcg.h"
 #include "qapi/error.h"
 #include "qapi/visitor.h"
-#include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "qemu/qemu-print.h"
-#include "qapi/qmp/qerror.h"
-#include "qapi/qobject-input-visitor.h"
-#include "qapi/qmp/qdict.h"
 #ifndef CONFIG_USER_ONLY
-#include "sysemu/arch_init.h"
 #include "sysemu/sysemu.h"
-#include "hw/pci/pci.h"
 #endif
-#include "qapi/qapi-commands-machine-target.h"
 #include "hw/s390x/pv.h"
 
 #define CPUDEF_INIT(_type, _gen, _ec_ga, _mha_pow, _hmfai, _name, _desc) \
@@ -414,381 +407,6 @@ void s390_cpu_list(void)
 }
 }
 
-static S390CPUModel *get_max_cpu_model(Error **errp);
-
-#ifndef CONFIG_USER_ONLY
-static void list_add_feat(const char *name, void *opaque);
-
-static void check_unavailable_features(const S390CPUModel *max_model,
-   const S390CPUModel *model,
-   strList **unavailable)
-{
-S390FeatBitmap missing;
-
-/* check general model compatibility */
-if (max_model->def->gen < model->def->gen ||
-(max_model->def->gen == model->def->gen &&
- max_model->def->ec_ga < model->def->ec_ga)) {
-list_add_feat("type", unavailable);
-}
-
-/* detect missing features if any to properly report them */
-bitmap_andnot(missing, model->features, max_model->features,
-  S390_FEAT_MAX);
-if (!bitmap_empty(missing, S390_FEAT_MAX)) {
-s390_feat_bitmap_to_ascii(missing, unavailable, list_add_feat);
-}
-}
-
-struct CpuDefinitionInfoListData {
-CpuDefinitionInfoList *list;
-S390CPUModel *model;
-};
-
-static void create_cpu_model_list(ObjectClass *klass, void *opaque)
-{
-struct CpuDefinitionInfoListData *cpu_list_data = opaque;
-CpuDefinitionInfoList **cpu_list = &cpu_list_data->list;
-CpuDefinitionInfo *info;
-char *name = g_strdup(object_class_get_name(klass));
-S390CPUClass *scc = S390_CPU_CLASS(klass);
-
-/* strip off the -s390x-cpu */
-g_strrstr(name, "-" TYPE_S390_CPU)[0] = 0;
-info = g_new0(CpuDefinitionInfo, 1);
-info->name = name;
-info->has_migration_safe = true;
-info->migration_safe = scc->is_migration_safe;
-info->q_static = scc->is_static;
-info->q_typename = g_strdup(object_class_get_name(klass));
-/* check for unavailable features */
-if (cpu_list_data->model) {
-Object *obj;
-S390CPU *sc;
-obj = object_new_with_class(klass);
-sc = S390_CPU(obj);
-if (sc->model) {
-info->has_unavailable_features = true;
-check_unavailable_features(cpu_list_data->model, sc->model,
-   &info->unavailable_features);
-}
-object_unref(obj);
-}
-
-QAPI_LIST_PREPEND(*cpu_list, info);
-}
-
-CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
-{
-struct CpuDefinitionInfoListData list_data = {
-.list = NULL,
-};
-
-list_data.model = get_max_cpu_model(NULL);
-
-object_class_foreach(create_cpu_model_list, TYPE_S390_CPU, false,
- &list_data);
-
-return list_data.list;
-}
-
-static void cpu_model_from_info(S390CPUModel *model, const CpuModelInfo *info,
-Error **errp)
-{
-Error *err = NULL;
-const QDict *qdict = NULL;
-const QDictEntry *e;
-Visitor *visitor;
-ObjectClass *oc;
-S390CPU *cpu;
-Object *obj;
-
-if (info->props) {
-qdict = qobject_to(QDict, info->props);
-if (!qdict) {
-error_setg(errp, QERR_INVALID_PARAMETER_TYPE, "props", "dict");
-return;
-}
-}
-
-
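
The remainder of the diff, including the new target/s390x/cpu_models_user.c, is
not shown above. Based only on the apply_cpu_model() prototype added to
s390x-internal.h in this patch, a minimal sketch of that user-only stub could
look like the following (illustrative only; the posted file may differ):

/* Illustrative sketch of target/s390x/cpu_models_user.c, not the posted file. */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "cpu.h"
#include "s390x-internal.h"

void apply_cpu_model(const S390CPUModel *model, Error **errp)
{
    /* Nothing to apply for user-mode emulation, at least for now. */
}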

[RFC v2 10/13] target/s390x: remove kvm-stub.c

2021-04-20 Thread Claudio Fontana
All function calls are protected by kvm_enabled(),
so we should not need the stubs.

Signed-off-by: Claudio Fontana 
---
 target/s390x/kvm-stub.c  | 126 ---
 target/s390x/meson.build |   2 +-
 2 files changed, 1 insertion(+), 127 deletions(-)
 delete mode 100644 target/s390x/kvm-stub.c

diff --git a/target/s390x/kvm-stub.c b/target/s390x/kvm-stub.c
deleted file mode 100644
index 9970b5a8c7..00
--- a/target/s390x/kvm-stub.c
+++ /dev/null
@@ -1,126 +0,0 @@
-/*
- * QEMU KVM support -- s390x specific function stubs.
- *
- * Copyright (c) 2009 Ulrich Hecht
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#include "qemu/osdep.h"
-#include "cpu.h"
-#include "kvm_s390x.h"
-
-void kvm_s390_access_exception(S390CPU *cpu, uint16_t code, uint64_t te_code)
-{
-}
-
-int kvm_s390_mem_op(S390CPU *cpu, vaddr addr, uint8_t ar, void *hostbuf,
-int len, bool is_write)
-{
-return -ENOSYS;
-}
-
-void kvm_s390_program_interrupt(S390CPU *cpu, uint16_t code)
-{
-}
-
-int kvm_s390_set_cpu_state(S390CPU *cpu, uint8_t cpu_state)
-{
-return -ENOSYS;
-}
-
-void kvm_s390_vcpu_interrupt_pre_save(S390CPU *cpu)
-{
-}
-
-int kvm_s390_vcpu_interrupt_post_load(S390CPU *cpu)
-{
-return 0;
-}
-
-int kvm_s390_get_hpage_1m(void)
-{
-return 0;
-}
-
-int kvm_s390_get_ri(void)
-{
-return 0;
-}
-
-int kvm_s390_get_gs(void)
-{
-return 0;
-}
-
-int kvm_s390_get_clock(uint8_t *tod_high, uint64_t *tod_low)
-{
-return -ENOSYS;
-}
-
-int kvm_s390_get_clock_ext(uint8_t *tod_high, uint64_t *tod_low)
-{
-return -ENOSYS;
-}
-
-int kvm_s390_set_clock(uint8_t tod_high, uint64_t tod_low)
-{
-return -ENOSYS;
-}
-
-int kvm_s390_set_clock_ext(uint8_t tod_high, uint64_t tod_low)
-{
-return -ENOSYS;
-}
-
-void kvm_s390_enable_css_support(S390CPU *cpu)
-{
-}
-
-int kvm_s390_assign_subch_ioeventfd(EventNotifier *notifier, uint32_t sch,
-int vq, bool assign)
-{
-return -ENOSYS;
-}
-
-void kvm_s390_cmma_reset(void)
-{
-}
-
-void kvm_s390_reset_vcpu_initial(S390CPU *cpu)
-{
-}
-
-void kvm_s390_reset_vcpu_clear(S390CPU *cpu)
-{
-}
-
-void kvm_s390_reset_vcpu_normal(S390CPU *cpu)
-{
-}
-
-int kvm_s390_set_mem_limit(uint64_t new_limit, uint64_t *hw_limit)
-{
-return 0;
-}
-
-void kvm_s390_set_max_pagesize(uint64_t pagesize, Error **errp)
-{
-}
-
-void kvm_s390_crypto_reset(void)
-{
-}
-
-void kvm_s390_stop_interrupt(S390CPU *cpu)
-{
-}
-
-void kvm_s390_restart_interrupt(S390CPU *cpu)
-{
-}
-
-void kvm_s390_set_diag318(CPUState *cs, uint64_t diag318_info)
-{
-}
diff --git a/target/s390x/meson.build b/target/s390x/meson.build
index bbcaede384..6c8e03b8fb 100644
--- a/target/s390x/meson.build
+++ b/target/s390x/meson.build
@@ -8,7 +8,7 @@ s390x_ss.add(files(
   'cpu-dump.c',
 ))
 
-s390x_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
+s390x_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'))
 
 gen_features = executable('gen-features', 'gen-features.c', native: true,
   build_by_default: false)
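
For context, the stubs become unnecessary because of QEMU's usual kvm_enabled()
pattern: without CONFIG_KVM, kvm_enabled() evaluates to a compile-time 0, so
guarded calls are eliminated as dead code and the kvm_s390_*() symbols are never
referenced at link time. An illustrative caller sketch follows; the wrapper name
below is made up and this is not a verbatim excerpt from the tree:

/* Illustrative pattern only; inject_program_interrupt() is a made-up name. */
#include "qemu/osdep.h"
#include "cpu.h"
#include "kvm_s390x.h"
#include "sysemu/kvm.h"

static void inject_program_interrupt(S390CPU *cpu, uint16_t code)
{
    if (kvm_enabled()) {
        /*
         * With CONFIG_KVM unset, kvm_enabled() is the constant 0, so the
         * compiler drops this call and no stub symbol is needed at link time.
         */
        kvm_s390_program_interrupt(cpu, code);
    } else {
        /* The TCG path would be handled here. */
    }
}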
-- 
2.26.2




Re: [PATCH v5 14/14] hmp: Print "reserve" property of memory backends with "info memdev"

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's print the new property.
> 
> Reviewed-by: Dr. David Alan Gilbert 
> Cc: Markus Armbruster 
> Cc: Eric Blake 
> Cc: Igor Mammedov 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/core/machine-hmp-cmds.c | 2 ++
>  1 file changed, 2 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 13/14] qmp: Include "reserve" property of memory backends

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/13/21 11:14 AM, David Hildenbrand wrote:
> Let's include the new property.
> 
> Cc: Eric Blake 
> Cc: Markus Armbruster 
> Cc: Igor Mammedov 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/core/machine-qmp-cmds.c | 1 +
>  qapi/machine.json  | 4 
>  2 files changed, 5 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 05/14] softmmu/memory: Pass ram_flags to memory_region_init_ram_shared_nomigrate()

2021-04-20 Thread Philippe Mathieu-Daudé
On 4/20/21 12:27 PM, David Hildenbrand wrote:
> On 20.04.21 12:20, Philippe Mathieu-Daudé wrote:
>> Hi David,
>>
>> On 4/13/21 11:14 AM, David Hildenbrand wrote:
>>> Let's forward ram_flags instead, renaming
>>> memory_region_init_ram_shared_nomigrate() into
>>> memory_region_init_ram_flags_nomigrate(). Forward flags to
>>> qemu_ram_alloc() and qemu_ram_alloc_internal().
>>>
>>> Reviewed-by: Peter Xu 
>>> Signed-off-by: David Hildenbrand 
>>> ---
>>>   backends/hostmem-ram.c    |  6 +++--
>>>   hw/m68k/next-cube.c   |  4 ++--
>>>   include/exec/memory.h | 24 +--
>>>   include/exec/ram_addr.h   |  2 +-
>>>   .../memory-region-housekeeping.cocci  |  8 +++
>>>   softmmu/memory.c  | 20 
>>
>> OK up to here, but the qemu_ram_alloc_internal() changes
>> in softmmu/physmem.c belong to a different patch (except
>> the line adding "new_block->flags = ram_flags").
>> Do you mind splitting it?
>>
> 
> Can you elaborate? Temporarily passing both "ram_flags" and "bool
> resizeable, bool share" to qemu_ram_alloc_internal()?
> 
> I don't see a big benefit in doing that besides even more effective
> changes in two individual patches. But maybe if you elaborate, I can see
> what you would like to see :)

In this patch I see

1/ change a parameter and propagate it
2/ adapt assertions

I'd rather review the assertion changes / cleanups in another patch, simply
because it took me two different mental efforts to review the first part and
the second part. But maybe that is not possible, so I'll review the second
part here.

> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index cc59f05593..fdcd38ba61 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2108,12 +2108,14 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
>void (*resized)(const char*,
>uint64_t length,
>void *host),
> -  void *host, bool resizeable, bool share,
> +  void *host, uint32_t ram_flags,
>MemoryRegion *mr, Error **errp)
>  {
>  RAMBlock *new_block;
>  Error *local_err = NULL;
>

Maybe also:

   assert(!host || (ram_flags & RAM_PREALLOC));

> +assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE)) == 0);
> +
>  size = HOST_PAGE_ALIGN(size);
>  max_size = HOST_PAGE_ALIGN(max_size);
>  new_block = g_malloc0(sizeof(*new_block));
> @@ -2125,15 +2127,10 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
>  new_block->fd = -1;
>  new_block->page_size = qemu_real_host_page_size;
>  new_block->host = host;
> +new_block->flags = ram_flags;
>  if (host) {
>  new_block->flags |= RAM_PREALLOC;
>  }

We could also remove this statement ...

> -if (share) {
> -new_block->flags |= RAM_SHARED;
> -}
> -if (resizeable) {
> -new_block->flags |= RAM_RESIZEABLE;
> -}
>  ram_block_add(new_block, &local_err);
>  if (local_err) {
>  g_free(new_block);
> @@ -2146,15 +2143,14 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
>  RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
> MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, size, NULL, host, false,
> -   false, mr, errp);

... by passing RAM_PREALLOC here.

> +return qemu_ram_alloc_internal(size, size, NULL, host, 0, mr, errp);
>  }
>
> -RAMBlock *qemu_ram_alloc(ram_addr_t size, bool share,
> +RAMBlock *qemu_ram_alloc(ram_addr_t size, uint32_t ram_flags,
>   MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, size, NULL, NULL, false,
> -   share, mr, errp);
> +assert((ram_flags & ~RAM_SHARED) == 0);
> +return qemu_ram_alloc_internal(size, size, NULL, NULL, ram_flags, mr, errp);
>  }
>
>  RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
> @@ -2163,8 +2159,8 @@ RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
>   void *host),
>   MemoryRegion *mr, Error **errp)
>  {
> -return qemu_ram_alloc_internal(size, maxsz, resized, NULL, true,
> -   false, mr, errp);
> +return qemu_ram_alloc_internal(size, maxsz, resized, NULL,
> +   RAM_RESIZEABLE, mr, errp);
>  }
>
>  static void reclaim_ramblock(RAMBlock *block)
>
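
To make the two suggestions above concrete, here is a minimal sketch of where
they could lead, assuming the qemu_ram_alloc_internal() signature from the
patch; this is illustrative only, not part of the posted series:

/* Illustrative sketch of the review suggestion, not the posted patch. */
RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
                                  MemoryRegion *mr, Error **errp)
{
    /* The caller hands in its own buffer, so tag the block as preallocated. */
    return qemu_ram_alloc_internal(size, size, NULL, host, RAM_PREALLOC,
                                   mr, errp);
}

With that, qemu_ram_alloc_internal() could drop the
"if (host) { new_block->flags |= RAM_PREALLOC; }" special case, keep
assert(!host || (ram_flags & RAM_PREALLOC)), and extend its ram_flags sanity
check to admit RAM_PREALLOC as well.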



