Re: [PATCH v14 03/11] tests: Add test for QAPI builtin type time

2019-11-08 Thread Markus Armbruster
Tao Xu  writes:

> On 11/7/2019 9:31 PM, Eduardo Habkost wrote:
>> On Thu, Nov 07, 2019 at 02:24:52PM +0800, Tao Xu wrote:
>>> On 11/7/2019 4:53 AM, Eduardo Habkost wrote:
 On Mon, Oct 28, 2019 at 03:52:12PM +0800, Tao Xu wrote:
> Add tests for time input such as zero, around limit of precision,
> signed upper limit, actual upper limit, beyond limits, time suffixes,
> etc.
>
> Signed-off-by: Tao Xu 
> ---
 [...]
> +/* Close to signed upper limit 0x7ffffffffffffc00 (53 msbs set) */
> +qdict = keyval_parse("time1=9223372036854774784," /* 7ffffffffffffc00 */
> + "time2=9223372036854775295", /* 7ffffffffffffdff */
> + NULL, &error_abort);
> +v = qobject_input_visitor_new_keyval(QOBJECT(qdict));
> +qobject_unref(qdict);
> +visit_start_struct(v, NULL, NULL, 0, &error_abort);
> +visit_type_time(v, "time1", &time, &error_abort);
> +g_assert_cmphex(time, ==, 0x7ffffffffffffc00);
> +visit_type_time(v, "time2", &time, &error_abort);
> +g_assert_cmphex(time, ==, 0x7ffffffffffffc00);

 I'm confused by this test case and the one below[1].  Are these
 known bugs?  Shouldn't we document them as known bugs?
>>>
>>> Because do_strtosz() or do_strtomul() actually parse with strtod(), the
>>> precision is 53 bits, so in these cases, 7ffffffffffffdff and
>>> fffffffffffffbff are rounded.
>>
>> My questions remain: why isn't this being treated like a bug?
>>
> Hi Markus,
>
> I am confused about the code here too. Because in do_strtosz(), the
> upper limit is
>
> val * mul >= 0xfffffffffffffc00
>
> So some data near 53 bit may be rounded. Is there a bug?

No, but the design is surprising, and the functions lack written
contracts, except for the do_strtosz() helper, which has one that sucks.

qemu_strtosz() & friends are designed to accept fraction * unit
multiplier.  Example: 1.5M means 1.5 * 1024 * 1024 with qemu_strtosz()
and qemu_strtosz_MiB(), and 1.5 * 1000 * 1000 with
qemu_strtosz_metric().  Whether supporting fractions is a good idea is
debatable, but it's what we've got.

The implementation limits the numeric part to the precision of double,
i.e. 53 bits.  "8PiB should be enough for anybody."

Switching it from double to long double raises the limit to the
precision of long double.  At least 64 bit on common hosts, but hosts
exist where it's the same 53 bits.  Do we support any such hosts?  If
yes, then we'd make the precision depend on the host, which feels like a
bad idea.

A possible alternative is to parse the numeric part both as a double and
as a 64 bit unsigned integer, then use whatever consumes more
characters.  This enables providing full 64 bits unless you actually use
a fraction.
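
A minimal sketch of that dual-parse idea (illustrative only; this is not
the current qemu_strtosz() code, and the helper name is made up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /*
     * Parse the numeric part of a size string both ways; overflow and
     * locale handling omitted to keep the sketch short.
     */
    static bool parse_size_num(const char *s, uint64_t *ival, double *dval,
                               const char **end, bool *is_fraction)
    {
        char *end_i, *end_d;
        uint64_t i = strtoull(s, &end_i, 10);
        double d = strtod(s, &end_d);

        if (end_d > end_i) {
            /* strtod() consumed a fractional part: fall back to double */
            *dval = d;
            *is_fraction = true;
            *end = end_d;
        } else {
            /* pure integer: keep the exact 64-bit value */
            *ival = i;
            *is_fraction = false;
            *end = end_i;
        }
        return *end != s;
    }

The caller would then apply the unit multiplier to either the exact
integer or the double, so precision is lost only when a fraction was
actually given.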

As far as I remember, the only problem we've ever had with the 53 bits
limit is developer confusion :)

Patches welcome.




Re: [PATCH v21 3/6] ACPI: Add APEI GHES table generation support

2019-11-08 Thread gengdongjiu
On 2019/11/4 20:14, Xiang Zheng wrote:
> From: Dongjiu Geng 
> 
> This patch implements APEI GHES Table generation via fw_cfg blobs. Now
> it only supports ARMv8 SEA, a type of GHESv2 error source. Afterwards,
> we can extend the supported types if needed. For the CPER section,
> currently it is memory section because kernel mainly wants userspace to
> handle the memory errors.
> 
> This patch follows the spec ACPI 6.2 to build the Hardware Error Source
> table. For more detailed information, please refer to document:
> docs/specs/acpi_hest_ghes.rst
> 
> Suggested-by: Laszlo Ersek 
> Signed-off-by: Dongjiu Geng 
> Signed-off-by: Xiang Zheng 

Hi Xiang,
   please add "Reviewed-by: Michael S. Tsirkin  " which from 
Michael, thanks.




Re: [Qemu-devel] Exposing feature deprecation to machine clients

2019-11-08 Thread Max Reitz
On 07.11.19 20:13, Vladimir Sementsov-Ogievskiy wrote:
> 07.11.2019 21:52, Philippe Mathieu-Daudé wrote:
>> Hi Markus,
>>
>> On 8/15/19 7:40 PM, John Snow wrote:
>>> On 8/15/19 10:16 AM, Markus Armbruster wrote:
 John Snow  writes:
>> [...]
> I asked Markus this not too long ago; do we want to amend the QAPI
> schema specification to allow commands to return with "Warning" strings,
> or "Deprecated" stings to allow in-band deprecation notices for cases
> like these?
>
> example:
>
> { "return": {},
>    "deprecated": True,
>    "warning": "Omitting filter-node-name parameter is deprecated, it will
> be required in the future"
> }
>
> There's no "error" key, so this should be recognized as success by
> compatible clients, but they'll definitely see the extra information.

 This is a compatible evolution of the QMP protocol.

> Part of my motivation is to facilitate a more aggressive deprecation of
> legacy features by ensuring that we are able to rigorously notify users
> through any means that they need to adjust their scripts.

 Yes, we should help libvirt etc. with detecting use of deprecated
 features.  We discussed this at the KVM Forum 2018 BoF on deprecating
 stuff.  Minutes:

  Message-ID: <87mur0ls8o@dusky.pond.sub.org>
  https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg05828.html

 Last item is relevant here.

 Adding deprecation information to QMP's success response belongs to "We
 can also pass the buck to the next layer up", next to "emit a QMP
 event".

 Let's compare the two, i.e. "deprecation info in success response"
 vs. "deprecation event".

 1. Possible triggers

 Anything we put in the success response should only ever apply to the
 (successful) command.  So this one's limited to QMP commands.

 A QMP event is not limited to QMP commands.  For instance, it could be
 emitted for deprecated CLI features (long after the fact, in addition to
 human-readable warnings on stderr), or when we detect use of a
 deprecated feature only after we sent the success response, say in a
 job.  Neither use case is particularly convincing.  Reporting use of
 deprecated CLI in QMP feels like a work-around for the CLI's
 machine-unfriendliness.  Job-like commands should really check their
 arguments upfront.

 2. Connection to trigger

 Connecting responses to commands is the QMP protocol's responsibility.
 Transmitting deprecation information in the response trivially ties it
 to the offending command.

 The QMP protocol doesn't tie events to anything.  Thus, a deprecation
 event needs an event-specific tie to its trigger.

 The obvious way to tie it to a command mirrors how the QMP protocol ties
 responses to commands: by command ID.  The event either has to be sent
 just to the offending monitor (currently, all events are broadcast to
 all monitors), or include a suitable monitor ID.

 For non-command triggers, we'd have to invent some other tie.

 3. Interface complexity

 Tying the event to some arbitrary trigger adds complexity.

 Do we need non-command triggers, and badly enough to justify the
 additional complexity?

 4. Implementation complexity

 Emitting an event could be as simple as

  qapi_event_send_deprecated(qmp_command_id(),
     "Omitting 'filter-node-name'");

 where qmp_command_id() returns the ID of the currently executing
 command.  Making qmp_command_id() work is up to the QMP core.  Simple
 enough as long as each QMP command runs to completion before its monitor
 starts the next one.

 The event is "fire and forget".  There is no warning object propagated
 up the call chain into the QMP core like errors objects are.

 "Fire and forget" is ideal for letting arbitrary code decide "this is
 deprecated".

 Note the QAPI schema remains untouched.

 Unlike an event, which can be emitted anywhere, the success response
 gets built in the QMP core.  To have the core add deprecation info to
 it, we need to get the info to the core.

 If deprecation info originates in command code, like errors do, we need
 to propagate it up the call chain into the QMP core like errors.

 Propagating errors is painful.  It has caused massive churn all over the
 place.

 I don't think we can hitch deprecation info to the existing error
 propagation, since we need to take the success path back to the QMP
 core, not an error path.

 Propagating a second object for warnings... thanks, but no thanks.

>>>
>>> Probably the best argument against it. Fire-and-forget avoids the
>>> problem. Events might work just fine, but the

Re: QEMU HTML documentation now on qemu.org

2019-11-08 Thread Stefan Hajnoczi
On Thu, Nov 07, 2019 at 04:01:42PM +, Daniel P. Berrangé wrote:
> On Thu, Nov 07, 2019 at 04:44:34PM +0100, Stefan Hajnoczi wrote:
> > On Thu, Nov 7, 2019 at 11:07 AM Daniel P. Berrangé  
> > wrote:
> > >
> > > On Wed, Nov 06, 2019 at 05:19:28PM +0100, Stefan Hajnoczi wrote:
> > > > Hi,
> > > > You can now access the latest QEMU HTML documentation built from
> > > > qemu.git/master nightly at:
> > > >
> > > >   https://wiki.qemu.org/docs/qemu-doc.html
> > > >   https://wiki.qemu.org/docs/qemu-qmp-ref.html
> > > >   https://wiki.qemu.org/docs/qemu-ga-ref.html
> > > >   ...as well as interop/ and specs/
> > > >
> > > > Feel free to link to the documentation from the QEMU website and/or
> > > > wiki!
> > >
> > > What's the reason for putting on wiki.qemu.org URL ? It feels like
> > > having it under www.qemu.org would be a more natural home, especially
> > > if we can then make it pick up the jekyll theme around the pages.
> > >
> > > Ideally we should publish the docs under versioned URL when we
> > > make a release. eg  /docs/latest/  for current GIT master
> > > which I presume the above is tracking, and then a /docs/$VERSION/...
> > > for each major release we cut.
> > >
> > > That way users can get an accurate view of features in the QEMU
> > > they are actually using.
> > 
> > Versioned release docs should be generated during the release process.
> > I have CCed Mike Roth.  That way the docs are available as soon as the
> > release drops.  This container image only runs once a day and would
> > leave a window when users cannot access the docs yet.
> > 
> > Moving from wiki.qemu.org should be possible.  How does the jekyll
> > theme you mentioned work?
> 
> IIUC, when there's a push to the qemu-web.git repo, some git hook (?)
> runs on the server which invokes jekyll to build the content, and
> then publish it to the webroot.
> 
> To integrate these docs into that we need something along the lines
> of:
> 
>   1. Generate the HTML files as you do now
>   2. Copy them into the qemu-web.git in a /docs/ subdir
>   3. Prepend a magic header to make jeykll process the file
> 
>  ---
>  permalink: /docs/qemu-doc
>  ---
> 
>   4. Trigger the jekyll builder to refresh the generated docs
>   5. Publish the docs to the webroot
> 
> You can see what I did here  as an example where I simply committed
> the generated docs to qemu-web.git:
> 
>   https://www.mail-archive.com/qemu-devel@nongnu.org/msg578110.html
> 
> If we're not storing the generated docs in git, then when
> pushing to qemu-web.git we need to ensure we preserve the
> extra /docs dir content in some manner.

For qemu.git/master the built docs might change every day.  Committing
them to qemu-web.git seems like overkill.  I'll send a documentation.md
patch for qemu-web.git instead that simply links to
wiki.qemu.org/docs/.

For release docs the process you described sounds good.  Peter Maydell
or Mike Roth may be interested in integrating it into the release
scripts.

Stefan




Re: [PATCH v14 03/11] tests: Add test for QAPI builtin type time

2019-11-08 Thread Igor Mammedov
On Fri, 08 Nov 2019 09:05:52 +0100
Markus Armbruster  wrote:

> Tao Xu  writes:
> 
> > On 11/7/2019 9:31 PM, Eduardo Habkost wrote:  
> >> On Thu, Nov 07, 2019 at 02:24:52PM +0800, Tao Xu wrote:  
> >>> On 11/7/2019 4:53 AM, Eduardo Habkost wrote:  
>  On Mon, Oct 28, 2019 at 03:52:12PM +0800, Tao Xu wrote:  
> > Add tests for time input such as zero, around limit of precision,
> > signed upper limit, actual upper limit, beyond limits, time suffixes,
> > etc.
> >
> > Signed-off-by: Tao Xu 
> > ---  
>  [...]  
> > +/* Close to signed upper limit 0x7ffffffffffffc00 (53 msbs set) */
> > +qdict = keyval_parse("time1=9223372036854774784," /* 7ffffffffffffc00 */
> > + "time2=9223372036854775295", /* 7ffffffffffffdff */
> > + NULL, &error_abort);
> > +v = qobject_input_visitor_new_keyval(QOBJECT(qdict));
> > +qobject_unref(qdict);
> > +visit_start_struct(v, NULL, NULL, 0, &error_abort);
> > +visit_type_time(v, "time1", &time, &error_abort);
> > +g_assert_cmphex(time, ==, 0x7ffffffffffffc00);
> > +visit_type_time(v, "time2", &time, &error_abort);
> > +g_assert_cmphex(time, ==, 0x7ffffffffffffc00);
> 
>  I'm confused by this test case and the one below[1].  Are these
>  known bugs?  Shouldn't we document them as known bugs?  
> >>>
> >>> Because do_strtosz() or do_strtomul() actually parse with strtod(), the
> >>> precision is 53 bits, so in these cases, 7ffffffffffffdff and
> >>> fffffffffffffbff are rounded.
> >>
> >> My questions remain: why isn't this being treated like a bug?
> >>  
> > Hi Markus,
> >
> > I am confused about the code here too. Because in do_strtosz(), the
> > upper limit is
> >
> > val * mul >= 0xfffffffffffffc00
> >
> > So some data near 53 bit may be rounded. Is there a bug?  
> 
> No, but the design is surprising, and the functions lack written
> contracts, except for the do_strtosz() helper, which has one that sucks.
> 
> qemu_strtosz() & friends are designed to accept fraction * unit
> multiplier.  Example: 1.5M means 1.5 * 1024 * 1024 with qemu_strtosz()
> and qemu_strtosz_MiB(), and 1.5 * 1000 * 1000 with
> qemu_strtosz_metric().  Whether supporting fractions is a good idea is
> debatable, but it's what we've got.
> 
> The implementation limits the numeric part to the precision of double,
> i.e. 53 bits.  "8PiB should be enough for anybody."
>
> Switching it from double to long double raises the limit to the
> precision of long double.  At least 64 bit on common hosts, but hosts
> exist where it's the same 53 bits.  Do we support any such hosts?  If
> yes, then we'd make the precision depend on the host, which feels like a
> bad idea.
> 
> A possible alternative is to parse the numeric part both as a double and
> as a 64 bit unsigned integer, then use whatever consumes more
> characters.  This enables providing full 64 bits unless you actually use
> a fraction.
> 
> As far as I remember, the only problem we've ever had with the 53 bits
> limit is developer confusion :)

On the CLI, we could (ab)use the full 64-bit value (-1) for lat/bw to mark
unreachable nodes.  It would also be more consistent for both QMP and CLI to be
able to handle the same range. This way what was configured over QMP could
also be configured using the CLI.

> Patches welcome.




Re: [Qemu-devel] [Fail] tests/test-util-filemonitor fails

2019-11-08 Thread Miguel Arruga Vivas
Hello Daniel,

Daniel P. Berrangé  wrote:
> This reported bug was already fixed in git master with
> 
>   commit bf9e0313c27d8e6ecd7f7de3d63e1cb25d8f6311
>   Author: Daniel P. Berrangé 
>   Date:   Wed Aug 21 16:14:27 2019 +0100
> 
> tests: make filemonitor test more robust to event ordering

Thank you very much for your patch and your quick answer.

Best regards,
Miguel



[PATCH 0/2] block: Fix multiple blockdev-snapshot calls

2019-11-08 Thread Kevin Wolf
Kevin Wolf (2):
  block: Remove 'backing': null from bs->{explicit_,}options
  iotests: Test multiple blockdev-snapshot calls

 block.c|   2 +
 tests/qemu-iotests/273 |  76 +
 tests/qemu-iotests/273.out | 337 +
 tests/qemu-iotests/group   |   1 +
 4 files changed, 416 insertions(+)
 create mode 100755 tests/qemu-iotests/273
 create mode 100644 tests/qemu-iotests/273.out

-- 
2.20.1




[PATCH 1/2] block: Remove 'backing': null from bs->{explicit_, }options

2019-11-08 Thread Kevin Wolf
bs->options and bs->explicit_options shouldn't contain any options for
child nodes. bdrv_open_inherited() takes care to remove any options that
match a child name after opening the image and the same is done when
reopening.

However, we miss the case of 'backing': null, which is a child option,
but results in no child being created. This means that a 'backing': null
remains in bs->options and bs->explicit_options.

A typical use for 'backing': null is in live snapshots: blockdev-add for
the qcow2 overlay makes sure not to open the backing file (because it is
already opened and blockdev-snapshot will attach it). After doing a
blockdev-snapshot, bs->options and bs->explicit_options become
inconsistent with the actual state (bs has a backing file now, but the
options still say null). On the next occasion that the image is
reopened, e.g. switching it from read-write to read-only when another
snapshot is taken, the option will take effect again and the node
incorrectly loses its backing file.

Fix bdrv_open_inherited() to remove the 'backing' option from
bs->options and bs->explicit_options even for the case where it
specifies that no backing file is wanted.

Reported-by: Peter Krempa 
Signed-off-by: Kevin Wolf 
---
 block.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block.c b/block.c
index dad5a3d8e0..74ba9acb08 100644
--- a/block.c
+++ b/block.c
@@ -3019,6 +3019,8 @@ static BlockDriverState *bdrv_open_inherit(const char 
*filename,
 "use \"backing\": null instead");
 }
 flags |= BDRV_O_NO_BACKING;
+qdict_del(bs->explicit_options, "backing");
+qdict_del(bs->options, "backing");
 qdict_del(options, "backing");
 }
 
-- 
2.20.1




[PATCH 2/2] iotests: Test multiple blockdev-snapshot calls

2019-11-08 Thread Kevin Wolf
Test that doing a second blockdev-snapshot doesn't make the first
overlay's backing file go away.

Signed-off-by: Kevin Wolf 
---
 tests/qemu-iotests/273 |  76 +
 tests/qemu-iotests/273.out | 337 +
 tests/qemu-iotests/group   |   1 +
 3 files changed, 414 insertions(+)
 create mode 100755 tests/qemu-iotests/273
 create mode 100644 tests/qemu-iotests/273.out

diff --git a/tests/qemu-iotests/273 b/tests/qemu-iotests/273
new file mode 100755
index 00..60076de7ce
--- /dev/null
+++ b/tests/qemu-iotests/273
@@ -0,0 +1,76 @@
+#!/usr/bin/env bash
+#
+# Test multiple blockdev-snapshot calls
+#
+# Copyright (C) 2019 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see .
+#
+
+seq=$(basename "$0")
+echo "QA output created by $seq"
+
+status=1   # failure is the default!
+
+_cleanup()
+{
+_cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./common.rc
+. ./common.filter
+
+# This is a qcow2 regression test
+_supported_fmt qcow2
+_supported_proto file
+_supported_os Linux
+
+do_run_qemu()
+{
+echo Testing: "$@"
+$QEMU -nographic -qmp-pretty stdio -nodefaults "$@"
+echo
+}
+
+run_qemu()
+{
+do_run_qemu "$@" 2>&1 | _filter_testdir | _filter_qemu | _filter_qmp |
+_filter_generated_node_ids | _filter_imgfmt | _filter_actual_image_size
+}
+
+TEST_IMG="$TEST_IMG.base" _make_test_img 64M
+TEST_IMG="$TEST_IMG.mid" _make_test_img -b "$TEST_IMG.base"
+_make_test_img -b "$TEST_IMG.mid"
+
+run_qemu \
+-blockdev file,node-name=base,filename="$TEST_IMG.base" \
+ -blockdev file,node-name=midf,filename="$TEST_IMG.mid" \
+ -blockdev '{"driver":"qcow2","node-name":"mid","file":"midf","backing":null}' \
+ -blockdev file,node-name=topf,filename="$TEST_IMG" \
+ -blockdev '{"driver":"qcow2","file":"topf","node-name":"top","backing":null}' \
+<

[PATCH] iotests: Fix "no qualified output" error path

2019-11-08 Thread Kevin Wolf
The variable for error messages to be displayed is $results, not
$reason. Fix 'check' to print the "no qualified output" error message
again instead of having a failure without any message telling the user
why it failed.

Signed-off-by: Kevin Wolf 
---
 tests/qemu-iotests/check | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index 588c453a94..5218728470 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -876,7 +876,7 @@ do
 if [ ! -f "$reference" ]
 then
 status="fail"
-reason="no qualified output"
+results="no qualified output"
 err=true
 else
 if diff -w "$reference" $tmp.out >/dev/null 2>&1
-- 
2.20.1




Re: [RFC PATCH 18/18] qemu-storage-daemon: Add --monitor option

2019-11-08 Thread Markus Armbruster
Quick observation: --help fails to mention --monitor.




Re: [RFC PATCH 04/18] stubs: Add blk_by_qdev_id()

2019-11-08 Thread Markus Armbruster
Kevin Wolf  writes:

> blockdev.c uses the blk_by_qdev_id() function, so before we can use the
> file in tools (i.e. outside of the system emulator), we need to add a
> stub for it. The function always returns an error.
>
> Signed-off-by: Kevin Wolf 
> ---
>  stubs/blk-by-qdev-id.c | 9 +
>  stubs/Makefile.objs| 1 +
>  2 files changed, 10 insertions(+)
>  create mode 100644 stubs/blk-by-qdev-id.c
>
> diff --git a/stubs/blk-by-qdev-id.c b/stubs/blk-by-qdev-id.c
> new file mode 100644
> index 00..0b6160fefa
> --- /dev/null
> +++ b/stubs/blk-by-qdev-id.c
> @@ -0,0 +1,9 @@
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "sysemu/block-backend.h"
> +
> +BlockBackend *blk_by_qdev_id(const char *id, Error **errp)
> +{
> +error_setg(errp, "qdev IDs/QOM paths exist only in the system emulator");
> +return NULL;
> +}

Is this function meant to be called?  Let me check...  it seems to be
used just for QMP commands that accept either a qdev ID or QOM path (by
convention 'id') or a block backend name (by convention 'device').
Evidence:

   static BlockBackend *qmp_get_blk(const char *blk_name, const char *qdev_id,
Error **errp)
   {
   BlockBackend *blk;

   if (!blk_name == !qdev_id) {
   error_setg(errp, "Need exactly one of 'device' and 'id'");
   return NULL;
   }

   if (qdev_id) {
   blk = blk_by_qdev_id(qdev_id, errp);
   } else {
   blk = blk_by_name(blk_name);
   if (blk == NULL) {
   error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
 "Device '%s' not found", blk_name);
   }
   }

   return blk;
   }

Users:

* eject, blockdev-open-tray, blockdev-change-medium via do_open_tray()

* blockdev-close-tray

* eject, blockdev-remove-medium via blockdev_remove_medium()

* blockdev-insert-medium via blockdev_insert_medium()

  Aside: blockdev_insert_medium() could be inlined.

* blockdev-change-medium

* block_set_io_throttle

* block-latency-histogram-set

The newer ones among them don't provide 'device', and pass a null @blk_name to
qmp_get_blk().  Using blk_by_qdev_id() directly would be clearer.

Where 'device' is provided, it's been deprecated since v2.8.0.  Not
documented in qemu-deprecated.texi, though.

As far as I understand the storage daemon's intended purpose, none of
these commands make sense there.  Testing aha: it provides them all
anyway.  Is that a good idea?

> diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> index 9f4eb25e02..77fbf72576 100644
> --- a/stubs/Makefile.objs
> +++ b/stubs/Makefile.objs
> @@ -1,5 +1,6 @@
>  stub-obj-y += arch_type.o
>  stub-obj-y += bdrv-next-monitor-owned.o
> +stub-obj-y += blk-by-qdev-id.o
>  stub-obj-y += blk-commit-all.o
>  stub-obj-y += blockdev-close-all-bdrv-states.o
>  stub-obj-y += clock-warp.o




Re: Looking for issues/features for my first contribution

2019-11-08 Thread Alex Bennée


Rajath Shashidhara  writes:

> On 07-11-2019 07:33, Aleksandar Markovic wrote:
>> I did a quick Google search on datasheets of existing RTC
>> implemtations, and the result is:
>> DS1338:
>> https://datasheets.maximintegrated.com/en/ds/DS1338-DS1338Z.pdf
>> M41T80: https://www.st.com/resource/en/datasheet/m41t80.pdf
>> M48T59: http://www.elektronikjk.pl/elementy_czynne/IC/M48T59V.pdf
>> MC146818: https://www.nxp.com/docs/en/data-sheet/MC146818.pdf
>> PL031: 
>> http://infocenter.arm.com/help/topic/com.arm.doc.ddi0224c/real_time_clock_pl031_r1p3_technical_reference_manual_DDI0224C.pdf
>> TWL92230: 
>> https://datasheet.octopart.com/TWL92230C-Texas-Instruments-datasheet-150321.pdf
>> Zynq RTC: 
>> https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
>> (chapter 7)
>
> I have a few questions about this:
> [a] Is there any particular reason that you picked DS3231 ? Linux
> kernel has drivers for DS3232/34 only [1]. I did read the datasheets
> of both 3232 & 3231 and found that they are quite similar except for
> the 236 bytes of SRAM support found only in 3232.
>
> [b] As per the datasheet, DS3231 has a built-in temperature sensor.
> Temperature can be read from a dedicated register. There can be two
> approaches to emulating this: (1) Return a constant temperature value
> on every read (2) Throw a not-supported exception/warning. What is the
> qemu convention for handling such features ?

Don't throw an exception. You can at the minimum do a
qemu_log_mask(LOG_UNIMP) to indicate the system is using currently
unimplemented functionality. Alternatively wire-up a device property via
QOM so the user can vary the reported temperature.

QEMU currently doesn't have a decent API for exposing values for dynamic
emulated sensors to the outside world aside from QMP for changing device
values. It's something we have discussed in the past but the trick is
coming up with something that can cover the wide range of device types.
Maybe QMP is good enough?
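
As a rough illustration of combining those two suggestions (everything
below is hypothetical -- there is no DS3231 model in QEMU, the state
layout and property are invented, and only the 0x11 register offset
comes from the datasheet):

    #include "qemu/osdep.h"
    #include "qemu/log.h"

    typedef struct DS3231State {
        /* reported temperature in millidegrees C, settable via a QOM property */
        int32_t temperature;
    } DS3231State;

    static uint8_t ds3231_read_reg(DS3231State *s, uint8_t addr)
    {
        switch (addr) {
        case 0x11: /* temperature MSB: signed integer part in degrees C */
            return (uint8_t)(int8_t)(s->temperature / 1000);
        default:
            qemu_log_mask(LOG_UNIMP,
                          "ds3231: register 0x%02x not implemented\n", addr);
            return 0;
        }
    }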

>
> [c] DS3231 also has programmable square-wave output + 32 KHz output
> pin. M41T80 chip also supports this feature. However, qemu does not
> support emulation of these features [2]. Do I take the same approach ?
>
> Thanks!
> Rajath Shashidhara
>
> References:
> [1]
> https://elixir.bootlin.com/linux/v5.4-rc6/source/drivers/rtc/rtc-ds3232.c
> [2]
> https://git.qemu.org/?p=qemu.git;a=blob;f=hw/rtc/m41t80.c;h=914ecac8f4db418633d6daf92608cb50f6b89052;hb=HEAD


--
Alex Bennée



Re: [PATCH v2] ivshmem-server: Clean up shmem on shutdown

2019-11-08 Thread Jan Kiszka

On 06.08.19 15:01, Claudio Fontana wrote:

On 8/5/19 7:54 AM, Jan Kiszka wrote:

From: Jan Kiszka 

So far, the server leaves the posix shared memory object behind when
terminating, requiring the user to explicitly remove it in order to
start a new instance.

Signed-off-by: Jan Kiszka 
---

Changes in v2:
  - respect use_shm_open
  - also clean up in ivshmem_server_start error path

  contrib/ivshmem-server/ivshmem-server.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/contrib/ivshmem-server/ivshmem-server.c 
b/contrib/ivshmem-server/ivshmem-server.c
index 77f97b209c..88daee812d 100644
--- a/contrib/ivshmem-server/ivshmem-server.c
+++ b/contrib/ivshmem-server/ivshmem-server.c
@@ -353,6 +353,9 @@ ivshmem_server_start(IvshmemServer *server)
  err_close_sock:
  close(sock_fd);
  err_close_shm:
+if (server->use_shm_open) {
+shm_unlink(server->shm_path);
+}
  close(shm_fd);
  return -1;
  }
@@ -370,6 +373,9 @@ ivshmem_server_close(IvshmemServer *server)
  }

  unlink(server->unix_sock_path);
+if (server->use_shm_open) {
+shm_unlink(server->shm_path);
+}
  close(server->sock_fd);
  close(server->shm_fd);
  server->sock_fd = -1;
--
2.16.4




Reviewed-by: Claudio Fontana 





Markus, would you take this?

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



[PATCH qemu-web] Add a blog post on "Micro-Optimizing KVM VM-Exits"

2019-11-08 Thread Kashyap Chamarthy
This blog post summarizes the talk "Micro-Optimizing KVM VM-Exits"[1],
given by Andrea Arcangeli at the recently concluded KVM Forum 2019.

[1] 
https://kvmforum2019.sched.com/event/Tmwr/micro-optimizing-kvm-vm-exits-andrea-arcangeli-red-hat-inc

Signed-off-by: Kashyap Chamarthy 
---
 ...019-11-06-micro-optimizing-kvm-vmexits.txt | 115 ++
 1 file changed, 115 insertions(+)
 create mode 100644 _posts/2019-11-06-micro-optimizing-kvm-vmexits.txt

diff --git a/_posts/2019-11-06-micro-optimizing-kvm-vmexits.txt 
b/_posts/2019-11-06-micro-optimizing-kvm-vmexits.txt
new file mode 100644
index 
..f4a28d58ddb40103dd599fdfd861eeb4c41ed976
--- /dev/null
+++ b/_posts/2019-11-06-micro-optimizing-kvm-vmexits.txt
@@ -0,0 +1,115 @@
+---
+layout: post
+title: "Micro-Optimizing KVM VM-Exits"
+date:   2019-11-08
+categories: [kvm, optimization]
+---
+
+Background on VM-Exits
+--
+
+KVM (Kernel-based Virtual Machine) is the Linux kernel module that
+allows a host to run virtualized guests (Linux, Windows, etc).  The KVM
+"guest execution loop", with QEMU (the open source emulator and
+virtualizer) as its user space, is roughly as follows: QEMU issues the
+ioctl(), KVM_RUN, to tell KVM to prepare to enter the CPU's "Guest Mode"
+-- a special processor mode which allows guest code to safely run
+directly on the physical CPU.  The guest code, which is inside a "jail"
+and thus cannot interfere with the rest of the system, keeps running on
+the hardware until it encounters a request it cannot handle.  Then the
+processor gives the control back (referred to as "VM-Exit") either to
+kernel space, or to the user space to handle the request.  Once the
+request is handled, native execution of guest code on the processor
+resumes again.  And the loop goes on.
+
+There are dozens of reasons for VM-Exits (Intel's Software Developer
+Manual outlines 64 "Basic Exit Reasons").  For example, when a guest
+needs to emulate the CPUID instruction, it causes a "light-weight exit"
+to kernel space, because CPUID (among a few others) is emulated in the
+kernel itself, for performance reasons.  But when the kernel _cannot_
+handle a request, e.g. to emulate certain hardware, it results in a
+"heavy-weight exit" to QEMU, to perform the emulation.  These VM-Exits
+and subsequent re-entries ("VM-Enters"), even the light-weight ones, can
+be expensive.  What can be done about it?
+
+Guest workloads that are hard to virtualize
+---
+
+At the 2019 edition of the KVM Forum in Lyon, kernel developer, Andrea
+Arcangeli, attempted to address the kernel part of minimizing VM-Exits.
+
+His talk touched on the cost of VM-Exits into the kernel, especially for
+guest workloads (e.g. enterprise databases) that are sensitive to their
+performance penalty.  However, these workloads cannot avoid triggering
+VM-Exits with a high frequency.  Andrea then outlined some of the
+optimizations he's been working on to improve the VM-Exit performance in
+the KVM code path -- especially in light of applying mitigations for
+speculative execution flaws (Spectre v2, MDS, L1TF).
+
+Andrea gave a brief recap of the mitigations for the different kinds of
+speculative execution attacks (retpolines, IBPB, PTI, SSBD, etc.).
+Following that, he outlined the performance impact of Spectre-v2
+mitigations in the context of KVM.
+
+The microbenchmark: CPUID in a one million loop
+--
+
+The synthetic microbenchmark (meaning, focus on measuring the
+performance of a specific area of code) Andrea used was to run the CPUID
+instruction one million times, without any GCC optimizations or caching.
+This was done to test the latency of VM-Exits.
+
+While stressing that the results of these microbenchmarks do not
+represent real-world workloads, he had two goals in mind with it: (a)
+explain how the software mitigation works; and (b) to justify to the
+broader community the value of the software optimizations he's working
+on in KVM.
+
+Andrea then reasoned through several interesting graphs that show how
+CPU computation time gets impacted when you disable or enable the
+various kernel-space mitigations for Spectre v2, L1TF, MDS, et al.
+
+The proposal: "KVM Monolithic"
+--
+
+Based on his investigation, Andrea proposed a patch series, ["KVM
+monolithic"](https://lwn.net/Articles/800870/), to get rid of the KVM
+common module, 'kvm.ko'.  Instead, the KVM common code gets linked
+twice, once into each of the vendor-specific KVM modules,
+'kvm-intel.ko' and 'kvm-amd.ko'.
+
+The reason for doing this is that the 'kvm.ko' module indirectly calls
+(via the "retpoline" technique) the vendor-specific KVM modules at every
+VM-Exit, several times.  These indirect calls were not optimal before,
+but the "retpoline" mitigation (which isolates indirect branches, that
+allow a CPU to execute code from arbitrary locations, from speculative
+execution) for S

Re: [PATCH v2 07/11] block: add x-blockdev-amend qmp command

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-10-04 at 20:53 +0200, Max Reitz wrote:
> On 13.09.19 00:30, Maxim Levitsky wrote:
> > Signed-off-by: Maxim Levitsky 
> > ---
> >  block/Makefile.objs   |   2 +-
> >  block/amend.c | 116 ++
> >  include/block/block_int.h |  23 ++--
> >  qapi/block-core.json  |  26 +
> >  qapi/job.json |   4 +-
> >  5 files changed, 163 insertions(+), 8 deletions(-)
> >  create mode 100644 block/amend.c
> 
> I think I’d move this (and everything to belongs to it) to a different
> series.
I already kind of do this: patches prior to this one fully implement
the existing amend code path, while this one and the patches after it
implement the QMP x-blockdev-amend code path.

I don't mind sending this as two separate patch series, now that the
first refactoring patch series is committed upstream.


> 
> > diff --git a/block/Makefile.objs b/block/Makefile.objs
> > index 35f3bca4d9..10d0308792 100644
> > --- a/block/Makefile.objs
> > +++ b/block/Makefile.objs
> > @@ -18,7 +18,7 @@ block-obj-y += block-backend.o snapshot.o qapi.o
> >  block-obj-$(CONFIG_WIN32) += file-win32.o win32-aio.o
> >  block-obj-$(CONFIG_POSIX) += file-posix.o
> >  block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
> > -block-obj-y += null.o mirror.o commit.o io.o create.o
> > +block-obj-y += null.o mirror.o commit.o io.o create.o amend.o
> >  block-obj-y += throttle-groups.o
> >  block-obj-$(CONFIG_LINUX) += nvme.o
> >  
> > diff --git a/block/amend.c b/block/amend.c
> > new file mode 100644
> > index 00..9bd28e08e7
> > --- /dev/null
> > +++ b/block/amend.c
> > @@ -0,0 +1,116 @@
> > +/*
> > + * Block layer code related to image options amend
> > + *
> > + * Copyright (c) 2018 Kevin Wolf 
> > + * Copyright (c) 2019 Maxim Levitsky 
> > + *
> > + * Heavily based on create.c
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a 
> > copy
> > + * of this software and associated documentation files (the "Software"), 
> > to deal
> > + * in the Software without restriction, including without limitation the 
> > rights
> > + * to use, copy, modify, merge, publish, distribute, sublicense, and/or 
> > sell
> > + * copies of the Software, and to permit persons to whom the Software is
> > + * furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included 
> > in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
> > OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> > + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 
> > OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
> > FROM,
> > + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 
> > IN
> > + * THE SOFTWARE.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "block/block_int.h"
> > +#include "qemu/job.h"
> > +#include "qemu/main-loop.h"
> > +#include "qapi/qapi-commands-block-core.h"
> > +#include "qapi/qapi-visit-block-core.h"
> > +#include "qapi/clone-visitor.h"
> > +#include "qapi/error.h"
> > +
> > +typedef struct BlockdevAmendJob {
> > +Job common;
> > +BlockdevCreateOptions *opts;
> > +BlockDriverState *bs;
> > +bool force;
> > +} BlockdevAmendJob;
> > +
> > +static int coroutine_fn blockdev_amend_run(Job *job, Error **errp)
> > +{
> > +BlockdevAmendJob *s = container_of(job, BlockdevAmendJob, common);
> > +int ret;
> > +
> > +job_progress_set_remaining(&s->common, 1);
> > +ret = s->bs->drv->bdrv_co_amend(s->bs, s->opts, s->force, errp);
> > +job_progress_update(&s->common, 1);
> 
> It would be nice if the amend job could make use of the progress
> reporting that we have in place for amend.

I also thought about it, but is it worth it?

I looked through the status reporting of the qcow2 amend
code (which isn't really allowed to be run through QMP
blockdev-amend, due to the complexity of changing
the qcow2 format on the fly).


> 
> > +
> > +qapi_free_BlockdevCreateOptions(s->opts);
> > +
> > +return ret;
> > +}
> > +
> > +static const JobDriver blockdev_amend_job_driver = {
> > +.instance_size = sizeof(BlockdevAmendJob),
> > +.job_type  = JOB_TYPE_AMEND,
> > +.run   = blockdev_amend_run,
> > +};
> > +
> > +void qmp_x_blockdev_amend(const char *job_id,
> > +const char *node_name,
> > +BlockdevCreateOptions *options,
> > +bool has_force,
> > +bool force,
> > +Error **errp)
> > +{
> > +BlockdevAmendJob *s;
> > +const char *fmt = BlockdevDriver_str(options->driver);
> > +BlockDriver *drv = bdrv_find_format(fmt);
> > +   

Re: [Qemu-devel] [PATCH v2 02/11] qcrypto-luks: extend the create options for upcoming encryption key management

2019-11-08 Thread Maxim Levitsky
On Mon, 2019-10-07 at 09:49 +0200, Markus Armbruster wrote:
> Quick QAPI schema review only.
> 
> Maxim Levitsky  writes:
> 
> > Now you can specify which slot to put the encryption key to
> > Plus add 'active' option which will let  user erase the key secret
> > instead of adding it.
> > Check that active=true it when creating.
> > 
> > Signed-off-by: Maxim Levitsky 
> 
> [...]
> > diff --git a/qapi/crypto.json b/qapi/crypto.json
> > index b2a4cff683..9b83a70634 100644
> > --- a/qapi/crypto.json
> > +++ b/qapi/crypto.json
> > @@ -190,6 +190,20 @@
> 
>##
># @QCryptoBlockCreateOptionsLUKS:
>#
># The options that apply to LUKS encryption format initialization
>#
># @cipher-alg: the cipher algorithm for data encryption
>#  Currently defaults to 'aes-256'.
># @cipher-mode: the cipher mode for data encryption
>#   Currently defaults to 'xts'
># @ivgen-alg: the initialization vector generator
># Currently defaults to 'plain64'
># @ivgen-hash-alg: the initialization vector generator hash
> >  #  Currently defaults to 'sha256'
> >  # @hash-alg: the master key hash algorithm
> >  #Currently defaults to 'sha256'
> > +#
> > +# @active: Should the new secret be added (true) or erased (false)
> > +#  (amend only, since 4.2)
> 
> Is "active" established terminology?  I wouldn't have guessed its
> meaning from its name...

Yeah, this is one of the warts of the interface that I wanted to discuss with
everyone; it is basically the result of using the same API for creation and
amend.

blockdev-amend, similar to qemu-img amend, will allow the user to change the
format driver settings, and will use the same BlockdevCreateOptions for that,
similar to how qemu-img amend works.

For creation of course the 'active' parameter is redundant, and it is forced to 
true.
For amend it allows to add or erase a new key.
We couldn't really think of any better name, so I kind of decided just to
document this and leave it like that.

> 
> As far as I can see, QCryptoBlockCreateOptionsLUKS is used just for
> blockdev-create with options.driver \in { luks, qcow, qcow2 }:
> 
>{ 'command': 'blockdev-create',
>  'data': { ...
>'options': 'BlockdevCreateOptions' } }
> 
>{ 'union': 'BlockdevCreateOptions',
>  ...
>  'data': {
>  ...
>  'luks':   'BlockdevCreateOptionsLUKS',
>  ...
>  'qcow':   'BlockdevCreateOptionsQcow',
>  'qcow2':  'BlockdevCreateOptionsQcow2',
>  ... } }
> 
> With luks:
> 
>{ 'struct': 'BlockdevCreateOptionsLUKS',
>  'base': 'QCryptoBlockCreateOptionsLUKS',
>  ... }
> 
> With qcow and qcow2:
> 
> { 'struct': 'BlockdevCreateOptionsQcow',
>   'data': { ...
> '*encrypt': 'QCryptoBlockCreateOptions' } }
> { 'struct': 'BlockdevCreateOptionsQcow2',
>   'data': { ...
> '*encrypt': 'QCryptoBlockCreateOptions',
> ... } }
> 
> { 'union': 'QCryptoBlockCreateOptions',
>   'base': 'QCryptoBlockOptionsBase',
>   'discriminator': 'format',
>   'data': { ...
> 'luks': 'QCryptoBlockCreateOptionsLUKS' } }
> 
> I think I understand why we want blockdev-create to be able to specify a
> new secret.
> 
> Why do we want it to be able to delete an existing secret?  How would
> that even work?  Color me confused...

The BlockdevCreateOptions will now be used in
both creation and amend of a block device. Of course the deletion
of an existing secret doesn't make sense on creation time, and a check
is present to disallow the user to do that.

At the same time, the size and 'file' arguments are made optional,
so that during amend you could change the block device without
specifying those.


> 
> > +#
> > +# @slot: The slot in which to put/erase the secret
> > +#if not given, will select first free slot for secret addition
> > +#and erase all matching keyslots for erase. except last one
> > +#(optional, since 4.2)
> 
> Excuse my possibly ignorant question: what exactly is a "matching
> keyslot"?
Not ignorant at all, I dropped a word here.
I meant to say that it will erase all the keyslots which match the given
secret, except the last one. The 'active' field is what decides whether to add
or to remove a secret.


> 
> > +#
> > +# @unlock-secret: The secret to use to unlock the image
> > +#If not given, will use the secret that was used
> > +#when opening the image.
> > +#(optional, for amend only, since 4.2)
> 
> More ignorance: what is "amend"?  No mention of it in qapi/*json...
Not ignorant at all again! This interface will be added in the next patch,
and all the changes in this patch (other than specifying the keyslot,
which can be in theory useful anyway) are for it.


> 
> > +#
> >  # @iter-time: number of milliseconds to spend in
> >  # PBKDF passphrase processing. Cu

Re: [PATCH v2 11/11] iotests : add tests for encryption key management

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-10-04 at 21:11 +0200, Max Reitz wrote:
> On 13.09.19 00:30, Maxim Levitsky wrote:
> > Note that currently I add tests 300-302, which are
> > placeholders to ease the rebase. In final version
> > of these patches I will update these.
> > 
> > Signed-off-by: Maxim Levitsky 
> > ---
> >  tests/qemu-iotests/300 | 202 +
> >  tests/qemu-iotests/300.out |  98 +++
> >  tests/qemu-iotests/301 |  90 +
> >  tests/qemu-iotests/301.out |  30 +
> >  tests/qemu-iotests/302 | 252 +
> >  tests/qemu-iotests/302.out |  18 +++
> >  tests/qemu-iotests/303 | 228 +
> >  tests/qemu-iotests/303.out |  28 +
> >  tests/qemu-iotests/group   |   9 ++
> >  9 files changed, 955 insertions(+)
> >  create mode 100755 tests/qemu-iotests/300
> >  create mode 100644 tests/qemu-iotests/300.out
> >  create mode 100755 tests/qemu-iotests/301
> >  create mode 100644 tests/qemu-iotests/301.out
> >  create mode 100644 tests/qemu-iotests/302
> >  create mode 100644 tests/qemu-iotests/302.out
> >  create mode 100644 tests/qemu-iotests/303
> >  create mode 100644 tests/qemu-iotests/303.out
> 
> [...]
> 
> > diff --git a/tests/qemu-iotests/303.out b/tests/qemu-iotests/303.out
> > new file mode 100644
> > index 00..1cf3917208
> > --- /dev/null
> > +++ b/tests/qemu-iotests/303.out
> > @@ -0,0 +1,28 @@
> > +qemu-img: Failed to get shared "consistent read" lock
> > +Is another process using the image 
> > [/home/mlevitsk/USERSPACE/qemu/build-luks/tests/qemu-iotests/scratch/test.img]?
> 
> Ah, this should be filtered.

Ooops, missed this one, thanks!
Fixed now.

> 
> Max
> 

Best regards,
Maxim Levitsky





Re: [PATCH v2 02/11] qcrypto-luks: extend the create options for upcoming encryption key management

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-10-04 at 19:42 +0200, Max Reitz wrote:
> On 13.09.19 00:30, Maxim Levitsky wrote:
> > Now you can specify which slot to put the encryption key to
> > Plus add 'active' option which will let  user erase the key secret
> > instead of adding it.
> > Check that active=true it when creating.
> > 
> > Signed-off-by: Maxim Levitsky 
> > ---
> >  block/crypto.c |  2 ++
> >  block/crypto.h | 16 +++
> >  block/qcow2.c  |  2 ++
> >  crypto/block-luks.c| 26 +++---
> >  qapi/crypto.json   | 19 ++
> >  tests/qemu-iotests/082.out | 54 ++
> >  6 files changed, 115 insertions(+), 4 deletions(-)
> 
> (Just doing a cursory RFC-style review)
> 
> I think we also want to reject unlock-secret if it’s given for creation;
Agree, I'll do this in the next version.

> and I suppose it’d be more important to print which slots are OK than
> the slot the user has given.  (It isn’t like we shouldn’t print that
> slot index, but it’s more likely the user knows that than what the
> limits are.  I think.)
I don't really understand what you mean here :-( 

Since this is a QMP interface,
I can't really print anything from it, other than error messages.



> 
> Max
> 

Best regards,
Maxim Levitsky




Re: QEMU HTML documentation now on qemu.org

2019-11-08 Thread Paolo Bonzini
On 08/11/19 09:41, Stefan Hajnoczi wrote:
>> If we're not storing the generated docs in git, then when
>> pushing to qemu-web.git we need to ensure we preserve the
>> extra /docs dir content in some manner.
> For qemu.git/master the built docs might change every day.  Committing
> them to qemu-web.git seems like overkill.  I'll send a documentation.md
> patch for qemu-web.git instead that simply links to
> wiki.qemu.org/docs/.

I think this is a good first step.  Perhaps in the long term we want to
have docs.qemu.org/latest/ link to the latest release,
docs.qemu.org/unstable/ rebuilt from master, etc.

Paolo



Re: [PATCH v2 05/11] block/crypto: implement the encryption key management

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-10-04 at 20:41 +0200, Max Reitz wrote:
> On 13.09.19 00:30, Maxim Levitsky wrote:
> > This implements the encryption key management
> > using the generic code in qcrypto layer
> > (currently only for qemu-img amend)
> > 
> > This code adds another 'write_func' because the initialization
> > write_func works directly on the underlying file,
> > because during the creation, there is no open instance
> > of the luks driver, but during regular use, we have it,
> > and should use it instead.
> > 
> > 
> > This commit also adds a 'hack/workaround' I and Kevin Wolf (thanks)
> > made to make the driver still support write sharing,
> > but be safe against concurrent  metadata update (the keys)
> > Eventually write sharing for luks driver will be deprecated
> > and removed together with this hack.
> > 
> > The hack is that we ask (as a format driver) for
> > BLK_PERM_CONSISTENT_READ always
> > (technically always unless opened with BDRV_O_NO_IO)
> > 
> > and then when we want to update the keys, we
> > unshare that permission. So if someone else
> > has the image open, even readonly, this will fail.
> > 
> > Also thanks to Daniel Berrange for the variant of
> > that hack that involves asking for read,
> > rather that write permission
> > 
> > Signed-off-by: Maxim Levitsky 
> > ---
> >  block/crypto.c | 118 +++--
> >  1 file changed, 115 insertions(+), 3 deletions(-)
> > 
> > diff --git a/block/crypto.c b/block/crypto.c
> > index a6a3e1f1d8..f42fa057e6 100644
> > --- a/block/crypto.c
> > +++ b/block/crypto.c
> > @@ -36,6 +36,7 @@ typedef struct BlockCrypto BlockCrypto;
> >  
> >  struct BlockCrypto {
> >  QCryptoBlock *block;
> > +bool updating_keys;
> >  };
> >  
> >  
> > @@ -70,6 +71,24 @@ static ssize_t block_crypto_read_func(QCryptoBlock 
> > *block,
> >  return ret;
> >  }
> >  
> > +static ssize_t block_crypto_write_func(QCryptoBlock *block,
> > +   size_t offset,
> > +   const uint8_t *buf,
> > +   size_t buflen,
> > +   void *opaque,
> > +   Error **errp)
> 
> There’s already a function of this name for creation.

There is a long story why two write functions are needed.
I tried to use only one, but in the end Daniel and I both agreed
that it's just better to have two functions.

The reason is that during creation, the luks BlockDriverState doesn't exist yet,
and so the creation routine basically just writes to the underlying protocol 
driver.

That is why block_crypto_create_write_func() receives a BlockBackend pointer,
into which the BlockDriverState of the underlying protocol driver is inserted.


On the other hand, for amend, the luks block device is open, and it only knows
about its own BlockDriverState, and thus the I/O should be done on bs->file.

So instead of trying to coerce a single callback to do both of this,
we decided to just have a little code duplication.
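
For reference, here is roughly what that creation-time counterpart boils
down to (simplified, error handling trimmed; the real code lives in
block/crypto.c):

    static ssize_t block_crypto_create_write_func(QCryptoBlock *block,
                                                  size_t offset,
                                                  const uint8_t *buf,
                                                  size_t buflen,
                                                  void *opaque,
                                                  Error **errp)
    {
        /* opaque is the BlockCryptoCreateData holding the creation BlockBackend */
        struct BlockCryptoCreateData *data = opaque;
        ssize_t ret = blk_pwrite(data->blk, offset, buf, buflen, 0);

        if (ret < 0) {
            error_setg_errno(errp, -ret, "Could not write encryption header");
        }
        return ret;
    }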


> 
> > +{
> > +BlockDriverState *bs = opaque;
> > +ssize_t ret;
> > +
> > +ret = bdrv_pwrite(bs->file, offset, buf, buflen);
> > +if (ret < 0) {
> > +error_setg_errno(errp, -ret, "Could not write encryption header");
> > +return ret;
> > +}
> > +return ret;
> > +}
> > +
> >  
> >  struct BlockCryptoCreateData {
> >  BlockBackend *blk;
> 
> [...]
> 
> > +static void
> > +block_crypto_child_perms(BlockDriverState *bs, BdrvChild *c,
> > + const BdrvChildRole *role,
> > + BlockReopenQueue *reopen_queue,
> > + uint64_t perm, uint64_t shared,
> > + uint64_t *nperm, uint64_t *nshared)
> > +{
> > +
> > +BlockCrypto *crypto = bs->opaque;
> > +
> > +/*
> > + * Ask for consistent read permission so that if
> > + * someone else tries to open this image with this permission
> > + * neither will be able to edit encryption keys
> > + */
> > +if (!(bs->open_flags & BDRV_O_NO_IO)) {
> > +perm |= BLK_PERM_CONSISTENT_READ;
> > +}
> > +
> > +/*
> > + * This driver doesn't modify LUKS metadata except
> > + * when updating the encryption slots.
> > + * Thus unlike a proper format driver we don't ask for
> > + * shared write permission. However we need it
> > + * when we are updating keys, to ensure that only we
> > + * had opened the device r/w
> > + *
> > + * Encryption update will set the crypto->updating_keys
> > + * during that period and refresh permissions
> > + *
> > + */
> > +
> > +if (crypto->updating_keys) {
> > +/*need exclusive write access for header update  */
> > +perm |= BLK_PERM_WRITE;
> > +shared &= ~(BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE);
> > +}
> > +
> > +bdrv_filter_default_perms(bs, c, role, reopen_queue

Re: Deprecating stuff for 4.2

2019-11-08 Thread Vladimir Sementsov-Ogievskiy
08.11.2019 9:41, Markus Armbruster wrote:
> Vladimir Sementsov-Ogievskiy  writes:
> 
>> 07.11.2019 21:52, Philippe Mathieu-Daudé wrote:
> [...]
>>> Pre-release period, time to deprecate some stuffs :)
>>>
>>> How should we proceed? Do you have something in mind?
>>>
>>> There are older threads about this. Should we start a new thread? Gather 
>>> the different ideas on the Wiki?
>>>
>>> (Obviously you are not the one responsible of this topic, you just happen 
>>> to be the last one worried about it on the list).
>>>
>>> Regards,
>>>
>>> Phil.
> 
> 4.2.0-rc0 has been tagged, i.e. we're in hard freeze already.  Only bug
> fixes are accepted during hard freeze.  We've occasionally bent this
> rule after -rc0 for borderline cases, e.g. to tweak a new external
> interface before the release calcifies it.  Making a case for bending
> the rules becomes harder with each -rc.
> 
> Ideally, we'd double-check new interfaces for gaffes before a release,
> and whether old interfaces that have been replaced now should be
> deprecated.  There's rarely time for that, and pretty much never for
> releases right after KVM Forum.
> 
> So no, I don't have anything in mind for 4.2.
> 
> We intend to tag -rc1 next Tuesday.  To make that deadline, we'd need
> patches, not just ideas.
> 
>> Hi!
>>
>> I wanted to resend, but faced some problems, and understand that I can't do 
>> it in time before soft-freeze..
>> But you say, that we can deprecate something even after hard-freeze?
> 
> See above.
> 
>> Ok, the problem that I faced is that deprecation warnings break some
>> iotests. What can we do?
>>
>> 1. Update iotests...
>> 1.1 Just update iotests outputs to show warnings. Then, in next release 
>> cycle, update iotests, to not use deprecated things
> 
> Sounds workable to me, but I'm not the maintainer.
> 
>> or
>> 1.2 Update iotests to not use deprecated things.. Not appropriate for 
>> hard freeze.
> 
> Unnecessarily risky compared to 1.1.
> 
>> or
>> 2. Commit deprecations without warnings.. But how do people find out about 
>> this?
> 
> Not nice.
> 
> We do it for QMP, but only because we still lack the means to warn
> there.
> 
>> Next, what exactly to deprecate? As I understand, we can't deprecate 
>> drive-mirror now?
>> So I propose to:
>>
>> 1. deprecate drive-backup
>> 2. add optional filter-node-name parameter to drive-mirror, to correspond to 
>> commit and mirror
>> 3. deprecate that filter-node-name is optional for commit and mirror.
> 
> To have a chance there, we need patches a.s.a.p.
> 

OK, I'll send today and we'll see, what to do with it.

-- 
Best regards,
Vladimir


[PATCH 4/4 V2] scsi-disk: add FUA support for COMPARE_AND_WRITE

2019-11-08 Thread Yaowei Bai
It is implemented in blk_aio_pwritev's callback function, in a way
similar to its emulation in the scsi_write_do_fua function.

Signed-off-by: Yaowei Bai 
---
 hw/scsi/scsi-disk.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index f9a0267..0731a3b 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -229,6 +229,7 @@ static bool scsi_is_cmd_fua(SCSICommand *cmd)
 case WRITE_10:
 case WRITE_12:
 case WRITE_16:
+case COMPARE_AND_WRITE:
 return (cmd->buf[1] & 8) != 0;
 
 case VERIFY_10:
@@ -1850,10 +1851,17 @@ static void scsi_compare_and_write_complete(void 
*opaque, int ret)
 }
 
 block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct);
+if (r->need_fua_emulation) {
+block_acct_start(blk_get_stats(s->qdev.conf.blk), &r->acct, 0,
+ BLOCK_ACCT_FLUSH);
+r->req.aiocb = blk_aio_flush(s->qdev.conf.blk, scsi_aio_complete, r);
+goto free;
+}
 scsi_req_complete(&r->req, GOOD);
 
 done:
 scsi_req_unref(&r->req);
+free:
 qemu_vfree(data->iov.iov_base);
 g_free(data);
 aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
@@ -1954,6 +1962,7 @@ static int32_t scsi_disk_emulate_command(SCSIRequest 
*req, uint8_t *buf)
 {
 SCSIDiskReq *r = DO_UPCAST(SCSIDiskReq, req, req);
 SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, req->dev);
+SCSIDiskClass *sdc = (SCSIDiskClass *) object_get_class(OBJECT(s));
 uint64_t nb_sectors;
 uint8_t *outbuf;
 int buflen;
@@ -2209,6 +2218,7 @@ static int32_t scsi_disk_emulate_command(SCSIRequest 
*req, uint8_t *buf)
 return 0;
 }
 assert(!r->req.aiocb);
+r->need_fua_emulation = sdc->need_fua_emulation(&r->req.cmd);
 r->iov.iov_len = MIN(r->buflen, req->cmd.xfer);
 if (r->iov.iov_len == 0) {
 scsi_req_complete(&r->req, GOOD);
-- 
1.8.3.1






[PATCH 1/4 V2] block: add SCSI COMPARE_AND_WRITE support

2019-11-08 Thread Yaowei Bai
Some storage backends (e.g. librbd) already have interfaces to handle some SCSI
commands directly. This patch adds COMPARE_AND_WRITE command support
through the write path in the block I/O layer by introducing a new element,
BDRV_REQ_COMPARE_AND_WRITE, into BdrvRequestFlags, which indicates a
COMPARE_AND_WRITE request. In this way we can easily extend support to other
SCSI commands such as WRITE_SAME in the future.

Signed-off-by: Yaowei Bai 
---
 block/io.c| 20 
 include/block/block.h |  5 +++--
 include/block/block_int.h |  3 +++
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index f75777f..3507d71 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1186,6 +1186,26 @@ static int coroutine_fn 
bdrv_driver_pwritev(BlockDriverState *bs,
 goto emulate_flags;
 }
 
+if (drv->bdrv_aio_compare_and_write &&
+  (flags & BDRV_REQ_COMPARE_AND_WRITE)) {
+BlockAIOCB *acb;
+CoroutineIOCompletion co = {
+.coroutine = qemu_coroutine_self(),
+};
+
+acb = drv->bdrv_aio_compare_and_write(bs, offset, bytes, qiov,
+flags & bs->supported_write_flags,
+bdrv_co_io_em_complete, &co);
+flags &= ~bs->supported_write_flags;
+if (acb == NULL) {
+ret = -EIO;
+} else {
+qemu_coroutine_yield();
+ret = co.ret;
+}
+goto emulate_flags;
+}
+
 if (drv->bdrv_aio_pwritev) {
 BlockAIOCB *acb;
 CoroutineIOCompletion co = {
diff --git a/include/block/block.h b/include/block/block.h
index 1df9848..f71aa4f 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -92,9 +92,10 @@ typedef enum {
  * on read request and means that caller doesn't really need data to be
  * written to qiov parameter which may be NULL.
  */
-BDRV_REQ_PREFETCH  = 0x200,
+BDRV_REQ_PREFETCH   = 0x200,
+BDRV_REQ_COMPARE_AND_WRITE  = 0x400,
 /* Mask of valid flags */
-BDRV_REQ_MASK   = 0x3ff,
+BDRV_REQ_MASK   = 0x7ff,
 } BdrvRequestFlags;
 
 typedef struct BlockSizes {
diff --git a/include/block/block_int.h b/include/block/block_int.h
index dd033d0..96096e0 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -189,6 +189,9 @@ struct BlockDriver {
 BlockAIOCB *(*bdrv_aio_pdiscard)(BlockDriverState *bs,
 int64_t offset, int bytes,
 BlockCompletionFunc *cb, void *opaque);
+BlockAIOCB *(*bdrv_aio_compare_and_write)(BlockDriverState *bs,
+uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags,
+BlockCompletionFunc *cb, void *opaque);
 
 int coroutine_fn (*bdrv_co_readv)(BlockDriverState *bs,
 int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
-- 
1.8.3.1






[PATCH 2/4 V2] block/rbd: implement bdrv_aio_compare_and_write interface

2019-11-08 Thread Yaowei Bai
This patch adds support for librbd's SCSI COMPARE_AND_WRITE command
interface via the bdrv_aio_compare_and_write function pointer. Note that
currently, when a miscompare happens, a mismatch offset of 0 is always
reported rather than the actual mismatch offset. This should not be a big
issue for now and will be fixed accordingly in the future.

Signed-off-by: Yaowei Bai 
---
 block/raw-format.c |  3 ++-
 block/rbd.c| 45 +++--
 2 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/block/raw-format.c b/block/raw-format.c
index 3a76ec7..e87cd44 100644
--- a/block/raw-format.c
+++ b/block/raw-format.c
@@ -439,7 +439,8 @@ static int raw_open(BlockDriverState *bs, QDict *options, 
int flags,
 
 bs->sg = bs->file->bs->sg;
 bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
-(BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
+(BDRV_REQ_FUA & bs->file->bs->supported_write_flags) |
+(BDRV_REQ_COMPARE_AND_WRITE & bs->file->bs->supported_write_flags);
 bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
 ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
 bs->file->bs->supported_zero_flags);
diff --git a/block/rbd.c b/block/rbd.c
index 027cbcc..0e45bc3 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -73,11 +73,18 @@
 #define LIBRBD_USE_IOVEC 0
 #endif
 
+#ifdef LIBRBD_SUPPORTS_COMPARE_AND_WRITE
+#define LIBRBD_HAVE_COMPARE_AND_WRITE 1
+#else
+#define LIBRBD_HAVE_COMPARE_AND_WRITE 0
+#endif
+
 typedef enum {
 RBD_AIO_READ,
 RBD_AIO_WRITE,
 RBD_AIO_DISCARD,
-RBD_AIO_FLUSH
+RBD_AIO_FLUSH,
+RBD_AIO_COMPARE_AND_WRITE
 } RBDAIOCmd;
 
 typedef struct RBDAIOCB {
@@ -798,6 +805,10 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict 
*options, int flags,
 }
 }
 
+if (LIBRBD_HAVE_COMPARE_AND_WRITE) {
+bs->supported_write_flags = BDRV_REQ_COMPARE_AND_WRITE;
+}
+
 r = 0;
 goto out;
 
@@ -933,7 +944,15 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
 
 rcb = g_new(RADOSCB, 1);
 
-if (!LIBRBD_USE_IOVEC) {
+if (cmd == RBD_AIO_COMPARE_AND_WRITE) {
+acb->bounce = qemu_try_blockalign(bs, qiov->size);
+if (acb->bounce == NULL) {
+goto failed;
+}
+
+qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
+rcb->buf = acb->bounce;
+} else if (!LIBRBD_USE_IOVEC) {
 if (cmd == RBD_AIO_DISCARD || cmd == RBD_AIO_FLUSH) {
 acb->bounce = NULL;
 } else {
@@ -993,6 +1012,10 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
 case RBD_AIO_FLUSH:
 r = rbd_aio_flush_wrapper(s->image, c);
 break;
+case RBD_AIO_COMPARE_AND_WRITE:
+r = rbd_aio_compare_and_write(s->image, off, size / 2, rcb->buf,
+ (rcb->buf + size / 2), c, 0, 0);
+break;
 default:
 r = -EINVAL;
 }
@@ -1056,6 +1079,20 @@ static int qemu_rbd_co_flush(BlockDriverState *bs)
 }
 #endif
 
+#ifdef LIBRBD_SUPPORTS_COMPARE_AND_WRITE
+static BlockAIOCB *qemu_rbd_aio_compare_and_write(BlockDriverState *bs,
+  uint64_t offset,
+  uint64_t bytes,
+  QEMUIOVector *qiov,
+  int flags,
+  BlockCompletionFunc *cb,
+  void *opaque)
+{
+return rbd_start_aio(bs, offset, qiov, bytes, cb, opaque,
+ RBD_AIO_COMPARE_AND_WRITE);
+}
+#endif
+
 static int qemu_rbd_getinfo(BlockDriverState *bs, BlockDriverInfo *bdi)
 {
 BDRVRBDState *s = bs->opaque;
@@ -1310,6 +1347,10 @@ static BlockDriver bdrv_rbd = {
 .bdrv_aio_pdiscard  = qemu_rbd_aio_pdiscard,
 #endif
 
+#ifdef LIBRBD_SUPPORTS_COMPARE_AND_WRITE
+.bdrv_aio_compare_and_write = qemu_rbd_aio_compare_and_write,
+#endif
+
 .bdrv_snapshot_create   = qemu_rbd_snap_create,
 .bdrv_snapshot_delete   = qemu_rbd_snap_remove,
 .bdrv_snapshot_list = qemu_rbd_snap_list,
-- 
1.8.3.1






[PATCH 3/4 V2] hw/scsi: add SCSI COMPARE_AND_WRITE support

2019-11-08 Thread Yaowei Bai
This patch emulates the COMPARE_AND_WRITE command with the
BDRV_REQ_COMPARE_AND_WRITE flag introduced by the previous patch. It matches
the SBC-4 standard except for FUA bit support, which will be finished in
the next patch.

Note that cmd->xfer is set to 2 * the number returned by scsi_data_cdb_xfer
so that we touch the least amount of code.

Signed-off-by: Yaowei Bai 
---
 hw/scsi/emulation.c |  1 +
 hw/scsi/scsi-bus.c  |  4 +++
 hw/scsi/scsi-disk.c | 88 +
 hw/scsi/trace-events|  1 +
 include/hw/scsi/emulation.h |  3 ++
 include/scsi/utils.h|  2 ++
 scsi/utils.c|  5 +++
 7 files changed, 104 insertions(+)

diff --git a/hw/scsi/emulation.c b/hw/scsi/emulation.c
index 06d62f3..1f53c4a 100644
--- a/hw/scsi/emulation.c
+++ b/hw/scsi/emulation.c
@@ -9,6 +9,7 @@ int scsi_emulate_block_limits(uint8_t *outbuf, const 
SCSIBlockLimits *bl)
 memset(outbuf, 0, 0x3c);
 
 outbuf[0] = bl->wsnz; /* wsnz */
+outbuf[1] = MAX_COMPARE_AND_WRITE_LENGTH;
 
 if (bl->max_io_sectors) {
 /* optimal transfer length granularity.  This field and the optimal
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 359d50d..a20eb11 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -1003,6 +1003,9 @@ static int scsi_req_xfer(SCSICommand *cmd, SCSIDevice 
*dev, uint8_t *buf)
 case WRITE_VERIFY_16:
 cmd->xfer *= dev->blocksize;
 break;
+case COMPARE_AND_WRITE:
+cmd->xfer *= 2 * dev->blocksize;
+break;
 case READ_6:
 case READ_REVERSE:
 /* length 0 means 256 blocks */
@@ -1222,6 +1225,7 @@ static void scsi_cmd_xfer_mode(SCSICommand *cmd)
 case PERSISTENT_RESERVE_OUT:
 case MAINTENANCE_OUT:
 case SET_WINDOW:
+case COMPARE_AND_WRITE:
 case SCAN:
 /* SCAN conflicts with START_STOP.  START_STOP has cmd->xfer set to 0 
for
  * non-scanner devices, so we only get here for SCAN and not for 
START_STOP.
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 07fb5eb..f9a0267 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -478,6 +478,9 @@ static bool scsi_handle_rw_error(SCSIDiskReq *r, int error, 
bool acct_failed)
 case ENOSPC:
 scsi_check_condition(r, SENSE_CODE(SPACE_ALLOC_FAILED));
 break;
+case EILSEQ:
+scsi_check_condition(r, SENSE_CODE(MISCOMPARE_DURING_VERIFY));
+break;
 default:
 scsi_check_condition(r, SENSE_CODE(IO_ERROR));
 break;
@@ -1825,6 +1828,84 @@ static void scsi_disk_emulate_write_same(SCSIDiskReq *r, 
uint8_t *inbuf)
scsi_write_same_complete, data);
 }
 
+typedef struct CompareAndWriteCBData {
+SCSIDiskReq *r;
+int64_t sector;
+int nb_sectors;
+QEMUIOVector qiov;
+struct iovec iov;
+} CompareAndWriteCBData;
+
+static void scsi_compare_and_write_complete(void *opaque, int ret)
+{
+CompareAndWriteCBData *data = opaque;
+SCSIDiskReq *r = data->r;
+SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
+
+assert(r->req.aiocb != NULL);
+r->req.aiocb = NULL;
+aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
+if (scsi_disk_req_check_error(r, ret, true)) {
+goto done;
+}
+
+block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct);
+scsi_req_complete(&r->req, GOOD);
+
+done:
+scsi_req_unref(&r->req);
+qemu_vfree(data->iov.iov_base);
+g_free(data);
+aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
+}
+
+static void scsi_disk_emulate_compare_and_write(SCSIDiskReq *r, uint8_t *inbuf)
+{
+SCSIRequest *req = &r->req;
+SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, req->dev);
+uint32_t nb_sectors = scsi_data_cdb_xfer(r->req.cmd.buf);
+CompareAndWriteCBData *data;
+uint8_t *buf;
+int i;
+
+if (nb_sectors > MAX_COMPARE_AND_WRITE_LENGTH) {
+scsi_check_condition(r, SENSE_CODE(INVALID_FIELD));
+return;
+}
+
+if (blk_is_read_only(s->qdev.conf.blk)) {
+scsi_check_condition(r, SENSE_CODE(WRITE_PROTECTED));
+return;
+}
+
+if (r->req.cmd.lba > s->qdev.max_lba ||
+!check_lba_range(s, r->req.cmd.lba, nb_sectors)) {
+scsi_check_condition(r, SENSE_CODE(LBA_OUT_OF_RANGE));
+return;
+}
+
+data = g_new0(CompareAndWriteCBData, 1);
+data->r = r;
+data->sector = r->req.cmd.lba * (s->qdev.blocksize / 512);
+data->nb_sectors = r->req.cmd.xfer * (s->qdev.blocksize / 512);
+data->iov.iov_len = data->nb_sectors;
+data->iov.iov_base = buf = blk_blockalign(s->qdev.conf.blk,
+  data->iov.iov_len);
+qemu_iovec_init_external(&data->qiov, &data->iov, 1);
+
+for (i = 0; i < data->iov.iov_len; i += s->qdev.blocksize) {
+memcpy(&buf[i], &inbuf[i], s->qdev.blocksize);
+}
+
+scsi_req_ref(&r->req);
+block_acct_start(

[PATCH 0/4 V2] SCSI COMPARE_AND_WRITE command support

2019-11-08 Thread Yaowei Bai
Recently ceph/librbd added several interfaces to handle some SCSI commands like
COMPARE_AND_WRITE directly. However, they are only used in special
scenarios, i.e. iSCSI. That involves more software components, which makes
the IO path longer and could bring more potential issues. Actually, we're
maintaining several ceph clusters that use the iSCSI protocol, which,
I have to say, is not an easy job.

So supporting COMPARE_AND_WRITE in scsi-disk would be a reasonable solution,
and it is easy to implement with the help of the SCSI handlers in librbd; it
leaves the iSCSI stack out of the picture and makes the IO path shorter.

This patchset implements it by reusing the blk_aio_pwritev interface and
introducing a new BDRV_REQ_COMPARE_AND_WRITE element into BdrvRequestFlags
to indicate a COMPARE_AND_WRITE request, rather than adding a new interface
into block-backend.c.

The FUA support is implemented in blk_aio_pwritev's callback function,
similar to its emulation in the scsi_write_do_fua function; going through
BDRV_REQ_FUA might be another doable way.

This patchset was tested with the method described in sg_compare_and_write.txt
from sg3_utils. Link below:

https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_compare_and_write.txt
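
For anyone reproducing the test: one way to exercise the new path end to end
is to expose an rbd-backed scsi-hd to a guest and run sg_compare_and_write
against it from inside, roughly as follows (only an illustrative sketch; the
pool/image names are made up and not part of this series):

    qemu-system-x86_64 -M q35 -m 2G \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=rbd:testpool/caw-image,format=raw,if=none,id=drive0,cache=none \
        -device scsi-hd,drive=drive0,bus=scsi0.0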

v1->v2: fix checkpatch script complaints and cleanup

Yaowei Bai (4):
  block: add SCSI COMPARE_AND_WRITE support
  block/rbd: implement bdrv_aio_compare_and_write interface
  hw/scsi: add SCSI COMPARE_AND_WRITE support
  scsi-disk: add FUA support for COMPARE_AND_WRITE

 block/io.c  | 20 +
 block/raw-format.c  |  3 +-
 block/rbd.c | 45 -
 hw/scsi/emulation.c |  1 +
 hw/scsi/scsi-bus.c  |  4 ++
 hw/scsi/scsi-disk.c | 98 +
 hw/scsi/trace-events|  1 +
 include/block/block.h   |  5 ++-
 include/block/block_int.h   |  3 ++
 include/hw/scsi/emulation.h |  3 ++
 include/scsi/utils.h|  2 +
 scsi/utils.c|  5 +++
 12 files changed, 185 insertions(+), 5 deletions(-)

-- 
1.8.3.1






Re: QEMU HTML documentation now on qemu.org

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 09:41:30AM +0100, Stefan Hajnoczi wrote:
> On Thu, Nov 07, 2019 at 04:01:42PM +, Daniel P. Berrangé wrote:
> > On Thu, Nov 07, 2019 at 04:44:34PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Nov 7, 2019 at 11:07 AM Daniel P. Berrangé  
> > > wrote:
> > > >
> > > > On Wed, Nov 06, 2019 at 05:19:28PM +0100, Stefan Hajnoczi wrote:
> > > > > Hi,
> > > > > You can now access the latest QEMU HTML documentation built from
> > > > > qemu.git/master nightly at:
> > > > >
> > > > >   https://wiki.qemu.org/docs/qemu-doc.html
> > > > >   https://wiki.qemu.org/docs/qemu-qmp-ref.html
> > > > >   https://wiki.qemu.org/docs/qemu-ga-ref.html
> > > > >   ...as well as interop/ and specs/
> > > > >
> > > > > Feel free to link to the documentation from the QEMU website and/or
> > > > > wiki!
> > > >
> > > > What's the reason for putting on wiki.qemu.org URL ? It feels like
> > > > having it under www.qemu.org would be a more natural home, especially
> > > > if we can then make it pick up the jekyll theme around the pages.
> > > >
> > > > Ideally we should publish the docs under versioned URL when we
> > > > make a release. eg  /docs/latest/  for current GIT master
> > > > which I presume the above is tracking, and then a /docs/$VERSION/...
> > > > for each major release we cut.
> > > >
> > > > That way users can get an accurate view of features in the QEMU
> > > > they are actually using.
> > > 
> > > Versioned release docs should be generated during the release process.
> > > I have CCed Mike Roth.  That way the docs are available as soon as the
> > > release drops.  This container image only runs once a day and would
> > > leave a window when users cannot access the docs yet.
> > > 
> > > Moving from wiki.qemu.org should be possible.  How does the jekyll
> > > theme you mentioned work?
> > 
> > IIUC, when there's a push to the qemu-web.git repo, some git hook (?)
> > runs on the server which invokes jekyll to build the content, and
> > then publish it to the webroot.
> > 
> > To integrate these docs into that we need something along the lines
> > of:
> > 
> >   1. Generate the HTML files as you do now
> >   2. Copy them into the qemu-web.git in a /docs/ subdir
> >   3. Prepend a magic header to make jeykll process the file
> > 
> >  ---
> >  permalink: /docs/qemu-doc
> >  ---
> > 
> >   4. Trigger the jekyll builder to refresh the generated docs
> >   5. Publish the docs to the webroot
> > 
> > You can see what I did here  as an example where I simply committed
> > the generated docs to qemu-web.git:
> > 
> >   https://www.mail-archive.com/qemu-devel@nongnu.org/msg578110.html
> > 
> > If we're not storing the generated docs in git, then when
> > pushing to qemu-web.git we need to ensure we preserve the
> > extra /docs dir content in some manner.
> 
> For qemu.git/master the built docs might change every day.  Committing
> them to qemu-web.git seems like overkill.  I'll send a documentation.md
> patch for qemu-web.git instead that simply links to
> wiki.qemu.org/docs/.

Yeah, to be clear I wasn't suggesting committing them to qemu-web.git.
Really we just need to put the generated .html files into some scratch
directory on the web server where the qemu-web.git jekyll build can
automatically find them & process them in the same way it does for
content that is committed.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
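
To make steps 2 and 3 quoted above concrete, the transformation amounts to
something like the following (a minimal sketch; the build/ and qemu-web/
paths are placeholders, not an actual repository layout):

    # Copy the generated HTML into qemu-web's docs/ subdirectory and
    # prepend the Jekyll front matter so it is processed like any
    # other committed page.
    for f in build/docs/*.html; do
        name=$(basename "$f" .html)
        {
            printf '%s\n' '---' "permalink: /docs/$name" '---'
            cat "$f"
        } > "qemu-web/docs/$name.html"
    done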




Re: [PATCH 00/55] Patch Round-up for stable 4.1.1, freeze on 2019-11-12

2019-11-08 Thread Max Reitz
On 05.11.19 21:51, Michael Roth wrote:
> Hi everyone,
> 
> The following new patches are queued for QEMU stable v4.1.1:
> 
>   https://github.com/mdroth/qemu/commits/stable-4.1-staging
> 
> The release is tentatively planned for 2019-11-14:
> 
>   https://wiki.qemu.org/Planning/4.1
> 
> Please note that the original release date was planned for 2019-11-21,
> but was moved up to address a number of qcow2 corruption issues:
> 
>   https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg07144.html
> 
> Fixes for the XFS issues noted in the thread are still pending, but will
> hopefully be qemu.git master in time for 4.1.1 freeze and the
> currently-scheduled release date for 4.2.0-rc1.
> 
> The list of still-pending patchsets being tracked for inclusion are:
> 
>   qcow2: Fix data corruption on XFS
> https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg00073.html
> (PULL pending)
>   qcow2: Fix QCOW2_COMPRESSED_SECTOR_MASK
> https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg07718.html
>   qcow2-bitmap: Fix uint64_t left-shift overflow
> https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg07989.html

Thanks for tracking these, all of these are in master now.  (So they
should be in time.)

Max





Re: Should QEMU's configure script check for bzip2 ?

2019-11-08 Thread Daniel P . Berrangé
On Thu, Nov 07, 2019 at 08:43:27PM +0100, Thomas Huth wrote:
> 
>  Hi,
> 
> I just tried to compile QEMU on a freshly installed system. "configure"
> finished without problems, but during "make" I hit this error:
> 
>   BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
> /bin/sh: bzip2: command not found
> make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
> make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
> make: *** Waiting for unfinished jobs
> 
> Sure, it's easy to fix, but maybe "configure" should already check for the
> availability of "bzip2", so that we then either skip the installation of the
> edk2 images if "bzip2" is not available, or bail out during "configure"
> already?

The general rule is that if we run a binary we should check for it upfront
so users immediately see any missing pre-requisites, rather than wasting
30 minutes waiting for QEMU to build & then fail.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
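
For illustration, the check being asked for is only a few lines in configure;
a minimal sketch (the error wording is made up, and whether to skip the edk2
images instead of bailing out is a separate decision):

    # Fail configure early instead of failing half an hour into the build.
    if ! command -v bzip2 >/dev/null 2>&1; then
        echo "ERROR: bzip2 is required to decompress the bundled edk2 firmware images"
        exit 1
    fi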




Re: [PATCH v1 2/4] virtio: make seg_max virtqueue size dependent

2019-11-08 Thread Michael S. Tsirkin
On Fri, Nov 08, 2019 at 07:43:22AM +, Denis Plotnikov wrote:
> The 1st patch from the series seems to be useless. The patch extending 
> queue length by adding machine type may break vm-s which use seabios 
> with max queue size = 128.
> 
> Looks like only this patch doesn't break anything and helps to express 
> queue size and seg max dependency (the specification constraint) 
> explicitly. So, I would like to re-send this patch as a standalone one 
> and send other patches including the test later, when we all agree on 
> how exactly to deal with issues posted in the thread.

OK, and I think we should make it machine type dependent.

> Any objections are welcome.
> 
> Denis
> 
> On 06.11.2019 14:54, Michael S. Tsirkin wrote:
> > On Wed, Nov 06, 2019 at 10:07:02AM +, Denis Lunev wrote:
> >> On 11/5/19 9:51 PM, Michael S. Tsirkin wrote:
> >>> On Tue, Nov 05, 2019 at 07:11:03PM +0300, Denis Plotnikov wrote:
>  seg_max has a restriction to be less or equal to virtqueue size
>  according to Virtio 1.0 specification
> 
>  Although seg_max can't be set directly, it's worth to express this
>  dependancy directly in the code for sanity purpose.
> 
>  Signed-off-by: Denis Plotnikov 
> >>> This is guest visible so needs to be machine type dependent, right?
> >> we have discussed this verbally with Stefan and think that
> >> there is no need to add that to the machine type as:
> >>
> >> - original default was 126, which matches 128 as queue
> >>    length in old machine types
> >> - queue length > 128 is not observed in the field as
> >>    SeaBios has quirk that asserts
> > Well that's just the SeaBios virtio driver. Not everyone's using that to
> > drive their devices.
> >
> >> - if queue length will be set to something < 128 - linux
> >>    guest will crash
> > Again that's just one guest driver. Not everyone is using that either.
> >
> >
> >> If we really need to preserve original __buggy__ behavior -
> >> we can add boolean property, pls let us know.
> >>
> >> Den
> > Looks like some drivers are buggy but I'm not sure it's
> > the same as saying the behavior is buggy.
> > So yes, I'd say it's preferable to be compatible.
> >
> >
>  ---
>    hw/block/virtio-blk.c | 2 +-
>    hw/scsi/virtio-scsi.c | 2 +-
>    2 files changed, 2 insertions(+), 2 deletions(-)
> 
>  diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>  index 06e57a4d39..21530304cf 100644
>  --- a/hw/block/virtio-blk.c
>  +++ b/hw/block/virtio-blk.c
>  @@ -903,7 +903,7 @@ static void virtio_blk_update_config(VirtIODevice 
>  *vdev, uint8_t *config)
>    blk_get_geometry(s->blk, &capacity);
>    memset(&blkcfg, 0, sizeof(blkcfg));
>    virtio_stq_p(vdev, &blkcfg.capacity, capacity);
>  -virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
>  +virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.queue_size - 2);
>    virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
>    virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
>    virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / 
>  blk_size);
>  diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
>  index 839f120256..f7e5533cd5 100644
>  --- a/hw/scsi/virtio-scsi.c
>  +++ b/hw/scsi/virtio-scsi.c
>  @@ -650,7 +650,7 @@ static void virtio_scsi_get_config(VirtIODevice 
>  *vdev,
>    VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
>    
>    virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
>  -virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
>  +virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.virtqueue_size - 2);
>    virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
>    virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
>    virtio_stl_p(vdev, &scsiconf->event_info_size, 
>  sizeof(VirtIOSCSIEvent));
>  -- 
>  2.17.0
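
To make the machine-type point above concrete: the usual pattern is to gate
the new default behind a device property and pin it off for older machine
types with a compat entry, roughly like this (the property and field names
are assumptions for illustration, not the actual patch):

    /* Let new machine types derive seg_max from the queue size. */
    static Property virtio_blk_properties[] = {
        /* ... existing properties ... */
        DEFINE_PROP_BOOL("seg-max-adjust", VirtIOBlock,
                         conf.seg_max_adjust, true),
        DEFINE_PROP_END_OF_LIST(),
    };

    /* Machine types up to 4.1 keep the historical fixed 128 - 2 value. */
    static GlobalProperty hw_compat_4_1[] = {
        { "virtio-blk-device",  "seg-max-adjust", "off" },
        { "virtio-scsi-device", "seg-max-adjust", "off" },
    };

virtio_blk_update_config() would then report either s->conf.queue_size - 2 or
the old constant, depending on that flag.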



Re: [PATCH 0/3] Some memory leak fixes

2019-11-08 Thread Michael S. Tsirkin
On Thu, Nov 07, 2019 at 11:27:28PM +0400, Marc-André Lureau wrote:
> Hi,
> 
> Another bunch of fixes spotted by ASAN when running check-qtest-x86_64.

So who's merging this?

> Marc-André Lureau (3):
>   virtio-input: fix memory leak on unrealize
>   qtest: fix qtest_qmp_device_add leak
>   cpu-plug-test: fix leaks
> 
>  hw/input/virtio-input.c | 3 +++
>  tests/cpu-plug-test.c   | 2 ++
>  tests/libqtest.c| 1 +
>  3 files changed, 6 insertions(+)
> 
> -- 
> 2.24.0.rc0.20.gd81542e6f3



Re: [PATCH 1/3] virtio-input: fix memory leak on unrealize

2019-11-08 Thread Michael S. Tsirkin
On Thu, Nov 07, 2019 at 11:27:29PM +0400, Marc-André Lureau wrote:
> Spotted by ASAN + minor stylistic change.
> 
> Signed-off-by: Marc-André Lureau 

Reviewed-by: Michael S. Tsirkin 

> ---
>  hw/input/virtio-input.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/hw/input/virtio-input.c b/hw/input/virtio-input.c
> index 51617a5885..ec54e46ad6 100644
> --- a/hw/input/virtio-input.c
> +++ b/hw/input/virtio-input.c
> @@ -275,6 +275,7 @@ static void virtio_input_finalize(Object *obj)
>  
>  g_free(vinput->queue);
>  }
> +
>  static void virtio_input_device_unrealize(DeviceState *dev, Error **errp)
>  {
>  VirtIOInputClass *vic = VIRTIO_INPUT_GET_CLASS(dev);
> @@ -288,6 +289,8 @@ static void virtio_input_device_unrealize(DeviceState 
> *dev, Error **errp)
>  return;
>  }
>  }
> +virtio_del_queue(vdev, 0);
> +virtio_del_queue(vdev, 1);
>  virtio_cleanup(vdev);
>  }
>  
> -- 
> 2.24.0.rc0.20.gd81542e6f3



Re: [PATCH 2/3] qtest: fix qtest_qmp_device_add leak

2019-11-08 Thread Thomas Huth

On 07/11/2019 20.27, Marc-André Lureau wrote:

Spotted by ASAN.

Signed-off-by: Marc-André Lureau 
---
  tests/libqtest.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/tests/libqtest.c b/tests/libqtest.c
index 3706bccd8d..91e9cb220c 100644
--- a/tests/libqtest.c
+++ b/tests/libqtest.c
@@ -1274,6 +1274,7 @@ void qtest_qmp_device_add(QTestState *qts, const char 
*driver, const char *id,
  qdict_put_str(args, "id", id);
  
  qtest_qmp_device_add_qdict(qts, driver, args);

+qobject_unref(args);
  }
  
  static void device_deleted_cb(void *opaque, const char *name, QDict *data)




Fixes: b4510bb4109f5f ("tests: add qtest_qmp_device_add_qdict() helper")

Reviewed-by: Thomas Huth 

I can queue this via the qtest tree if nobody else wants to take it.




Re: [PATCH v3 00/22] iotests: Allow ./check -o data_file

2019-11-08 Thread Max Reitz
On 07.11.19 22:10, no-re...@patchew.org wrote:
> Patchew URL: 
> https://patchew.org/QEMU/20191107163708.833192-1-mre...@redhat.com/
> 
> 
> 
> Hi,
> 
> This series seems to have some coding style problems. See output below for
> more information:
> 
> Subject: [PATCH v3 00/22] iotests: Allow ./check -o data_file
> Type: series
> Message-id: 20191107163708.833192-1-mre...@redhat.com
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> git rev-parse base > /dev/null || exit 0
> git config --local diff.renamelimit 0
> git config --local diff.renames True
> git config --local diff.algorithm histogram
> ./scripts/checkpatch.pl --mailback base..
> === TEST SCRIPT END ===
> 
> Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
> Switched to a new branch 'test'
> 01a2839 iotests: Allow check -o data_file
> 31bc07b iotests: Disable data_file where it cannot be used
> 98a2575 iotests: Make 198 work with data_file
> e8f406f iotests: Make 137 work with data_file
> 46cc09d iotests: Make 110 work with data_file
> 1f7b2e5 iotests: Make 091 work with data_file
> 401d3ef iotests: Avoid cp/mv of test images
> a3746a2 iotests: Use _rm_test_img for deleting test images
> 37a01c8 iotests: Avoid qemu-img create
> a05c5ec iotests: Drop IMGOPTS use in 267
> 44aac70 iotests: Replace IMGOPTS='' by --no-opts
> cb9ee70 iotests: Replace IMGOPTS= by -o
> 3c2893f iotests: Inject space into -ocompat=0.10 in 051
> 8b5f9d4 iotests: Add -o and --no-opts to _make_test_img
> 239f933 iotests: Let _make_test_img parse its parameters
> 405ddde iotests: Drop compat=1.1 in 050
> 527ae22 iotests: Replace IMGOPTS by _unsupported_imgopts
> 77f688d iotests: Filter refcount_order in 036
> 3f29d5f iotests: Add _filter_json_filename
> 58975a8 iotests/qcow2.py: Split feature fields into bits
> 7ea641e iotests/qcow2.py: Add dump-header-exts
> 469af5e iotests: s/qocw2/qcow2/
> 
> === OUTPUT BEGIN ===
> 1/22 Checking commit 469af5ede216 (iotests: s/qocw2/qcow2/)
> 2/22 Checking commit 7ea641ec6b0a (iotests/qcow2.py: Add dump-header-exts)
> ERROR: line over 90 characters
> #33: FILE: tests/qemu-iotests/qcow2.py:237:
> +[ 'dump-header-exts', cmd_dump_header_exts, 0, 'Dump image 
> header extensions' ],

As in v1, I deliberately followed the existing style in this file and
believe it’s for the best.

Max





Re: [PATCH 0/3] Some memory leak fixes

2019-11-08 Thread Thomas Huth

On 07/11/2019 22.57, no-re...@patchew.org wrote:

Patchew URL: 
https://patchew.org/QEMU/20191107192731.17330-1-marcandre.lur...@redhat.com/

Hi,

This series failed the docker-quick@centos7 build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

   TESTcheck-unit: tests/test-thread-pool
   TESTcheck-unit: tests/test-hbitmap
**
ERROR:/tmp/qemu-test/src/tests/migration-test.c:903:wait_for_migration_fail: assertion failed: (!strcmp(status, 
"setup") || !strcmp(status, "failed") || (allow_active && !strcmp(status, 
"active")))
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/migration-test.c:903:wait_for_migration_fail: assertion failed: 
(!strcmp(status, "setup") || !strcmp(status, "failed") || (allow_active && 
!strcmp(status, "active")))


I assume this is unrelated to your patches and a generic Patchew problem 
instead?


 Thomas




[PATCH 0/3] docs: build an index page for the HTML docs

2019-11-08 Thread Stefan Hajnoczi
This patch series switches to a single invocation of sphinx that builds
docs/index.rst.  This global index page links to all HTML documentation
produced by "make html", even the non-rST manuals.

This index page will be used as the entrypoint for documentation published on
the qemu.org website.

Stefan Hajnoczi (3):
  docs: fix rst syntax errors in unbuilt docs
  docs: build a global index page
  docs: install CSS and Javascript files for HTML docs

 Makefile | 14 +-
 docs/arm-cpu-features.rst|  6 +++---
 docs/index.rst   | 27 ++-
 docs/virtio-net-failover.rst |  4 ++--
 docs/virtio-pmem.rst | 19 ++-
 5 files changed, 46 insertions(+), 24 deletions(-)

-- 
2.23.0




[PATCH 2/3] docs: build a global index page

2019-11-08 Thread Stefan Hajnoczi
Build docs/ in a single sphinx invocation instead of treating
docs/{devel,interop,specs} separately.  This allows us to build a global
index page that links to documentation across the different manuals.

Some documentation is built outside of sphinx and is not formatted as
reStructuredText.  Link directly to the .html files for the time being.
If they are converted to .rst files in the future they can be included
more elegantly.

Sphinx wants to build all .rst files and complains if they are not
listed in the table of contents.  We have not yet reviewed and
categorized some of our .rst files.  Hide these files so they are always
built (and syntax-checked from now on!) but not visible in the table of
contents.

Signed-off-by: Stefan Hajnoczi 
---
 Makefile   | 13 -
 docs/index.rst | 27 ++-
 2 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/Makefile b/Makefile
index aa9d1a42aa..9487a06bed 100644
--- a/Makefile
+++ b/Makefile
@@ -815,6 +815,7 @@ endef
 install-sphinxdocs: sphinxdocs
$(call install-manual,interop)
$(call install-manual,specs)
+   $(INSTALL_DATA) "$(MANUAL_BUILDDIR)/index.html" 
"$(DESTDIR)$(qemu_docdir)/index.html"
 
 install-doc: $(DOCS) install-sphinxdocs
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
@@ -1001,7 +1002,7 @@ docs/version.texi: $(SRC_PATH)/VERSION config-host.mak
 # and handles "don't rebuild things unless necessary" itself.
 # The '.doctrees' files are cached information to speed this up.
 .PHONY: sphinxdocs
-sphinxdocs: $(MANUAL_BUILDDIR)/devel/index.html 
$(MANUAL_BUILDDIR)/interop/index.html $(MANUAL_BUILDDIR)/specs/index.html
+sphinxdocs: $(MANUAL_BUILDDIR)/index.html
 
 # Canned command to build a single manual
 # Arguments: $1 = manual name, $2 = Sphinx builder ('html' or 'man')
@@ -1012,14 +1013,8 @@ build-manual = $(call 
quiet-command,CONFDIR="$(qemu_confdir)" sphinx-build $(if
 # We assume all RST files in the manual's directory are used in it
 manual-deps = $(wildcard $(SRC_PATH)/docs/$1/*.rst) 
$(SRC_PATH)/docs/$1/conf.py $(SRC_PATH)/docs/conf.py
 
-$(MANUAL_BUILDDIR)/devel/index.html: $(call manual-deps,devel)
-   $(call build-manual,devel,html)
-
-$(MANUAL_BUILDDIR)/interop/index.html: $(call manual-deps,interop)
-   $(call build-manual,interop,html)
-
-$(MANUAL_BUILDDIR)/specs/index.html: $(call manual-deps,specs)
-   $(call build-manual,specs,html)
+$(MANUAL_BUILDDIR)/index.html: $(call manual-deps,)
+   $(call build-manual,,html)
 
 $(MANUAL_BUILDDIR)/interop/qemu-ga.8: $(call manual-deps,interop)
$(call build-manual,interop,man)
diff --git a/docs/index.rst b/docs/index.rst
index baa5791c17..9d6d239561 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -6,11 +6,36 @@
 Welcome to QEMU's documentation!
 
 
+.. Non-rst documentation
+
+`QEMU User Documentation `_
+
+`QEMU QMP Reference Manual `_
+
+`QEMU Guest Agent Protocol Reference `_
+
 .. toctree::
:maxdepth: 2
:caption: Contents:
 
interop/index
-   devel/index
specs/index
 
+.. The QEMU Developer's Guide is not included in the main documentation because
+   users don't need it.
+.. toctree::
+   :hidden:
+
+   devel/index
+
+.. Hidden documents that still need to be reviewed and moved to the appropriate
+   section of the documentation.
+.. toctree::
+   :hidden:
+
+   arm-cpu-features
+   cpu-hotplug
+   microvm
+   pr-manager
+   virtio-net-failover
+   virtio-pmem
-- 
2.23.0




[PATCH 1/3] docs: fix rst syntax errors in unbuilt docs

2019-11-08 Thread Stefan Hajnoczi
The .rst files outside docs/{devel,interop,specs} aren't built yet and
therefore a few syntax errors have slipped through.  Fix them.

Signed-off-by: Stefan Hajnoczi 
---
 docs/arm-cpu-features.rst|  6 +++---
 docs/virtio-net-failover.rst |  4 ++--
 docs/virtio-pmem.rst | 19 ++-
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
index 1b367e22e1..9b537a75e6 100644
--- a/docs/arm-cpu-features.rst
+++ b/docs/arm-cpu-features.rst
@@ -41,9 +41,9 @@ CPU type is possible with the `query-cpu-model-expansion` QMP 
command.
 Below are some examples where `scripts/qmp/qmp-shell` (see the top comment
 block in the script for usage) is used to issue the QMP commands.
 
-(1) Determine which CPU features are available for the `max` CPU type
-(Note, we started QEMU with qemu-system-aarch64, so `max` is
- implementing the ARMv8-A reference manual in this case)::
+1. Determine which CPU features are available for the `max` CPU type
+   (Note, we started QEMU with qemu-system-aarch64, so `max` is
+   implementing the ARMv8-A reference manual in this case)::
 
   (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
   { "return": {
diff --git a/docs/virtio-net-failover.rst b/docs/virtio-net-failover.rst
index 22f64c7bc8..6002dc5d96 100644
--- a/docs/virtio-net-failover.rst
+++ b/docs/virtio-net-failover.rst
@@ -1,6 +1,6 @@
-
+==
 QEMU virtio-net standby (net_failover)
-
+==
 
 This document explains the setup and usage of virtio-net standby feature which
 is used to create a net_failover pair of devices.
diff --git a/docs/virtio-pmem.rst b/docs/virtio-pmem.rst
index e77881b26f..4bf5d00443 100644
--- a/docs/virtio-pmem.rst
+++ b/docs/virtio-pmem.rst
@@ -27,17 +27,18 @@ virtio pmem usage
 -
 
   A virtio pmem device backed by a memory-backend-file can be created on
-  the QEMU command line as in the following example:
+  the QEMU command line as in the following example::
 
-  -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
-  -device virtio-pmem-pci,memdev=mem1,id=nv1
+-object 
memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
+-device virtio-pmem-pci,memdev=mem1,id=nv1
 
-   where:
-   - "object memory-backend-file,id=mem1,share,mem-path=, size="
- creates a backend file with the specified size.
+  where:
 
-   - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
- pci device whose storage is provided by above memory backend device.
+  - "object memory-backend-file,id=mem1,share,mem-path=, size="
+creates a backend file with the specified size.
+
+  - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
+pci device whose storage is provided by above memory backend device.
 
   Multiple virtio pmem devices can be created if multiple pairs of "-object"
   and "-device" are provided.
@@ -50,7 +51,7 @@ memory backing has to be added via 'object_add'; afterwards, 
the virtio
 pmem device can be added via 'device_add'.
 
 For example, the following commands add another 4GB virtio pmem device to
-the guest:
+the guest::
 
  (qemu) object_add 
memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
  (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
-- 
2.23.0




[PATCH 3/3] docs: install CSS and Javascript files for HTML docs

2019-11-08 Thread Stefan Hajnoczi
Install the sphinx CSS/Javascript support files needed by the HTML
documentation.  The documentation looks ugly without this.

The previous patch switched to only invoking sphinx once so there is
only one _static/ directory that needs to be installed across all manual
sections.

Signed-off-by: Stefan Hajnoczi 
---
 Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Makefile b/Makefile
index 9487a06bed..dd60787d4c 100644
--- a/Makefile
+++ b/Makefile
@@ -813,6 +813,7 @@ endef
 # for QEMU developers, and not interesting to our users.
 .PHONY: install-sphinxdocs
 install-sphinxdocs: sphinxdocs
+   $(call install-manual,_static)
$(call install-manual,interop)
$(call install-manual,specs)
$(INSTALL_DATA) "$(MANUAL_BUILDDIR)/index.html" 
"$(DESTDIR)$(qemu_docdir)/index.html"
-- 
2.23.0




Re: [PATCH] iotests: Fix "no qualified output" error path

2019-11-08 Thread Max Reitz
On 08.11.19 09:57, Kevin Wolf wrote:
> The variable for error messages to be displayed is $results, not
> $reason. Fix 'check' to print the "no qualified output" error message
> again instead of having a failure without any message telling the user
> why it failed.
> 
> Signed-off-by: Kevin Wolf 
> ---
>  tests/qemu-iotests/check | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Max Reitz 





[qemu-web PATCH] documentation: link to nightly documentation

2019-11-08 Thread Stefan Hajnoczi
Link to the documentation built from qemu.git/master once a day.

Signed-off-by: Stefan Hajnoczi 
---
This depends on qemu.git "[PATCH 0/3] docs: build an index page for the
HTML docs".  We need that in order to build index.html.

 documentation.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/documentation.md b/documentation.md
index f4ef9f4..662750c 100644
--- a/documentation.md
+++ b/documentation.md
@@ -3,6 +3,8 @@ title: QEMU documentation
 permalink: /documentation/
 ---
 
+The latest development version documentation is available 
[here](https://wiki.qemu.org/docs/index.html).
+
 The [QEMU user manual](https://qemu.weilnetz.de/qemu-doc.html) can be read 
online, courtesy of Stefan Weil.
 More documentation is found in the https://git.qemu.org/?p=qemu.git;a=tree;f=docs;hb=master";>`docs`
 directory of the QEMU git tree.
-- 
2.23.0




Re: [PATCH v2 02/11] qcrypto-luks: extend the create options for upcoming encryption key management

2019-11-08 Thread Maxim Levitsky
On Thu, 2019-10-10 at 15:44 +0200, Kevin Wolf wrote:
> Am 13.09.2019 um 00:30 hat Maxim Levitsky geschrieben:
> > Now you can specify which slot to put the encryption key to
> > Plus add 'active' option which will let  user erase the key secret
> > instead of adding it.
> > Check that active=true it when creating.
> > 
> > Signed-off-by: Maxim Levitsky 
> > diff --git a/qapi/crypto.json b/qapi/crypto.json
> > index b2a4cff683..9b83a70634 100644
> > --- a/qapi/crypto.json
> > +++ b/qapi/crypto.json
> > @@ -190,6 +190,20 @@
> >  #  Currently defaults to 'sha256'
> >  # @hash-alg: the master key hash algorithm
> >  #Currently defaults to 'sha256'
> > +#
> > +# @active: Should the new secret be added (true) or erased (false)
> > +#  (amend only, since 4.2)
> > +#
> > +# @slot: The slot in which to put/erase the secret
> > +#if not given, will select first free slot for secret addtion
> > +#and erase all matching keyslots for erase. except last one
> > +#(optional, since 4.2)
> > +#
> > +# @unlock-secret: The secret to use to unlock the image
> > +#If not given, will use the secret that was used
> > +#when opening the image.
> > +#(optional, for amend only, since 4.2)
> > +#
> >  # @iter-time: number of milliseconds to spend in
> >  # PBKDF passphrase processing. Currently defaults
> >  # to 2000. (since 2.8)
> 
> This approach doesn't look right to me. BlockdevCreateOptions should
> describe the state of the image after the operation. You're describing
> an update instead (and in a way that doesn't allow you to change
> everything that you may want to change, so that you need to call the
> operation multiple times).
> 
> I imagined the syntax of a blockdev-amend QMP command similar to
> x-blockdev-reopen: Describe the full set of options that you want to
> have in effect after the operation; if you don't want to change some
> option, you just specify it again with its old value.

This approach is a compromise that tries to create a more or less usable interface.
In particular, we (Daniel and I) wanted the following to work:

1. The ability to add a new password to an empty keyslot and then remove the
old password. This is probably the most common operation, and it won't
require the caller to know anything about the keyslots.

2. Allow the user to not know the passwords of some keyslots.
For example, if I want to add a new keyslot, I might not know the passwords
of some of the other keyslots. Specifying all the active keyslots on each
amend would force the user to know all the passwords (you can't 'extract'
a password by reading a keyslot, since only a hash of it is stored there,
for security reasons).


Thus the amend interface allows you either to add a keyslot (a specific one,
or the first free one), or to remove a keyslot (again, either a specific one,
or any that matches a given password).
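
Expressed with the fields from the schema quoted above, the two cases would
be driven roughly like this (these are just the LUKS amend 'options'
payloads; the surrounding amend command, secret IDs and slot number are
illustrative assumptions, not a finalized interface):

    # Case 1: add a new password into the first free keyslot,
    # unlocking the image with the old secret:
    { "driver": "luks", "active": true,
      "key-secret": "sec-new", "unlock-secret": "sec-old" }

    # Case 2: erase without knowing every other password, either a
    # specific slot or whichever slots match a given secret:
    { "driver": "luks", "active": false, "slot": 2 }
    { "driver": "luks", "active": false, "key-secret": "sec-old" }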




> 
> Specifically for luks, this probably means that you have a @slots, which
> is a list that contains at least the secret for each slot, or JSON null
> for a slot that should be left empty.
> 
> With the same approach, you don't have to make 'size' optional in later
> patches, you can just require that the current size is re-specified. And
> later, blockdev-amend could actually allow changing the size of images
> if you provide a different value.

This can be done IMHO.

> 
> Kevin


Best regards,   
Maxim Levitsky




Re: [PATCH v2] ivshmem-server: Terminate also on SIGINT

2019-11-08 Thread Jan Kiszka

On 03.08.19 15:22, Jan Kiszka wrote:

From: Jan Kiszka 

Allows shutting down a foreground session via Ctrl-C.

Signed-off-by: Jan Kiszka 
---

Changes in v2:
  - adjust error message

  contrib/ivshmem-server/main.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/contrib/ivshmem-server/main.c b/contrib/ivshmem-server/main.c
index 197c79c57e..e4cd35f74c 100644
--- a/contrib/ivshmem-server/main.c
+++ b/contrib/ivshmem-server/main.c
@@ -223,8 +223,9 @@ main(int argc, char *argv[])
  sa_quit.sa_handler = ivshmem_server_quit_cb;
  sa_quit.sa_flags = 0;
  if (sigemptyset(&sa_quit.sa_mask) == -1 ||
-sigaction(SIGTERM, &sa_quit, 0) == -1) {
-perror("failed to add SIGTERM handler; sigaction");
+sigaction(SIGTERM, &sa_quit, 0) == -1 ||
+sigaction(SIGINT, &sa_quit, 0) == -1) {
+perror("failed to add signal handler; sigaction");
  goto err;
  }

--
2.16.4




...and this one for you as well, Markus?

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



[PATCH v2 1/2] qapi: add filter-node-name option to drive-mirror

2019-11-08 Thread Vladimir Sementsov-Ogievskiy
Add this option so that drive-mirror corresponds to the blockdev-mirror
command, and to make it possible to deprecate implicit filters in the next
commit.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 qapi/block-core.json | 7 +++
 blockdev.c   | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index aa97ee2641..93530d3a13 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1992,6 +1992,12 @@
 # @on-target-error: the action to take on an error on the target,
 #   default 'report' (no limitations, since this applies to
 #   a different block device than @device).
+#
+# @filter-node-name: the node name that should be assigned to the
+#filter driver that the mirror job inserts into the graph
+#above @device. If this option is not given, a node name is
+#autogenerated. (Since: 4.2)
+#
 # @unmap: Whether to try to unmap target sectors where source has
 # only zero. If true, and target unallocated sectors will read as zero,
 # target image sectors will be unmapped; otherwise, zeroes will be
@@ -2022,6 +2028,7 @@
 '*speed': 'int', '*granularity': 'uint32',
 '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
 '*on-target-error': 'BlockdevOnError',
+'*filter-node-name': 'str',
 '*unmap': 'bool', '*copy-mode': 'MirrorCopyMode',
 '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
diff --git a/blockdev.c b/blockdev.c
index 8e029e9c01..2ca614c77f 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -4008,7 +4008,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
arg->has_on_source_error, arg->on_source_error,
arg->has_on_target_error, arg->on_target_error,
arg->has_unmap, arg->unmap,
-   false, NULL,
+   arg->has_filter_node_name, arg->filter_node_name,
arg->has_copy_mode, arg->copy_mode,
arg->has_auto_finalize, arg->auto_finalize,
arg->has_auto_dismiss, arg->auto_dismiss,
-- 
2.21.0
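
For reference, with this applied the caller can pin the filter node name
directly from drive-mirror, e.g. (device, target and node name are made up
for illustration):

    { "execute": "drive-mirror",
      "arguments": { "device": "drive0",
                     "target": "/tmp/mirror.qcow2",
                     "sync": "full",
                     "filter-node-name": "mirror-top" } }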




[PATCH v2 0/2] Deprecate implicit filters

2019-11-08 Thread Vladimir Sementsov-Ogievskiy
v2:
Don't deprecate drive-backup; it is an unrelated change and will be resent
separately.
Don't deprecate drive-mirror. Instead, add filter-node-name to
drive-mirror so that it behaves like blockdev-mirror.
Fix all broken iotests.

Vladimir Sementsov-Ogievskiy (2):
  qapi: add filter-node-name option to drive-mirror
  qapi: deprecate implicit filters

 qemu-deprecated.texi   |  6 ++
 qapi/block-core.json   | 14 --
 include/block/block_int.h  | 10 +-
 blockdev.c | 12 +++-
 tests/qemu-iotests/094 |  1 +
 tests/qemu-iotests/095 |  6 --
 tests/qemu-iotests/109 |  1 +
 tests/qemu-iotests/127 |  1 +
 tests/qemu-iotests/141 |  5 -
 tests/qemu-iotests/144 |  3 ++-
 tests/qemu-iotests/156 |  1 +
 tests/qemu-iotests/161 |  7 +++
 tests/qemu-iotests/161.out |  1 +
 tests/qemu-iotests/185 |  3 +++
 tests/qemu-iotests/191 |  2 ++
 tests/qemu-iotests/229 |  1 +
 tests/qemu-iotests/247 |  8 +---
 tests/qemu-iotests/249 |  5 +++--
 tests/qemu-iotests/249.out |  2 +-
 19 files changed, 75 insertions(+), 14 deletions(-)

-- 
2.21.0




[PATCH v2 2/2] qapi: deprecate implicit filters

2019-11-08 Thread Vladimir Sementsov-Ogievskiy
To get rid of the workarounds related to implicit filters in the future,
let's deprecate them now.

The deprecation warning breaks the output of some bash iotests, so fix that
here too: in most cases, just add filter-node-name to the test.

In 161, add a FIXME and the deprecation warning to 161.out.

In 249, the test case is changed, as we don't need to test the "default"
filter node name anymore.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 qemu-deprecated.texi   |  6 ++
 qapi/block-core.json   |  9 ++---
 include/block/block_int.h  | 10 +-
 blockdev.c | 10 ++
 tests/qemu-iotests/094 |  1 +
 tests/qemu-iotests/095 |  6 --
 tests/qemu-iotests/109 |  1 +
 tests/qemu-iotests/127 |  1 +
 tests/qemu-iotests/141 |  5 -
 tests/qemu-iotests/144 |  3 ++-
 tests/qemu-iotests/156 |  1 +
 tests/qemu-iotests/161 |  7 +++
 tests/qemu-iotests/161.out |  1 +
 tests/qemu-iotests/185 |  3 +++
 tests/qemu-iotests/191 |  2 ++
 tests/qemu-iotests/229 |  1 +
 tests/qemu-iotests/247 |  8 +---
 tests/qemu-iotests/249 |  5 +++--
 tests/qemu-iotests/249.out |  2 +-
 19 files changed, 68 insertions(+), 14 deletions(-)

diff --git a/qemu-deprecated.texi b/qemu-deprecated.texi
index 296bfc93a3..c969faf55a 100644
--- a/qemu-deprecated.texi
+++ b/qemu-deprecated.texi
@@ -204,6 +204,12 @@ and accurate ``query-qmp-schema'' command.
 Character devices creating sockets in client mode should not specify
 the 'wait' field, which is only applicable to sockets in server mode
 
+@subsection implicit filters in mirror and commit (since 4.2)
+
+Mirror and commit jobs insert filter nodes, which become implicit if the user
+omitted the @option{filter-node-name} parameter. Omitting it is deprecated; always
+set it in ``blockdev-mirror'', ``drive-mirror'' and ``block-commit''.
+
 @section Human Monitor Protocol (HMP) commands
 
 @subsection The hub_id parameter of 'hostfwd_add' / 'hostfwd_remove' (since 
3.1)
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 93530d3a13..37caed775f 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1659,7 +1659,8 @@
 # @filter-node-name: the node name that should be assigned to the
 #filter driver that the commit job inserts into the graph
 #above @top. If this option is not given, a node name is
-#autogenerated. (Since: 2.9)
+#autogenerated. Omitting this option is deprecated, it will
+#be required in future. (Since: 2.9)
 #
 # @auto-finalize: When false, this job will wait in a PENDING state after it 
has
 # finished its work, waiting for @block-job-finalize before
@@ -1996,7 +1997,8 @@
 # @filter-node-name: the node name that should be assigned to the
 #filter driver that the mirror job inserts into the graph
 #above @device. If this option is not given, a node name is
-#autogenerated. (Since: 4.2)
+#autogenerated. Omitting this option is deprecated, it will
+#be required in future. (Since: 4.2)
 #
 # @unmap: Whether to try to unmap target sectors where source has
 # only zero. If true, and target unallocated sectors will read as zero,
@@ -2311,7 +2313,8 @@
 # @filter-node-name: the node name that should be assigned to the
 #filter driver that the mirror job inserts into the graph
 #above @device. If this option is not given, a node name is
-#autogenerated. (Since: 2.9)
+#autogenerated. Omitting this option is deprecated, it will
+#be required in future. (Since: 2.9)
 #
 # @copy-mode: when to copy data to the destination; defaults to 'background'
 # (Since: 3.0)
diff --git a/include/block/block_int.h b/include/block/block_int.h
index dd033d0b37..48ff3af48d 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -793,7 +793,15 @@ struct BlockDriverState {
 bool sg;/* if true, the device is a /dev/sg* */
 bool probed;/* if true, format was probed rather than specified */
 bool force_share; /* if true, always allow all shared permissions */
-bool implicit;  /* if true, this filter node was automatically inserted */
+
+/*
+ * @implicit field is deprecated, don't set it to true for new filters.
+ * If true, this filter node was automatically inserted; the user doesn't
+ * know about it and is unprepared for any effects of it. So, implicit
+ * filters are worked around and skipped in many places of the block
+ * layer code.
+ */
+bool implicit;
 
 BlockDriver *drv; /* NULL means no media */
 void *opaque;
diff --git a/blockdev.c b/blockdev.c
index 2ca614c77f..8c3a409c94 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -,6 +,11 @@ void qmp_block_commit(bool has_job_id, const char 
*job_id, const char *device,

Re: [PATCH 0/3] Some memory leak fixes

2019-11-08 Thread Laurent Vivier
On 08/11/2019 10:57, Thomas Huth wrote:
> On 07/11/2019 22.57, no-re...@patchew.org wrote:
>> Patchew URL:
>> https://patchew.org/QEMU/20191107192731.17330-1-marcandre.lur...@redhat.com/
>>
>>
>> Hi,
>>
>> This series failed the docker-quick@centos7 build test. Please find
>> the testing commands and
>> their output below. If you have Docker installed, you can probably
>> reproduce it
>> locally.
>>
>> === TEST SCRIPT BEGIN ===
>> #!/bin/bash
>> make docker-image-centos7 V=1 NETWORK=1
>> time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
>> === TEST SCRIPT END ===
>>
>>    TEST    check-unit: tests/test-thread-pool
>>    TEST    check-unit: tests/test-hbitmap
>> **
>> ERROR:/tmp/qemu-test/src/tests/migration-test.c:903:wait_for_migration_fail:
>> assertion failed: (!strcmp(status, "setup") || !strcmp(status,
>> "failed") || (allow_active && !strcmp(status, "active")))
>> ERROR - Bail out!
>> ERROR:/tmp/qemu-test/src/tests/migration-test.c:903:wait_for_migration_fail:
>> assertion failed: (!strcmp(status, "setup") || !strcmp(status,
>> "failed") || (allow_active && !strcmp(status, "active")))
> 
> I assume this is unrelated to your patches and a generic Patchew problem
> instead?

Unrelated to patchew too, but the problem has already been reported. I
think dgilbert is looking at this.

Thanks,




Re: [PATCH 3/3] docs: install CSS and Javascript files for HTML docs

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 10:59:42AM +0100, Stefan Hajnoczi wrote:
> Install the sphinx CSS/Javascript support files needed by the HTML
> documentation.  The documentation looks ugly without this.
> 
> The previous patch switched to only invoking sphinx once so there is
> only one _static/ directory that needs to be installed across all manual
> sections.
> 
> Signed-off-by: Stefan Hajnoczi 
> ---
>  Makefile | 1 +
>  1 file changed, 1 insertion(+)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 1/3] docs: fix rst syntax errors in unbuilt docs

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 10:59:40AM +0100, Stefan Hajnoczi wrote:
> The .rst files outside docs/{devel,interop,specs} aren't built yet and
> therefore a few syntax errors have slipped through.  Fix them.
> 
> Signed-off-by: Stefan Hajnoczi 
> ---
>  docs/arm-cpu-features.rst|  6 +++---
>  docs/virtio-net-failover.rst |  4 ++--
>  docs/virtio-pmem.rst | 19 ++-
>  3 files changed, 15 insertions(+), 14 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 2/3] docs: build a global index page

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 10:59:41AM +0100, Stefan Hajnoczi wrote:
> Build docs/ in a single sphinx invocation instead of treating
> docs/{devel,interop,specs} separately.  This allows us to build a global
> index page that links to documentation across the different manuals.
> 
> Some documentation is built outside of sphinx and is not formatted as
> reStructuredText.  Link directly to the .html files for the time being.
> If they are converted to .rst files in the future they can be included
> more elegantly.
> 
> Sphinx wants to build all .rst files and complains if they are not
> listed in the table of contents.  We have not yet reviewed and
> categorized some of our .rst files.  Hide these files so they are always
> built (and syntax-checked from now on!) but not visible in the table of
> contents.

Ah, nice trick.
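
For reference, the trick is Sphinx's ":hidden:" toctree option; a minimal
sketch of such a stanza in docs/index.rst (the file names below are only
illustrative, not necessarily the ones the patch hides) could look like:

  .. toctree::
     :hidden:

     arm-cpu-features
     virtio-net-failover
     virtio-pmem

Files listed under a hidden toctree are still built and syntax-checked by
sphinx, but get no entry in the rendered table of contents.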

> 
> Signed-off-by: Stefan Hajnoczi 
> ---
>  Makefile   | 13 -
>  docs/index.rst | 27 ++-
>  2 files changed, 30 insertions(+), 10 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PULL 0/1] Seabios 20191106 patches

2019-11-08 Thread Gerd Hoffmann
On Thu, Nov 07, 2019 at 11:56:03AM +, Peter Maydell wrote:
> On Wed, 6 Nov 2019 at 12:26, Gerd Hoffmann  wrote:
> >
> > The following changes since commit 36609b4fa36f0ac934874371874416f7533a5408:
> >
> >   Merge remote-tracking branch 
> > 'remotes/palmer/tags/palmer-for-master-4.2-sf1' into staging (2019-11-02 
> > 17:59:03 +)
> >
> > are available in the Git repository at:
> >
> >   git://git.kraxel.org/qemu tags/seabios-20191106-pull-request
> >
> > for you to fetch changes up to 58b16e57ded751e2e8be626124aad1d46a408a33:
> >
> >   seabios: update to pre-1.13 snapshot (2019-11-06 13:23:02 +0100)
> >
> > 
> > seabios: update to pre-1.13 snapshot
> >
> > 
> >
> > Gerd Hoffmann (1):
> >   seabios: update to pre-1.13 snapshot
> 
> Hi; this fails 'make check' on at least
> aarch64, aarch32, FreeBSD, NetBSD, s390:
> 
> ERROR:/home/linux1/qemu/tests/boot-sector.c:161:boot_sector_test:
> assertion failed (signature == SIGNATURE): (0x == 0xdead)
> ERROR - Bail out!
> ERROR:/home/linux1/qemu/tests/boot-sector.c:161:boot_sector_test:
> assertion failed (signature == SIGNATURE): (0x == 0xdead)
> PASS 9 bios-tables-test /x86_64/acpi/q35/bridge
> Aborted (core dumped)
> /home/linux1/qemu/tests/Makefile.include:916: recipe for target
> 'check-qtest-i386' failed
> make: *** [check-qtest-i386] Error 1
> 
> the x86-64 bootsector tests seem to fail similarly.

[ full quote for seabios list ]

Re-ran test on x86-64 box -> works.
Tried on aarch64 machine -> fails.

Given the arch list above this pretty much looks like it is tcg-related,
even though a quick check with "qemu -accel tcg -cdrom /some/live/iso"
(on x86_64) doesn't show any obvious problems.

Recompiled seabios with gcc 4.8 instead of gcc 8
  -> Works on both x86-64 and aarch64.
  -> I'll redo the pull request with that.

I'll go try to find the root cause next week.  On a quick glance this
looks like a bug in tcg or gcc.  In case anyone has hints what might
have caused this drop me a note.

thanks,
  Gerd




[PULL 0/1] Seabios 20191108 patches

2019-11-08 Thread Gerd Hoffmann
The following changes since commit 59015778f3ec7c01202d46c5dbeeac8ab4225c52:

  Update version for v4.2.0-rc0 release (2019-11-07 18:17:31 +)

are available in the Git repository at:

  git://git.kraxel.org/qemu tags/seabios-20191108-pull-request

for you to fetch changes up to f085bf95a6208b42f1ae01f60c76d936d9fed490:

  seabios: update to pre-1.13 snapshot (2019-11-08 11:18:26 +0100)


seabios: update to pre-1.13 snapshot (gcc 4.8.5 version).



Gerd Hoffmann (1):
  seabios: update to pre-1.13 snapshot

 pc-bios/bios-256k.bin | Bin 262144 -> 262144 bytes
 pc-bios/bios.bin  | Bin 131072 -> 131072 bytes
 pc-bios/vgabios-ati.bin   | Bin 38912 -> 39424 bytes
 pc-bios/vgabios-bochs-display.bin | Bin 27648 -> 28672 bytes
 pc-bios/vgabios-cirrus.bin| Bin 38400 -> 38912 bytes
 pc-bios/vgabios-qxl.bin   | Bin 38912 -> 39424 bytes
 pc-bios/vgabios-ramfb.bin | Bin 28160 -> 28672 bytes
 pc-bios/vgabios-stdvga.bin| Bin 38912 -> 39424 bytes
 pc-bios/vgabios-virtio.bin| Bin 38912 -> 39424 bytes
 pc-bios/vgabios-vmware.bin| Bin 38912 -> 39424 bytes
 pc-bios/vgabios.bin   | Bin 38400 -> 38912 bytes
 roms/Makefile |   2 +-
 roms/seabios  |   2 +-
 13 files changed, 2 insertions(+), 2 deletions(-)

-- 
2.18.1




Re: Should QEMU's configure script check for bzip2 ?

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 10:48 AM, Daniel P. Berrangé wrote:

On Thu, Nov 07, 2019 at 08:43:27PM +0100, Thomas Huth wrote:


  Hi,

I just tried to compile QEMU on a freshly installed system. "configure"
finished without problems, but during "make" I hit this error:

   BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
/bin/sh: bzip2: command not found
make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
make: *** Waiting for unfinished jobs

Sure, it's easy to fix, but maybe "configure" should already check for the


You found a bug :)


availability of "bzip2", so that we then either skip the installation of the
edk2 images if "bzip2" is not available, or bail out during "configure"
already?


The general rule is that if we run a binary we should check for it upfront
so users immediately see any missing pre-requisites, rather than wasting
30 minutes waiting for QEMU to build & then fail.


Yes, I'll send a fix.

Regards,

Phil.



[PATCH] configure: Check bzip2 is available

2019-11-08 Thread Philippe Mathieu-Daudé
The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
  /bin/sh: bzip2: command not found
  make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
  make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
  make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
 configure | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
print("%d.%d.%d" % (sys.version_info[0]
 # Suppress writing compiled files
 python="$python -B"
 
+# Some firmware binaries are compressed with bzip2
+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi
+
 # Check that the C compiler works. Doing this here before testing
 # the host CPU ensures that we had a valid CC to autodetect the
 # $cpu var (and we should bail right here if that's not the case).
-- 
2.21.0




Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 11:28:05AM +0100, Philippe Mathieu-Daudé wrote:
> The bzip2 tool is not included in default installations.
> On freshly installed systems, ./configure succeeds but 'make'
> might fail later:
> 
> BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
>   /bin/sh: bzip2: command not found
>   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
>   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
>   make: *** Waiting for unfinished jobs
> 
> Add a check in ./configure to warn the user if bzip2 is missing.
> 
> Fixes: 536d2173b2b
> Reported-by: Thomas Huth 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  configure | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/configure b/configure
> index efe165edf9..9957e913e8 100755
> --- a/configure
> +++ b/configure
> @@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
> print("%d.%d.%d" % (sys.version_info[0]
>  # Suppress writing compiled files
>  python="$python -B"
>  
> +# Some firmware binaries are compressed with bzip2
> +if has bzip2; then
> +  :
> +else
> +  error_exit "bzip2 program not found. Please install it"
> +fi

Nitpick, I'd phrase this as

   "The bzip2 program is required for building QEMU"

Either way though

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Thomas Huth

On 08/11/2019 11.28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

 BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
   /bin/sh: bzip2: command not found
   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
   make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
  configure | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
print("%d.%d.%d" % (sys.version_info[0]
  # Suppress writing compiled files
  python="$python -B"
  
+# Some firmware binaries are compressed with bzip2

+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi


It's only required for the edk2 binaries, isn't it? So should this maybe 
also check whether we build any of the i386-softmmu, x86_64-softmmu 
arm-softmmu or aarch64-softmmu targets and pass otherwise?


 Thomas




Re: [PATCH v2 07/11] block: add x-blockdev-amend qmp command

2019-11-08 Thread Max Reitz
On 08.11.19 10:26, Maxim Levitsky wrote:
> On Fri, 2019-10-04 at 20:53 +0200, Max Reitz wrote:
>> On 13.09.19 00:30, Maxim Levitsky wrote:
>>> Signed-off-by: Maxim Levitsky 
>>> ---
>>>  block/Makefile.objs   |   2 +-
>>>  block/amend.c | 116 ++
>>>  include/block/block_int.h |  23 ++--
>>>  qapi/block-core.json  |  26 +
>>>  qapi/job.json |   4 +-
>>>  5 files changed, 163 insertions(+), 8 deletions(-)
>>>  create mode 100644 block/amend.c

[...]

>>> +static int coroutine_fn blockdev_amend_run(Job *job, Error **errp)
>>> +{
>>> +BlockdevAmendJob *s = container_of(job, BlockdevAmendJob, common);
>>> +int ret;
>>> +
>>> +job_progress_set_remaining(&s->common, 1);
>>> +ret = s->bs->drv->bdrv_co_amend(s->bs, s->opts, s->force, errp);
>>> +job_progress_update(&s->common, 1);
>>
>> It would be nice if the amend job could make use of the progress
>> reporting that we have in place for amend.
> 
> I also thought about it, but is it worth it?
> 
> I looked through the status reporting of the qcow2 amend
> code (which doesn't really allowed to be run through
> qmp blockdev-amend, due to complexity of changing 
> the qcow2 format on the fly).

True, and we could always add it later.

I suppose I was mostly wondering because bdrv_amend_options already has
all of that infrastructure and I was assuming that qcow2's bdrv_co_amend
implementation would make some use of the existing function.  Well, it
doesn’t, so *shrug*

[...]

>>> +/*
>>> + * Create the block job
>>> + * TODO Running in the main context. Block drivers need to error out 
>>> or add
>>> + * locking when they use a BDS in a different AioContext.
>>
>> Why shouldn’t the job just run in the node’s context?
> 
> This is shameless copy&pasta from the blockdev-create code
> (which I did note in the copyright of the file)
Well, you noted that it’s heavily based on it, not that it’s just C&P.

So I suppose the comment is just wrong here?

Max



signature.asc
Description: OpenPGP digital signature


Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 11:34 AM, Thomas Huth wrote:

On 08/11/2019 11.28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

 BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
   /bin/sh: bzip2: command not found
   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
   make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
  configure | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
print("%d.%d.%d" % (sys.version_info[0]

  # Suppress writing compiled files
  python="$python -B"
+# Some firmware binaries are compressed with bzip2
+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi


It's only required for the edk2 binaries, isn't it? So should this maybe 
also check whether we build any of the i386-softmmu, x86_64-softmmu 
arm-softmmu or aarch64-softmmu targets and pass otherwise?


I have this on my TODO somewhere, because we extract the edk2 firmwares 
even if we only build MIPS/PowerPC.
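
A rough sketch of such a target-conditional check in configure (reusing the
has() and error_exit() helpers from the patch and configure's existing
$target_list variable; the exact condition is an assumption, not part of the
posted patch) might look like:

# Only insist on bzip2 when a target that installs the edk2 firmware
# images is being built.
case " $target_list " in
  *i386-softmmu*|*x86_64-softmmu*|*arm-softmmu*|*aarch64-softmmu*)
    if ! has bzip2; then
      error_exit "The bzip2 program is required for building QEMU"
    fi
    ;;
esac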




[PATCH] tests/migration: Print some debug on bad status

2019-11-08 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

We're seeing occasional asserts in 'wait_for_migration_fail' that
I can't reliably reproduce, and where the cores don't have any useful
state.  Print the 'status' out, so we can see which unexpected state
we're ending up in.

Signed-off-by: Dr. David Alan Gilbert 
---
 tests/migration-test.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/migration-test.c b/tests/migration-test.c
index 59f291c654..ac780dffda 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -899,8 +899,13 @@ static void wait_for_migration_fail(QTestState *from, bool 
allow_active)
 
 do {
 status = migrate_query_status(from);
-g_assert(!strcmp(status, "setup") || !strcmp(status, "failed") ||
- (allow_active && !strcmp(status, "active")));
+bool result = !strcmp(status, "setup") || !strcmp(status, "failed") ||
+ (allow_active && !strcmp(status, "active"));
+if (!result) {
+fprintf(stderr, "%s: unexpected status status=%s 
allow_active=%d\n",
+__func__, status, allow_active);
+}
+g_assert(result);
 failed = !strcmp(status, "failed");
 g_free(status);
 } while (!failed);
-- 
2.23.0




Re: [qemu-web PATCH] documentation: link to nightly documentation

2019-11-08 Thread Thomas Huth

On 08/11/2019 11.05, Stefan Hajnoczi wrote:

Link to the documentation built from qemu.git/master once a day.

Signed-off-by: Stefan Hajnoczi 
---
This depends on qemu.git "[PATCH 0/3] docs: build an index page for the
HTML docs".  We need that in order to build index.html.

  documentation.md | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/documentation.md b/documentation.md
index f4ef9f4..662750c 100644
--- a/documentation.md
+++ b/documentation.md
@@ -3,6 +3,8 @@ title: QEMU documentation
  permalink: /documentation/
  ---
  
+The latest development version documentation is available [here](https://wiki.qemu.org/docs/index.html).


Why are we using wiki.qemu.org here? Generated docs are not part of the 
Wiki, so this sounds confusing to me... Shouldn't this put somewhere on 
the main www.qemu.org domain instead?


 Thomas




Re: [PATCH v2 02/11] qcrypto-luks: extend the create options for upcoming encryption key management

2019-11-08 Thread Max Reitz
On 08.11.19 10:28, Maxim Levitsky wrote:
> On Fri, 2019-10-04 at 19:42 +0200, Max Reitz wrote:
>> On 13.09.19 00:30, Maxim Levitsky wrote:
>>> Now you can specify which slot to put the encryption key to
>>> Plus add 'active' option which will let  user erase the key secret
>>> instead of adding it.
>>> Check that active=true it when creating.
>>>
>>> Signed-off-by: Maxim Levitsky 
>>> ---
>>>  block/crypto.c |  2 ++
>>>  block/crypto.h | 16 +++
>>>  block/qcow2.c  |  2 ++
>>>  crypto/block-luks.c| 26 +++---
>>>  qapi/crypto.json   | 19 ++
>>>  tests/qemu-iotests/082.out | 54 ++
>>>  6 files changed, 115 insertions(+), 4 deletions(-)
>>
>> (Just doing a cursory RFC-style review)
>>
>> I think we also want to reject unlock-secret if it’s given for creation;
> Agree, I'll do this in the next version.
> 
>> and I suppose it’d be more important to print which slots are OK than
>> the slot the user has given.  (It isn’t like we shouldn’t print that
>> slot index, but it’s more likely the user knows that than what the
>> limits are.  I think.)
> I don't really understand what you mean here :-( 
> 
> Since this is qmp interface,
> I can't really print anything from it, other that error messages.

Exactly, I’m referring to the error message.  Right now it’s:

"Invalid slot %" PRId64 " is specified", luks_opts.slot

I think it should be something like:

"Invalid slot %" PRId64 " specified, must be between 0 and %u",
luks_opt.slot, QCRYPTO_BLOCK_LUKS_NUM_KEY_SLOTS - 1

Max



signature.asc
Description: OpenPGP digital signature


Re: [PATCH v2 05/11] block/crypto: implement the encryption key management

2019-11-08 Thread Max Reitz
On 08.11.19 10:30, Maxim Levitsky wrote:
> On Fri, 2019-10-04 at 20:41 +0200, Max Reitz wrote:
>> On 13.09.19 00:30, Maxim Levitsky wrote:
>>> This implements the encryption key management
>>> using the generic code in qcrypto layer
>>> (currently only for qemu-img amend)
>>>
>>> This code adds another 'write_func' because the initialization
>>> write_func works directly on the underlying file,
>>> because during the creation, there is no open instance
>>> of the luks driver, but during regular use, we have it,
>>> and should use it instead.
>>>
>>>
>>> This commit also adds a 'hack/workaround' I and Kevin Wolf (thanks)
>>> made to make the driver still support write sharing,
>>> but be safe against concurrent  metadata update (the keys)
>>> Eventually write sharing for luks driver will be deprecated
>>> and removed together with this hack.
>>>
>>> The hack is that we ask (as a format driver) for
>>> BLK_PERM_CONSISTENT_READ always
>>> (technically always unless opened with BDRV_O_NO_IO)
>>>
>>> and then when we want to update the keys, we
>>> unshare that permission. So if someone else
>>> has the image open, even readonly, this will fail.
>>>
>>> Also thanks to Daniel Berrange for the variant of
>>> that hack that involves asking for read,
>>> rather that write permission
>>>
>>> Signed-off-by: Maxim Levitsky 
>>> ---
>>>  block/crypto.c | 118 +++--
>>>  1 file changed, 115 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/block/crypto.c b/block/crypto.c
>>> index a6a3e1f1d8..f42fa057e6 100644
>>> --- a/block/crypto.c
>>> +++ b/block/crypto.c
>>> @@ -36,6 +36,7 @@ typedef struct BlockCrypto BlockCrypto;
>>>  
>>>  struct BlockCrypto {
>>>  QCryptoBlock *block;
>>> +bool updating_keys;
>>>  };
>>>  
>>>  
>>> @@ -70,6 +71,24 @@ static ssize_t block_crypto_read_func(QCryptoBlock 
>>> *block,
>>>  return ret;
>>>  }
>>>  
>>> +static ssize_t block_crypto_write_func(QCryptoBlock *block,
>>> +   size_t offset,
>>> +   const uint8_t *buf,
>>> +   size_t buflen,
>>> +   void *opaque,
>>> +   Error **errp)
>>
>> There’s already a function of this name for creation.
> 
> There is a long story why two write functions are needed.
> i tried to use only one, but at the end I and Daniel both agreed
> that its just better to have two functions.
> 
> The reason is that during creation, the luks BlockDriverState doesn't exist 
> yet,
> and so the creation routine basically just writes to the underlying protocol 
> driver.
> 
> Thats is why the block_crypto_create_write_func receives a BlockBackend 
> pointer,
> to which the BlockDriverState of the underlying protocol driver is inserted.
> 
> 
> On the other hand, for amend, the luks block device is open, and it only knows
> about its own BlockDriverState, and thus the io should be done on bs->file
> 
> So instead of trying to coerce a single callback to do both of this,
> we decided to just have a little code duplication.

I meant: This doesn’t compile.  There’s already another function of this
name.

Max



signature.asc
Description: OpenPGP digital signature


Re: [PATCH 2/3] docs: build a global index page

2019-11-08 Thread Peter Maydell
On Fri, 8 Nov 2019 at 09:59, Stefan Hajnoczi  wrote:
>
> Build docs/ in a single sphinx invocation instead of treating
> docs/{devel,interop,specs} separately.  This allows us to build a global
> index page that links to documentation across the different manuals.
>
> Some documentation is built outside of sphinx and is not formatted as
> reStructuredText.  Link directly to the .html files for the time being.
> If they are converted to .rst files in the future they can be included
> more elegantly.
>
> Sphinx wants to build all .rst files and complains if they are not
> listed in the table of contents.  We have not yet reviewed and
> categorized some of our .rst files.  Hide these files so they are always
> built (and syntax-checked from now on!) but not visible in the table of
> contents.

So the reason I went for the odd "run sphinx multiple times"
approach was because we don't want to ship 'devel' to users,
and as far as I could tell there was no way to have a
single-invocation build of the docs not include it in
eg the index/search (which would then be broken when
we don't install devel/ as part of 'make install').
Does this patchset result in a set of installed docs
that don't have dangling references ?

thanks
-- PMM



[qemu-web PATCH v2] documentation: link to nightly documentation

2019-11-08 Thread Stefan Hajnoczi
Link to the documentation built from qemu.git/master once a day.

Signed-off-by: Stefan Hajnoczi 
---
v2:
 * Use a qemu.org URL, not a wiki.qemu.org URL [danpb]
---
 documentation.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/documentation.md b/documentation.md
index f4ef9f4..662750c 100644
--- a/documentation.md
+++ b/documentation.md
@@ -3,6 +3,8 @@ title: QEMU documentation
 permalink: /documentation/
 ---
 
+The latest development version documentation is available 
[here](https://wiki.qemu.org/docs/index.html).
+
 The [QEMU user manual](https://qemu.weilnetz.de/qemu-doc.html) can be read 
online, courtesy of Stefan Weil.
 More documentation is found in the https://git.qemu.org/?p=qemu.git;a=tree;f=docs;hb=master";>`docs`
 directory of the QEMU git tree.
-- 
2.23.0




[qemu-web PATCH v3] documentation: link to nightly documentation

2019-11-08 Thread Stefan Hajnoczi
Link to the documentation built from qemu.git/master once a day.

Signed-off-by: Stefan Hajnoczi 
---
v3:
 * Use a qemu.org URL, not a wiki.qemu.org URL [danpb]
v2:
 * This revision was broken - please ignore! :)
---
 documentation.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/documentation.md b/documentation.md
index f4ef9f4..5198e02 100644
--- a/documentation.md
+++ b/documentation.md
@@ -3,6 +3,8 @@ title: QEMU documentation
 permalink: /documentation/
 ---
 
+The latest development version documentation is available 
[here](https://qemu.org/docs/master/index.html).
+
 The [QEMU user manual](https://qemu.weilnetz.de/qemu-doc.html) can be read 
online, courtesy of Stefan Weil.
 More documentation is found in the https://git.qemu.org/?p=qemu.git;a=tree;f=docs;hb=master";>`docs`
 directory of the QEMU git tree.
-- 
2.23.0




Re: [PATCH 3/3] cpu-plug-test: fix leaks

2019-11-08 Thread Thomas Huth

On 07/11/2019 20.27, Marc-André Lureau wrote:

Spotted by ASAN.

Signed-off-by: Marc-André Lureau 
---
  tests/cpu-plug-test.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/tests/cpu-plug-test.c b/tests/cpu-plug-test.c
index 058cef5ac1..30e514bbfb 100644
--- a/tests/cpu-plug-test.c
+++ b/tests/cpu-plug-test.c
@@ -99,6 +99,7 @@ static void test_plug_with_device_add(gconstpointer data)
  
  cpu = qobject_to(QDict, e);

  if (qdict_haskey(cpu, "qom-path")) {
+qobject_unref(e);
  continue;
  }
  
@@ -107,6 +108,7 @@ static void test_plug_with_device_add(gconstpointer data)
  
  qtest_qmp_device_add_qdict(qts, td->device_model, props);

  hotplugged++;
+qobject_unref(e);
  }
  
  /* make sure that there were hotplugged CPUs */




Fixes: 021a007efc3 ("cpu-plug-test: fix device_add for pc/q35 machines")

Reviewed-by: Thomas Huth 




Re: [PATCH 3/3] docs: install CSS and Javascript files for HTML docs

2019-11-08 Thread Peter Maydell
On Fri, 8 Nov 2019 at 10:00, Stefan Hajnoczi  wrote:
>
> Install the sphinx CSS/Javascript support files needed by the HTML
> documentation.  The documentation looks ugly without this.
>
> The previous patch switched to only invoking sphinx once so there is
> only one _static/ directory that needs to be installed across all manual
> sections.
>
> Signed-off-by: Stefan Hajnoczi 
> ---
>  Makefile | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/Makefile b/Makefile
> index 9487a06bed..dd60787d4c 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -813,6 +813,7 @@ endef
>  # for QEMU developers, and not interesting to our users.
>  .PHONY: install-sphinxdocs
>  install-sphinxdocs: sphinxdocs
> +   $(call install-manual,_static)
> $(call install-manual,interop)
> $(call install-manual,specs)
> $(INSTALL_DATA) "$(MANUAL_BUILDDIR)/index.html" 
> "$(DESTDIR)$(qemu_docdir)/index.html"

'install-manual' does some complicated stuff to
(a) handle subdirectories and (b) skip things we don't
want to install. It's intended for installing manual
directories (specs, interop, etc). _static is just
a simple single directory with no underlying files,
and it's not a manual, so it seems a bit odd to use
install-manual for it.

Also, this is only needed because we're now building
the docs in a single run (with the 'build manuals
one at a time' approach you get a separate specs/_static,
interop/_static, etc, which are installed by the
relevant install-manual calls for each manual). So
it seems like it ought to be squashed into the commit
that switches to doing the docs build in one run of
sphinx.

thanks
-- PMM



Re: [qemu-web PATCH v3] documentation: link to nightly documentation

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 11:54:35AM +0100, Stefan Hajnoczi wrote:
> Link to the documentation built from qemu.git/master once a day.
> 
> Signed-off-by: Stefan Hajnoczi 
> ---
> v3:
>  * Use a qemu.org URL, not a wiki.qemu.org URL [danpb]
> v2:
>  * This revision was broken - please ignore! :)
> ---
>  documentation.md | 2 ++
>  1 file changed, 2 insertions(+)

Reviewed-by: Daniel P. Berrangé 

(depends on the corresponding patch to qemu.git to generate the main
 index.html to be merged before this can be pushed I presume)

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Laszlo Ersek
On 11/08/19 11:28, Philippe Mathieu-Daudé wrote:
> The bzip2 tool is not included in default installations.
> On freshly installed systems, ./configure succeeds but 'make'
> might fail later:
> 
> BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
>   /bin/sh: bzip2: command not found
>   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
>   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
>   make: *** Waiting for unfinished jobs
> 
> Add a check in ./configure to warn the user if bzip2 is missing.

We've come full circle. Let me explain:

> 
> Fixes: 536d2173b2b

So this makes me kinda grumpy. If you look at the v3 posting of the patch that 
would later become commit 536d2173b2b:

  http://mid.mail-archive.com/20190321113408.19929-8-lersek@redhat.com

you see the following note in the changelog:

- compress FD files with bzip2 rather than xz, so that decompression at
  "make install" time succeed on older build OSes too [Peter]

So I couldn't use xz because that was "too new" for some build OSes, but now we 
also can't take bzip2 for granted because that's "too old" for some other build 
OSes? This is ridiculous.

To be clear, my disagreement is only with the "Fixes" tag. For me, "Fixes" 
stands for something that, in retrospect, can be proven to have been a bug at 
the time the code was *originally* committed. But, at the time, taking "bzip2" 
for granted was *not* a bug. The conditions / circumstances have changed more 
recently, and the assumption about bzip2 has been invalidated *after* adding a 
dependency on bzip2.

Nonetheless, thank you for adapting the code to the potential absence of bzip2. 
Can you perhaps go in some details in the commit message, near "not included in 
default installations" and "freshly installed systems"? If we can, we should 
identify the exact distro release where this problem has been encountered (and 
I wouldn't mind a link to the BZ or ticket under which people agreed to remove 
bzip2 from the default package set).

Reviewed-by: Laszlo Ersek 

Thanks
Laszlo

> Reported-by: Thomas Huth 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  configure | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/configure b/configure
> index efe165edf9..9957e913e8 100755
> --- a/configure
> +++ b/configure
> @@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
> print("%d.%d.%d" % (sys.version_info[0]
>  # Suppress writing compiled files
>  python="$python -B"
>  
> +# Some firmware binaries are compressed with bzip2
> +if has bzip2; then
> +  :
> +else
> +  error_exit "bzip2 program not found. Please install it"
> +fi
> +
>  # Check that the C compiler works. Doing this here before testing
>  # the host CPU ensures that we had a valid CC to autodetect the
>  # $cpu var (and we should bail right here if that's not the case).
> 




Re: [PATCH v2 05/11] block/crypto: implement the encryption key management

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-11-08 at 11:49 +0100, Max Reitz wrote:
> On 08.11.19 10:30, Maxim Levitsky wrote:
> > On Fri, 2019-10-04 at 20:41 +0200, Max Reitz wrote:
> > > On 13.09.19 00:30, Maxim Levitsky wrote:
> > > > This implements the encryption key management
> > > > using the generic code in qcrypto layer
> > > > (currently only for qemu-img amend)
> > > > 
> > > > This code adds another 'write_func' because the initialization
> > > > write_func works directly on the underlying file,
> > > > because during the creation, there is no open instance
> > > > of the luks driver, but during regular use, we have it,
> > > > and should use it instead.
> > > > 
> > > > 
> > > > This commit also adds a 'hack/workaround' I and Kevin Wolf (thanks)
> > > > made to make the driver still support write sharing,
> > > > but be safe against concurrent  metadata update (the keys)
> > > > Eventually write sharing for luks driver will be deprecated
> > > > and removed together with this hack.
> > > > 
> > > > The hack is that we ask (as a format driver) for
> > > > BLK_PERM_CONSISTENT_READ always
> > > > (technically always unless opened with BDRV_O_NO_IO)
> > > > 
> > > > and then when we want to update the keys, we
> > > > unshare that permission. So if someone else
> > > > has the image open, even readonly, this will fail.
> > > > 
> > > > Also thanks to Daniel Berrange for the variant of
> > > > that hack that involves asking for read,
> > > > rather that write permission
> > > > 
> > > > Signed-off-by: Maxim Levitsky 
> > > > ---
> > > >  block/crypto.c | 118 +++--
> > > >  1 file changed, 115 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/block/crypto.c b/block/crypto.c
> > > > index a6a3e1f1d8..f42fa057e6 100644
> > > > --- a/block/crypto.c
> > > > +++ b/block/crypto.c
> > > > @@ -36,6 +36,7 @@ typedef struct BlockCrypto BlockCrypto;
> > > >  
> > > >  struct BlockCrypto {
> > > >  QCryptoBlock *block;
> > > > +bool updating_keys;
> > > >  };
> > > >  
> > > >  
> > > > @@ -70,6 +71,24 @@ static ssize_t block_crypto_read_func(QCryptoBlock 
> > > > *block,
> > > >  return ret;
> > > >  }
> > > >  
> > > > +static ssize_t block_crypto_write_func(QCryptoBlock *block,
> > > > +   size_t offset,
> > > > +   const uint8_t *buf,
> > > > +   size_t buflen,
> > > > +   void *opaque,
> > > > +   Error **errp)
> > > 
> > > There’s already a function of this name for creation.
> > 
> > There is a long story why two write functions are needed.
> > i tried to use only one, but at the end I and Daniel both agreed
> > that its just better to have two functions.
> > 
> > The reason is that during creation, the luks BlockDriverState doesn't exist 
> > yet,
> > and so the creation routine basically just writes to the underlying 
> > protocol driver.
> > 
> > Thats is why the block_crypto_create_write_func receives a BlockBackend 
> > pointer,
> > to which the BlockDriverState of the underlying protocol driver is inserted.
> > 
> > 
> > On the other hand, for amend, the luks block device is open, and it only 
> > knows
> > about its own BlockDriverState, and thus the io should be done on bs->file
> > 
> > So instead of trying to coerce a single callback to do both of this,
> > we decided to just have a little code duplication.
> 
> I meant: This doesn’t compile.  There’s already another function of this
> name.
> 

You probably didn't apply the 'block-crypto: misc refactoring' patch, 
or I forgot to send it.
All that patch does is to rename block_crypto_write_func to 
block_crypto_create_write_func
and same (for consistency) for block_crypto_init_func -> 
block_crypto_create_init_func

And then in this patch I add the block_crypto_write_func, to be used for 
anything
but creation code, together with existing block_crypto_read_func which is 
already
not used for creation.


Best regards,
Maxim Levitsky
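
To illustrate the creation/amend split discussed above, a rough sketch of
the two callbacks' shape (the signature is the one from the quoted hunk;
the bodies are assumptions about the idea, not the actual patch code, and
rely on QEMU's internal block and error APIs):

/* Creation: no luks BlockDriverState exists yet, so the qcrypto layer is
 * handed a BlockBackend for the underlying protocol node. */
static ssize_t block_crypto_create_write_func(QCryptoBlock *block,
                                              size_t offset,
                                              const uint8_t *buf,
                                              size_t buflen,
                                              void *opaque,
                                              Error **errp)
{
    BlockBackend *blk = opaque;
    ssize_t ret = blk_pwrite(blk, offset, buf, buflen, 0);

    if (ret < 0) {
        error_setg_errno(errp, -ret, "Could not write encryption header");
    }
    return ret;
}

/* Amend: the luks node is already open, so the I/O goes through bs->file. */
static ssize_t block_crypto_write_func(QCryptoBlock *block,
                                       size_t offset,
                                       const uint8_t *buf,
                                       size_t buflen,
                                       void *opaque,
                                       Error **errp)
{
    BlockDriverState *bs = opaque;
    ssize_t ret = bdrv_pwrite(bs->file, offset, buf, buflen);

    if (ret < 0) {
        error_setg_errno(errp, -ret, "Could not write encryption header");
    }
    return ret;
}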






Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 12:01:16PM +0100, Laszlo Ersek wrote:
> On 11/08/19 11:28, Philippe Mathieu-Daudé wrote:
> > The bzip2 tool is not included in default installations.
> > On freshly installed systems, ./configure succeeds but 'make'
> > might fail later:
> > 
> > BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
> >   /bin/sh: bzip2: command not found
> >   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
> >   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
> >   make: *** Waiting for unfinished jobs
> > 
> > Add a check in ./configure to warn the user if bzip2 is missing.
> 
> We've come full circle. Let me explain:
> 
> > 
> > Fixes: 536d2173b2b
> 
> So this makes me kinda grumpy. If you look at the v3 posting of the patch 
> that would later become commit 536d2173b2b:
> 
>   http://mid.mail-archive.com/20190321113408.19929-8-lersek@redhat.com
> 
> you see the following note in the changelog:
> 
> - compress FD files with bzip2 rather than xz, so that decompression at
>   "make install" time succeed on older build OSes too [Peter]
> 
> So I couldn't use xz because that was "too new" for some build OSes, but now 
> we also can't take bzip2 for granted because that's "too old" for some other 
> build OSes? This is ridiculous.

We're not saying bzip2 is too old / missing from the OS. Every OS we care
about has bzip2. The problem is that a minimal installation might not have
installed it. This kind of problem is increasingly common with use of
minimal container images for example. So we're just ensuring we validate
that what we want is actually present.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




[PATCH v1 2/2] s390x/cpumodel: Introduce "best" model variants

2019-11-08 Thread David Hildenbrand
For a specific CPU model, we have a lot of feature variability depending on
- The microcode version of the HW
- The hypervisor we're running on (LPAR vs. KVM vs. z/VM)
- The hypervisor version we're running on
- The KVM version
- KVM module parameters (especially, "nested=1")
- The accelerator

Our default models are migration safe, however can only be changed
between QEMU releases (glued to QEMU machine). This somewhat collides
with the feature variability we have. E.g., the z13 model will not run
under TCG. There is the demand from higher levels in the stack to "have the
best CPU model possible on a given accelerator, firmware and HW", which
should especially include all features that fix security issues.
Especially, if we have a new feature due to a security flaw, we want to
have a way to backport this feature to older QEMU versions and a way to
automatically enable it when asked.

This is where "best" CPU models come into play. If upper layers specify
"z14-best" on a z14, they will get the best possible feature set in that
configuration. "best" usually means "maximum features", besides deprecated
features. This will then, for example, include nested virtualization
("SIE" feature) when KVM+HW support is enabled, or fixes via
microcode updates (e.g., spectre)

"best" models are not migration safe. Upper layers can expand these
models to migration-safe and static variants, allowing them to be
migrated.

Signed-off-by: David Hildenbrand 
---
 target/s390x/cpu-qom.h|  1 +
 target/s390x/cpu_models.c | 96 ++-
 2 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/target/s390x/cpu-qom.h b/target/s390x/cpu-qom.h
index b809ec8418..73901d1410 100644
--- a/target/s390x/cpu-qom.h
+++ b/target/s390x/cpu-qom.h
@@ -52,6 +52,7 @@ typedef struct S390CPUClass {
 bool kvm_required;
 bool is_static;
 bool is_migration_safe;
+bool is_best;
 const char *desc;
 
 DeviceRealize parent_realize;
diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 57c06e5ea1..a379b4c15d 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -557,11 +557,16 @@ static void cpu_model_from_info(S390CPUModel *model, 
const CpuModelInfo *info,
 obj = object_new(object_class_get_name(oc));
 cpu = S390_CPU(obj);
 
-if (!cpu->model) {
+if (!cpu->model && !S390_CPU_CLASS(oc)->is_best) {
 error_setg(errp, "Details about the host CPU model are not available, "
  "it cannot be used.");
 object_unref(obj);
 return;
+} else if (!cpu->model) {
+error_setg(errp, "There is not best CPU model that is runnable,"
+ "therefore, it cannot be used.");
+object_unref(obj);
+return;
 }
 
 if (qdict) {
@@ -932,7 +937,7 @@ void s390_realize_cpu_model(CPUState *cs, Error **errp)
 return;
 }
 
-if (!cpu->model) {
+if (!cpu->model && !xcc->is_best) {
 /* no host model support -> perform compatibility stuff */
 apply_cpu_model(NULL, errp);
 return;
@@ -944,6 +949,11 @@ void s390_realize_cpu_model(CPUState *cs, Error **errp)
 return;
 }
 
+if (xcc->is_best && !cpu->model) {
+error_setg(errp, "Selected CPU model is too new.");
+return;
+}
+
 /* copy over properties that can vary */
 cpu->model->lowest_ibc = max_model->lowest_ibc;
 cpu->model->cpu_id = max_model->cpu_id;
@@ -1156,6 +1166,58 @@ static void s390_qemu_cpu_model_initfn(Object *obj)
 memcpy(cpu->model, &s390_qemu_cpu_model, sizeof(*cpu->model));
 }
 
+static void s390_best_cpu_model_initfn(Object *obj)
+{
+const S390CPUModel *max_model;
+S390CPU *cpu = S390_CPU(obj);
+S390CPUClass *xcc = S390_CPU_GET_CLASS(cpu);
+Error *local_err = NULL;
+int i;
+
+if (kvm_enabled() && !kvm_s390_cpu_models_supported()) {
+return;
+}
+
+max_model = get_max_cpu_model(&local_err);
+if (local_err) {
+/* we expect errors only under KVM, when actually querying the kernel 
*/
+g_assert(kvm_enabled());
+error_report_err(local_err);
+return;
+}
+
+/*
+ * Similar to baselining against the "max" model. However, features
+ * are handled differently and are not used for the search for a 
definition.
+ */
+if (xcc->cpu_def->gen == max_model->def->gen) {
+if (xcc->cpu_def->ec_ga > max_model->def->ec_ga) {
+return;
+}
+} else if (xcc->cpu_def->gen > max_model->def->gen) {
+return;
+}
+
+/* The model is theoretically runnable, construct the features. */
+cpu->model = g_new(S390CPUModel, 1);
+cpu->model->def = xcc->cpu_def;
+bitmap_copy(cpu->model->features, xcc->cpu_def->full_feat, S390_FEAT_MAX);
+
+/* Mask of features that are not available in the "max" model */
+bitmap_and(cpu->model->features, cpu->model->features, max_model->features,
+   S390_

[PATCH v1 0/2] s390x/cpumodel: Introduce "best" model variants

2019-11-08 Thread David Hildenbrand
There was recently a discussion regarding CPU model versions. That concept
does not fit s390x where we have a lot of feature variability. I
proposed an alternative approach in [1], which might work for x86 as well
(but I am not sure if x86 still can or wants to switch to that), and
requires no real changes in upper layers.

[1] and patch #2 contains more information on the motivation for this.

E.g., specifying/expanding "z14-best" will result in the "best feature
set possible on this accelerator, hw and firmware". While a "z13" does
not work under TCG and some z/VM versions, "z13-best" will work.
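
As a usage sketch (assuming the naming introduced by this series), upper
layers could either start QEMU directly with "-cpu z14-best", or expand the
model to a migration-safe/static variant via the existing QMP command:

{ "execute": "query-cpu-model-expansion",
  "arguments": { "type": "static",
                 "model": { "name": "z14-best" } } }

and then use the expanded definition on both sides of a migration.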

As I had a couple of spare minutes this week, I hacked a quick prototype.
I did not heavily test this (quick sanity tests under TCG only), but it
should be decent enough to get the idea and play with it. I'll not be
working next week, which is why I sent this out now for discussion.

[1] https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg07222.html

Cc: Daniel P. Berrangé 
Cc: Markus Armbruster 
Cc: Jiri Denemark 
Cc: Eduardo Habkost 
Cc: Michael Mueller 

David Hildenbrand (2):
  s390x/cpumodels: Factor out CPU feature dependencies
  s390x/cpumodel: Introduce "best" model variants

 target/s390x/cpu-qom.h|   1 +
 target/s390x/cpu_models.c | 211 +++---
 2 files changed, 153 insertions(+), 59 deletions(-)

-- 
2.21.0




[PATCH v1 1/2] s390x/cpumodels: Factor out CPU feature dependencies

2019-11-08 Thread David Hildenbrand
Currently we check "if bit X is set, bit Y is required". We want
to perform "if bit Y is not set, bit X must also not be set" when
creating CPU models that will pass all consistency checks.

Signed-off-by: David Hildenbrand 
---
 target/s390x/cpu_models.c | 115 +++---
 1 file changed, 58 insertions(+), 57 deletions(-)

diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 009afc38b9..57c06e5ea1 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -88,6 +88,59 @@ static S390CPUDef s390_cpu_defs[] = {
 CPUDEF_INIT(0x8562, 15, 1, 47, 0x0800U, "gen15b", "IBM 8562 GA1"),
 };
 
+static const int cpu_feature_dependencies[][2] = {
+{ S390_FEAT_IPTE_RANGE, S390_FEAT_DAT_ENH },
+{ S390_FEAT_IDTE_SEGMENT, S390_FEAT_DAT_ENH },
+{ S390_FEAT_IDTE_REGION, S390_FEAT_DAT_ENH },
+{ S390_FEAT_IDTE_REGION, S390_FEAT_IDTE_SEGMENT },
+{ S390_FEAT_LOCAL_TLB_CLEARING, S390_FEAT_DAT_ENH},
+{ S390_FEAT_LONG_DISPLACEMENT_FAST, S390_FEAT_LONG_DISPLACEMENT },
+{ S390_FEAT_DFP_FAST, S390_FEAT_DFP },
+{ S390_FEAT_TRANSACTIONAL_EXE, S390_FEAT_STFLE_49 },
+{ S390_FEAT_EDAT_2, S390_FEAT_EDAT},
+{ S390_FEAT_MSA_EXT_5, S390_FEAT_KIMD_SHA_512 },
+{ S390_FEAT_MSA_EXT_5, S390_FEAT_KLMD_SHA_512 },
+{ S390_FEAT_MSA_EXT_4, S390_FEAT_MSA_EXT_3 },
+{ S390_FEAT_SIE_CMMA, S390_FEAT_CMM },
+{ S390_FEAT_SIE_CMMA, S390_FEAT_SIE_GSLS },
+{ S390_FEAT_SIE_PFMFI, S390_FEAT_EDAT },
+{ S390_FEAT_MSA_EXT_8, S390_FEAT_MSA_EXT_3 },
+{ S390_FEAT_MSA_EXT_9, S390_FEAT_MSA_EXT_3 },
+{ S390_FEAT_MSA_EXT_9, S390_FEAT_MSA_EXT_4 },
+{ S390_FEAT_MULTIPLE_EPOCH, S390_FEAT_TOD_CLOCK_STEERING },
+{ S390_FEAT_VECTOR_PACKED_DECIMAL, S390_FEAT_VECTOR },
+{ S390_FEAT_VECTOR_ENH, S390_FEAT_VECTOR },
+{ S390_FEAT_INSTRUCTION_EXEC_PROT, S390_FEAT_SIDE_EFFECT_ACCESS_ESOP2 },
+{ S390_FEAT_SIDE_EFFECT_ACCESS_ESOP2, S390_FEAT_ESOP },
+{ S390_FEAT_CMM_NT, S390_FEAT_CMM },
+{ S390_FEAT_GUARDED_STORAGE, S390_FEAT_SIDE_EFFECT_ACCESS_ESOP2 },
+{ S390_FEAT_MULTIPLE_EPOCH, S390_FEAT_STORE_CLOCK_FAST },
+{ S390_FEAT_MULTIPLE_EPOCH, S390_FEAT_TOD_CLOCK_STEERING },
+{ S390_FEAT_SEMAPHORE_ASSIST, S390_FEAT_STFLE_49 },
+{ S390_FEAT_KIMD_SHA3_224, S390_FEAT_MSA },
+{ S390_FEAT_KIMD_SHA3_256, S390_FEAT_MSA },
+{ S390_FEAT_KIMD_SHA3_384, S390_FEAT_MSA },
+{ S390_FEAT_KIMD_SHA3_512, S390_FEAT_MSA },
+{ S390_FEAT_KIMD_SHAKE_128, S390_FEAT_MSA },
+{ S390_FEAT_KIMD_SHAKE_256, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHA3_224, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHA3_256, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHA3_384, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHA3_512, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHAKE_128, S390_FEAT_MSA },
+{ S390_FEAT_KLMD_SHAKE_256, S390_FEAT_MSA },
+{ S390_FEAT_PRNO_TRNG_QRTCR, S390_FEAT_MSA_EXT_5 },
+{ S390_FEAT_PRNO_TRNG, S390_FEAT_MSA_EXT_5 },
+{ S390_FEAT_SIE_KSS, S390_FEAT_SIE_F2 },
+{ S390_FEAT_AP_QUERY_CONFIG_INFO, S390_FEAT_AP },
+{ S390_FEAT_AP_FACILITIES_TEST, S390_FEAT_AP },
+{ S390_FEAT_PTFF_QSIE, S390_FEAT_MULTIPLE_EPOCH },
+{ S390_FEAT_PTFF_QTOUE, S390_FEAT_MULTIPLE_EPOCH },
+{ S390_FEAT_PTFF_STOE, S390_FEAT_MULTIPLE_EPOCH },
+{ S390_FEAT_PTFF_STOUE, S390_FEAT_MULTIPLE_EPOCH },
+{ S390_FEAT_AP_QUEUE_INTERRUPT_CONTROL, S390_FEAT_AP },
+};
+
 #define QEMU_MAX_CPU_TYPE 0x2964
 #define QEMU_MAX_CPU_GEN 13
 #define QEMU_MAX_CPU_EC_GA 2
@@ -768,66 +821,14 @@ CpuModelBaselineInfo 
*qmp_query_cpu_model_baseline(CpuModelInfo *infoa,
 
 static void check_consistency(const S390CPUModel *model)
 {
-static int dep[][2] = {
-{ S390_FEAT_IPTE_RANGE, S390_FEAT_DAT_ENH },
-{ S390_FEAT_IDTE_SEGMENT, S390_FEAT_DAT_ENH },
-{ S390_FEAT_IDTE_REGION, S390_FEAT_DAT_ENH },
-{ S390_FEAT_IDTE_REGION, S390_FEAT_IDTE_SEGMENT },
-{ S390_FEAT_LOCAL_TLB_CLEARING, S390_FEAT_DAT_ENH},
-{ S390_FEAT_LONG_DISPLACEMENT_FAST, S390_FEAT_LONG_DISPLACEMENT },
-{ S390_FEAT_DFP_FAST, S390_FEAT_DFP },
-{ S390_FEAT_TRANSACTIONAL_EXE, S390_FEAT_STFLE_49 },
-{ S390_FEAT_EDAT_2, S390_FEAT_EDAT},
-{ S390_FEAT_MSA_EXT_5, S390_FEAT_KIMD_SHA_512 },
-{ S390_FEAT_MSA_EXT_5, S390_FEAT_KLMD_SHA_512 },
-{ S390_FEAT_MSA_EXT_4, S390_FEAT_MSA_EXT_3 },
-{ S390_FEAT_SIE_CMMA, S390_FEAT_CMM },
-{ S390_FEAT_SIE_CMMA, S390_FEAT_SIE_GSLS },
-{ S390_FEAT_SIE_PFMFI, S390_FEAT_EDAT },
-{ S390_FEAT_MSA_EXT_8, S390_FEAT_MSA_EXT_3 },
-{ S390_FEAT_MSA_EXT_9, S390_FEAT_MSA_EXT_3 },
-{ S390_FEAT_MSA_EXT_9, S390_FEAT_MSA_EXT_4 },
-{ S390_FEAT_MULTIPLE_EPOCH, S390_FEAT_TOD_CLOCK_STEERING },
-{ S390_FEAT_VECTOR_PACKED_DECIMAL, S390_FEAT_VECTOR },
-{ S390_FEAT_VECTOR_ENH, S390_FEAT_VECTOR },
-{ S390_FEAT_INSTRUCTION_EXEC_PROT, S390_FEAT_SIDE_EFFECT_ACCESS_ESOP2 
},
-{ S390_FEAT_SIDE_EFFECT_ACCESS_ES
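
The inverted check described in the commit message ("if bit Y is not set,
bit X must also not be set") could, as a rough sketch (everything except the
dependency table above is an assumption, not code from this patch), be a
fixed-point pass that clears features whose dependencies are missing:

static void strip_features_with_unmet_deps(S390CPUModel *model)
{
    bool changed;
    int i;

    do {
        changed = false;
        for (i = 0; i < ARRAY_SIZE(cpu_feature_dependencies); i++) {
            const int feat = cpu_feature_dependencies[i][0];
            const int dep = cpu_feature_dependencies[i][1];

            /* Dependency absent -> the dependent feature must go, too. */
            if (test_bit(feat, model->features) &&
                !test_bit(dep, model->features)) {
                clear_bit(feat, model->features);
                changed = true;
            }
        }
    } while (changed);
}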

Re: [PATCH v1 0/2] s390x/cpumodel: Introduce "best" model variants

2019-11-08 Thread Peter Maydell
On Fri, 8 Nov 2019 at 11:08, David Hildenbrand  wrote:
>
> There was recently a discussion regarding CPU model versions. That concept
> does not fit s390x where we have a lot of feature variability. I
> proposed an alternative approach in [1], which might work for x86 as well
> (but I am not sure if x86 still can or wants to switch to that), and
> requires no real changes in upper layers.
>
> [1] and patch #2 contains more information on the motivation for this.
>
> E.g., specifying/expanding "z14-best" will result in the "best feature
> set possible on this accelerator, hw and, firmware". While a "z13" does
> not work under TCG and some z/VM versions, "z13-best" will work.

I think other architectures call this concept "max", not "best".
If we can manage some cross-architecture consistency that would
be helpful, but is s390x using 'max' already for something else?

thanks
-- PMM



Re: [RFC v4 PATCH 49/49] multi-process: add configure and usage information

2019-11-08 Thread Stefan Hajnoczi
On Thu, Nov 07, 2019 at 10:53:27AM -0500, Jag Raman wrote:
> 
> 
> On 11/7/2019 9:39 AM, Daniel P. Berrangé wrote:
> > On Thu, Nov 07, 2019 at 03:02:20PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Oct 24, 2019 at 05:09:30AM -0400, Jagannathan Raman wrote:
> > > > From: Elena Ufimtseva 
> > > > 
> > > > Signed-off-by: Elena Ufimtseva 
> > > > Signed-off-by: Jagannathan Raman 
> > > > Signed-off-by: John G Johnson 
> > > > ---
> > > >   docs/qemu-multiprocess.txt | 86 
> > > > ++
> > > >   1 file changed, 86 insertions(+)
> > > >   create mode 100644 docs/qemu-multiprocess.txt
> > > > 
> > > > diff --git a/docs/qemu-multiprocess.txt b/docs/qemu-multiprocess.txt
> > > > new file mode 100644
> > > > index 000..c29f4df
> > > > --- /dev/null
> > > > +++ b/docs/qemu-multiprocess.txt
> > > > @@ -0,0 +1,86 @@
> > > > +Multi-process QEMU
> > > > +==
> > > > +
> > > > +This document describes how to configure and use multi-process qemu.
> > > > +For the design document refer to docs/devel/qemu-multiprocess.
> > > > +
> > > > +1) Configuration
> > > > +
> > > > +
> > > > +To enable support for multi-process add --enable-mpqemu
> > > > +to the list of options for the "configure" script.
> > > > +
> > > > +
> > > > +2) Usage
> > > > +
> > > > +
> > > > +To start qemu with devices intended to run in a separate emulation
> > > > +process without libvirtd support, the following should be used on QEMU
> > > > +command line. As of now, we only support the emulation of lsi53c895a
> > > > +in a separate process
> > > > +
> > > > +* Since parts of the RAM are shared between QEMU & remote process, a
> > > > +  memory-backend-file is required to facilitate this, as follows:
> > > > +
> > > > +  -object 
> > > > memory-backend-file,id=mem,mem-path=/dev/shm/,size=4096M,share=on
> > > > +
> > > > +* The devices to be emulated in the separate process are defined as
> > > > +  before with addition of "rid" suboption that serves as a remote group
> > > > +  identificator.
> > > > +
> > > > +  -device ,rid="remote process id"
> > > > +
> > > > +  For exmaple, for non multi-process qemu:
> > > 
> > > s/exmaple/example/
> > > 
> > > > +-device lsi53c895a,id=scsi0 device
> > > > +-device scsi-hd,drive=drive0,bus=scsi0.0,scsi-id=0
> > > > +-drive id=drive0,file=data-disk.img
> > > > +
> > > > +  and for multi-process qemu and no libvirt
> > > > +  support (i.e. QEMU forks child processes):
> > > > +-device lsi53c895a,id=scsi0,rid=0
> > > > +-device scsi-hd,drive=drive0,bus=scsi0.0,scsi-id=0,rid="0"
> > > > +
> > > > +* The command-line options for the remote process is added to the 
> > > > "command"
> > > 
> > > s/is added/are added/
> > > 
> > > > +  suboption of the newly added "-remote" option.
> > > > +
> > > > +   -remote [socket],rid=,command="..."
> > > > +
> > > > +  The drives to be emulated by the remote process are specified as 
> > > > part of
> > > > +  this command sub-option. The device to be used to connect to the 
> > > > monitor
> > > > +  is also specified as part of this suboption.
> > > > +
> > > > +  For example, the following option adds a drive and monitor to the 
> > > > remote
> > > > +  process:
> > > > +  -remote rid=0,command="-drive id=drive0,,file=data-disk.img -monitor 
> > > > unix:/home/qmp-sock,,server,,nowait"
> > > > +
> > > > +  Note: There's an issue with this "command" subtion which we are in 
> > > > the
> > > 
> > > s/subtion/sub-option/
> > > 
> > > > +  process of fixing. To work around this issue, it requires additional
> > > > +  "comma" characters as illustrated above, and in the example below.
> > > > +
> > > > +* Example QEMU command-line to launch lsi53c895a in a remote process
> > > > +
> > > > +  #/bin/sh
> > > > +  qemu-system-x86_64 \
> > > > +  -name "OL7.4" \
> > > > +  -machine q35,accel=kvm \
> > > > +  -smp sockets=1,cores=1,threads=1 \
> > > > +  -cpu host \
> > > > +  -m 2048 \
> > > > +  -object 
> > > > memory-backend-file,id=mem,mem-path=/dev/shm/,size=2G,share=on \
> > > > +  -numa node,memdev=mem \
> > > > +  -device virtio-scsi-pci,id=virtio_scsi_pci0 \
> > > > +  -drive id=drive_image1,if=none,format=raw,file=/root/ol7.qcow2 \
> > > > +  -device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0 \
> > > > +  -boot d \
> > > > +  -monitor stdio \
> > > > +  -vnc :0 \
> > > > +  -device lsi53c895a,id=lsi0,remote,rid=8,command="qemu-scsi-dev" \
> > > > +  -device 
> > > > scsi-hd,id=drive2,drive=drive_image2,bus=lsi0.0,scsi-id=0,remote,rid=8,command="qemu-scsi-dev"\
> > > > +  -remote rid=8,command="-drive 
> > > > id=drive_image2,,file=/root/remote-process-disk.img -monitor 
> > > > unix:/home/qmp-sock,,server,,nowait"
> > > > +
> > > > +  We could connect to the monitor using the following command:
> > > > +  socat /home/qmp-sock stdio
> > > > +
> > > > +  After hotplugging disks to the remote process, please execute the
> > > > +  following command in the g

Re: [RFC v4 PATCH 49/49] multi-process: add configure and usage information

2019-11-08 Thread Stefan Hajnoczi
On Thu, Nov 07, 2019 at 09:33:45AM -0500, Michael S. Tsirkin wrote:
> On Thu, Nov 07, 2019 at 03:02:20PM +0100, Stefan Hajnoczi wrote:
> > This documentation suggests that QEMU spawns the remote processes.  How
> > do this work with unprivileged QEMU?  Is there an additional step where
> > QEMU drops privileges after having spawned remote processes?
> > 
> > Remote processes require accesses to resources that the main QEMU
> > process does not need access to, so I'm wondering how this process model
> > ensures that each process has only the privileges it needs.
> 
> I guess you have something like capabilities in mind?

Or namespaces (unshare(2)).

> When using something like selinux, priviledges are per binary
> so the order of startup doesn't matter.

For static SELinux policies that make sense, thanks for explaining.

Does libvirt also perform dynamic (i.e. per-instance) SELinux
configuration?  I guess that cannot be associated with a specific binary
because multiple QEMU instances launch the same binary yet need to be
differentiated.

Stefan


signature.asc
Description: PGP signature


Re: [qemu-web PATCH] documentation: link to nightly documentation

2019-11-08 Thread Stefan Hajnoczi
On Fri, Nov 8, 2019 at 11:45 AM Thomas Huth  wrote:
> On 08/11/2019 11.05, Stefan Hajnoczi wrote:
> > Link to the documentation built from qemu.git/master once a day.
> >
> > Signed-off-by: Stefan Hajnoczi 
> > ---
> > This depends on qemu.git "[PATCH 0/3] docs: build an index page for the
> > HTML docs".  We need that in order to build index.html.
> >
> >   documentation.md | 2 ++
> >   1 file changed, 2 insertions(+)
> >
> > diff --git a/documentation.md b/documentation.md
> > index f4ef9f4..662750c 100644
> > --- a/documentation.md
> > +++ b/documentation.md
> > @@ -3,6 +3,8 @@ title: QEMU documentation
> >   permalink: /documentation/
> >   ---
> >
> > +The latest development version documentation is available 
> > [here](https://wiki.qemu.org/docs/index.html).
>
> Why are we using wiki.qemu.org here? Generated docs are not part of the
> Wiki, so this sounds confusing to me... Shouldn't this put somewhere on
> the main www.qemu.org domain instead?

Thanks for bringing this up.  The docs are now available at
https://qemu.org/docs/master/.

Stefan



Re: [PATCH] iotests: Fix "no qualified output" error path

2019-11-08 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20191108085713.27551-1-kw...@redhat.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  TESTiotest-qcow2: 268
Failures: 192
Failed 1 of 108 iotests
make: *** [check-tests/check-block.sh] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 662, in 
sys.exit(main())
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=414a771dbd814a1183fd7732b0c1f2c5', '-u', 
'1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-g2muu7fm/src/docker-src.2019-11-08-06.10.38.10992:/var/tmp/qemu:z,ro',
 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=414a771dbd814a1183fd7732b0c1f2c5
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-g2muu7fm/src'
make: *** [docker-run-test-quick@centos7] Error 2

real13m23.714s
user0m9.033s


The full log is available at
http://patchew.org/logs/20191108085713.27551-1-kw...@redhat.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [RFC v4 PATCH 49/49] multi-process: add configure and usage information

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 12:17:41PM +0100, Stefan Hajnoczi wrote:
> On Thu, Nov 07, 2019 at 09:33:45AM -0500, Michael S. Tsirkin wrote:
> > On Thu, Nov 07, 2019 at 03:02:20PM +0100, Stefan Hajnoczi wrote:
> > > This documentation suggests that QEMU spawns the remote processes.  How
> > > do this work with unprivileged QEMU?  Is there an additional step where
> > > QEMU drops privileges after having spawned remote processes?
> > > 
> > > Remote processes require accesses to resources that the main QEMU
> > > process does not need access to, so I'm wondering how this process model
> > > ensures that each process has only the privileges it needs.
> > 
> > I guess you have something like capabilities in mind?
> 
> Or namespaces (unshare(2)).
> 
> > When using something like SELinux, privileges are per binary
> > so the order of startup doesn't matter.
> 
> For static SELinux policies, that makes sense; thanks for explaining.
> 
> Does libvirt also perform dynamic (i.e. per-instance) SELinux
> configuration?  I guess that cannot be associated with a specific binary
> because multiple QEMU instances launch the same binary yet need to be
> differentiated.

In a traditional SELinux approach, the SELinux context used for any
process is determined by a combination of the label on the binary
and a transition rule.

e.g. if the qemu-system-x86_64 file is labelled qemu_exec_t, and
there's a context qemu_t for the QEMU process, a transition
rule is defined: "virtd_t + qemu_exec_t -> qemu_t". This says
that when a process with context "virtd_t" execs a binary labelled
qemu_exec_t, the new process gets qemu_t.
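
In kernel policy language that is spelled roughly like this (type names as
in the example above, not QEMU's actual policy; the supporting allow rules
are omitted):

  # executing a qemu_exec_t file from virtd_t lands the process in qemu_t
  type_transition virtd_t qemu_exec_t:process qemu_t;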

With sVirt, however, we can't rely on automatic transitions, because
we need to assign a unique MCS tag for each VM. Thus libvirtd will
explicitly tell SELinux what label to apply.
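
Such an explicitly assigned label looks along these lines, with the
category pair picked per VM (the values here are made up):

  system_u:system_r:svirt_t:s0:c392,c448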

In the case of multiprocess QEMU, if using sVirt from libvirt, then
we'll need to continue setting the explicit labels as we'll still
need the MCS tags for each helper process.

If not using libvirt and sVirt, and wanting automatic SELinux
transitions for QEMU helper processes, then each helper would
need to be a separate binary on disk so that each helper can
be given a distinct file label, which in turn lets you define
a set of transitions for each helper according to its expected
access needs.

Having said all that, I don't think it's worth worrying about
this. Anyone who cares about SELinux with QEMU will want to
be using sVirt or an equivalent approach to assign a unique
MCS tag per VM. And thus automatic transitions are not possible
even if we had distinct binaries for each helper.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH 2/3] docs: build a global index page

2019-11-08 Thread Stefan Hajnoczi
On Fri, Nov 8, 2019 at 11:52 AM Peter Maydell  wrote:
> On Fri, 8 Nov 2019 at 09:59, Stefan Hajnoczi  wrote:
> > Build docs/ in a single sphinx invocation instead of treating
> > docs/{devel,interop,specs} separately.  This allows us to build a global
> > index page that links to documentation across the different manuals.
> >
> > Some documentation is built outside of sphinx and is not formatted as
> > reStructuredText.  Link directly to the .html files for the time being.
> > If they are converted to .rst files in the future they can be included
> > more elegantly.
> >
> > Sphinx wants to build all .rst files and complains if they are not
> > listed in the table of contents.  We have not yet reviewed and
> > categorized some of our .rst files.  Hide these files so they are always
> > built (and syntax-checked from now on!) but not visible in the table of
> > contents.
>
> So the reason I went for the odd "run sphinx multiple times"
> approach was because we don't want to ship 'devel' to users,
> and as far as I could tell there was no way to have a
> single-invocation build of the docs not include it in
> eg the index/search (which would then be broken when
> we don't install devel/ as part of 'make install').
> Does this patchset result in a set of installed docs
> that don't have dangling references ?

You are right:
 * The hidden documents are included in the navigation bar (different
from the table of contents).
 * The search index (which install-doc omits!) includes content from
the hidden documents.

I have asked on #sphinx-doc.  Let's see if there is a solution.

It might be possible to hack docs/conf.py to exclude devel/ when run
from make install-sphinxdocs
(https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-exclude_patterns).
This requires building the docs again at install time but the
advantage is we get a single searchable set of documentation.
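
A minimal sketch of what that could look like in the Sphinx conf.py,
keyed off an environment variable that the install rule would have to
export (the variable name here is made up):

  import os

  # Build everything by default; drop the developer manual only when
  # the install target asks for it.
  exclude_patterns = []
  if os.environ.get('QEMU_DOCS_SKIP_DEVEL'):
      exclude_patterns.append('devel/**')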

Stefan



Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Laszlo Ersek
On 11/08/19 11:42, Philippe Mathieu-Daudé wrote:
> On 11/8/19 11:34 AM, Thomas Huth wrote:
>> On 08/11/2019 11.28, Philippe Mathieu-Daudé wrote:
>>> The bzip2 tool is not included in default installations.
>>> On freshly installed systems, ./configure succeeds but 'make'
>>> might fail later:
>>>
>>>  BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
>>>    /bin/sh: bzip2: command not found
>>>    make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
>>>    make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
>>>    make: *** Waiting for unfinished jobs
>>>
>>> Add a check in ./configure to warn the user if bzip2 is missing.
>>>
>>> Fixes: 536d2173b2b
>>> Reported-by: Thomas Huth 
>>> Signed-off-by: Philippe Mathieu-Daudé 
>>> ---
>>>   configure | 7 +++
>>>   1 file changed, 7 insertions(+)
>>>
>>> diff --git a/configure b/configure
>>> index efe165edf9..9957e913e8 100755
>>> --- a/configure
>>> +++ b/configure
>>> @@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys;
>>> print("%d.%d.%d" % (sys.version_info[0]
>>>   # Suppress writing compiled files
>>>   python="$python -B"
>>> +# Some firmware binaries are compressed with bzip2
>>> +if has bzip2; then
>>> +  :
>>> +else
>>> +  error_exit "bzip2 program not found. Please install it"
>>> +fi
>>
>> It's only required for the edk2 binaries, isn't it? So should this
>> maybe also check whether we build any of the i386-softmmu,
>> x86_64-softmmu arm-softmmu or aarch64-softmmu targets and pass otherwise?
> 
> I have this on my TODO somewhere, because we extract the edk2 firmwares
> even if we only build MIPS/PowerPC.

But that applies to all of "BLOBS" in the root-level Makefile, doesn't
it? We also install, say, "vgabios-qxl.bin", when only the MIPS/PowerPC
system emulators are built. The only aspect that's specific to edk2
binaries is that they are kept compressed until installation time, to
save space in the git repo and in the source tarball.

I'm reminded of the "external QEMU blob / firmware repo" idea.

Thanks
Laszlo




Re: [PATCH v15 06/12] numa: Extend CLI to provide memory latency and bandwidth information

2019-11-08 Thread Igor Mammedov
On Thu,  7 Nov 2019 15:45:05 +0800
Tao Xu  wrote:

> From: Liu Jingqi 
> 
> Add -numa hmat-lb option to provide System Locality Latency and
> Bandwidth Information. These memory attributes help to build
> System Locality Latency and Bandwidth Information Structure(s)
> in ACPI Heterogeneous Memory Attribute Table (HMAT).
> 
> Signed-off-by: Liu Jingqi 
> Signed-off-by: Tao Xu 
> ---
> 
> Changes in v15:
> - Change the QAPI version tag to 5.0 (Eric)
> 
> Changes in v14:
> - Use qemu ctz64 and clz64 instead of builtin function
> - Improve help message in qemu-options.hx
> 
> Changes in v13:
> - Reuse Garray to store the raw bandwidth and bandwidth data
> - Calculate common base unit using range bitmap (Igor)
> ---
>  hw/core/numa.c| 136 ++
>  include/sysemu/numa.h |  68 +
>  qapi/machine.json |  94 -
>  qemu-options.hx   |  49 ++-
>  4 files changed, 344 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index eba66ab768..f391760c20 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -23,6 +23,7 @@
>   */
>  
>  #include "qemu/osdep.h"
> +#include "qemu/units.h"
>  #include "sysemu/hostmem.h"
>  #include "sysemu/numa.h"
>  #include "sysemu/sysemu.h"
> @@ -198,6 +199,128 @@ void parse_numa_distance(MachineState *ms, 
> NumaDistOptions *dist, Error **errp)
>  ms->numa_state->have_numa_distance = true;
>  }
>  
> +void parse_numa_hmat_lb(NumaState *numa_state, NumaHmatLBOptions *node,
> +Error **errp)
> +{
> +int first_bit, last_bit;
> +uint64_t max_latency, temp_base_la;
> +NodeInfo *numa_info = numa_state->nodes;
> +HMAT_LB_Info *hmat_lb =
> +numa_state->hmat_lb[node->hierarchy][node->data_type];
> +HMAT_LB_Data lb_data;
maybe
  HMAT_LB_Data lb_data = {};
to make sure that no members are left un-initialized 

> +
> +/* Error checking */
> +if (node->initiator >= numa_state->num_nodes) {
> +error_setg(errp, "Invalid initiator=%d, it should be less than %d.",
error_setg() messages shouldn't have "." at the end,
see include/qapi/error.h:error_setg() description

this applies to all such uses in this series.

> +   node->initiator, numa_state->num_nodes);
> +return;
> +}
> +if (node->target >= numa_state->num_nodes) {
> +error_setg(errp, "Invalid target=%d, it should be less than %d.",
> +   node->target, numa_state->num_nodes);
> +return;
> +}
> +if (!numa_info[node->initiator].has_cpu) {
> +error_setg(errp, "Invalid initiator=%d, it isn't an "
> +   "initiator proximity domain.", node->initiator);
> +return;
> +}
> +if (!numa_info[node->target].present) {
> +error_setg(errp, "Invalid target=%d, it hasn't a valid NUMA node.",
s/\..*\./the target should point to an existing node"/

> +   node->target);
> +return;
> +}
> +
> +if (!hmat_lb) {
> +hmat_lb = g_malloc0(sizeof(*hmat_lb));
> +numa_state->hmat_lb[node->hierarchy][node->data_type] = hmat_lb;
> +hmat_lb->latency = g_array_new(false, true, sizeof(HMAT_LB_Data));
> +hmat_lb->bandwidth = g_array_new(false, true, sizeof(HMAT_LB_Data));
> +}
> +hmat_lb->hierarchy = node->hierarchy;
> +hmat_lb->data_type = node->data_type;
> +lb_data.initiator = node->initiator;
> +lb_data.target = node->target;
> +
> +/* Input latency data */
> +if (node->data_type <= HMATLB_DATA_TYPE_WRITE_LATENCY) {
> +if (!node->has_latency) {
> +error_setg(errp, "Missing 'latency' option.");
> +return;
> +}
> +if (node->has_bandwidth) {
> +error_setg(errp, "Invalid option 'bandwidth' since "
> +   "the data type is latency.");
> +return;
> +}
> +
> +if (hmat_lb->base_latency == 0) {
> +hmat_lb->base_latency = UINT64_MAX;
> +}
> +
> +/* Calculate the temporary base and compressed latency */
> +max_latency = node->latency;
> +temp_base_la = 1;
> +while (QEMU_IS_ALIGNED(max_latency, 10)) {
> +max_latency /= 10;
> +temp_base_la *= 10;
> +}
> +
> +/* Calculate the max compressed latency */
> +hmat_lb->base_latency = MIN(hmat_lb->base_latency, temp_base_la);
> +max_latency = node->latency / hmat_lb->base_latency;
> +hmat_lb->max_entry_la = MAX(hmat_lb->max_entry_la, max_latency);
> +
> +if (hmat_lb->max_entry_la >= UINT16_MAX) {
> +error_setg(errp, "Latency %" PRIu64 " between initiator=%d and "
> +   "target=%d should not differ from previously entered "
> +   "values on more than %d.", node->latency,
s/values/min/max values/

> +   node->initiator, node->t

[PATCH v2 0/2] configure: Only decompress EDK2 blobs and check for bzip2 for X86/ARM

2019-11-08 Thread Philippe Mathieu-Daudé
This series fixes a bug reported by Thomas: bzip2 is (only) required
to decompress the EDK2 blobs, which target the ARM/X86 archs.

First, restrict decompression of the blobs to their target architectures
(we don't need to decompress them when not building X86/ARM).

Then check that bzip2 is available.

Philippe Mathieu-Daudé (2):
  configure: Only decompress EDK2 blobs for X86/ARM targets
  configure: Check bzip2 is available

 Makefile  |  2 ++
 configure | 17 +
 2 files changed, 19 insertions(+)

-- 
2.21.0




[PATCH v2 2/2] configure: Check bzip2 is available

2019-11-08 Thread Philippe Mathieu-Daudé
The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
  /bin/sh: bzip2: command not found
  make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
  make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
  make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
v2: use better English (Daniel)
(Not taking Daniel Reviewed-by because logic changed)
---
 configure | 4 
 1 file changed, 4 insertions(+)

diff --git a/configure b/configure
index 9b322284c3..2b419a8039 100755
--- a/configure
+++ b/configure
@@ -2147,6 +2147,7 @@ case " $target_list " in
   ;;
 esac
 
+# Some firmware binaries are compressed with bzip2
 for target in $target_list; do
   case "$target" in
 arm-softmmu | aarch64-softmmu | i386-softmmu | x86_64-softmmu)
@@ -2154,6 +2155,9 @@ for target in $target_list; do
   ;;
   esac
 done
+if test "$edk2_blobs" = "yes" && ! has bzip2; then
+  error_exit "The bzip2 program is required for building QEMU"
+fi
 
 feature_not_found() {
   feature=$1
-- 
2.21.0




[PATCH v2 1/2] configure: Only decompress EDK2 blobs for X86/ARM targets

2019-11-08 Thread Philippe Mathieu-Daudé
The EDK2 firmware blobs only target the X86/ARM architectures.
Define the DECOMPRESS_EDK2_BLOBS variable and only decompress
the blobs when the variable exists.

Fixes: 536d2173b2b
Suggested-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
v2: new
---
 Makefile  |  2 ++
 configure | 13 +
 2 files changed, 15 insertions(+)

diff --git a/Makefile b/Makefile
index aa9d1a42aa..3430eca532 100644
--- a/Makefile
+++ b/Makefile
@@ -480,7 +480,9 @@ $(SOFTMMU_ALL_RULES): $(chardev-obj-y)
 $(SOFTMMU_ALL_RULES): $(crypto-obj-y)
 $(SOFTMMU_ALL_RULES): $(io-obj-y)
 $(SOFTMMU_ALL_RULES): config-all-devices.mak
+ifdef DECOMPRESS_EDK2_BLOBS
 $(SOFTMMU_ALL_RULES): $(edk2-decompressed)
+endif
 
 .PHONY: $(TARGET_DIRS_RULES)
 # The $(TARGET_DIRS_RULES) are of the form SUBDIR/GOAL, so that
diff --git a/configure b/configure
index efe165edf9..9b322284c3 100755
--- a/configure
+++ b/configure
@@ -427,6 +427,7 @@ softmmu="yes"
 linux_user="no"
 bsd_user="no"
 blobs="yes"
+edk2_blobs="no"
 pkgversion=""
 pie=""
 qom_cast_debug="yes"
@@ -2146,6 +2147,14 @@ case " $target_list " in
   ;;
 esac
 
+for target in $target_list; do
+  case "$target" in
+arm-softmmu | aarch64-softmmu | i386-softmmu | x86_64-softmmu)
+  edk2_blobs="yes"
+  ;;
+  esac
+done
+
 feature_not_found() {
   feature=$1
   remedy=$2
@@ -7526,6 +7535,10 @@ if test "$libudev" != "no"; then
 echo "LIBUDEV_LIBS=$libudev_libs" >> $config_host_mak
 fi
 
+if test "$edk2_blobs" = "yes" ; then
+  echo "DECOMPRESS_EDK2_BLOBS=y" >> $config_host_mak
+fi
+
 # use included Linux headers
 if test "$linux" = "yes" ; then
   mkdir -p linux-headers
-- 
2.21.0




Re: [PATCH v2 1/2] configure: Only decompress EDK2 blobs for X86/ARM targets

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 12:45:30PM +0100, Philippe Mathieu-Daudé wrote:
> The EDK2 firmware blobs only target the X86/ARM architectures.
> Define the DECOMPRESS_EDK2_BLOBS variable and only decompress
> the blobs when the variable exists.
> 
> Fixes: 536d2173b2b
> Suggested-by: Thomas Huth 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
> v2: new
> ---
>  Makefile  |  2 ++
>  configure | 13 +
>  2 files changed, 15 insertions(+)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v2 02/11] qcrypto-luks: extend the create options for upcoming encryption key management

2019-11-08 Thread Maxim Levitsky
On Fri, 2019-11-08 at 11:48 +0100, Max Reitz wrote:
> On 08.11.19 10:28, Maxim Levitsky wrote:
> > On Fri, 2019-10-04 at 19:42 +0200, Max Reitz wrote:
> > > On 13.09.19 00:30, Maxim Levitsky wrote:
> > > > Now you can specify which slot to put the encryption key in.
> > > > Plus, add an 'active' option which will let the user erase the key secret
> > > > instead of adding it.
> > > > Check that active=true when creating.
> > > > 
> > > > Signed-off-by: Maxim Levitsky 
> > > > ---
> > > >  block/crypto.c |  2 ++
> > > >  block/crypto.h | 16 +++
> > > >  block/qcow2.c  |  2 ++
> > > >  crypto/block-luks.c| 26 +++---
> > > >  qapi/crypto.json   | 19 ++
> > > >  tests/qemu-iotests/082.out | 54 ++
> > > >  6 files changed, 115 insertions(+), 4 deletions(-)
> > > 
> > > (Just doing a cursory RFC-style review)
> > > 
> > > I think we also want to reject unlock-secret if it’s given for creation;
> > 
> > Agree, I'll do this in the next version.
> > 
> > > and I suppose it’d be more important to print which slots are OK than
> > > the slot the user has given.  (It isn’t like we shouldn’t print that
> > > slot index, but it’s more likely the user knows that than what the
> > > limits are.  I think.)
> > 
> > I don't really understand what you mean here :-( 
> > 
> > Since this is a QMP interface,
> > I can't really print anything from it, other than error messages.
> 
> Exactly, I’m referring to the error message.  Right now it’s:
> 
> "Invalid slot %" PRId64 " is specified", luks_opts.slot
> 
> I think it should be something like:
> 
> "Invalid slot %" PRId64 " specified, must be between 0 and %u",
> luks_opt.slot, QCRYPTO_BLOCK_LUKS_NUM_KEY_SLOTS - 1

This is a very good idea! Implemented now; I will
post it in the next version.

Best regards,
Maxim Levitsky






Re: [PATCH v2 2/2] configure: Check bzip2 is available

2019-11-08 Thread Daniel P . Berrangé
On Fri, Nov 08, 2019 at 12:45:31PM +0100, Philippe Mathieu-Daudé wrote:
> The bzip2 tool is not included in default installations.
> On freshly installed systems, ./configure succeeds but 'make'
> might fail later:
> 
> BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
>   /bin/sh: bzip2: command not found
>   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
>   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
>   make: *** Waiting for unfinished jobs
> 
> Add a check in ./configure to warn the user if bzip2 is missing.
> 
> Fixes: 536d2173b2b
> Reported-by: Thomas Huth 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
> v2: use better English (Daniel)
> (Not taking Daniel Reviewed-by because logic changed)
> ---
>  configure | 4 
>  1 file changed, 4 insertions(+)

> diff --git a/configure b/configure
> index 9b322284c3..2b419a8039 100755
> --- a/configure
> +++ b/configure
> @@ -2147,6 +2147,7 @@ case " $target_list " in
>;;
>  esac
>  
> +# Some firmware binaries are compressed with bzip2

Squash into previous patch

>  for target in $target_list; do
>case "$target" in
>  arm-softmmu | aarch64-softmmu | i386-softmmu | x86_64-softmmu)
> @@ -2154,6 +2155,9 @@ for target in $target_list; do
>;;
>esac
>  done
> +if test "$edk2_blobs" = "yes" && ! has bzip2; then
> +  error_exit "The bzip2 program is required for building QEMU"
> +fi
>  
>  feature_not_found() {
>feature=$1

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v2 1/2] configure: Only decompress EDK2 blobs for X86/ARM targets

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 12:45 PM, Philippe Mathieu-Daudé wrote:

The EDK2 firmware blobs only target the X86/ARM architectures.
Define the DECOMPRESS_EDK2_BLOBS variable and only decompress
the blobs when the variable exists.

Fixes: 536d2173b2b
Suggested-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
v2: new
---
  Makefile  |  2 ++
  configure | 13 +


Oops sorry, new workspace, I hadn't installed scripts/git.orderfile.


  2 files changed, 15 insertions(+)

diff --git a/Makefile b/Makefile
index aa9d1a42aa..3430eca532 100644
--- a/Makefile
+++ b/Makefile
@@ -480,7 +480,9 @@ $(SOFTMMU_ALL_RULES): $(chardev-obj-y)
  $(SOFTMMU_ALL_RULES): $(crypto-obj-y)
  $(SOFTMMU_ALL_RULES): $(io-obj-y)
  $(SOFTMMU_ALL_RULES): config-all-devices.mak
+ifdef DECOMPRESS_EDK2_BLOBS
  $(SOFTMMU_ALL_RULES): $(edk2-decompressed)
+endif
  
  .PHONY: $(TARGET_DIRS_RULES)

  # The $(TARGET_DIRS_RULES) are of the form SUBDIR/GOAL, so that
diff --git a/configure b/configure
index efe165edf9..9b322284c3 100755
--- a/configure
+++ b/configure
@@ -427,6 +427,7 @@ softmmu="yes"
  linux_user="no"
  bsd_user="no"
  blobs="yes"
+edk2_blobs="no"
  pkgversion=""
  pie=""
  qom_cast_debug="yes"
@@ -2146,6 +2147,14 @@ case " $target_list " in
;;
  esac
  
+for target in $target_list; do

+  case "$target" in
+arm-softmmu | aarch64-softmmu | i386-softmmu | x86_64-softmmu)
+  edk2_blobs="yes"
+  ;;
+  esac
+done
+
  feature_not_found() {
feature=$1
remedy=$2
@@ -7526,6 +7535,10 @@ if test "$libudev" != "no"; then
  echo "LIBUDEV_LIBS=$libudev_libs" >> $config_host_mak
  fi
  
+if test "$edk2_blobs" = "yes" ; then

+  echo "DECOMPRESS_EDK2_BLOBS=y" >> $config_host_mak
+fi
+
  # use included Linux headers
  if test "$linux" = "yes" ; then
mkdir -p linux-headers





Re: [PATCH] tests/migration: Print some debug on bad status

2019-11-08 Thread Thomas Huth

On 08/11/2019 11.43, Dr. David Alan Gilbert (git) wrote:

From: "Dr. David Alan Gilbert" 

We're seeing occasional asserts in 'wait_for_migration_fail' that
I can't reliably reproduce, and where the cores don't have any useful
state.  Print the 'status' out, so we can see which unexpected state
we're ending up in.

Signed-off-by: Dr. David Alan Gilbert 
---
  tests/migration-test.c | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/migration-test.c b/tests/migration-test.c
index 59f291c654..ac780dffda 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -899,8 +899,13 @@ static void wait_for_migration_fail(QTestState *from, bool 
allow_active)
  
  do {

  status = migrate_query_status(from);
-g_assert(!strcmp(status, "setup") || !strcmp(status, "failed") ||
- (allow_active && !strcmp(status, "active")));
+bool result = !strcmp(status, "setup") || !strcmp(status, "failed") ||
+ (allow_active && !strcmp(status, "active"));
+if (!result) {
+fprintf(stderr, "%s: unexpected status status=%s 
allow_active=%d\n",
+__func__, status, allow_active);
+}
+g_assert(result);
  failed = !strcmp(status, "failed");
  g_free(status);
  } while (!failed);



Reviewed-by: Thomas Huth 




Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 12:01 PM, Laszlo Ersek wrote:

On 11/08/19 11:28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

 BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
   /bin/sh: bzip2: command not found
   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
   make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.


We've come full circle. Let me explain:



Fixes: 536d2173b2b


So this makes me kinda grumpy. If you look at the v3 posting of the patch that 
would later become commit 536d2173b2b:

   http://mid.mail-archive.com/20190321113408.19929-8-lersek@redhat.com

you see the following note in the changelog:

 - compress FD files with bzip2 rather than xz, so that decompression at
   "make install" time succeed on older build OSes too [Peter]

So I couldn't use xz because that was "too new" for some build OSes, but now we also 
can't take bzip2 for granted because that's "too old" for some other build OSes? This is 
ridiculous.

To be clear, my disagreement is only with the "Fixes" tag. For me, "Fixes" stands for 
something that, in retrospect, can be proven to have been a bug at the time the code was *originally* 
committed. But, at the time, taking "bzip2" for granted was *not* a bug. The conditions / 
circumstances have changed more recently, and the assumption about bzip2 has been invalidated *after* adding 
a dependency on bzip2.

Nonetheless, thank you for adapting the code to the potential absence of bzip2. Can you perhaps go 
in some details in the commit message, near "not included in default installations" and 
"freshly installed systems"? If we can, we should identify the exact distro release where 
this problem has been encountered (and I wouldn't mind a link to the BZ or ticket under which 
people agreed to remove bzip2 from the default package set).


I am just reading this and already sent a v2.

I can amend these details. Thomas, what distro release were you using?



Reviewed-by: Laszlo Ersek 

Thanks
Laszlo


Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
  configure | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys; 
print("%d.%d.%d" % (sys.version_info[0]
  # Suppress writing compiled files
  python="$python -B"
  
+# Some firmware binaries are compressed with bzip2

+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi
+
  # Check that the C compiler works. Doing this here before testing
  # the host CPU ensures that we had a valid CC to autodetect the
  # $cpu var (and we should bail right here if that's not the case).







Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Thomas Huth

On 08/11/2019 12.54, Philippe Mathieu-Daudé wrote:

On 11/8/19 12:01 PM, Laszlo Ersek wrote:

On 11/08/19 11:28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

 BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
   /bin/sh: bzip2: command not found
   make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
   make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
   make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.


We've come full circle. Let me explain:



Fixes: 536d2173b2b


So this makes me kinda grumpy. If you look at the v3 posting of the 
patch that would later become commit 536d2173b2b:


   http://mid.mail-archive.com/20190321113408.19929-8-lersek@redhat.com

you see the following note in the changelog:

 - compress FD files with bzip2 rather than xz, so that decompression at
   "make install" time succeed on older build OSes too [Peter]

So I couldn't use xz because that was "too new" for some build OSes, 
but now we also can't take bzip2 for granted because that's "too old" 
for some other build OSes? This is ridiculous.


To be clear, my disagreement is only with the "Fixes" tag. For me, 
"Fixes" stands for something that, in retrospect, can be proven to 
have been a bug at the time the code was *originally* committed. But, 
at the time, taking "bzip2" for granted was *not* a bug. The 
conditions / circumstances have changed more recently, and the 
assumption about bzip2 has been invalidated *after* adding a 
dependency on bzip2.


Nonetheless, thank you for adapting the code to the potential absence 
of bzip2. Can you perhaps go in some details in the commit message, 
near "not included in default installations" and "freshly installed 
systems"? If we can, we should identify the exact distro release where 
this problem has been encountered (and I wouldn't mind a link to the 
BZ or ticket under which people agreed to remove bzip2 from the 
default package set).


I am just reading this and already sent a v2.

I can amend these details. Thomas, what distro release were you using?


I encountered this problem with a freshly installed Fedora 31.

 Thomas




Re: [PATCH 2/3] docs: build a global index page

2019-11-08 Thread Peter Maydell
On Fri, 8 Nov 2019 at 11:39, Stefan Hajnoczi  wrote:
>
> On Fri, Nov 8, 2019 at 11:52 AM Peter Maydell  
> wrote:
> > So the reason I went for the odd "run sphinx multiple times"
> > approach was because we don't want to ship 'devel' to users,
> > and as far as I could tell there was no way to have a
> > single-invocation build of the docs not include it in
> > eg the index/search (which would then be broken when
> > we don't install devel/ as part of 'make install').
> > Does this patchset result in a set of installed docs
> > that don't have dangling references ?
>
> You are right:
>  * The hidden documents are included in the navigation bar (different
> from the table of contents).
>  * The search index (which install-doc omits!) includes content from
> the hidden documents.
>
> I have asked on #sphinx-doc.  Let's see if there is a solution.

FWIW, this is the thread where I asked on the sphinx-users
list about the best way to handle "multiple manuals but we
only ship a subset":

https://www.mail-archive.com/sphinx-users@googlegroups.com/msg03224.html

> It might be possible to hack docs/conf.py to exclude devel/ when run
> from make install-sphinxdocs
> (https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-exclude_patterns).
> This requires building the docs again at install time but the
> advantage is we get a single searchable set of documentation.

Yeah, if you're willing to build the docs twice (once for local
and once for installed) that also works. I'd prefer not to
do build work at 'make install' time, though -- if we want
to do this we should build them twice at 'make' time and
install just the right one.
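
Roughly, that would mean two sphinx-build runs at build time, along these
lines (the output paths and the environment variable are made up, and this
assumes the Sphinx config honours such a variable):

  # full manual set, for browsing from the build tree
  sphinx-build -b html docs docs/built/all
  # user-facing subset, which "make install" would then copy
  QEMU_DOCS_SKIP_DEVEL=1 sphinx-build -b html docs docs/built/dist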

thanks
-- PMM



Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 12:39 PM, Laszlo Ersek wrote:

On 11/08/19 11:42, Philippe Mathieu-Daudé wrote:

On 11/8/19 11:34 AM, Thomas Huth wrote:

On 08/11/2019 11.28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

  BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
    /bin/sh: bzip2: command not found
    make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
    make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
    make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
   configure | 7 +++
   1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys;
print("%d.%d.%d" % (sys.version_info[0]
   # Suppress writing compiled files
   python="$python -B"
+# Some firmware binaries are compressed with bzip2
+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi


It's only required for the edk2 binaries, isn't it? So should this
maybe also check whether we build any of the i386-softmmu,
x86_64-softmmu arm-softmmu or aarch64-softmmu targets and pass otherwise?


I have this on my TODO somewhere, because we extract the edk2 firmwares
even if we only build MIPS/PowerPC.


But that applies to all of "BLOBS" in the root-level Makefile, doesn't
it? We also install, say, "vgabios-qxl.bin", when only the MIPS/PowerPC
system emulators are built. The only aspect that's specific to edk2
binaries is that they are kept compressed until installation time, to
save space in the git repo and in the source tarball.


You are right, other targets could improve this too.
Since this adds quite some complexity to the buildsys and nobody complained
about it previously, I suggest we clean that up *after* we switch the
build machinery to Meson.



I'm reminded of the "external QEMU blob / firmware repo" idea.


Do you mind starting a new thread asking about it? It would be nice if we
could clear this up during the next dev cycle.


Regards,

Phil.



Re: [PATCH v2 0/2] Deprecate implicit filters

2019-11-08 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20191108101655.10611-1-vsement...@virtuozzo.com/



Hi,

This series failed the docker-mingw@fedora build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

  SIGN    pc-bios/optionrom/pvh.bin
  GEN docs/interop/qemu-ga-ref.7
/tmp/qemu-test/src/qemu-deprecated.texi:210: @option expected braces
make: *** [Makefile:994: qemu-doc.txt] Error 1
make: *** Waiting for unfinished jobs
/tmp/qemu-test/src/qemu-deprecated.texi:210: @option expected braces
make: *** [Makefile:987: qemu-doc.html] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 662, in 
sys.exit(main())
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=3d404482279f4fe68fcb2c9971f2480a', '-u', 
'1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-mlpokwtg/src/docker-src.2019-11-08-06.56.33.23805:/var/tmp/qemu:z,ro',
 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=3d404482279f4fe68fcb2c9971f2480a
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-mlpokwtg/src'
make: *** [docker-run-test-mingw@fedora] Error 2

real4m11.170s
user0m4.404s


The full log is available at
http://patchew.org/logs/20191108101655.10611-1-vsement...@virtuozzo.com/testing.docker-mingw@fedora/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [PATCH] tests/migration: Print some debug on bad status

2019-11-08 Thread Philippe Mathieu-Daudé

On 11/8/19 11:43 AM, Dr. David Alan Gilbert (git) wrote:

From: "Dr. David Alan Gilbert" 

We're seeing occasional asserts in 'wait_for_migration_fail' that
I can't reliably reproduce, and where the cores don't have any useful
state.  Print the 'status' out, so we can see which unexpected state
we're ending up in.


Back to the good remote printf() debugging :)

Reviewed-by: Philippe Mathieu-Daudé 


Signed-off-by: Dr. David Alan Gilbert 
---
  tests/migration-test.c | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/migration-test.c b/tests/migration-test.c
index 59f291c654..ac780dffda 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -899,8 +899,13 @@ static void wait_for_migration_fail(QTestState *from, bool 
allow_active)
  
  do {

  status = migrate_query_status(from);
-g_assert(!strcmp(status, "setup") || !strcmp(status, "failed") ||
- (allow_active && !strcmp(status, "active")));
+bool result = !strcmp(status, "setup") || !strcmp(status, "failed") ||
+ (allow_active && !strcmp(status, "active"));
+if (!result) {
+fprintf(stderr, "%s: unexpected status status=%s 
allow_active=%d\n",
+__func__, status, allow_active);
+}
+g_assert(result);
  failed = !strcmp(status, "failed");
  g_free(status);
  } while (!failed);





Re: [PATCH] configure: Check bzip2 is available

2019-11-08 Thread Thomas Huth

On 08/11/2019 12.39, Laszlo Ersek wrote:

On 11/08/19 11:42, Philippe Mathieu-Daudé wrote:

On 11/8/19 11:34 AM, Thomas Huth wrote:

On 08/11/2019 11.28, Philippe Mathieu-Daudé wrote:

The bzip2 tool is not included in default installations.
On freshly installed systems, ./configure succeeds but 'make'
might fail later:

  BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
    /bin/sh: bzip2: command not found
    make: *** [Makefile:305: pc-bios/edk2-i386-secure-code.fd] Error 127
    make: *** Deleting file 'pc-bios/edk2-i386-secure-code.fd'
    make: *** Waiting for unfinished jobs

Add a check in ./configure to warn the user if bzip2 is missing.

Fixes: 536d2173b2b
Reported-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
---
   configure | 7 +++
   1 file changed, 7 insertions(+)

diff --git a/configure b/configure
index efe165edf9..9957e913e8 100755
--- a/configure
+++ b/configure
@@ -1851,6 +1851,13 @@ python_version=$($python -c 'import sys;
print("%d.%d.%d" % (sys.version_info[0]
   # Suppress writing compiled files
   python="$python -B"
+# Some firmware binaries are compressed with bzip2
+if has bzip2; then
+  :
+else
+  error_exit "bzip2 program not found. Please install it"
+fi


It's only required for the edk2 binaries, isn't it? So should this
maybe also check whether we build any of the i386-softmmu,
x86_64-softmmu arm-softmmu or aarch64-softmmu targets and pass otherwise?


I have this on my TODO somewhere, because we extract the edk2 firmwares
even if we only build MIPS/PowerPC.


But that applies to all of "BLOBS" in the root-level Makefile, doesn't
it? We also install, say, "vgabios-qxl.bin", when only the MIPS/PowerPC
system emulators are built.


IIRC there was another odd dependency where the PCI devices need their
ROMs even on non-x86 systems... but that's another story...



The only aspect that's specific to edk2
binaries is that they are kept compressed until installation time, to
save space in the git repo and in the source tarball.


I noticed that there are also some iotests that use bzip2 ... they are 
not used during the build process, but still, it might be better to 
simply always require bzip2, also in case some other architectures want 
to use it. So I'm also fine if we simply always require bzip2 for the build.


 Thomas

PS: Anybody interested in writing a patch to compress the other big 
binaries in the pc-bios folder, too? ... skiboot.lid ... 
openbios-sparc64 ... ppc_rom.bin ...




