On Wed, Feb 07, 2018 at 12:11:50PM +0100, Paolo Bonzini wrote:
> On 06/02/2018 21:30, Roman Kagan wrote:
> > +    blk_io_plug(d->conf.blk);
> > +    if (scsi_req_enqueue(sreq)) {
> > +        scsi_req_continue(sreq);
> > +    }
> > +    blk_io_unplug(d->conf.
On Wed, Feb 07, 2018 at 01:00:14PM +0100, Paolo Bonzini wrote:
> On 06/02/2018 21:30, Roman Kagan wrote:
> > +/* NdisInitialize message */
> > +struct rndis_initialize_request {
> > +    uint32_t req_id;
> > +    uint32_t major_ver;
> > +    uint32_t minor_ver
On Tue, Feb 27, 2018 at 12:56:49PM +0100, Richard Palethorpe wrote:
> Following on from the discussion about creating savevm/loadvm QMP
> equivalents. I decided to take the advice given that we should use external
> snapshots. However reverting to a snapshot currently requires QEMU to be
> restarte
On Mon, Mar 02, 2020 at 01:55:02PM +0300, Roman Kagan wrote:
> On Thu, Feb 13, 2020 at 04:55:44PM +0300, Roman Kagan wrote:
> > On Thu, Feb 13, 2020 at 06:47:10AM -0600, Eric Blake wrote:
> > > On 2/13/20 2:01 AM, Roman Kagan wrote:
> > > > On Wed, Feb 12, 2020 a
sizes handy
at times.
Make them 32 bit instead and lift the limitation up to 2 MiB which
appears to be good enough for everybody.
As the values can now be fairly big and awkward to type, make the
property setter accept common size suffixes (k, m).
Signed-off-by: Roman Kagan
---
v1 -> v2:
-
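As a purely illustrative aside on the "accept common size suffixes (k, m)" point above, here is a minimal standalone sketch of such a setter-side parser with the 2 MiB cap mentioned in the cover letter. All names here (parse_blocksize) are hypothetical; this is not QEMU's actual qdev property code.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: parse "4096", "64k", "2m" into bytes, capped at 2 MiB. */
static int parse_blocksize(const char *str, uint32_t *out)
{
    char *end;
    unsigned long long val = strtoull(str, &end, 0);
    unsigned long long mul = 1;

    if (end == str) {
        return -EINVAL;                 /* no digits at all */
    }
    if (*end == 'k' || *end == 'K') {
        mul = 1024;
        end++;
    } else if (*end == 'm' || *end == 'M') {
        mul = 1024 * 1024;
        end++;
    }
    if (*end != '\0' || val > 2 * 1024 * 1024 / mul) {
        return -EINVAL;                 /* trailing junk or above the 2 MiB cap */
    }
    *out = (uint32_t)(val * mul);
    return 0;
}

int main(void)
{
    uint32_t bs;
    if (parse_blocksize("64k", &bs) == 0) {
        printf("block size: %u bytes\n", bs);   /* prints 65536 */
    }
    return 0;
}

(The sketch skips the power-of-two check a real logical_block_size setter would also need.)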
On Thu, Feb 13, 2020 at 04:55:44PM +0300, Roman Kagan wrote:
> On Thu, Feb 13, 2020 at 06:47:10AM -0600, Eric Blake wrote:
> > On 2/13/20 2:01 AM, Roman Kagan wrote:
> > > On Wed, Feb 12, 2020 at 03:44:19PM -0600, Eric Blake wrote:
> > > > On 2/11/20 5:54 AM, Roman
ff-by: Roman Kagan
Reviewed-by: Eric Blake
---
v2 -> v3:
- mention qcow2 cluster size limit in the log and comment [Eric]
v1 -> v2:
- cap the property at 2 MiB [Eric]
- accept size suffixes
include/hw/block/block.h | 8
include/hw/qdev-properties.h | 2 +-
hw/core/qdev-p
On Wed, Apr 29, 2020 at 11:41:04AM +0200, Philippe Mathieu-Daudé wrote:
> Cc'ing virtio-blk and scsi maintainers.
>
> On 4/29/20 11:18 AM, Roman Kagan wrote:
> > Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> > 32-bit for logical_block_size
On Wed, Apr 29, 2020 at 02:59:31PM +0200, Philippe Mathieu-Daudé wrote:
> On 4/29/20 2:19 PM, Roman Kagan wrote:
> > On Wed, Apr 29, 2020 at 11:41:04AM +0200, Philippe Mathieu-Daudé wrote:
> > > Cc'ing virtio-blk and scsi maintainers.
> > >
> > &g
On Tue, Oct 01, 2019 at 06:52:52PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Here is introduced ERRP_AUTO_PROPAGATE macro, to be used at start of
> functions with errp OUT parameter.
>
> It has three goals:
>
> 1. Fix issue with error_fatal & error_prepend/error_append_hint: user
> can't see t
On Tue, Mar 16, 2021 at 09:09:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 16.03.2021 19:03, Roman Kagan wrote:
> > On Mon, Mar 15, 2021 at 11:10:14PM +0300, Vladimir Sementsov-Ogievskiy
> > wrote:
> > > 15.03.2021 09:06, Roman Kagan wrote:
> > > > The rec
On Tue, Mar 16, 2021 at 09:37:13PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 16.03.2021 19:08, Roman Kagan wrote:
> > On Mon, Mar 15, 2021 at 11:15:44PM +0300, Vladimir Sementsov-Ogievskiy
> > wrote:
> > > 15.03.2021 09:06, Roman Kagan wrote:
> > > > As the
On Wed, Mar 17, 2021 at 11:35:31AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > The reconnection logic doesn't need to stop while in a drained section.
> > Moreover it has to be active during the drained section, as the requests
>
On Wed, Mar 17, 2021 at 11:35:31AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > The reconnection logic doesn't need to stop while in a drained section.
> > Moreover it has to be active during the drained section, as the requests
>
On Wed, Apr 07, 2021 at 01:46:24PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> The field is actually unused. Let's make things a bit simpler by
> dropping it and the corresponding logic.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 9 ++---
> 1 file changed, 2 insertions(+
On Wed, Apr 07, 2021 at 01:46:25PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to refactor connection logic to make it more
> understandable. Every bit that we can simplify in advance will help.
> Drop errp for now, it's unused anyway. We'll probably reimplement it in
> future.
Altho
On Wed, Apr 07, 2021 at 01:46:26PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> The field is used only to free it. Let's drop it for now for
> simplicity.
Well, it's *now* (after your patch 2) only used to free it. This conceals
the reconnect process even further from the user: the client
On Wed, Apr 07, 2021 at 01:46:29PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Add personal state NBDConnectThread for connect-thread and
> nbd_connect_thread_start() function. Next step would be moving
> connect-thread to a separate file.
>
> Note that we stop publishing thr->sioc during
> qio_c
nt.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 62 +----
> 1 file changed, 20 insertions(+), 42 deletions(-)
Reviewed-by: Roman Kagan
On Thu, Apr 08, 2021 at 05:08:19PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> These fields are write-only. Drop them.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 12 ++--
> 1 file changed, 2 insertions(+), 10 deletions(-)
Reviewed-by: Roman Kagan
a proper iothread: if the target
context was qemu_aio_context, an iothread would just schedule the
coroutine there, while a "dumb" thread would try to lock the context,
potentially resulting in a deadlock. This patch makes "dumb" threads
and iothreads behave identically when entering a coroutine on a foreign
context.
You may want to rephrase the log message to that end.
Anyway
Reviewed-by: Roman Kagan
ng connection API out of
> nbd.c (which is overcomplicated now).
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 49 +----
> 1 file changed, 9 insertions(+), 40 deletions(-)
Reviewed-by: Roman Kagan
-
> 1 file changed, 27 insertions(+), 76 deletions(-)
Reviewed-by: Roman Kagan
> ---
> block/nbd.c | 49 +++--
> 1 file changed, 31 insertions(+), 18 deletions(-)
Reviewed-by: Roman Kagan
> 1 file changed, 7 insertions(+), 9 deletions(-)
Reviewed-by: Roman Kagan
Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 127 ++--
> 1 file changed, 63 insertions(+), 64 deletions(-)
[To other reviewers: in addition to renaming there's one blank line
removed, hence the difference between (+) and (-)]
Reviewed-by: Roman Kagan
ov-Ogievskiy
> ---
> block/nbd.c | 15 +--
> 1 file changed, 9 insertions(+), 6 deletions(-)
Reviewed-by: Roman Kagan
nection.c | 192
> nbd/meson.build | 1 +
> 4 files changed, 204 insertions(+), 167 deletions(-)
> create mode 100644 nbd/client-connection.c
Reviewed-by: Roman Kagan
On Thu, Apr 08, 2021 at 05:08:17PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> This substitutes "[PATCH 00/14] nbd: move reconnect-thread to separate file"
> Supersedes: <20210407104637.36033-1-vsement...@virtuozzo.com>
>
> I want to simplify block/nbd.c which is overcomplicated now.
On Thu, Apr 08, 2021 at 06:54:30PM +0300, Roman Kagan wrote:
> On Thu, Apr 08, 2021 at 05:08:20PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > With the following patch we want to call aio_co_wake() from a thread.
> > And it works badly.
> > Assume we have no iothreads.
> >
A couple of bugfixes to block/nbd that look appropriate for 6.0.
Roman Kagan (2):
block/nbd: fix channel object leak
block/nbd: ensure ->connection_thread is always valid
block/nbd.c | 59 +++--
1 file changed, 30 insertions(+), 29 deleti
nbd_free_connect_thread leaks the channel object if it hasn't been
stolen.
Unref it and fix the leak.
Signed-off-by: Roman Kagan
---
block/nbd.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/nbd.c b/block/nbd.c
index c26dc5a54f..d86df3afcb 100644
--- a/block/nbd.c
+++ b/block/
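To illustrate the ownership rule behind this one-line fix, a standalone refcounting sketch follows. The names (channel_unref, steal_channel, free_connect_thread) are invented stand-ins, not QOM or the real block/nbd.c code; the point is only that the teardown path must drop its reference when nobody "stole" the object.

#include <stdio.h>
#include <stdlib.h>

struct channel {
    int refcnt;
};

static struct channel *channel_new(void)
{
    struct channel *c = calloc(1, sizeof(*c));
    c->refcnt = 1;
    return c;
}

static void channel_unref(struct channel *c)
{
    if (c && --c->refcnt == 0) {
        printf("channel freed\n");
        free(c);
    }
}

struct connect_thread {
    struct channel *sioc;   /* owned reference unless it was "stolen" */
};

/* The caller takes ownership of the channel ("steals" it). */
static struct channel *steal_channel(struct connect_thread *thr)
{
    struct channel *c = thr->sioc;
    thr->sioc = NULL;
    return c;
}

/* The fix: if nobody stole the channel, drop our reference here;
 * otherwise the object leaks when the thread state is freed. */
static void free_connect_thread(struct connect_thread *thr)
{
    channel_unref(thr->sioc);
    free(thr);
}

int main(void)
{
    struct connect_thread *thr = calloc(1, sizeof(*thr));
    thr->sioc = channel_new();
    free_connect_thread(thr);   /* channel was not stolen: unref frees it */
    return 0;
}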
Simplify lifetime management of BDRVNBDState->connection_thread by
delaying the possible cleanup of it until the BDRVNBDState itself goes
away.
This also fixes possible use-after-free in nbd_co_establish_connection
when it races with nbd_co_establish_connection_cancel.
Signed-off-by: Roman Ka
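As a rough standalone illustration of the lifetime pattern this commit message describes (all names are made up; this is not the block/nbd.c code): the per-connection helper state is no longer freed by whoever happens to finish first, it goes away only together with its owner, so a racing user can never see it freed.

#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-ins for BDRVNBDState and its connect-thread state. */
struct conn_thread {
    bool running;
};

struct nbd_state {
    struct conn_thread *thr;   /* freed only together with nbd_state */
};

static struct nbd_state *nbd_state_new(void)
{
    struct nbd_state *s = calloc(1, sizeof(*s));
    s->thr = calloc(1, sizeof(*s->thr));
    return s;
}

/* Cancel only signals the helper; it must NOT free s->thr, otherwise a
 * concurrent user of s->thr (e.g. the establish path) may dereference
 * freed memory. */
static void nbd_connection_cancel(struct nbd_state *s)
{
    s->thr->running = false;
}

/* The only place where the helper state goes away: together with its owner. */
static void nbd_state_free(struct nbd_state *s)
{
    free(s->thr);
    free(s);
}

int main(void)
{
    struct nbd_state *s = nbd_state_new();
    nbd_connection_cancel(s);   /* safe to call any number of times */
    nbd_state_free(s);
    return 0;
}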
On Sat, Apr 10, 2021 at 12:56:34PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 10.04.2021 11:38, Vladimir Sementsov-Ogievskiy wrote:
> > 10.04.2021 11:06, Vladimir Sementsov-Ogievskiy wrote:
> > > 09.04.2021 19:04, Roman Kagan wrote:
> > > > Simplify lifet
l not hurt:
>
> pre-patch, on first hunk we'll just crash if thr is NULL,
> on second hunk it's safe to return -1, and using thr when
> s->connect_thread is already zeroed is obviously wrong.
>
> block/nbd.c | 11 +++
> 1 file changed, 11 insertions(+)
Can we please get it merged in 6.0 as it's a genuine crasher fix?
Reviewed-by: Roman Kagan
On Fri, Mar 12, 2021 at 03:35:25PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 10.03.2021 12:32, Roman Kagan wrote:
> > NBD connect coroutine takes an extra in_flight reference as if it's a
> > request handler. This prevents drain from completing until the
> > connec
("nbd: Restrict
connection_co reentrance"); as I've missed the point of that commit I'd
appreciate more scrutiny in this area.
Roman Kagan (7):
block/nbd: avoid touching freed connect_thread
block/nbd: use uniformly nbd_client_connecting_wait
block/nbd: assert attach/
with the drained section in the reconnection code.
Fixes: 5ad81b4946 ("nbd: Restrict connection_co reentrance")
Fixes: 8c517de24a ("block/nbd: fix drain dead-lock because of nbd
reconnect-delay")
Signed-off-by: Roman Kagan
---
block/nbd.c | 79 +++
Document (via a comment and an assert) that
nbd_client_detach_aio_context and nbd_client_attach_aio_context_bh run
in the desired aio_context.
Signed-off-by: Roman Kagan
---
block/nbd.c | 12
1 file changed, 12 insertions(+)
diff --git a/block/nbd.c b/block/nbd.c
index 1d8edb5b21
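A generic standalone analogue of "document the expected context with a comment plus an assert": this is not QEMU's AioContext API (which has qemu_get_current_aio_context() and friends), just the same assertion pattern shown with plain pthreads.

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* The thread that is allowed to run the callback below; a stand-in for
 * "the desired aio_context" from the commit message. */
static pthread_t home_thread;

/* Must run in the home thread; the assert documents and enforces that,
 * instead of leaving the requirement implicit. */
static void detach_cb(void)
{
    assert(pthread_equal(pthread_self(), home_thread));
    printf("running in the expected context\n");
}

int main(void)
{
    home_thread = pthread_self();
    detach_cb();                 /* fine: called from the home thread */
    return 0;
}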
As the reconnect logic no longer interferes with drained sections, it
appears unnecessary to explicitly manipulate the in_flight counter.
Fixes: 5ad81b4946 ("nbd: Restrict connection_co reentrance")
Signed-off-by: Roman Kagan
---
block/nbd.c | 6 --
nbd/client.c | 2 --
2 files
Cosmetic: adjust the comment and the return value in
nbd_co_establish_connection where it's entered while the connection
thread is still running.
Signed-off-by: Roman Kagan
---
block/nbd.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/block/nbd.c b/block/nbd.c
Use nbd_client_connecting_wait uniformly all over the block/nbd.c.
While at it, drop the redundant check for nbd_client_connecting_wait
in reconnect_delay_timer_init, as all its callsites do this check too.
Signed-off-by: Roman Kagan
---
block/nbd.c | 34 +++---
1
reconnection logic on entry and starts it over on exit. However, this
patch paves the way to keeping the reconnection process active across
the drained section (in a followup patch).
Signed-off-by: Roman Kagan
---
block/nbd.c | 44 ++--
1 file changed, 42 insertions
ion thread data.
To prevent this, revalidate the ->connect_thread pointer in
nbd_co_establish_connection_cancel before using it after the yield.
Signed-off-by: Roman Kagan
---
block/nbd.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/block/nbd.c b/block/nbd.c
index c26dc5a54
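A hedged standalone sketch of the "revalidate a shared pointer after waiting" pattern this fix describes. The names and the pthread-based wait are invented for illustration; block/nbd.c uses coroutine yields, not condition variables.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct thr_data { int dummy; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static struct thr_data *connect_thread;   /* may be detached concurrently */
static int done;

static int establish_connection(void)
{
    pthread_mutex_lock(&lock);
    while (!done) {
        pthread_cond_wait(&cond, &lock);   /* the "yield" point */
    }
    /* Revalidate after waking up: the pointer we saw before the wait may
     * have been detached (set to NULL) and freed while we were blocked. */
    if (!connect_thread) {
        pthread_mutex_unlock(&lock);
        return -1;
    }
    connect_thread->dummy = 1;
    pthread_mutex_unlock(&lock);
    return 0;
}

int main(void)
{
    connect_thread = calloc(1, sizeof(*connect_thread));

    /* Simulate a concurrent cancel that detached and freed the state
     * before establish_connection() got to run. */
    pthread_mutex_lock(&lock);
    free(connect_thread);
    connect_thread = NULL;
    done = 1;
    pthread_mutex_unlock(&lock);

    printf("establish: %d\n", establish_connection());   /* prints -1 */
    return 0;
}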
On Mon, Mar 15, 2021 at 07:41:32PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > Document (via a comment and an assert) that
> > nbd_client_detach_aio_context and nbd_client_attach_aio_context_bh run
> > in the desired aio_context
On Mon, Mar 15, 2021 at 06:40:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > When the NBD connection is being torn down, the connection thread gets
> > canceled and "detached", meaning it is about to get freed.
> >
On Mon, Mar 15, 2021 at 10:45:39PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > The reconnection logic doesn't need to stop while in a drained section.
> > Moreover it has to be active during the drained section, as the requests
>
On Mon, Mar 15, 2021 at 11:15:44PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > As the reconnect logic no longer interferes with drained sections, it
> > appears unnecessary to explicitly manipulate the in_flight counter.
> >
> &
On Mon, Mar 15, 2021 at 11:10:14PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 09:06, Roman Kagan wrote:
> > The reconnection logic doesn't need to stop while in a drained section.
> > Moreover it has to be active during the drained section, as the requests
>
On Tue, Mar 16, 2021 at 09:41:36AM -0500, Eric Blake wrote:
> On 3/15/21 1:06 AM, Roman Kagan wrote:
> > The reconnection logic doesn't need to stop while in a drained section.
> > Moreover it has to be active during the drained section, as the requests
> > that were
er in
.bdrv_{attach,detach}_aio_context callbacks.
Fixes: 5ad81b4946 ("nbd: Restrict connection_co reentrance")
Signed-off-by: Roman Kagan
---
This patch passes the regular make check but fails some extra iotests,
in particular 277. It obviously lacks more robust interaction with the
co
On Thu, May 13, 2021 at 11:04:37PM +0200, Paolo Bonzini wrote:
> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
> > > >
> > >
> > > I don't understand. Why doesn't aio_co_enter go through the ctx !=
> > > qemu_get_current_aio_context() branch and just do aio_co_schedule?
> > > That was a
On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We have two "return error" paths in nbd_open() after
> nbd_process_options(). Actually we should call nbd_clear_bdrvstate()
> on these paths. Interesting that nbd_process_options() calls
> nbd_clear_bdrvstate() by itsel
On Thu, Apr 22, 2021 at 01:27:22AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 21.04.2021 17:00, Roman Kagan wrote:
> > On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy
> > wrote:
> > > @@ -2305,20 +2301,23 @@ static int nbd_open(BlockDriverState *b
On Fri, Apr 16, 2021 at 11:08:42AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 2 ++
> 1 file changed, 2 insertions(+)
Reviewed-by: Roman Kagan
On Fri, Apr 16, 2021 at 11:08:44AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> With the following patch we want to call wake coroutine from thread.
> And it doesn't work with aio_co_wake:
> Assume we have no iothreads.
> Assume we have a coroutine A, which waits in the yield point for
> external a
On Fri, Apr 16, 2021 at 11:08:45AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Instead of connect_bh, bh_ctx and wait_connect fields we can live with
> only one link to waiting coroutine, protected by mutex.
>
> So new logic is:
>
> nbd_co_establish_connection() sets wait_co under mutex, release
On Fri, Apr 16, 2021 at 11:08:46AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We don't need all these states. The code refactored to use two boolean
> variables looks simpler.
Indeed.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 125 ++
ile changed, 68 insertions(+), 69 deletions(-)
Reviewed-by: Roman Kagan
On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/nbd.c | 43 ++-
> 1 file changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/block/nbd.c b/block/nbd.c
> index
On Fri, Apr 16, 2021 at 11:08:52AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We now have bs-independent connection API, which consists of four
> functions:
>
> nbd_client_connection_new()
> nbd_client_connection_unref()
> nbd_co_establish_connection()
> nbd_co_establish_connection_cance
On Fri, Apr 16, 2021 at 11:08:53AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> nbd/client-connection.c | 94 ++---
> 1 file changed, 42 insertions(+), 52 deletions(-)
>
> diff --git a/nbd/client-connection.c
lib/x86_64-linux-gnu/libpthread.so.0
Fix it by checking that the connection coroutine is non-null before
trying to enter it. If it is null, no entering is needed, as the
connection is probably going down anyway.
Signed-off-by: Roman Kagan
---
block/nbd.c | 16 +---
1 file changed
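A minimal standalone sketch of the NULL-guard pattern described above. The coroutine type and the enter call are stand-ins, not the real QEMU coroutine API.

#include <stdio.h>

struct coroutine { const char *name; };

static void coroutine_enter(struct coroutine *co)
{
    printf("entering %s\n", co->name);
}

/* connection_co may already be NULL if the connection is being torn down;
 * entering a NULL coroutine would crash, so guard the call. */
static void wake_connection(struct coroutine *connection_co)
{
    if (!connection_co) {
        return;   /* nothing to wake: connection is going down anyway */
    }
    coroutine_enter(connection_co);
}

int main(void)
{
    struct coroutine co = { "connection_co" };
    wake_connection(&co);     /* enters the coroutine */
    wake_connection(NULL);    /* safely does nothing */
    return 0;
}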
nect-delay" runs a stress load (fio with big queue depth)
in the guest on that drive and is migrated (e.g. to a file), while the
nbd server is SIGKILL-ed and restarted every second.
See the individual patches for specific crash conditions and more
detailed explanations.
Roman Kagan (3):
block/n
shing the connection.
Fix it by turning every negative return from qio_channel_read_all into
-EIO returned from nbd_read. Apparently that was the original behavior,
but it got broken later. Also adjust nbd_readXX to follow.
Fixes: e6798f06a6 ("nbd: generalize usage of nbd_read")
Signed-off-
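A standalone sketch of the error normalization described above: every negative return from the lower layer is collapsed into -EIO. read_all() here is just a stub standing in for qio_channel_read_all(); the real nbd_read lives in include/block/nbd.h and takes a channel and an Error pointer.

#include <errno.h>
#include <stdio.h>

/* Stub standing in for qio_channel_read_all(): may return any negative
 * value on failure, with details reported elsewhere. */
static int read_all(void *buf, size_t size)
{
    (void)buf;
    (void)size;
    return -1;   /* simulate a failed read */
}

/* Callers of nbd_read() compare against specific -errno values, so collapse
 * every failure from the lower layer into -EIO rather than leaking whatever
 * negative value it happened to return. */
static int nbd_read(void *buf, size_t size)
{
    int ret = read_all(buf, size);
    return ret < 0 ? -EIO : 0;
}

int main(void)
{
    char buf[16];
    printf("nbd_read: %d (-EIO is %d)\n", nbd_read(buf, sizeof(buf)), -EIO);
    return 0;
}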
g to
detach it from the aio_context. If it is null, no detaching is needed,
and it will get reattached in the proper aio_context once the connection
is reestablished.
Signed-off-by: Roman Kagan
---
block/nbd.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/block/nbd.c
On Fri, Jan 29, 2021 at 08:37:13AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 28.01.2021 23:14, Roman Kagan wrote:
> > When the reconnect in NBD client is in progress, the iochannel used for
> > NBD connection doesn't exist. Therefore an attempt to detach it from
> &
On Fri, Jan 29, 2021 at 08:51:39AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 28.01.2021 23:14, Roman Kagan wrote:
> > During the final phase of migration the NBD reconnection logic may
> > encounter situations it doesn't expect during regular operation.
> >
> &g
pted backtraces in log messages
- add r-b by Vladimir
Roman Kagan (3):
block/nbd: only detach existing iochannel from aio_context
block/nbd: only enter connection coroutine if it's present
nbd: make nbd_read* return -EIO on error
include/block/nbd.h | 7 ---
b
it by checking that the iochannel is non-null before trying to
detach it from the aio_context. If it is null, no detaching is needed,
and it will get reattached in the proper aio_context once the connection
is reestablished.
Signed-off-by: Roman Kagan
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
00 in ?? ()
Fix it by checking that the connection coroutine is non-null before
trying to enter it. If it is null, no entering is needed, as the
connection is probably going down anyway.
Signed-off-by: Roman Kagan
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
block/nbd.c | 16 +---
1 fil
shing the connection.
Fix it by turning every negative return from qio_channel_read_all into
-EIO returned from nbd_read. Apparently that was the original behavior,
but it got broken later. Also adjust nbd_readXX to follow.
Fixes: e6798f06a6 ("nbd: generalize usage of nbd_read")
Signed-off-
On Thu, Sep 03, 2020 at 10:02:58PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We pause reconnect process during drained section. So, if we have some
> requests, waiting for reconnect we should cancel them, otherwise they
> deadlock the drained section.
>
> How to reproduce:
>
> 1. Create an ima
On Wed, Feb 03, 2021 at 04:10:41PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 03.02.2021 13:53, Roman Kagan wrote:
> > On Thu, Sep 03, 2020 at 10:02:58PM +0300, Vladimir Sementsov-Ogievskiy
> > wrote:
> > > We pause reconnect process during drained section. So, if we
On Wed, Feb 03, 2021 at 05:44:34PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 03.02.2021 17:21, Roman Kagan wrote:
> > On Wed, Feb 03, 2021 at 04:10:41PM +0300, Vladimir Sementsov-Ogievskiy
> > wrote:
> > > 03.02.2021 13:53, Roman Kagan wrote:
> > > > On T
On Mon, Nov 23, 2020 at 10:47:32AM +0300, Roman Kagan wrote:
> On Mon, Nov 02, 2020 at 08:37:50AM +0300, Roman Kagan wrote:
> > When the slot is in steady powered-off state and the device is being
> > removed, there's no need to press the attention button. Nor is it
> >
On Mon, Nov 02, 2020 at 08:37:50AM +0300, Roman Kagan wrote:
> When the slot is in steady powered-off state and the device is being
> removed, there's no need to press the attention button. Nor is it
> mandated by the Standard Hot-Plug Controller Specification, Rev. 1.0.
>
>
On Mon, Dec 14, 2020 at 01:40:45PM +0300, Roman Kagan wrote:
> On Mon, Nov 23, 2020 at 10:47:32AM +0300, Roman Kagan wrote:
> > On Mon, Nov 02, 2020 at 08:37:50AM +0300, Roman Kagan wrote:
> > > When the slot is in steady powered-off state and the device is being
> > >
On Thu, Jun 13, 2019 at 12:47:21PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 11.06.2019 21:02, Andrey Shinkevich wrote:
> > The Valgrind tool fails to manage its termination when QEMU raises the
> > signal SIGKILL. Let's exclude such test cases from running under
> > Valgrind because there is
On Mon, Jun 17, 2019 at 01:15:04PM +0200, Kevin Wolf wrote:
> Am 11.06.2019 um 20:02 hat Andrey Shinkevich geschrieben:
> > The Valgrind tool fails to manage its termination when QEMU raises the
> > signal SIGKILL. Let's exclude such test cases from running under
> > Valgrind because there is no
longer so the race bites every test run.
Since nbd is run in a background job of the test, record the nbd pid at
the daemon start in a shell variable and perform a wait for it when
terminating it.
Roman.
> Suggested-by: Roman Kagan
> Signed-off-by: Andrey Shinkevich
> ---
> tests/q
> finished before starting a new one.
> >
> > Suggested-by: Roman Kagan
> > Signed-off-by: Andrey Shinkevich
> > ---
> > tests/qemu-iotests/common.nbd | 6 ++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/tests/qemu-iotests/common.
On Mon, Jun 17, 2019 at 02:53:55PM +0200, Kevin Wolf wrote:
> Am 17.06.2019 um 14:18 hat Roman Kagan geschrieben:
> > On Mon, Jun 17, 2019 at 01:15:04PM +0200, Kevin Wolf wrote:
> > > Am 11.06.2019 um 20:02 hat Andrey Shinkevich geschrieben:
> > > > The Val
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Roman Kagan
That said, it's tempting to just nuke qdev_prop_spinlocks and make
hv-spinlocks a regular DEFINE_PROP_UINT32...
>
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 0732e059ec..8158d0
On Mon, Jun 17, 2019 at 11:23:01AM -0300, Eduardo Habkost wrote:
> On Mon, Jun 17, 2019 at 01:48:59PM +0000, Roman Kagan wrote:
> > On Sat, Jun 15, 2019 at 05:05:05PM -0300, Eduardo Habkost wrote:
> > > The current default value for hv-spinlocks is 0x (meaning
On Tue, Jun 18, 2019 at 11:24:57AM +1000, Vadim Rozenfeld wrote:
> On Mon, 2019-06-17 at 14:49 -0300, Eduardo Habkost wrote:
> > On Mon, Jun 17, 2019 at 05:32:13PM +0000, Roman Kagan wrote:
> > > On Mon, Jun 17, 2019 at 11:23:01AM -0300, Eduardo Habkost wrote:
> > > >
dedicated getter/setter pair and convert 'hv-spinlocks' to a
regular uint32 property.
Signed-off-by: Roman Kagan
---
Based-on: <20190615200505.31348-1-ehabk...@redhat.com>
([PATCH] i386: Fix signedness of hyperv_spinlock_attempts)
target/i386/cpu.c | 45 ++--
On Tue, Jun 18, 2019 at 10:35:05AM +, Roman Kagan wrote:
> On Tue, Jun 18, 2019 at 11:24:57AM +1000, Vadim Rozenfeld wrote:
> > On Mon, 2019-06-17 at 14:49 -0300, Eduardo Habkost wrote:
> > > On Mon, Jun 17, 2019 at 05:32:13PM +, Roman Kagan wrote:
> > > >
On Thu, Jun 06, 2019 at 01:22:33PM +, Roman Kagan wrote:
> On Mon, May 27, 2019 at 11:05:38AM +0000, Roman Kagan wrote:
> > On Thu, May 23, 2019 at 12:31:16PM +0100, Alex Bennée wrote:
> > >
> > > Roman Kagan writes:
> > >
> > >
On Mon, Jun 24, 2019 at 11:58:23AM +0100, Alex Bennée wrote:
> Roman Kagan writes:
>
> > It was introduced in commit b129972c8b41e15b0521895a46fd9c752b68a5e,
> > with the following motivation:
>
> I can't find this commit in my tree.
gh the rest of the patch introduces a
> feature checking mechanism. So I've fixed the KVM_EXIT_HYPERV_SYNIC in
> hyperv-stub to do the same feature check as in the real hyperv.c
>
> Signed-off-by: Alex Bennée
> Cc: Vitaly Kuznetsov
> Cc: Paolo Bonzini
> Cc: Roman Kagan
On Wed, Oct 02, 2019 at 05:22:43PM +0300, Andrey Shinkevich wrote:
> Added possibility to write compressed data by using the
> blk_write_compressed. This action has the limitations of the format
> driver. For example we can't write compressed data over other.
>
> $ ./qemu-img create -f qcow2 -o si
On Fri, Nov 08, 2019 at 01:49:50PM +, Vladimir Sementsov-Ogievskiy wrote:
> 01.11.2019 19:54, Andrey Shinkevich wrote:
> > +def check_proc_NBD(proc, connector):
> > +    try:
> > +        exitcode = proc.wait(timeout=10)
> > +
> > +        if exitcode < 0:
> > +            log('NBD {}: EXIT SIG
On Mon, Nov 11, 2019 at 12:18:48PM +0300, Andrey Shinkevich wrote:
>
>
> On 08/11/2019 17:05, Roman Kagan wrote:
> > On Fri, Nov 08, 2019 at 01:49:50PM +, Vladimir Sementsov-Ogievskiy
> > wrote:
> >> 01.11.2019 19:54, Andrey Shinkevich wrote:
> >&g
err,
> +"Hyper-V %s requires KVM hypervisor signature "
> +"to be hidden (-kvm).\n",
> +kvm_hyperv_properties[HYPERV_FEAT_DIRECT_TLBFLUSH].desc);
> +return -ENOSYS;
> +}
In view of my comment above, this "else if" clause may become
unnecessary.
However, it doesn't hurt either, and doesn't make things worse, so, if
this is seen as 4.2 material and the general KVM vs Hyper-V hypercall
conflict resolution is postponed till after 4.2, the patch looks ok as
it is.
Under this provision
Reviewed-by: Roman Kagan
> +}
> +
> if (cpu->hyperv_passthrough) {
> /* We already copied all feature words from KVM as is */
> r = cpuid->nent;
> --
> 2.14.5
>
On Wed, Nov 13, 2019 at 10:29:00AM +0100, Vitaly Kuznetsov wrote:
> Roman Kagan writes:
> > On Tue, Nov 12, 2019 at 11:34:27AM +0800, lantianyu1...@gmail.com wrote:
> >> From: Tianyu Lan
> >>
> >> Hyper-V direct tlb flush targets KVM on Hyper-V guest.
> &g
On Fri, Oct 25, 2019 at 02:19:19PM +0200, Jens Freimann wrote:
> This is implementing the host side of the net_failover concept
> (https://www.kernel.org/doc/html/latest/networking/net_failover.html)
>
> Changes since v5:
> * rename net_failover_pair_id parameter/property to failover_pair_id
> * i
;> On 03.04.2020 16:23, Jon Doron wrote:
> >>>>> Guest OS uses ACPI to discover vmbus presence. Add a corresponding
> >>>>> entry to DSDT in case vmbus has been enabled.
> >>>>>
> >>>>> Experimentally Windows guests were found to
still in need of
improvement, too, but should be testable at least.
Thanks,
Roman.
> On Mon, Apr 6, 2020, 10:32 Roman Kagan wrote:
>
> > On Fri, Apr 03, 2020 at 11:00:27PM +0200, Maciej S. Szmigiero wrote:
> > > It seems to me that Roman might not be getting our e-mails since h
On Tue, Apr 07, 2020 at 09:03:05PM +0200, Maciej S. Szmigiero wrote:
> On 07.04.2020 20:56, Roman Kagan wrote:
> > On Mon, Apr 06, 2020 at 11:20:39AM +0300, Jon Doron wrote:
> >> Well I want it to be merged in :-)
> >
> > Hmm I'm curious why, it has little to