Cédric Le Goater writes:
> On 11/16/22 07:56, Markus Armbruster wrote:
>> Cédric Le Goater writes:
>>
>>> Currently, when a block backend is attached to a m25p80 device and the
>>> associated file size does not match the flash model, QEMU complains
>>> with the error message "failed to read the
On 11/16/22 09:28, Markus Armbruster wrote:
Cédric Le Goater writes:
On 11/16/22 07:56, Markus Armbruster wrote:
Cédric Le Goater writes:
Currently, when a block backend is attached to a m25p80 device and the
associated file size does not match the flash model, QEMU complains
with the erro
From: Klaus Jensen
Add the 'nmi-i2c' device that emulates an NVMe Management Interface
controller.
Initial support is very basic (Read NMI DS, Configuration Get).
This is based on previously posted code by Padmakar Kalghatgi, Arun
Kumar Agasar and Saurav Kumar.
Signed-off-by: Klaus Jensen
---
From: Klaus Jensen
It is not given that the current master will release the bus after a
transfer ends. Only schedule a pending master if the bus is idle.
Fixes: 37fa5ca42623 ("hw/i2c: support multiple masters")
Signed-off-by: Klaus Jensen
---
hw/i2c/aspeed_i2c.c | 2 ++
hw/i2c/core.c
From: Klaus Jensen
This adds a generic MCTP endpoint model that other devices may derive
from. I'm not 100% happy with the design of the class methods, but it's
a start.
Patch 1 is a bug fix, but since there are currently no in-tree users of
the API, it is not critical. I'd like to have Peter ve
From: Klaus Jensen
Add an abstract MCTP over I2C endpoint model. This implements MCTP
control message handling as well as handling the actual I2C transport
(packetization).
Devices are intended to derive from this and implement the class
methods.
Parts of this implementation are inspired by code
block_copy_reset_unallocated and block_copy_is_cluster_allocated are
only called by backup_run, itself a coroutine_fn.
The same applies to block_copy_block_status, called by
block_copy_dirty_clusters.
Therefore mark them as coroutine_fn too.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-c
It is always called in coroutine_fn callbacks, therefore
it can directly call bdrv_co_create().
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 6 --
include/block/block-global-state.h | 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/
vmdk_co_create_opts() is a coroutine_fn, and calls vmdk_co_do_create()
which in turn can call two callbacks: vmdk_co_create_opts_cb and
vmdk_co_create_cb.
Mark all these functions as coroutine_fn, since vmdk_co_create_opts()
is the only caller.
Signed-off-by: Emanuele Giuseppe Esposito
---
bloc
Call two different functions depending on whether bdrv_create
is in coroutine or not, following the same pattern as
generated_co_wrapper functions.
This also allows calling the coroutine function directly,
without using CreateCo or relying on bdrv_create().
It will also be useful when we add the g
Delete the if case and make sure it won't be called again
in coroutines.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c | 37 -
1 file changed, 16 insertions(+), 21 deletions(-)
diff --git a/block.c b/block.c
index dcac28756c..7a4ce7948c 100644
--- a/b
There are probably more missing, but right now it is necessary to
extend coroutine_fn to block{alloc/status}_to_extents, because
they use bdrv_* functions calling the generated_co_wrapper API, which
checks for the qemu_in_coroutine() case.
Signed-off-by: Emanuele Giuseppe Esposito
---
nbd/
Avoid mixing bdrv_* functions with blk_*, so create blk_* counterparts
for:
- bdrv_block_status_above
- bdrv_is_allocated_above
Note that these functions will take the rdlock, so they must always run
in a coroutine.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-backend.c
This is a dump of all minor coroutine-related fixes found while looking
around and testing various things in the QEMU block layer.
Patches aim to:
- add missing coroutine_fn annotation to the functions
- simplify to avoid the typical "if in coroutine: fn()
// else create_coroutine(fn)" already p
Some functions check whether they are running in a coroutine and call
the coroutine callback directly if that is the case.
However, no coroutine calls such functions, so that case
can be removed.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/dirty-bitmap.c | 66 +++---
> -Original Message-
> From: Christian Schoenebeck
> Sent: Tuesday, November 15, 2022 00:41
> To: qemu-devel@nongnu.org
> Cc: Shi, Guohuai ; Greg Kurz ;
> Meng, Bin
> Subject: Re: [PATCH v2 06/19] hw/9pfs: Add missing definitions for Windows
>
> CAUTION: This email comes from a non Wi
> -Original Message-
> From: Daniel P. Berrangé
> Sent: 15 November 2022 19:47
> To: Or Ozeri
> Cc: qemu-devel@nongnu.org; qemu-bl...@nongnu.org; Danny Harnik
> ; idryo...@gmail.com
> Subject: [EXTERNAL] Re: [PATCH v3] block/rbd: Add support for layered
> encryption
>
> AFAICT, supportin
On Wed, Nov 16, 2022 at 08:34:00AM +0530, Ani Sinha wrote:
> On Wed, Nov 16, 2022 at 12:18 AM John Snow wrote:
> >
> > On Tue, Nov 15, 2022 at 9:31 AM Ani Sinha wrote:
> > >
> > > On Tue, Nov 15, 2022 at 3:36 PM Ani Sinha wrote:
> > > >
> > > > On Tue, Nov 15, 2022 at 9:07 AM Ani Sinha wrote:
>
Ani Sinha writes:
> On Wed, Nov 16, 2022 at 4:17 AM Alex Bennée wrote:
>>
>>
>> John Snow writes:
>>
>> > Instead of using a hardcoded timeout, just rely on Avocado's built-in
>> > test case timeout. This helps avoid timeout issues on machines where 60
>> > seconds is not sufficient.
>> >
>>
On 15/11/2022 12.13, Philippe Mathieu-Daudé wrote:
On 15/11/22 12:05, Thomas Huth wrote:
On 15/11/2022 12.03, Philippe Mathieu-Daudé wrote:
Hi,
As of v7.2.0-rc0 I am getting:
(101/198)
tests/avocado/machine_s390_ccw_virtio.py:S390CCWVirtioMachine.test_s390x_fedora:
FAIL (23.51 s)
Is it
Chao Peng writes:
> On Mon, Nov 14, 2022 at 11:43:37AM +, Alex Bennée wrote:
>>
>> Chao Peng writes:
>>
>>
>> > Introduction
>> >
>> > KVM userspace being able to crash the host is horrible. Under current
>> > KVM architecture, all guest memory is inherently accessible from
Heho,
> Ok, I think I found at least one issue:
>
> /* large send MSS mask, bits 16...25 */
> #define CP_TC_LGSEN_MSS_MASK ((1 << 12) - 1)
>
> First, MSS occupies 11 bits, from 16 to 26. Second, the mask is wrong; it should
> be ((1 << 11) - 1)
Awesome, thanks, will give this a shot later on and le
On 11/15/22 12:15, Cédric Le Goater wrote:
Hello Pierre,
On 11/3/22 18:01, Pierre Morel wrote:
In the S390x CPU topology the core_id specifies the CPU address
and the position of the core within the topology.
Let's build the topology based on the core_id.
Signed-off-by: Pierre Morel
---
Cc'ing Jan/Cleber/Beraldo.
On 16/11/22 10:43, Thomas Huth wrote:
On 15/11/2022 12.13, Philippe Mathieu-Daudé wrote:
On 15/11/22 12:05, Thomas Huth wrote:
On 15/11/2022 12.03, Philippe Mathieu-Daudé wrote:
Hi,
As of v7.2.0-rc0 I am getting:
(101/198)
tests/avocado/machine_s390_ccw_virtio.
On Wed, Nov 16, 2022 at 09:03:31AM +, Or Ozeri wrote:
> > -Original Message-
> > From: Daniel P. Berrangé
> > Sent: 15 November 2022 19:47
> > To: Or Ozeri
> > Cc: qemu-devel@nongnu.org; qemu-bl...@nongnu.org; Danny Harnik
> > ; idryo...@gmail.com
> > Subject: [EXTERNAL] Re: [PATCH v3
On 11/15/22 12:21, Cédric Le Goater wrote:
Hello Pierre,
On 11/3/22 18:01, Pierre Morel wrote:
The guest can use the STSI instruction to get a buffer filled
with the CPU topology description.
Let us implement the STSI instruction for the basic CPU topology
level, level 2.
Signed-off-by: Pi
I apologize: as discussed in v2, I just realized I could introduce
generated_co_wrapper_simple already here and simplify patches 6 and 8.
Also, I think the commit messages are the old ones from v1.
I'll resend. Please ignore this series.
Emanuele
Am 16/11/2022 um 09:50 schrieb Emanuele Giuseppe E
On Tue, 15 Nov 2022 15:01:06 +0100
Philippe Mathieu-Daudé wrote:
> Hi,
>
> On 7/11/22 23:47, Michael S. Tsirkin wrote:
>
> >
> > pci,pc,virtio: features, tests, fixes, cleanups
> >
> > lots of acpi rework
> > first version of bio
On Tue, Nov 15, 2022 at 03:23:28PM +0100, antoine.dam...@shadow.tech wrote:
> From: Antoine Damhet
>
> The new `qcrypto_tls_session_check_pending` function allows the caller
> to know if data have already been consumed from the backend and is
> already available.
>
> Signed-off-by: Antoine Damhe
On Tue, Nov 15, 2022 at 03:23:29PM +0100, antoine.dam...@shadow.tech wrote:
> From: Antoine Damhet
>
> Since the TLS backend can read more data from the underlying QIOChannel
> we introduce a minimal child GSource to notify if we still have more
> data available to be read.
>
> Signed-off-by: An
On 16/11/2022 11.23, Philippe Mathieu-Daudé wrote:
Cc'ing Jan/Cleber/Beraldo.
On 16/11/22 10:43, Thomas Huth wrote:
On 15/11/2022 12.13, Philippe Mathieu-Daudé wrote:
On 15/11/22 12:05, Thomas Huth wrote:
On 15/11/2022 12.03, Philippe Mathieu-Daudé wrote:
Hi,
As of v7.2.0-rc0 I am getting:
On 16/11/22 11:58, Thomas Huth wrote:
On 16/11/2022 11.23, Philippe Mathieu-Daudé wrote:
Cc'ing Jan/Cleber/Beraldo.
On 16/11/22 10:43, Thomas Huth wrote:
On 15/11/2022 12.13, Philippe Mathieu-Daudé wrote:
On 15/11/22 12:05, Thomas Huth wrote:
On 15/11/2022 12.03, Philippe Mathieu-Daudé wrote
On Wed, Nov 16, 2022 at 10:23:52AM +, Daniel P. Berrangé wrote:
> On Wed, Nov 16, 2022 at 09:03:31AM +, Or Ozeri wrote:
> > > -Original Message-
> > > From: Daniel P. Berrangé
> > > Sent: 15 November 2022 19:47
> > > To: Or Ozeri
> > > Cc: qemu-devel@nongnu.org; qemu-bl...@nongnu.
On [2022 Nov 15] Tue 16:10:00, Cédric Le Goater wrote:
> Currently, when a block backend is attached to a m25p80 device and the
> associated file size does not match the flash model, QEMU complains
> with the error message "failed to read the initial flash content".
> This is confusing for the user
On 16/11/22 12:20 am, Daniel P. Berrangé wrote:
On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as
Heho,
Quick follow-up; Applied the change you suggested, but there are still some
things to test.
While this now works (mostly), MSS values are still off; Especially the
behavior below <=1036 is difficult, as for v4 the minimum MTU is 576 and
minimum MSS is 536:
RequestedDPRINT
1320
On 11/15/22 14:27, Cédric Le Goater wrote:
On 11/3/22 18:01, Pierre Morel wrote:
S390 CPU topology is only allowed for s390-virtio-ccw-7.2 and
newer S390 machines.
Signed-off-by: Pierre Morel
Reviewed-by: Cédric Le Goater
Thanks,
C.
Thanks,
Pierre
--
Pierre Morel
IBM Lab Boeblinge
On Wed, Nov 16, 2022 at 04:49:18PM +0530, manish.mishra wrote:
>
> On 16/11/22 12:20 am, Daniel P. Berrangé wrote:
> > On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
> > > On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
> > > > Current logic assumes that channel
On 16/11/22 4:57 pm, Daniel P. Berrangé wrote:
On Wed, Nov 16, 2022 at 04:49:18PM +0530, manish.mishra wrote:
On 16/11/22 12:20 am, Daniel P. Berrangé wrote:
On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
Cu
Extend the regex to also cover the return type, pointers included.
This implies that the value returned by the function can no longer be
a simple "int"; it must be the custom return type.
Therefore remove poll_state->ret and instead use a per-function
custom "ret" field.
Signed-off-by: Emanuele Giuseppe Esp
These functions end up calling bdrv_*() implemented as generated_co_wrapper
functions.
In addition, they also happen to be always called in coroutine context,
meaning all callers are coroutine_fn.
This means that the g_c_w function will enter the qemu_in_coroutine()
case and eventually suspend (or
These functions end up calling bdrv_common_block_status_above(), a
generated_co_wrapper function.
In addition, they also happen to be always called in coroutine context,
meaning all callers are coroutine_fn.
This means that the g_c_w function will enter the qemu_in_coroutine()
case and eventually s
These functions end up calling bdrv_create() implemented as generated_co_wrapper
functions.
In addition, they also happen to be always called in coroutine context,
meaning all callers are coroutine_fn.
This means that the g_c_w function will enter the qemu_in_coroutine()
case and eventually suspend
This is a dump of all minor coroutine-related fixes found while looking
around and testing various things in the QEMU block layer.
Patches aim to:
- add missing coroutine_fn annotation to the functions
- simplify to avoid the typical "if in coroutine: fn()
// else create_coroutine(fn)" already p
This new annotation creates just a function wrapper that creates
a new coroutine. It assumes the caller is not a coroutine.
This is much better than g_c_w, because it makes clear whether the caller
is a coroutine or not, and provides the advantage of automating
the code creation. In the future all g_c_w fun
Basically BdrvPollCo->bs is only used by bdrv_poll_co(), and the
functions that it uses both rely on bdrv_get_aio_context(), which
defaults to qemu_get_aio_context() if bs is NULL.
Therefore pass NULL to BdrvPollCo to automatically generate a function
that creates and runs a coroutine in the main lo
It is always called in coroutine_fn callbacks, therefore
it can directly call bdrv_co_create().
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 6 --
include/block/block-global-state.h | 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/
Call two different functions depending on whether bdrv_create
is in coroutine or not, following the same pattern as
generated_co_wrapper functions.
This also allows calling the coroutine function directly,
without using CreateCo or relying on bdrv_create().
Signed-off-by: Emanuele Giuseppe Esposi
This function is never called in coroutine context, therefore
instead of manually creating a new coroutine, delegate it to the
block-coroutine-wrapper script, defining it as g_c_w_simple.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 38 +-
Avoid mixing bdrv_* functions with blk_*, so create blk_* counterparts
for:
- bdrv_block_status_above
- bdrv_is_allocated_above
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-backend.c | 21 +
block/commit.c| 4 ++--
include/sysemu/
bdrv_can_store_new_dirty_bitmap and bdrv_remove_persistent_dirty_bitmap
check whether they are running in a coroutine, directly calling the
coroutine callback if that is the case.
However, no coroutine calls such functions, so that check
can be removed, and function creation can be offloaded to
g
On Wed, Nov 16, 2022 at 3:07 PM Alex Bennée wrote:
>
>
> Ani Sinha writes:
>
> > On Wed, Nov 16, 2022 at 4:17 AM Alex Bennée wrote:
> >>
> >>
> >> John Snow writes:
> >>
> >> > Instead of using a hardcoded timeout, just rely on Avocado's built-in
> >> > test case timeout. This helps avoid timeo
On 11/15/22 14:48, Cédric Le Goater wrote:
On 11/3/22 18:01, Pierre Morel wrote:
We keep the possibility to switch on/off the topology on newer
machines with the property topology=[on|off].
The code has changed. You will need to rebase. May be after the
8.0 machine is introduced, or include
On Wed, Nov 16, 2022 at 6:02 PM Ani Sinha wrote:
>
> On Wed, Nov 16, 2022 at 3:07 PM Alex Bennée wrote:
> >
> >
> > Ani Sinha writes:
> >
> > > On Wed, Nov 16, 2022 at 4:17 AM Alex Bennée
> > > wrote:
> > >>
> > >>
> > >> John Snow writes:
> > >>
> > >> > Instead of using a hardcoded timeout,
On Wednesday, November 16, 2022 10:01:39 AM CET Shi, Guohuai wrote:
[...]
> > > diff --git a/fsdev/file-op-9p.h b/fsdev/file-op-9p.h index
> > > 4997677460..7d9a736b66 100644
> > > --- a/fsdev/file-op-9p.h
> > > +++ b/fsdev/file-op-9p.h
> > > @@ -27,6 +27,39 @@
> > > # include
> > > #endif
> > >
On Wed, 16 Nov 2022 at 06:11, Schspa Shi wrote:
>
>
> Peter Maydell writes:
>
> > On Tue, 8 Nov 2022 at 15:50, Schspa Shi wrote:
> >>
> >>
> >> Peter Maydell writes:
> >>
> >> > On Tue, 8 Nov 2022 at 13:54, Peter Maydell
> >> > wrote:
> >> >>
> >> >> On Tue, 8 Nov 2022 at 12:52, Schspa Shi w
On Wed, 16 Nov 2022 at 08:43, Klaus Jensen wrote:
>
> From: Klaus Jensen
>
> It is not given that the current master will release the bus after a
> transfer ends. Only schedule a pending master if the bus is idle.
>
> Fixes: 37fa5ca42623 ("hw/i2c: support multiple masters")
> Signed-off-by: Klaus
Kowshik reported that building qemu with GCC 12.2.1 for 'ppc64-softmmu'
target is failing due to following build warnings:
../target/ppc/cpu_init.c:7018:13: error: 'ppc_restore_state_to_opc' defined
but not used [-Werror=unused-function]
7018 | static void ppc_restore_state_to_opc(CPUState *cs
On 16/11/2022 1:36, Alex Williamson wrote:
External email: Use caution opening links or attachments
On Thu, 3 Nov 2022 18:16:10 +0200
Avihai Horon wrote:
Currently, if IOMMU of a VFIO container doesn't support dirty page
tracking, migration is blocked. This is because a DMA-able VFIO devic
Richard,
I believe the ppc64-linux-user target didn't like what you did in this
patch. Here's the error:
$ ../configure
--target-list=ppc64-softmmu,ppc64-linux-user,ppc-softmmu,ppc-linux-user,ppc64le-linux-user
$ make -j
(...)
[15/133] Compiling C object
libqemu-ppc64-linux-user.fa.p/target_
On 16/11/2022 1:56, Alex Williamson wrote:
External email: Use caution opening links or attachments
On Thu, 3 Nov 2022 18:16:13 +0200
Avihai Horon wrote:
Move vfio_dev_get_region_info() logic from vfio_migration_probe() to
vfio_migration_init(). This logic is specific to v1 protocol and mo
This series is the first of four that aim to introduce and use a new
graph rwlock in the QEMU block layer.
The aim is to replace the current AioContext lock with much finer-grained
locks, aimed at protecting only specific data.
Currently the AioContext lock is used pretty much everywhere, and it'
This series is the first of four that aim to introduce and use a new
graph rwlock in the QEMU block layer.
The aim is to replace the current AioContext lock with much finer-grained
locks, aimed at protecting only specific data.
Currently the AioContext lock is used pretty much everywhere, and it'
On Wed, Nov 16, 2022 at 09:43:10AM +0100, Klaus Jensen wrote:
> From: Klaus Jensen
>
> It is not given that the current master will release the bus after a
> transfer ends. Only schedule a pending master if the bus is idle.
>
Yes, I think this is correct.
Acked-by: Corey Minyard
Is there a r
From: bakulinm
make check-avocado takes a lot of time, and avocado since version 91 has
a multithreaded mode for running several tests simultaneously.
This patch allows running "make check-avocado -j" to use all cores or,
for example, "make check-avocado -j4" to select the number of workers to use.
By d
The only caller of this function is nbd_do_establish_connection, a
generated_co_wrapper that already takes the graph read lock.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/nbd.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/nbd.c b/block/nbd.c
index 7d485c86d2..5cad58aaf6 1006
The only callers of these functions are bdrv_{read/write}v_vmstate,
generated_co_wrapper functions that already take the
graph read lock.
Protecting bdrv_co_{read/write}v_vmstate() implies that
BlockDriver->bdrv_{load/save}_vmstate() is always called with
graph rdlock taken.
Signed-off-by: Emanue
The only callers of these functions are the respective
generated_co_wrapper, and they already take the lock.
Protecting bdrv_co_{check/invalidate_cache}() implies that
BlockDriver->bdrv_co_{check/invalidate_cache}() is always called with
graph rdlock taken.
Signed-off-by: Emanuele Giuseppe Esposi
We don't protect bdrv->aio_context with the graph rwlock,
so these assertions are not needed.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/block.c b/block.c
index 4ef537a9f2..afab74d4da 100644
--- a/block.c
+++ b/block.c
@@ -7183,7 +7
This function, in addition to being called by a generated_co_wrapper,
is also called by the blk_* API.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
Protecting bdrv_co_pdiscard{_snapshot}() implies that the following BlockDri
Add/remove the AioContext in aio_context_list in graph-lock.c only when
it is being effectively created/destroyed.
Signed-off-by: Emanuele Giuseppe Esposito
---
util/async.c | 4
util/meson.build | 1 +
2 files changed, 5 insertions(+)
diff --git a/util/async.c b/util/async.c
index 634
This function, in addition to being called by a generated_co_wrapper,
is also called elsewhere.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
By protecting bdrv_co_pwrite, we also automatically protect
the following othe
Just a wrapper to simplify what is available to the struct AioContext.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/graph-lock.c | 59 ++
include/block/aio.h| 12
include/block/graph-lock.h | 1 +
3 files changed, 48 insertions
Similar to the implementation in lockable.h, implement macros to
automatically take and release the rdlock.
Create the empty GraphLockable struct only to use it as a type for
G_DEFINE_AUTOPTR_CLEANUP_FUNC.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/block/graph-lock.h | 35
This function, in addition to being called by a generated_co_wrapper,
is also called elsewhere.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
By protecting bdrv_co_pread, we also automatically protect
the following other
This function, in addition to being called by a generated_co_wrapper,
is also called by the blk_* API.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
Protecting bdrv_co_flush() implies that the following BlockDriver
callbacks
The only callers are other callback functions that already run with the
graph rdlock taken.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/io.c | 2 ++
include/block/block_int-common.h | 3 +++
2 files changed, 5 insertions(+)
diff --git a/block/io.c b/block/io.c
inde
This annotation will be used to distinguish the blk_* API from the
bdrv_* API in block-gen.c. The reason for this distinction is that
blk_* API eventually result in always calling bdrv_*, which has
implications when we introduce the read graph lock.
Signed-off-by: Emanuele Giuseppe Esposito
---
All generated_co_wrapper functions create a coroutine when
called from non-coroutine context.
The format can be one of the two:
bdrv_something()
    if (qemu_in_coroutine())
        bdrv_co_something();
    else
        // create a coroutine that calls bdrv_co_something();
blk_something()
    if (
This series is the first of four that aim to introduce and use a new
graph rwlock in the QEMU block layer.
The aim is to replace the current AioContext lock with much finer-grained
locks, aimed at protecting only specific data.
Currently the AioContext lock is used pretty much everywhere, and it'
The only callers are other callback functions that already run with the graph
rdlock taken.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c | 1 +
include/block/block_int-common.h | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/block.c b/block
This function, in addition to being called by a generated_co_wrapper,
is also called by the blk_* API.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
Protecting bdrv_co_truncate() implies that
BlockDriver->bdrv_co_truncate() i
Both functions are only called by Job->run() callbacks, therefore
they must take the lock in the *_run() implementation.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/amend.c| 1 +
block/create.c | 1 +
include/block/block_int-common.h | 2 ++
3 files
Please read "Protect the block layer with a rwlock: part 1" for an additional
introduction and aim of this series.
This second part aims to add the graph rdlock to the BlockDriver functions
that already run in coroutine context and are classified as IO.
Such functions will recursively traverse the
Remove the old assert_bdrv_graph_writable, and replace it with
the new version using graph-lock API.
See the function documentation for more information.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 4 ++--
block/graph-lock.c | 11 ++
This function is either called by bdrv_create(), which always takes
care of creating a new coroutine, or by bdrv_create_file(), which
is only called by BlockDriver->bdrv_co_create_opts callbacks,
invoked by bdrv_co_create().
Protecting bdrv_co_create() implies that BlockDriver->bdrv_co_create_opts
Already protected by bdrv_co_pwrite callers.
Protecting bdrv_co_do_pwrite_zeroes() implies that
BlockDriver->bdrv_co_pwrite_zeroes() is always called with
graph rdlock taken.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/io.c | 3 +++
include/block/block_int-common.h
Peter Maydell writes:
> On Wed, 16 Nov 2022 at 06:11, Schspa Shi wrote:
>>
>>
>> Peter Maydell writes:
>>
>> > On Tue, 8 Nov 2022 at 15:50, Schspa Shi wrote:
>> >>
>> >>
>> >> Peter Maydell writes:
>> >>
>> >> > On Tue, 8 Nov 2022 at 13:54, Peter Maydell
>> >> > wrote:
>> >> >>
>> >> >> O
The only caller of this function is blk_ioctl, a generated_co_wrapper
function that needs to take the graph read lock.
Protecting bdrv_co_ioctl() implies that
BlockDriver->bdrv_co_ioctl() is always called with
graph rdlock taken, and BlockDriver->bdrv_aio_ioctl is
a coroutine_fn callback (called
The only non-protected caller is convert_co_copy_range(), all other
callers are BlockDriver callbacks that already take the rdlock.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/block-backend.c| 2 ++
block/io.c | 5 +
include/block/block_int-common.h
From: Paolo Bonzini
Block layer graph operations are always run under the BQL in the
main loop. This is proved by the assertion qemu_in_main_thread()
and its wrapper macro GLOBAL_STATE_CODE.
However, there are also concurrent coroutines running in other
iothreads that always try to traverse the graph
Protect the main function where the graph is modified.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c | 6 --
include/block/block_int-common.h | 1 +
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/block.c b/block.c
index d3e168408a..4ef537a9f2 1006
This function, in addition to being called by a generated_co_wrapper,
is also called elsewhere.
The strategy is to always take the lock in the function where the
coroutine is created, to avoid recursive locking.
Protecting bdrv_co_block_status() called by
bdrv_co_common_block_status_above(
The only callers are the respective bdrv_*_dirty_bitmap() functions that
take care of creating a new coroutine (that already takes the graph
rdlock).
Signed-off-by: Emanuele Giuseppe Esposito
---
block/dirty-bitmap.c | 2 ++
include/block/block_int-common.h | 2 ++
2 files changed, 4
Friends,
I am but a small bystander as I watch in awe at the incredible work each of you
do to advance the Kernel and Linux.
What you are working on is so important. Your ideas, and how you express them
into the code you write, become the foundation for a better world.
I can tell you that your s
Hi Klaus,
[+CC Matt]
> This adds a generic MCTP endpoint model that other devices may derive
> from. I'm not 100% happy with the design of the class methods, but
> it's a start.
Thanks for posting these! I'll have a more thorough look through soon,
but wanted to tackle some of the larger design-
BlockDriver->bdrv_io_plug is categorized as an IO callback, and
it currently doesn't run in a coroutine.
This makes it very difficult to add the graph rdlock, since the
callback traverses the block nodes graph.
The only caller of this function is blk_plug, therefore
make blk_plug a generated_co_wrapper_
Just omit the various 'return' when the return type is
void.
Signed-off-by: Emanuele Giuseppe Esposito
---
scripts/block-coroutine-wrapper.py | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/scripts/block-coroutine-wrapper.py
b/scripts/block-coroutine-wra
BlockDriver->bdrv_debug_event is categorized as an IO callback, and
it currently doesn't run in a coroutine.
This makes it very difficult to add the graph rdlock, since the
callback traverses the block nodes graph.
Therefore use generated_co_wrapper to automatically
create a wrapper with the same name.
Please read "Protect the block layer with a rwlock: part 1" and
"Protect the block layer with a rwlock: part 2" for an
additional introduction and aim of this series.
In this series, we cover the remaining BlockDriver IO callbacks that were
not running in coroutines, and therefore not using the graph rd