On Wed, Mar 3, 2021 at 12:41 PM Stefano Garzarella wrote:
>
> Hi Jason,
> as reported in this BZ [1], when qemu-img creates a QCOW2 image on RBD,
> writing data is very slow compared to a raw file.
>
> Comparing raw vs QCOW2 image creation with RBD, I found that we use a
> different object size, for
On Mon, Feb 15, 2021 at 8:29 AM Peter Lieven wrote:
>
> On 15.02.21 at 13:13, Kevin Wolf wrote:
> > On 15.02.2021 at 12:45, Peter Lieven wrote:
> >> On 15.02.21 at 12:41, Daniel P. Berrangé wrote:
> >>> On Mon, Feb 15, 2021 at 12:32:24PM +0100, Peter Lieven wrote:
> On 15.02.21 at
On Thu, Jan 21, 2021 at 3:29 PM Peter Lieven wrote:
>
> > On 21.01.21 at 20:42, Jason Dillaman wrote:
> > On Wed, Jan 20, 2021 at 6:01 PM Peter Lieven wrote:
> >>
> >>> On 19.01.2021 at 15:20, Jason Dillaman wrote:
> >>>
> >>> On Tue, Jan 19, 2021 at 4:36 AM Peter Lieven wrote:
On Wed, Jan 20, 2021 at 6:01 PM Peter Lieven wrote:
>
>
> > On 19.01.2021 at 15:20, Jason Dillaman wrote:
> >
> > On Tue, Jan 19, 2021 at 4:36 AM Peter Lieven wrote:
> >>> On 18.01.21 at 23:33, Jason Dillaman wrote:
> >>> On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
On Tue, Jan 19, 2021 at 4:36 AM Peter Lieven wrote:
>
> On 18.01.21 at 23:33, Jason Dillaman wrote:
> > On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
> >> On 15.01.21 at 16:27, Jason Dillaman wrote:
> >>> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
>
> On 15.01.21 at 16:27, Jason Dillaman wrote:
> > On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
> >> On 14.01.21 at 20:19, Jason Dillaman wrote:
> >>> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
>
> On 14.01.21 at 20:19, Jason Dillaman wrote:
> > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
> >> since we implement byte interfaces and librbd supports aio on byte
> >> granularity we can lift the 512 byte alignment.
On Thu, Jan 14, 2021 at 2:41 PM Peter Lieven wrote:
>
> On 14.01.21 at 20:19, Jason Dillaman wrote:
> > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
> >> Signed-off-by: Peter Lieven
> >> ---
> >> block/rbd.c | 31
On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 31 ++-
> 1 file changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index 2d77d0007f..27b4404adf 100644
> --- a/block/rbd.c
>
On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 21 +++--
> 1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index a2da70e37f..27b232f4d8 100644
> --- a/block/rbd.c
> +++ b/blo
On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 10 +-
> 1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index bc8cf8af9b..a2da70e37f 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@
On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>
> Since we implement byte interfaces and librbd supports AIO at byte
> granularity, we can lift the 512-byte alignment.
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/block/rbd
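For context, lifting the 512-byte alignment means the block layer no longer has to pad byte-granular requests out to sector boundaries before issuing them. The sketch below is illustrative only (it is not QEMU code; `Request` and `pad_to_alignment` are made-up names) and shows the padding that becomes unnecessary once the backend accepts byte granularity:

```c
#include <stdint.h>

typedef struct {
    uint64_t offset;
    uint64_t bytes;
} Request;

/* Pad a request out to 'align'-byte boundaries, the way a block layer
 * must before talking to a backend with a 512-byte request_alignment.
 * With byte granularity (align == 1) the request passes through as-is. */
static Request pad_to_alignment(Request req, uint64_t align)
{
    Request out;
    uint64_t end = req.offset + req.bytes;

    out.offset = req.offset & ~(align - 1);                       /* round start down */
    out.bytes = ((end + align - 1) & ~(align - 1)) - out.offset;  /* round end up */
    return out;
}
```

Padding turns a byte write into a larger read-modify-write; with `align == 1` the request is untouched, which is the point of the patch.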
On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 247 ++--
> 1 file changed, 84 insertions(+), 163 deletions(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index 27b232f4d8..2d77d0007f 100644
On Wed, Dec 9, 2020 at 7:19 AM Peter Lieven wrote:
>
> On 01.12.20 at 13:40, Peter Lieven wrote:
> > Hi,
> >
> >
> > I would like to submit a series for 6.0 which will convert the aio hooks to
> > native coroutine hooks and add write zeroes support.
> >
> > The aio routines are nowadays just an
qemu_rbd_create_opts = {
>
> static const char *const qemu_rbd_strong_runtime_opts[] = {
> "pool",
> +"namespace",
> "image",
> "conf",
> "snapshot",
> --
> 2.26.2
>
lgtm
Reviewed-by: Jason Dillaman
--
Jason
On Tue, Jun 9, 2020 at 3:31 AM Yi Li wrote:
>
> Since Ceph version Infernalis (9.2.0), the new fast-diff mechanism
> of RBD allows for querying actual rbd image usage.
>
> Prior to this version there was no easy and fast way to query how
> much allocation an RBD image had inside a Ceph cluster.
>
>
> + * just as we did nothing
> + */
> +rados_ioctx_set_namespace(*io_ctx, opts->q_namespace);
>
> return 0;
>
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index fcb52ec24f..c6f187ec9b 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -3661,6 +3661,9 @@
> #
> # @pool: Ceph pool name.
> #
> +# @namespace: Rados namespace name in the Ceph pool.
> +# (Since 5.0)
> +#
> # @image: Image name in the Ceph pool.
> #
> # @conf: path to Ceph configuration file. Values
> @@ -3687,6 +3690,7 @@
> ##
> { 'struct': 'BlockdevOptionsRbd',
>'data': { 'pool': 'str',
> +'*namespace': 'str',
> 'image': 'str',
> '*conf': 'str',
> '*snapshot': 'str',
> --
> 2.24.1
>
Reviewed-by: Jason Dillaman
--
Jason
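For illustration, the new optional `namespace` member from the patch above might appear in a blockdev specification roughly like this (pool, namespace, image, and node names are hypothetical; the exact spelling follows the `BlockdevOptionsRbd` struct quoted above):

```json
{ "driver": "rbd",
  "node-name": "drive0",
  "pool": "rbd",
  "namespace": "ns1",
  "image": "vm-disk" }
```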
On Fri, Dec 20, 2019 at 9:11 AM Florian Florensa wrote:
>
> Hello Stefano and Jason,
>
> First of all thanks for the quick reply,
> Response inline below
> > Hi Florian,
> >
> > I think we need to add (Since: 5.0).
>
> Are you implying by that (Since: 5.0) that we need to specify its
> availabili
ce);
>
> return 0;
>
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 0cf68fea14..9ebc020e93 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -3657,6 +3657,8 @@
> #
> # @pool: Ceph pool name.
> #
> +# @nspace: Rados namespace name in the Ceph pool.
> +#
> # @image: Image name in the Ceph pool.
> #
> # @conf: path to Ceph configuration file. Values
> @@ -3683,6 +3685,7 @@
> ##
> { 'struct': 'BlockdevOptionsRbd',
>'data': { 'pool': 'str',
> +'nspace': 'str',
> 'image': 'str',
> '*conf': 'str',
> '*snapshot': 'str',
> --
> 2.24.1
>
Thanks for tackling this. I had this and msgr v2 support on my todo
list for QEMU but I haven't had a chance to work on them yet. The
changes look good to me and it works as expected during CLI
play-testing.
Reviewed-by: Jason Dillaman
Thanks for adding the support. I was actually already play-testing your
patch. I'll respond to the mailing list soon.
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1843941
Title:
RBD Namespaces are
On Mon, Jul 29, 2019 at 5:40 AM Stefano Garzarella wrote:
>
> On Fri, Jul 26, 2019 at 08:46:56AM -0400, Jason Dillaman wrote:
> > On Fri, Jul 26, 2019 at 4:48 AM Stefano Garzarella
> > wrote:
> > >
> > > On Thu, Jul 25, 2019 at 09:30:30AM -0400, Jason Di
On Fri, Jul 26, 2019 at 4:48 AM Stefano Garzarella wrote:
>
> On Thu, Jul 25, 2019 at 09:30:30AM -0400, Jason Dillaman wrote:
> > On Thu, Jul 25, 2019 at 4:13 AM Stefano Garzarella
> > wrote:
> > >
> > > On Wed, Jul 24, 2019 at 01:48:42PM -0400, Jason Di
On Thu, Jul 25, 2019 at 4:13 AM Stefano Garzarella wrote:
>
> On Wed, Jul 24, 2019 at 01:48:42PM -0400, Jason Dillaman wrote:
> > On Tue, Jul 23, 2019 at 3:13 AM Stefano Garzarella
> > wrote:
> > >
> > > This patch adds the support of preallocation (off/
On Tue, Jul 23, 2019 at 3:13 AM Stefano Garzarella wrote:
>
> This patch adds the support of preallocation (off/full) for the RBD
> block driver.
> If rbd_writesame() is available and supports zeroed buffers, we use
> it to quickly fill the image when full preallocation is required.
>
> Signed-off
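A sketch of what full preallocation amounts to: writing zeroes across the whole image from a single reused zeroed buffer, which `rbd_writesame()` can collapse into far fewer requests. This is illustrative only; `image_write` is a mock standing in for `rbd_write()`/`rbd_writesame()`, and `total_written` exists just to make the sketch observable:

```c
#include <stdint.h>

#define CHUNK (4 * 1024 * 1024ULL)

static uint64_t total_written;  /* observable side effect for the sketch */

/* Mock backend write; a real implementation would call rbd_write(). */
static int image_write(uint64_t ofs, const char *buf, uint64_t len)
{
    (void)ofs;
    (void)buf;
    total_written += len;
    return 0;
}

/* Zero-fill [0, size) in CHUNK-sized pieces from one zeroed buffer;
 * rbd_writesame() achieves the same effect with a single small buffer
 * replayed server-side. */
static int preallocate_full(uint64_t size)
{
    static char zeroes[CHUNK];  /* zero-initialized by the C runtime */
    uint64_t ofs = 0;

    while (ofs < size) {
        uint64_t len = size - ofs < CHUNK ? size - ofs : CHUNK;
        int r = image_write(ofs, zeroes, len);
        if (r < 0) {
            return r;
        }
        ofs += len;
    }
    return 0;
}
```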
On Tue, Jul 9, 2019 at 11:32 AM Max Reitz wrote:
>
> On 09.07.19 15:09, Stefano Garzarella wrote:
> > On Tue, Jul 09, 2019 at 08:55:19AM -0400, Jason Dillaman wrote:
> >> On Tue, Jul 9, 2019 at 5:45 AM Max Reitz wrote:
> >>>
> >>> On 09.07.19 10:
On Tue, Jul 9, 2019 at 5:45 AM Max Reitz wrote:
>
> On 09.07.19 10:55, Max Reitz wrote:
> > On 09.07.19 05:08, Jason Dillaman wrote:
> >> On Fri, Jul 5, 2019 at 6:43 AM Stefano Garzarella
> >> wrote:
> >>>
> >>> On Fri, Jul 05, 2019 at 11:58
On Fri, Jul 5, 2019 at 6:43 AM Stefano Garzarella wrote:
>
> On Fri, Jul 05, 2019 at 11:58:43AM +0200, Max Reitz wrote:
> > On 05.07.19 11:32, Stefano Garzarella wrote:
> > > This patch allows 'qemu-img info' to show the 'disk size' for
> > > the RBD images that have the fast-diff feature enabled.
On Fri, Jun 28, 2019 at 4:59 AM Stefano Garzarella wrote:
>
> On Thu, Jun 27, 2019 at 03:43:04PM -0400, Jason Dillaman wrote:
> > On Thu, Jun 27, 2019 at 1:24 PM John Snow wrote:
> > > On 6/27/19 4:48 AM, Stefano Garzarella wrote:
> > > > On Wed, Jun 26, 2019 at
On Thu, Jun 27, 2019 at 3:45 PM John Snow wrote:
>
>
>
> On 6/27/19 3:43 PM, Jason Dillaman wrote:
> > On Thu, Jun 27, 2019 at 1:24 PM John Snow wrote:
> >>
> >>
> >>
> >> On 6/27/19 4:48 AM, Stefano Garzarella wrote:
> >>> On
On Thu, Jun 27, 2019 at 1:24 PM John Snow wrote:
>
>
>
> On 6/27/19 4:48 AM, Stefano Garzarella wrote:
> > On Wed, Jun 26, 2019 at 05:04:25PM -0400, John Snow wrote:
> >> It looks like this has hit a 30 day expiration without any reviews or
> >> being merged; do we still want this? If so, can you
On Fri, May 3, 2019 at 12:30 PM Stefano Garzarella wrote:
>
> RBD APIs don't allow us to write more than the size set with
> rbd_create() or rbd_resize().
> In order to support growing images (e.g. qcow2), we resize the
> image before write operations that exceed the current size.
>
> Signed-off-by
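The resize-before-write idea can be sketched as follows. This is not the patch's code: `image_size`, `image_resize`, and `write_with_grow` are mock names, where a real implementation would consult the tracked image size and call `rbd_resize()`:

```c
#include <stdint.h>

static uint64_t image_size = 1024;  /* mock current image size in bytes */

/* Mock resize; a real implementation would call rbd_resize(). */
static int image_resize(uint64_t new_size)
{
    image_size = new_size;
    return 0;
}

/* Grow the image before any write that would run past its end, since
 * librbd rejects writes beyond the size set by rbd_create()/rbd_resize(). */
static int write_with_grow(uint64_t offset, uint64_t bytes)
{
    if (offset + bytes > image_size) {
        int r = image_resize(offset + bytes);
        if (r < 0) {
            return r;
        }
    }
    /* ... submit the actual write here ... */
    return 0;
}
```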
On Fri, May 3, 2019 at 7:02 AM Stefano Garzarella wrote:
>
> This patch allows 'qemu-img info' to show the 'disk size' for
> rbd images. We use the rbd_diff_iterate2() API to calculate the
> allocated size for the image.
>
> Signed-off-by: Stefano Garzarella
> ---
> block/rbd.c | 33
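A sketch of the approach: `rbd_diff_iterate2()` walks an image's extents and invokes a callback per extent, and summing the lengths of extents that exist yields the allocated "disk size". The callback below mirrors the shape of librbd's diff-iterate callback (offset, length, exists flag, opaque pointer); the name `accumulate_cb` and the test driver are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Per-extent callback in the shape rbd_diff_iterate2() expects: count
 * only extents where data exists to obtain the allocated size. */
static int accumulate_cb(uint64_t offs, size_t len, int exists, void *opaque)
{
    uint64_t *used = opaque;

    (void)offs;
    if (exists) {
        *used += len;
    }
    return 0;
}
```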
On Mon, Apr 29, 2019 at 8:47 AM Stefano Garzarella wrote:
>
> On Sat, Apr 27, 2019 at 08:43:26AM -0400, Jason Dillaman wrote:
> > On Sat, Apr 27, 2019 at 7:37 AM Stefano Garzarella
> > wrote:
> > >
> > > This patch adds the support of preallocation (off/
On Sat, Apr 27, 2019 at 7:37 AM Stefano Garzarella wrote:
>
> This patch adds the support of preallocation (off/full) for the RBD
> block driver.
> If available, we use rbd_writesame() to quickly fill the image when
> full preallocation is required.
>
> Signed-off-by: Stefano Garzarella
> ---
>
On Sun, Apr 14, 2019 at 9:20 AM Stefano Garzarella wrote:
>
> On Thu, Apr 11, 2019 at 01:06:49PM -0400, Jason Dillaman wrote:
> > On Thu, Apr 11, 2019 at 9:02 AM Stefano Garzarella
> > wrote:
> > >
> > > On Thu, Apr 11, 2019 at 08:35:44AM -0400, Jason Di
On Thu, Apr 11, 2019 at 9:02 AM Stefano Garzarella wrote:
>
> On Thu, Apr 11, 2019 at 08:35:44AM -0400, Jason Dillaman wrote:
> > On Thu, Apr 11, 2019 at 7:00 AM Stefano Garzarella
> > wrote:
> > >
> > > RBD APIs don't allow us to write more tha
On Thu, Apr 11, 2019 at 7:00 AM Stefano Garzarella wrote:
>
> RBD APIs don't allow us to write more than the size set with rbd_create()
> or rbd_resize().
> In order to support growing images (e.g. qcow2), we resize the image
> before RW operations that exceed the current size.
What's the use-case
I think the SCSI spec is limited to 16 bits for representing the block
length (in bytes) (see READ CAPACITY(10) command). It's also probably
sub-optimal to force a full 4MiB write even for small IOs. You might
achieve what you are looking for by setting the minimal and optimal IO
size hints to 4MiB
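For a rough sense of the ceiling in question, assuming the 16-bit field refers to a transfer length counted in logical blocks (as in the 10-byte SCSI READ/WRITE CDBs; this interpretation and the function name are mine, not from the thread):

```c
#include <stdint.h>

/* Largest transfer expressible in a 16-bit block-count field for a
 * given logical block size; with 512-byte blocks this is just under
 * 32 MiB, so 4 MiB IO size hints fit comfortably within it. */
static uint64_t max_transfer_bytes(uint32_t block_size)
{
    return (uint64_t)UINT16_MAX * block_size;
}
```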
@Nick: if you can recreate the librbd memory growth, any chance you can
help test a potential fix [1]?
[1] https://github.com/ceph/ceph/pull/24297
https://bugs.launchpad.net/bugs/170144
On Tue, Apr 24, 2018 at 3:25 PM, Eric Blake wrote:
> We are gradually moving away from sector-based interfaces, towards
> byte-based. Make the change for the last few sector-based callbacks
> in the rbd driver.
>
> Note that the driver was already using byte-based calls for
> performing actual I/
On Wed, 2017-09-13 at 10:44 -0600, Adam Wolfe Gordon wrote:
> Register a watcher with rbd so that we get notified when an image is
> resized. Propagate resizes to parent block devices so that guest devices
> get resized without user intervention.
>
> Signed-off-by: Adam Wolfe Gordon
> ---
> Hello
On Thu, Feb 16, 2017 at 10:13 AM, Alexandre DERUMIER
wrote:
> Hi, I would like to bench it with small 4k read/write.
>
> On the Ceph side, do we need this PR?
> https://github.com/ceph/ceph/pull/13447
Yes, that is the correct PR for the client-side librbd changes. You
should be able to test it
On Thu, Feb 16, 2017 at 4:00 AM, wrote:
> From: tianqing
>
> Rbd can do readv and writev directly, so we do not need to transform
> iov to buf or vice versa any more.
>
> Signed-off-by: tianqing
> ---
> block/rbd.c | 49 ++---
> 1 file changed, 42 in
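The transformation being removed is the classic bounce-buffer gather: copying a scattered iovec into one flat buffer before handing it to a byte-array API. With `rbd_aio_readv()`/`rbd_aio_writev()`-style interfaces, the premise of this patch, that copy disappears. A sketch of the copy itself (`gather_iov` is an illustrative name, not QEMU or librbd code):

```c
#include <string.h>
#include <sys/uio.h>

/* Flatten an iovec into 'buf' (which must be large enough); returns the
 * number of bytes copied. This is the per-request copy that direct
 * readv/writev support in librbd makes unnecessary. */
static size_t gather_iov(const struct iovec *iov, int iovcnt, char *buf)
{
    size_t off = 0;

    for (int i = 0; i < iovcnt; i++) {
        memcpy(buf + off, iov[i].iov_base, iov[i].iov_len);
        off += iov[i].iov_len;
    }
    return off;
}
```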
On Sat, Nov 5, 2016 at 1:17 AM, wrote:
> From: tianqing
>
> Rbd can do readv and writev directly, so we do not need to transform
> iov to buf or vice versa any more.
>
> Signed-off-by: tianqing
> ---
> block/rbd.c | 124
>
> 1 file
On Fri, Jun 3, 2016 at 4:48 AM, Fam Zheng wrote:
> +typedef enum {
> +    /* The values are ordered so that lower number implies higher
> +     * restriction. Starting from 1 to make 0 an invalid value.
> +     */
> +    BDRV_LOCKF_EXCLUSIVE = 1,
> +    BDRV_LOCKF_SHARED,
> +    BDRV_LOCKF_UN
On Wed, May 18, 2016 at 4:19 AM, Kevin Wolf wrote:
>> Updating this setting on an open image won't do anything, but if you
>> rbd_close() and rbd_open() it again the setting will take effect.
>> rbd_close() will force a flush of any pending I/O in librbd and
>> free the memory for librbd's ImageCt
On Tue, May 17, 2016 at 6:03 AM, Sebastian Färber wrote:
> Hi Kevin,
>
>> A correct reopen implementation must consider all options and flags that
>> .bdrv_open() looked at.
>>
>> The options are okay, as both "filename" and "password-secret" aren't
>> things that we want to allow a reopen to chan
Any chance you can re-test with a more recent kernel on the hypervisor
host? If the spin-lock was coming from user-space, I would expect
futex_wait_setup and futex_wake to be much higher.
Can you run 'perf top' against just the QEMU process? There was an
email chain from nearly a year ago about tcmalloc causing extremely high
'_raw_spin_lock' calls under high IOPS scenarios.
On Tue, Apr 26, 2016 at 7:20 PM, Fam Zheng wrote:
> On Tue, 04/26 10:42, Jason Dillaman wrote:
>> On Sun, Apr 24, 2016 at 7:42 PM, Fam Zheng wrote:
>> > On Fri, 04/22 21:57, Jason Dillaman wrote:
>> >> Since this cannot automatically recover from a crashed QEMU
On Sun, Apr 24, 2016 at 7:42 PM, Fam Zheng wrote:
> On Fri, 04/22 21:57, Jason Dillaman wrote:
>> Since this cannot automatically recover from a crashed QEMU client with an
>> RBD image, perhaps this RBD locking should not default to enabled.
>> Additionally, this w
Since this cannot automatically recover from a crashed QEMU client with an
RBD image, perhaps this RBD locking should not default to enabled.
Additionally, this will conflict with the "exclusive-lock" feature
available since the Ceph Hammer release, as both utilize the same locking
construct.
As
Can you reproduce with Ceph debug logging enabled (i.e. debug rbd=20 in your
ceph.conf)? If you could attach the log to the Ceph tracker ticket I opened
[1], that would be very helpful.
[1] http://tracker.ceph.com/issues/13726
Thanks,
Jason
- Original Message -
> From: "Alexandre DE
>> Bumping this...
>>
>> For now, we are rarely suffering with an unlimited cache growth issue
>> which can be observed on all post-1.4 versions of qemu with rbd
>> backend in a writeback mode and certain pattern of a guest operations.
>> The issue is confirmed for virtio and can be re-triggered by