virsh secret-define --file secret.xml
virsh secret-set-value --secret <secret-uuid> --base64 `ceph auth get-key client.rbd.vps 2>/dev/null`
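For reference, the secret.xml consumed above usually looks something like the sketch below; the usage name is only a free-form label and is illustrative here (the original file was not shown). Note that virsh secret-define prints the UUID that the <secret-uuid> placeholder above stands for.
# Hypothetical sketch of secret.xml -- only the usage name is free-form.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.rbd.vps secret</name>
  </usage>
</secret>
EOF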
Subject: [ceph-users] Qemu RBD image usage
Hi all,
I want to attach another RBD image into the Qemu VM to be used as a disk.
However, it always fails. The VM definition xml is attached.
Could anyone tell me what I did wrong?
nstcc3@nstcloudcc3:~$ sudo virsh start ubuntu_18_04_mysql --console
error: Failed to start dom
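For comparison (this is not the poster's attached XML), a minimal RBD disk stanza that libvirt accepts looks roughly like the sketch below; the pool/image name, cephx user, monitor host and secret UUID are all placeholders:
# Hypothetical sketch -- every identifier below is a placeholder.
cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <auth username='rbd.vps'>
    <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
  </auth>
  <source protocol='rbd' name='rbd/mydisk'>
    <host name='mon-host' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# Attach it to the domain named in the error above; --persistent also updates the stored definition.
virsh attach-device ubuntu_18_04_mysql rbd-disk.xml --persistent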
Hi,
I was reading this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008486.html
And I am trying to get better performance in my virtual machines.
These are my RBD settings:
"rbd_cache": "true",
"rbd_cache_block_writes_upfront": "false",
"rbd_cache_max_dirty":
On Thu, Oct 20, 2016 at 1:51 AM, Ahmed Mostafa
wrote:
> different OSDs
PGs -- but more or less correct since the OSDs will process requests
for a particular PG sequentially and not in parallel.
--
Jason
Does this also mean that stripe count can be thought of as the number of
parallel writes to different objects at different OSDs?
Thank you
On Thursday, 20 October 2016, Jason Dillaman wrote:
> librbd (used by QEMU to provide RBD-backed disks) uses librados and
> provides the necessary handling
librbd (used by QEMU to provide RBD-backed disks) uses librados and
provides the necessary handling for striping across multiple backing
objects. When you don't specify "fancy" striping options via
"--stripe-count" and "--stripe-unit", it essentially defaults to
stripe count of 1 and stripe unit of
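For illustration, fancy striping is requested when the image is created; a sketch with arbitrary names and values:
# Hypothetical example: stripe in 64 KiB units across 8 objects (values are arbitrary).
rbd create --size 10240 --stripe-unit 65536 --stripe-count 8 rbd/striped-test
# rbd info reports the stripe unit/count alongside the object size.
rbd info rbd/striped-test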
Hello
From the documentation I understand that clients that use librados must
perform striping for themselves, but I do not understand how this can be
if we have striping options in Ceph? I mean, I can create RBD images that
have configuration for striping, count and unit size.
So my question
Hi Jason,
On 22/03/2016 14:12, Jason Dillaman wrote:
We actually recommend that OpenStack be configured to use writeback cache [1].
If the guest OS is properly issuing flush requests, the cache will still
provide crash-consistency. By default, the cache will automatically start up
in writethrough mode and only switch to writeback after the first flush is received.
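The behaviour described above corresponds to the client-side options below; a ceph.conf sketch of the commonly cited settings (not taken from anyone's actual configuration in this thread):
[client]
    rbd cache = true
    # stay in writethrough until the guest issues its first flush, then switch to writeback
    rbd cache writethrough until flush = true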
Hi Wido,
On 22/03/2016 13:52, Wido den Hollander wrote:
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
'none'. Is that still the recommended value
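In the libvirt disk XML those two knobs end up as attributes on the driver element, e.g. (a sketch, not a recommendation of specific values):
<driver name='qemu' type='raw' cache='writeback' io='native'/>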
lzer"
> À: "Alex Crow"
> Cc: ceph-users@lists.ceph.com
> Envoyé: Samedi 12 Avril 2014 17:56:07
> Objet: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live
> migration safe ?
>
>
> Hello,
>
>> On Sat, 12 Apr 2014 16:26:40 +0
is working, I'm
interested)
- Original Message -
From: "Alex Crow"
To: ceph-users@lists.ceph.com
Sent: Saturday 12 April 2014 17:26:40
Subject: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live
migration safe ?
Hi.
I've read in many places that you sh
Thanks for the link reference!
- Original Message -
From: "Christian Balzer"
To: "Alex Crow"
Cc: ceph-users@lists.ceph.com
Sent: Saturday 12 April 2014 17:56:07
Subject: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live
migration safe ?
Hello,
Hello,
AFAIK qemu calls bdrv_flush at the end of the migration process, so this is
absolutely safe. Anyway, it's proven very well by our production systems
too :)
On Sat, Apr 12, 2014 at 7:01 PM, Alexandre DERUMIER wrote:
> Hello,
>
> I known that qemu live migration with disk with cache=writeback are
Hello,
On Sat, 12 Apr 2014 16:26:40 +0100 Alex Crow wrote:
> Hi.
>
> I've read in many places that you should never use writeback on any kind
> of shared storage. Caching is better dealt with on the storage side
> anyway as you have hopefully provided resilience there. In fact if your
> SAN/
Hi.
I've read in many places that you should never use writeback on any kind
of shared storage. Caching is better dealt with on the storage side
anyway as you have hopefully provided resilience there. In fact if your
SAN/NAS is good enough it's supposed to be best to use "none" as the
caching
Hello,
I know that qemu live migration with disks using cache=writeback is not safe
with storage like nfs, iscsi...
Is it also true with rbd?
If yes, is it possible to manually disable writeback online with qmp?
Best Regards,
Alexandre
There is an RBD engine for FIO, have a look at
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
Sébastien Han
> I tried rbd-fuse and its throughput using fio is approx. 1/4 that of the
> kernel client.
>
> Can you please let me know how to setup RBD backend for FIO? I'm assuming
> this RBD backend is also based on librbd?
You will probably have to build fio from source since the rbd engine is new:
htt
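Once fio is built with rbd support, a job file along these lines drives librbd directly; the pool, image and client name are placeholders and the image must already exist (e.g. rbd create --size 2048 rbd/fio-test):
# rbd-test.fio -- hypothetical job file, run with: fio rbd-test.fio
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
bs=4k
iodepth=32

[rbd-randwrite]
rw=randwrite
runtime=60
time_based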
Subject: Re: [ceph-users] qemu-rbd
On Tue, Mar 11, 2014 at 2:24 PM, Sushma Gurram
wrote:
> It seems good with master branch. Sorry about the confusion.
>
> On a side note, is it
On Tue, Mar 11, 2014 at 2:24 PM, Sushma Gurram
wrote:
> It seems good with master branch. Sorry about the confusion.
>
> On a side note, is it possible to create/access the block device using librbd
> and run fio on it?
...yes? librbd is the userspace library that QEMU is using to access
it to b
Hi,
I'm trying to follow the instructions for QEMU rbd installation at
http://ceph.com/docs/master/rbd/qemu-rbd/
I tried to write a raw qemu image to the ceph cluster using the following command
qemu-img convert -f raw -O raw ../linux-0.2.img rbd:data/linux
OSD seems to be working, but it seems to
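If the convert appears to hang or fail, a quick sanity check (assuming the 'data' pool exists and the client keyring is readable) is to ask the cluster for the image back:
rbd -p data ls
rbd -p data info linux
qemu-img info rbd:data/linux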