rpm ;)
BTW, if you want to test it, Proxmox VE (www.proxmox.com) has QEMU prebuilt with jemalloc.
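For anyone who wants to build it themselves, a rough sketch of the usual approach (flag names depend on versions: QEMU gained --enable-jemalloc around 2.6, and Ceph's build has exposed --with-jemalloc under autotools and -DALLOCATOR=jemalloc under CMake; the paths below are examples, not a definitive recipe):

  # QEMU linked against jemalloc (QEMU >= 2.6)
  ./configure --target-list=x86_64-softmmu --enable-jemalloc
  make -j"$(nproc)"

  # Ceph with jemalloc instead of tcmalloc
  ./configure --with-jemalloc          # autotools builds (jewel and earlier)
  cmake -DALLOCATOR=jemalloc ..        # cmake builds

  # Or, without rebuilding, preload jemalloc into an existing qemu:
  LD_PRELOAD=/usr/lib/libjemalloc.so qemu-system-x86_64 ...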
----- Original Message -----
From: "Bill WONG"
To: "aderumier"
Cc: "ceph-users"
Sent: Monday, 7 November 2016 11:29:37
Subject: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

[...]
----- Original Message -----
From: "aderumier"
To: "Bill WONG"
Cc: "ceph-users"
Sent: Monday, 7 November 2016 07:46:16
Subject: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

>> Is there any document on how I can compile Ceph with jemalloc as well?

[...] qemu client).
Note that changing it on the fly is not possible, so you need to shut down all the clients before making this change.
- Mail original -
De: "Bill WONG"
À: "aderumier"
Cc: "dillaman" , "ceph-users"
Envoyé: Lundi 7 Novembre 2016 06:35:38
Obj
rumier" , "ceph-users" <
> ceph-users@lists.ceph.com>
> Envoyé: Mardi 1 Novembre 2016 02:06:22
> Objet: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem
>
> For better or worse, I can repeat your "ioping" findings against a
> qcow2 imag
devel/2015-06/msg05265.html
----- Original Message -----
From: "Jason Dillaman"
To: "Bill WONG"
Cc: "aderumier", "ceph-users"
Sent: Tuesday, 1 November 2016 02:06:22
Subject: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem
For better or worse, I can repeat your "ioping" findings against a qcow2 image hosted on a krbd-backed volume. The "bad" news is that it actually isn't even sending any data to the OSDs -- which is why your latency is shockingly low. When performing a "dd ... oflag=dsync" against the krbd-backed qcow2 image, [...]
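For reference, the kind of test being described looks roughly like this from inside the guest (hypothetical paths; the exact dd invocation Bill used was not quoted in full):

  # latency probe against the filesystem on the qcow2-backed disk
  ioping -c 10 /mnt/test

  # synchronous writes: oflag=dsync flushes every block to the device
  dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=1000 oflag=dsync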
Hi Jason,
it looks like the situation is the same, no difference. My ceph.conf is below; any comments or improvements required?
---
[global]
fsid = 106a12b0-5ed0-4a71-b6aa-68a09088ec33
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 192.168.8.11,192.168.8.12,192.168.8.13
auth_cluster_re[...]
On Sun, Oct 30, 2016 at 5:40 AM, Bill WONG wrote:
> any ideas or comments?

Can you set "rbd non blocking aio = false" in your ceph.conf and retry librbd? This will eliminate at least one context switch on the read IO path -- which results in increased latency under extremely low queue depths.
--
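A minimal sketch of where that option would live (assuming the clients read the standard /etc/ceph/ceph.conf; librbd clients have to be restarted to pick it up):

  [client]
  # dispatch AIO completions inline instead of via a separate thread
  rbd non blocking aio = false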
[...]
Maybe krbd has better latency here, and dd is a single stream, so it could impact results.
thank you!

----- Original Message -----
From: "Bill WONG"
To: "aderumier"
Cc: "ceph-users"
Sent: Friday, 28 October 2016 17:58:42
Subject: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem
hi,
we both VM use [...]
and the VM is unable to mount /dev/rbd0 directly to test the speed.
I think that, technically, librbd should give much better performance than mounting /dev/rbd0, but the actual tests don't look like that's the case. Is there anything I did wrong, or is any performance tuning required?
thank you!
Bill
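On the /dev/rbd0 point: the device only exists where the krbd kernel client maps it, so one hypothetical way to test it in a VM is to map the image on the hypervisor and pass the raw device through (pool and image names below are made up):

  # on the hypervisor, not inside the guest
  rbd map rbd/testimage            # shows up as e.g. /dev/rbd0

  # hand the raw device to the guest
  qemu-system-x86_64 ... -drive file=/dev/rbd0,format=raw,if=virtio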
On [...] wrote:
Hi,
have you tried enabling cache=writeback when you use librbd?
It could also be interesting to see the performance when using /dev/rbd0 in your VM, instead of mounting a qcow2 inside it.
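For the cache=writeback suggestion, a sketch of what it might look like on the qemu command line with an librbd drive (pool, image name, and auth id are assumptions):

  qemu-system-x86_64 ... \
    -drive file=rbd:rbd/testimage:id=admin,format=raw,if=virtio,cache=writeback

With librbd, cache=writeback turns on the RBD client-side cache, which can batch small writes before they hit the OSDs.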
----- Original Message -----
From: "Bill WONG"
To: "ceph-users"
Sent: Friday, 28 October 2016 10:24:50
Subject: [ceph-users] RBD Block performance vs rbd mount as filesystem