Hi Florent,
No, cache tiering is not activated.

** Our Architecture:

vdbench/FIO inside VM <--> RBD without cache <--> Ceph Cluster (6 OSD
nodes, 36 OSDs + 3 Mons)
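
For reference, the guest-side workload looks roughly like the fio job file
below. This is a sketch only: the device name (/dev/vdb), block size, and
queue depth are assumed values for illustration, not the exact parameters of
our runs.

```ini
; Hypothetical fio job: sequential vs random write on the RBD-backed
; guest device. filename, bs, and iodepth are assumed values.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
runtime=60
time_based
filename=/dev/vdb

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall
```

The stonewall option makes the two jobs run one after the other, so each
gets the device to itself.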


Thanks
sumit

[root@ceph-mon01 ~]# ceph -s
    cluster 47b3b559-f93c-4259-a6fb-97b00d87c55a
     health HEALTH_WARN clock skew detected on mon.ceph-mon02, mon.ceph-mon03
     monmap e1: 3 mons at {ceph-mon01=192.168.10.19:6789/0,ceph-mon02=192.168.10.20:6789/0,ceph-mon03=192.168.10.21:6789/0}, election epoch 14, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
     osdmap e603: 36 osds: 36 up, 36 in
      pgmap v40812: 5120 pgs, 2 pools, 179 GB data, 569 kobjects
            522 GB used, 9349 GB / 9872 GB avail
                5120 active+clean
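
One caveat on comparing writes with rados bench: it only has a "write" mode
plus "seq" and "rand" modes, and the latter two are read benchmarks that
replay previously written objects, so a random-vs-sequential write
comparison normally has to come from fio/vdbench instead. A minimal sketch
of the usual invocation (the pool name "rbd" is an assumption):

```shell
# rados bench usage sketch (pool name "rbd" is assumed).
# "write" populates objects; "seq" and "rand" are READ tests that
# replay those objects, so run write with --no-cleanup first.
rados bench -p rbd 60 write --no-cleanup   # object writes
rados bench -p rbd 60 seq                  # sequential read
rados bench -p rbd 60 rand                 # random read
rados -p rbd cleanup                       # remove the benchmark objects
```
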


On Mon, Feb 2, 2015 at 12:21 AM, Florent MONTHEL <fmont...@flox-arts.net>
wrote:

> Hi Sumit
>
> Do you have cache pool tiering activated ?
> Some feed-back regarding your architecture ?
> Thanks
>
> Sent from my iPad
>
> > On 1 févr. 2015, at 15:50, Sumit Gaur <sumitkg...@gmail.com> wrote:
> >
> > Hi
> > I have installed a 6 node ceph cluster and, to my surprise, when I ran
> > rados bench I saw that random write gets better performance numbers than
> > sequential write. This is the opposite of normal disk behavior. Can
> > somebody let me know if I am missing any Ceph architecture point here?
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>