Yeah, you need to check your disks individually and see how they compare.
Sounds like the second one is slower. And you're also getting a bit slower
going to 2x replication.
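As a quick check, you could write a large file with O_DIRECT straight onto
each OSD's data partition and compare the numbers (the paths below assume the
default data dirs for osd.0 and osd.1 -- adjust to your layout, and remove the
test files afterwards):

  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=4096 oflag=direct
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-1/ddtest bs=1M count=4096 oflag=direct

If the two results are far apart, that's your answer.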
-Greg

On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:

>  I ran the test following the steps below:
>
> + create an image of size 100GB in pool "data" (exact commands below)
>
> + map that image on one server
>
> + mkfs.xfs /dev/rbd0 -> mount /dev/rbd0 /mnt
>
> + run the write benchmark on that mount point with dd:
>
> dd if=/dev/zero of=/mnt/good2 bs=1M count=10000 oflag=direct
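>
> (The image was created and mapped roughly like this -- the image name
> "test100g" is just a placeholder for the one I actually used:
>
> rbd create test100g --pool data --size 102400
> rbd map test100g --pool data
>
> which exposed it as /dev/rbd0.)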
>
>
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent: Thursday, May 23, 2013 11:47 AM
> To: Khanh. Nguyen Dang Quoc
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] performance degradation issue
>
> "rados bench write", you mean? Or something else?
>
> Have you checked the disk performance of each OSD outside of Ceph? In
> moving from one to two OSDs your performance isn't actually going to go up
> because you're replicating all the data. It ought to stay flat rather than
> dropping, but my guess is your second disk is slow.
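>
> (Back-of-the-envelope, assuming nothing else is the bottleneck: with size=2
> on two OSDs every client write has to be committed on *both* OSDs, so the
> client can't go faster than the slower of the two disks -- roughly
> min(disk1, disk2), not their sum. If one disk manages ~190MB/s on its own
> and the other is much slower, ~90MB/s at the client is about what that
> arithmetic predicts.)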
>
> On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:
>
> Hi Greg,
>
> It's the write benchmark.
>
> Regards
>
> Khanh
>
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent: Thursday, May 23, 2013 10:56 AM
> To: Khanh. Nguyen Dang Quoc
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] performance degradation issue
>
> What's the benchmark?
>
> -Greg
>
> On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:
>
> Dear all,
>
> I'm now facing an issue with the Ceph block device: performance degradation.
>
> ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
>
> ceph status
>
>    health HEALTH_OK
>    monmap e1: 2 mons at {a=49.213.67.204:6789/0,b=49.213.67.203:6789/0}, election epoch 20, quorum 0,1 a,b
>    osdmap e53: 2 osds: 2 up, 2 in
>    pgmap v535: 576 pgs: 576 active+clean; 11086 MB data, 22350 MB used, 4437 GB / 4459 GB avail
>    mdsmap e29: 1/1/1 up {0=a=up:active}, 1 up:standby
>
> When I benchmark with one OSD, I get about 190MB/s of write throughput.
>
> But when I add a second OSD with replication size = 2, the write
> performance degrades to about 90MB/s.
>
> As I understand it, write performance should increase as more OSDs are
> added, but I'm not seeing that. :(
>
> Can anyone help me check what's wrong in the config file, or anything else?
>
> Regards,
>
> Khanh Nguyen
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
