Hi Karan, thanks for your help. I got some assistance on #ceph, so I now have mon[1-3], mds[1-3] and osd[1-5]. I also have one client on CentOS 6.5 with the elrepo lt kernel, which gives me a non-FUSE (kernel) mount of my CephFS.
I ran some simple test cases, e.g. bonnie++ and this:

    for i in `seq 0 256`; do time mkdir $i; done
    cd 256
    for i in `seq 0 256`; do time mkdir $i; done
    for i in `seq 0 5`; do time ls > /dev/null; done
    time find . > /dev/null

Here are the results for a single mkdir:

local fs:
    real    0m0.007s
    user    0m0.002s
    sys     0m0.004s

ceph fs:
    real    0m0.394s
    user    0m0.008s
    sys     0m0.026s

And bonnie++:

local:

    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   8     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    web11.abboom.wor 4G   585  99 56297   9 45337   7  3009  99 112865   7  1472  31
    Latency             22491us    451ms     454ms   12023us     110ms   29311us
    Version  1.96       ------Sequential Create------ --------Random Create--------
    web11.abboom.world  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                    100 47361  66 +++++ +++ 63536  68 52393  72 +++++ +++ 61602  68
    Latency               911us     667us    2674us     240us     181us    1976us

    1.96,1.96,web11.abboom.world,8,1398170891,4G,,585,99,56297,9,45337,7,3009,99,112865,7,1472,31,100,,,,,47361,66,+++++,+++,63536,68,52393,72,+++++,+++,61602,68,22491us,451ms,454ms,12023us,110ms,29311us,911us,667us,2674us,240us,181us,1976us

ceph:

    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   8     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    web11.abboom.wor 4G   489  97 15635   2 15565   2   839  99 319882  21 817.0  16
    Latency              124ms   35589us    9818ms   15806us     730ms   48226us
    Version  1.96       ------Sequential Create------ --------Random Create--------
    web11.abboom.world  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                    100   118   0 52002  11    66   0    97   0  8582   9    60   0
    Latency             11718ms   32137us    8566ms    6235ms     204ms   24989ms
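For anyone who wants to repeat the metadata test above, here is a sketch of it as a script; TESTDIR is a placeholder that I point at the CephFS mount for the ceph run, or at a local directory for the baseline run (assumes GNU date for nanosecond timestamps, as on CentOS):

```shell
#!/bin/sh
# Metadata micro-benchmark sketch for the mkdir/find test above.
# TESTDIR is a placeholder -- set it to the CephFS mount point for the
# ceph run, or to a local-fs directory for the baseline run.
TESTDIR=${TESTDIR:-/tmp/cephfs-mdtest}
mkdir -p "$TESTDIR"
cd "$TESTDIR" || exit 1

# Time 257 directory creations in one batch instead of printing 257
# separate `time` outputs.
start=$(date +%s%N)                 # GNU date: nanoseconds since epoch
for i in $(seq 0 256); do
    mkdir -p "dir-$i"
done
end=$(date +%s%N)
echo "created 257 directories in $(( (end - start) / 1000000 )) ms"

# Time a full tree walk, like the `time find . > /dev/null` step.
start=$(date +%s%N)
find . > /dev/null
end=$(date +%s%N)
echo "find over the tree took $(( (end - start) / 1000000 )) ms"
```

On CephFS each mkdir is a round trip to the MDS, so the per-batch number makes the client/MDS latency easier to compare against a local fs than eyeballing 257 individual `time` lines.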
    1.96,1.96,web11.abboom.world,8,1398170818,4G,,489,97,15635,2,15565,2,839,99,319882,21,817.0,16,100,,,,,118,0,52002,11,66,0,97,0,8582,9,60,0,124ms,35589us,9818ms,15806us,730ms,48226us,11718ms,32137us,8566ms,6235ms,204ms,24989ms

So now I have some questions. My whole setup currently runs as: HP storage -> iSCSI -> VMware VMFS -> virtual machines hosting the OSDs, MONs and MDSs. What if I used bare hardware instead, i.e. a machine with Linux installed and 20 HDDs, with one OSD per HDD: would that be faster? I would also appreciate any advice on tuning CephFS for better performance.

2014-04-23 11:48 GMT+04:00 Karan Singh <karan.si...@csc.fi>:

> Hi Alexander
>
> Try adding your monitor details to the /etc/ceph/ceph.conf file (please
> check for typos):
>
> [mon]
>
> [mon.nfs2.abboom.world]
> host = nfs2.abboom.world
> mon addr = 10.60.0.111:6789
>
> [mon.nfs3.abboom.world]
> host = nfs3.abboom.world
> mon addr = 10.60.0.112:6789
>
> [mon.nfs4.abboom.world]
> host = nfs4.abboom.world
> mon addr = 10.60.0.113:6789
>
> - karan -
>
> On 21 Apr 2014, at 14:46, *sm1Ly <st.uz...@gmail.com> wrote:
>
> Hi all. I use CentOS 6.5. This is my ceph.conf: http://pastebin.com/0UCevzF5
> I am doing a manual deployment following this guide:
> http://ceph.com/docs/master/install/manual-deployment/
> When I get to step 15 I can't start my node, because it isn't defined in
> my conf:
>
>     /etc/init.d/ceph start mon.nfs2.abboom.world
>     /etc/init.d/ceph: mon.nfs2.abboom.world not found
>     (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
>
> What am I doing wrong?
>
>
> --
> yours respectfully, Alexander Vasin.
>
> 8 926 1437200
> icq: 9906064
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>

--
yours respectfully, Alexander Vasin.

8 926 1437200
icq: 9906064
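For others hitting the same "not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )" error: the sysvinit script matches each [mon.<id>] section in ceph.conf against a data directory on disk. A quick consistency check, sketched under the assumption of the default cluster name "ceph" (CONF and MONDIR are overridable placeholders):

```shell
#!/bin/sh
# Sketch: cross-check [mon.<id>] sections in ceph.conf against the
# monitor data directories /var/lib/ceph/mon/<cluster>-<id>.
# "ceph-" below assumes the default cluster name; adjust if yours differs.
CONF=${CONF:-/etc/ceph/ceph.conf}
MONDIR=${MONDIR:-/var/lib/ceph/mon}

# Extract every <id> from lines of the form "[mon.<id>]".
sed -n 's/^\[mon\.\(.*\)\]$/\1/p' "$CONF" | while read -r id; do
    if [ -d "$MONDIR/ceph-$id" ]; then
        echo "mon.$id: OK ($MONDIR/ceph-$id exists)"
    else
        echo "mon.$id: no data dir $MONDIR/ceph-$id" >&2
    fi
done
```

If a section name and its directory disagree, the init script refuses to start the daemon, which matches the symptom in the quoted message.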