On 01/17/2014 10:01 AM, Никитенко Виталий wrote:
Good day! Please help me solve a problem. I have the following setup:
An ESXi server with 1Gb NICs. It has a local 2TB datastore (store2Tb) and two iSCSI
stores connected to a second server.
The second server is a Supermicro box: two 1TB HDDs (LSI 9261-8i with battery), 8 CPU
cores, 32 GB RAM and two 1Gb NICs. Ubuntu 12 with Ceph Emperor is installed on
/dev/sda, and /dev/sdb is used for osd.0.

How do you do journaling?
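
(With a single spinning OSD the journal placement matters a lot. For reference, a
minimal sketch of a ceph.conf entry that puts the journal on a separate device; the
device path and size below are hypothetical, adjust them to your hardware:)

   [osd.0]
       # hypothetical: journal on a separate (ideally SSD) partition
       osd journal = /dev/sdc1
       # journal size in MB; only used when the journal is a plain file
       osd journal size = 5120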

What I did next:
   # rbd create esxi
   # rbd map esxi
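
(Note: rbd create needs an explicit size, so presumably something along these lines
was actually run; the 100 GB figure is only an illustration:)

   # rbd create esxi --size 102400    # size in MB, 100 GB here as an example
   # rbd map esxi
   # rbd showmapped                   # shows which /dev/rbdX the image was mapped to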

I get /dev/rbd1, which is shared using iscsitarget (IET):

   # cat ietd.conf
   Target iqn.2014-01.ru.ceph:rados.iscsi.001
     Lun 0 Path=/dev/rbd1,Type=blockio,ScsiId=f817ab
   Target iqn.2014-01.ru.ceph:rados.iscsi.002
     Lun 1 Path=/opt/storlun0.bin,Type=fileio,ScsiId=lun1,ScsiSN=lun1
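
(To check that IET is actually exporting both LUNs, the iscsitarget kernel module
exposes its state under /proc; these paths assume the stock iscsi_trgt module:)

   # cat /proc/net/iet/volume     # targets with their LUNs and backing paths
   # cat /proc/net/iet/session    # currently connected initiators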

As a test I also created an iSCSI store backed by a file on /dev/sda (Lun 1).
When migrating a virtual machine from store2Tb to Lun 0 (Ceph), the migration rate is
400-450 Mbit/s.
When migrating a VM from store2Tb to Lun 1 (the Ubuntu file), the rate is
800-900 Mbit/s.
From this I conclude that the rate is limited neither by the disk (controller) nor by
the network.
I tried formatting the OSD as ext4, xfs and btrfs, but the speed stays the same. Speed
is very important to me, especially since we plan to move to 10Gb network links.
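
(One way to see whether the bottleneck is the iSCSI layer or RBD itself is to write to
the mapped device directly on the Ceph host, bypassing iSCSI. This overwrites data on
the image, so only do it on a scratch image; block size and count are arbitrary:)

   # dd if=/dev/zero of=/dev/rbd1 bs=4M count=1000 oflag=direct    # sequential writes
   # dd if=/dev/rbd1 of=/dev/null bs=4M count=1000 iflag=direct    # sequential reads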

Have you tried tgt instead? It uses librbd instead of going through the kernel layers
for RBD and iSCSI: http://ceph.com/dev-notes/updates-to-ceph-tgt-iscsi-support/
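
(A rough sketch of exporting the same image through tgt's rbd backing store, assuming
tgt was built with Ceph support and the image lives in the default 'rbd' pool; the
target name and tid are only examples:)

   # tgtadm --lld iscsi --mode target --op new --tid 1 \
         --targetname iqn.2014-01.ru.ceph:rados.iscsi.001
   # tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --bstype rbd --backing-store rbd/esxi
   # tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL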

Have you also tried to run a rados benchmark? (rados bench)
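
(A minimal sketch, using a throw-away pool; the pool name, PG count and duration are
arbitrary:)

   # ceph osd pool create bench 128
   # rados bench -p bench 60 write --no-cleanup
   # rados bench -p bench 60 seq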

Also, be aware that Ceph excels at parallel performance. You shouldn't look too much at
the performance of a single "LUN" or RBD image; it's much more interesting to see the
aggregated performance of 10 or maybe 100 "LUNs" together.
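
(For example, a rough way to get a feel for aggregate throughput is to run several
benchmark instances in parallel; the instance count and options are arbitrary:)

   # for i in 1 2 3 4; do
   >     rados bench -p bench 60 write -t 16 --run-name client$i --no-cleanup &
   > done; wait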

Thanks.
Vitaliy

--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
