Hi All,
Recently I ran into a problem and I have not been able to find an explanation for it.


The environment is Ceph 10.2.5 (Jewel), QEMU 2.5.0, CentOS 7.2 x86_64.
The steps to reproduce are as follows (a rough shell sketch follows the list):
create a pool rbd_vms with 3 replicas, plus a cache tier pool, also with 3 replicas
create 100 images in rbd_vms
rbd map the 100 images to local devices, i.e. /dev/rbd0 ... /dev/rbd100
dd if=/root/win7.qcow2 of=/dev/rbd0 bs=1M count=3000
virsh define 100 VMs (vm0 ... vm100), each VM configured with one /dev/rbd device
virsh start the 100 VMs
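For clarity, this is roughly what the steps look like as a script. The pg counts, image size, image names and the writeback cache mode below are assumptions I am filling in for illustration, not an exact record of what I ran:

  # create the data pool and a cache tier pool, both 3x replicated
  ceph osd pool create rbd_vms 128 128
  ceph osd pool set rbd_vms size 3
  ceph osd pool create rbd_vms_cache 128 128
  ceph osd pool set rbd_vms_cache size 3
  ceph osd tier add rbd_vms rbd_vms_cache
  ceph osd tier cache-mode rbd_vms_cache writeback
  ceph osd tier set-overlay rbd_vms rbd_vms_cache

  # create and map 100 images, then seed the first one from the qcow2 file
  for i in $(seq 0 99); do
      rbd create rbd_vms/vm$i --size 40960     # size in MB, value is arbitrary here
      rbd map rbd_vms/vm$i                     # shows up as /dev/rbdN
  done
  dd if=/root/win7.qcow2 of=/dev/rbd0 bs=1M count=3000

  # define and start the VMs
  for i in $(seq 0 99); do
      virsh define vm$i.xml
      virsh start vm$i
  done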


When the 100 VMs are started concurrently, some of them hang.
When running fio tests inside those VMs, some VMs also hang.


I checked the Ceph status, OSD status, logs, etc.; everything looks the same as before the problem appeared.
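The checks I ran were roughly the following (the log path is just the default location on my nodes):

  ceph -s                      # overall cluster status
  ceph health detail           # any warnings or errors
  ceph osd tree                # confirm all OSDs are up/in
  ceph osd perf                # per-OSD commit/apply latencies
  grep -i -E 'error|slow' /var/log/ceph/ceph-osd.*.log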


However, when checking the devices with iostat -dx 1, some of the rbd* devices look strange:
%util is pinned at 100%, but the read and write counts are both zero.
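To narrow it down I watch only the rbd devices, along these lines (the awk test assumes %util is the last column of iostat -x output, which it is in my sysstat version):

  # show rbd devices whose %util is pinned near 100
  iostat -dx 1 | awk '$1 ~ /^rbd/ && $NF+0 > 95 {print}'

  # per-device in-flight request counters from the block layer
  for d in /sys/block/rbd*; do
      echo "$d: $(cat $d/inflight)"   # "<reads in flight> <writes in flight>"
  done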


I also checked the libvirt logs, the VM logs, etc., but found nothing useful.


Can anyone help me figure out what is going on? Do I need to adjust some parameters in librbd, krbd, or somewhere else?
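For example, since the VMs sit on kernel rbd devices (krbd), the kinds of knobs I am wondering about are the block-layer queue settings and the krbd map options. The values below are only guesses to illustrate the question, not something I have verified as a fix:

  # generic block-layer queue settings for each mapped rbd device
  cat /sys/block/rbd0/queue/nr_requests        # request queue depth
  echo 256 > /sys/block/rbd0/queue/nr_requests
  cat /sys/block/rbd0/queue/read_ahead_kb

  # krbd queue depth at map time (only on kernels that support this option)
  rbd map rbd_vms/vm0 -o queue_depth=128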


Thanks All.


------------------
Phone: 15908149443
Email: wangy...@datatom.com