Thanks everyone, the environment is like this:

Linux 3.0.97-1.el6.elrepo.x86_64, CentOS 6.4
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

OSD mounts:
/dev/sdd1 on /var/lib/ceph/osd/ceph-2 type xfs (rw)
/dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw)
/dev/sdc1 on /var/lib/ceph/osd/ceph-4 type xfs (rw)

xfs_info for /dev/sdb1:
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=8895321 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=35581281, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=17373, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

KVM command line:
/usr/libexec/qemu-kvm -name centos6-clone2 -S -machine rhel6.4.0,accel=kvm \
  -m 1000 -smp 2,sockets=2,cores=1,threads=1 \
  -uuid dd1a7093-bdea-4816-8a62-df61cb0c9bfa -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/centos6-clone2.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=rbd:rbd/centos6-clone2:auth_supported=none:mon_host=agent21.kisops.org\:6789,if=none,id=drive-virtio-disk0 \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5c:71:f1,bus=pci.0,addr=0x3 \
  -vnc 0.0.0.0:0 -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

I used `umount -l` to force the unmount; a cleaner sequence I have in mind is sketched at the bottom of this mail. Anything else you need?

2013/10/8 Mark Nelson <mark.nel...@inktank.com>

> Also, mkfs, mount, and kvm disk options?
>
> Mark
>
>
> On 10/07/2013 03:15 PM, Samuel Just wrote:
>
>> Sounds like it's probably an issue with the fs on the rbd disk? What
>> fs was the vm using on the rbd?
>> -Sam
>>
>> On Mon, Oct 7, 2013 at 8:11 AM, higkoohk <higko...@gmail.com> wrote:
>>
>>> We use ceph as the storage for kvm.
>>>
>>> I see errors in the VMs when I force-umount the ceph disk.
>>>
>>> Is that expected? How can I repair it?
>>>
>>> Many thanks.
>>>
>>> --higkoo
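For next time, is something like the sequence below the right way to take an OSD mount down cleanly instead of running `umount -l` while ceph-osd is still alive? This is only a sketch I have not tried yet; it assumes the stock sysvinit init script on CentOS 6 and uses osd.3 / /dev/sdb1 from the mount list above as the example:

  # keep the cluster from rebalancing while the OSD is down
  ceph osd set noout

  # stop the OSD daemon so nothing writes to the filesystem any more
  service ceph stop osd.3

  # check that nothing still holds the mount point busy
  fuser -vm /var/lib/ceph/osd/ceph-3

  # a plain umount should work now, no -l needed
  umount /var/lib/ceph/osd/ceph-3

  # ... do the maintenance on /dev/sdb1 ...

  mount /dev/sdb1 /var/lib/ceph/osd/ceph-3
  service ceph start osd.3
  ceph osd unset noout

For the guests themselves, once I know which filesystem the VMs use on the rbd image (Sam's question) I guess the next step is to run that filesystem's own checker from a rescue boot inside the VM.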