Hello, I am writing about an issue I first noticed with Ceph 0.72.2 on
Ubuntu 13.10, and which I can still reproduce with 0.80.1 on Ubuntu 14.04.
Here is what I do (a command sketch follows step 6):
1) I create a 4 TB rbd image with --order 25 and --image-format 2, and
format it as ext4 or xfs.
2) I create a snapshot of that rbd image
3) I protect that snapshot
4) I create a clone of that initial rbd image, using the protected
snapshot as its parent.
5) I add a line for the clone to /etc/ceph/rbdmap, map the new image,
and mount it on my ceph client server.
Up to this point everything is fine and dandy.
6) I umount /dev/rbd1, which is the previously mounted rbd clone image,
and the umount gets stuck.
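For reference, the exact command sequence looks roughly like this (pool
and image names here are placeholders; the base image is mapped first,
so the clone comes up as /dev/rbd1):

  # 1) create a 4 TB format-2 image with 32 MB objects (--order 25), then format it
  rbd create --size 4194304 --order 25 --image-format 2 rbd/base-image
  rbd map rbd/base-image
  mkfs.xfs /dev/rbd0                       # or mkfs.ext4 /dev/rbd0

  # 2-4) snapshot, protect, clone
  rbd snap create rbd/base-image@snap1
  rbd snap protect rbd/base-image@snap1
  rbd clone rbd/base-image@snap1 rbd/clone-image

  # 5) add the clone to /etc/ceph/rbdmap, then map and mount it
  echo "rbd/clone-image" >> /etc/ceph/rbdmap
  rbd map rbd/clone-image                  # appears as /dev/rbd1
  mount /dev/rbd1 /mnt/clone

  # 6) this is the step that hangs
  umount /dev/rbd1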
On the client server, while the umount is stuck, I see this message in
/var/log/syslog:
Jun 11 12:26:10 tesla kernel: [63365.178657] libceph: osd8
20.10.10.105:6803 socket error on read
As the problem seems to be somehow related to osd.8 on my 20.10.10.105
ceph node, I go there to get more information from its log.
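To double-check that the address in the syslog message really belongs to
osd.8, I first look it up from a monitor node with the standard commands:

  ceph osd tree                  # shows which host each OSD lives on
  ceph osd dump | grep osd.8     # up/down state and the addresses osd.8 is bound to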
In /var/log/ceph-osd.8.log this message comes in endlessly:
2014-06-11 12:31:51.692031 7fa26085c700 0 -- 20.10.10.105:6805/23321 >>
20.10.10.12:0/2563935849 pipe(0x9dd6780 sd=231 :6805 s=0 pgs=0 cs=0 l=0
c=0x7ed6840).accept peer addr is really 20.10.10.12:0/2563935849 (socket
is 20.10.10.12:33056/0)
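If it helps, on the client I can also list the requests the kernel rbd
client still has in flight (assuming debugfs is mounted at
/sys/kernel/debug):

  mount -t debugfs none /sys/kernel/debug 2>/dev/null   # no-op if already mounted
  cat /sys/kernel/debug/ceph/*/osdc                     # outstanding osd requests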
Can anyone help me solve this issue?
--
Alphe Salas
IT engineer