My environment is as follows:
The ceph-fuse client is 10.2.2 and the Ceph OSDs are 0.94.3; details below:
[root@localhost ~]# rpm -qa | grep ceph
libcephfs1-10.2.2-0.el7.centos.x86_64
python-cephfs-10.2.2-0.el7.centos.x86_64
ceph-common-0.94.3-0.el7.x86_64
ceph-fuse-10.2.2-0.el7.centos.x86_64
ceph-0.94.3-0.el7.x86_64
ceph-mds-10.2.2-0.el7.centos.x86_64
[root@localhost ~]# rpm -qa | grep rados
librados2-devel-0.94.3-0.el7.x86_64
librados2-0.94.3-0.el7.x86_64
libradosstriper1-0.94.3-0.el7.x86_64
python-rados-0.94.3-0.el7.x86_64
When I mount cephfs with "ceph-fuse -m 10.222.5.229:6789
--client_mountpoint=/client_one /mnt/test", the ceph-fuse client crashes
after running for a few hours.
-16> 2016-08-18 18:37:54.134672 7fd552ffd700 3 client.214296 ll_flush 0x7fd5307e8520 1478575
-15> 2016-08-18 18:37:54.134717 7fd5128e2700 3 client.214296 ll_release (fh)0x7fd5307e8520 1478575
-14> 2016-08-18 18:37:54.134725 7fd5128e2700 5 client.214296 _release_fh 0x7fd5307e8520 mode 1 on 1478575.head(faked_ino=0 ref=3 ll_ref=11030 cap_refs={1024=0,2048=0} open={1=1} mode=100644 size=12401/0 mtime=2016-08-17 13:49:59.382502 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[1478575 ts 0/0 objects 1 dirty_or_tx 0] parents=0x7fd55c0120d0 0x7fd55c011b30)
-13> 2016-08-18 18:37:54.136109 7fd551ffb700 3 client.214296 ll_getattr 147417f.head
-12> 2016-08-18 18:37:54.136118 7fd551ffb700 3 client.214296 ll_getattr 147417f.head = 0
-11> 2016-08-18 18:37:54.136126 7fd551ffb700 3 client.214296 ll_forget 147417f 1
-10> 2016-08-18 18:37:54.136133 7fd551ffb700 3 client.214296 ll_lookup 0x7fd55c0108d0 2016
-9> 2016-08-18 18:37:54.136140 7fd551ffb700 3 client.214296 ll_lookup 0x7fd55c0108d0 2016 -> 0 (1474182)
-8> 2016-08-18 18:37:54.136148 7fd551ffb700 3 client.214296 ll_forget 147417f 1
-7> 2016-08-18 18:37:54.136181 7fd5527fc700 3 client.214296 ll_getattr 1474182.head
-6> 2016-08-18 18:37:54.136189 7fd5527fc700 3 client.214296 ll_getattr 1474182.head = 0
-5> 2016-08-18 18:37:54.136735 7fd550c92700 2 -- 10.155.2.5:0/1557134465 >> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 c=0x7fd54c0141b0).reader couldn't read tag, (0) Success
-4> 2016-08-18 18:37:54.136792 7fd550c92700 2 -- 10.155.2.5:0/1557134465 >> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 c=0x7fd54c0141b0).fault (0) Success
-3> 2016-08-18 18:37:54.136950 7fd56bff7700 1 client.214296.objecter ms_handle_reset on osd.5
-2> 2016-08-18 18:37:54.136967 7fd56bff7700 1 -- 10.155.2.5:0/1557134465 mark_down 0x7fd54c0141b0 -- pipe dne
-1> 2016-08-18 18:37:54.137054 7fd56bff7700 1 -- 10.155.2.5:0/1557134465 --> 10.155.2.5:6820/4511 -- osd_op(client.214296.0:630732 4.a8ddcaa5 1493bde. [write 0~12401] snapc 1=[] RETRY=1 ondisk+retry+write+known_if_redirected e836) v7 -- ?+0 0x7fd55ca2ff40 con 0x7fd55ca6d710
0> 2016-08-18 18:37:54.141233 7fd5527fc700 -1 *** Caught signal (Segmentation fault) **
in thread 7fd5527fc700 thread_name:ceph-fuse
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
1: (()+0x29eeda) [0x7fd57878feda]
2: (()+0xf130) [0x7fd577505130]
3: (Client::get_root_ino()+0x10) [0x7fd57868be60]
4: (CephFuse::Handle::make_fake_ino(inodeno_t, snapid_t)+0x18d) [0x7fd57868992d]
5: (()+0x199261) [0x7fd57868a261]
6: (()+0x164b5) [0x7fd5780a64b5]
7: (()+0x16bdb) [0x7fd5780a6bdb]
8: (()+0x13471) [0x7fd5780a3471]
9: (()+0x7df5) [0x7fd5774fddf5]
10: (clone()+0x6d) [0x7fd5763e61ad]
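
From the backtrace, frames 3 and 4 point at Client::get_root_ino() being
called from CephFuse::Handle::make_fake_ino(). Below is a minimal sketch of
what I suspect is happening; the types and function bodies are hypothetical
simplifications, not the real Ceph source, and it assumes get_root_ino()
dereferences the client's root inode pointer without a null check:

#include <cstdint>
#include <iostream>

using inodeno_t = uint64_t;

// Hypothetical, simplified stand-ins for the real Ceph types -- just
// enough to illustrate the suspected failure mode.
struct Inode {
  inodeno_t ino = 0;
  inodeno_t faked_ino = 0;
};

struct Client {
  Inode *root = nullptr;  // stays null until the mount root is resolved

  bool use_faked_inos() const { return true; }

  // Suspected pattern: no null check on 'root', so this dereferences a
  // null pointer whenever the root inode is unset or has been dropped,
  // matching frame 3 of the backtrace (Client::get_root_ino()+0x10).
  inodeno_t get_root_ino() {
    return use_faked_inos() ? root->faked_ino : root->ino;
  }
};

// FUSE reports the mount root as ino 1; the handle translates it to the
// client's real root, matching frame 4 (CephFuse::Handle::make_fake_ino).
inodeno_t make_fake_ino(Client *client, inodeno_t ino) {
  if (ino == 1)
    ino = client->get_root_ino();  // crashes here if client->root is null
  return ino;
}

int main() {
  Client c;  // root never assigned, so it is still nullptr
  if (c.root == nullptr)
    std::cout << "root unset: get_root_ino() would dereference nullptr\n";
  else
    std::cout << "root ino: " << make_fake_ino(&c, 1) << "\n";
  return 0;
}

If that pattern holds, a subtree mount whose root inode is never set, or is
dropped around the OSD session reset shown above, would explain why only the
--client_mountpoint mount crashes.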
But when I mount cephfs with "ceph-fuse -m 10.222.5.229:6789 /mnt/test",
the ceph-fuse client runs fine for days.
I do not think the problem is related to the 0.94.3 OSDs.
Has anyone encountered the same problem with CephFS 10.2.2?