[ceph-users] Increase PG number

2016-09-17 Thread Matteo Dacrema
Hi All,

I need to expand my Ceph cluster, and I also need to increase the PG number.
In a test environment I have seen that, during PG creation, all read and write
operations stop.

Is that normal behavior?
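
For reference, pg_num is normally raised in small increments, with pgp_num
bumped to match afterwards, so that PG splitting and rebalancing happen in
manageable chunks; a minimal sketch, assuming a pool named "rbd" (the pool
name and the target value 1024 are only placeholders):

ceph osd pool get rbd pg_num        # check the current value first
ceph osd pool set rbd pg_num 1024   # raise pg_num in small steps, waiting
                                    # for HEALTH_OK between steps
ceph osd pool set rbd pgp_num 1024  # then raise pgp_num to the same value
                                    # so the new PGs are used for placement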

Thanks
Matteo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs-client Segmentation fault with non-root mount point

2016-09-17 Thread yu2xiangyang
My environment is described below:

The ceph-fuse client is 10.2.2 and the Ceph OSDs are 0.94.3; details below:

[root@localhost ~]# rpm -qa | grep ceph
libcephfs1-10.2.2-0.el7.centos.x86_64
python-cephfs-10.2.2-0.el7.centos.x86_64
ceph-common-0.94.3-0.el7.x86_64
ceph-fuse-10.2.2-0.el7.centos.x86_64
ceph-0.94.3-0.el7.x86_64
ceph-mds-10.2.2-0.el7.centos.x86_64
 
 
[root@localhost ~]# rpm -qa | grep rados
librados2-devel-0.94.3-0.el7.x86_64
librados2-0.94.3-0.el7.x86_64
libradosstriper1-0.94.3-0.el7.x86_64
python-rados-0.94.3-0.el7.x86_64
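
For reference, the versions of the running daemons (as opposed to the
installed packages) can be checked like this; a sketch, assuming admin access
to the cluster and that osd.0 exists:

ceph --version            # version of the local client tools
ceph-fuse --version       # version of the fuse client binary
ceph tell osd.0 version   # version reported by a running OSD (repeat per id)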


When I mount CephFS with "ceph-fuse -m 10.222.5.229:6789 --client-mount
/client_one /mnt/test", the ceph-fuse client crashes after running for a few hours.
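
For reference, a non-root mount of the CephFS tree is normally requested with
the -r option or with the client_mountpoint config option; a sketch assuming
the same monitor address and subdirectory, in case the flag quoted above was
mistyped:

# documented -r form: mount /client_one instead of the tree root
ceph-fuse -m 10.222.5.229:6789 -r /client_one /mnt/test
# equivalent, passing the config option on the command line
ceph-fuse -m 10.222.5.229:6789 --client_mountpoint=/client_one /mnt/test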


-16> 2016-08-18 18:37:54.134672 7fd552ffd700  3 client.214296 ll_flush 
0x7fd5307e8520 1478575

   -15> 2016-08-18 18:37:54.134717 7fd5128e2700  3 client.214296 ll_release 
(fh)0x7fd5307e8520 1478575

   -14> 2016-08-18 18:37:54.134725 7fd5128e2700  5 client.214296 _release_fh 
0x7fd5307e8520 mode 1 on 1478575.head(faked_ino=0 ref=3 ll_ref=11030 
cap_refs={1024=0,2048=0} open={1=1} mode=100644 size=12401/0 mtime=2016-08-17 
13:49:59.382502 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[1478575 ts 0/0 
objects 1 dirty_or_tx 0] parents=0x7fd55c0120d0 0x7fd55c011b30)

   -13> 2016-08-18 18:37:54.136109 7fd551ffb700  3 client.214296 ll_getattr 
147417f.head

   -12> 2016-08-18 18:37:54.136118 7fd551ffb700  3 client.214296 ll_getattr 
147417f.head = 0

   -11> 2016-08-18 18:37:54.136126 7fd551ffb700  3 client.214296 ll_forget 
147417f 1

   -10> 2016-08-18 18:37:54.136133 7fd551ffb700  3 client.214296 ll_lookup 
0x7fd55c0108d0 2016

-9> 2016-08-18 18:37:54.136140 7fd551ffb700  3 client.214296 ll_lookup 
0x7fd55c0108d0 2016 -> 0 (1474182)

-8> 2016-08-18 18:37:54.136148 7fd551ffb700  3 client.214296 ll_forget 
147417f 1

-7> 2016-08-18 18:37:54.136181 7fd5527fc700  3 client.214296 ll_getattr 
1474182.head

-6> 2016-08-18 18:37:54.136189 7fd5527fc700  3 client.214296 ll_getattr 
1474182.head = 0

-5> 2016-08-18 18:37:54.136735 7fd550c92700  2 -- 10.155.2.5:0/1557134465 
>> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 
c=0x7fd54c0141b0).reader couldn't read tag, (0) Success

-4> 2016-08-18 18:37:54.136792 7fd550c92700  2 -- 10.155.2.5:0/1557134465 
>> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 
c=0x7fd54c0141b0).fault (0) Success

-3> 2016-08-18 18:37:54.136950 7fd56bff7700  1 client.214296.objecter 
ms_handle_reset on osd.5

-2> 2016-08-18 18:37:54.136967 7fd56bff7700  1 -- 10.155.2.5:0/1557134465 
mark_down 0x7fd54c0141b0 -- pipe dne

-1> 2016-08-18 18:37:54.137054 7fd56bff7700  1 -- 10.155.2.5:0/1557134465 
--> 10.155.2.5:6820/4511 -- osd_op(client.214296.0:630732 4.a8ddcaa5 
1493bde. [write 0~12401] snapc 1=[] RETRY=1 
ondisk+retry+write+known_if_redirected e836) v7 -- ?+0 0x7fd55ca2ff40 con 
0x7fd55ca6d710

 0> 2016-08-18 18:37:54.141233 7fd5527fc700 -1 *** Caught signal 
(Segmentation fault) **

 in thread 7fd5527fc700 thread_name:ceph-fuse

 

 ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

 1: (()+0x29eeda) [0x7fd57878feda]

 2: (()+0xf130) [0x7fd577505130]

 3: (Client::get_root_ino()+0x10) [0x7fd57868be60]

 4: (CephFuse::Handle::make_fake_ino(inodeno_t, snapid_t)+0x18d) 
[0x7fd57868992d]

 5: (()+0x199261) [0x7fd57868a261]

 6: (()+0x164b5) [0x7fd5780a64b5]

 7: (()+0x16bdb) [0x7fd5780a6bdb]

 8: (()+0x13471) [0x7fd5780a3471]

 9: (()+0x7df5) [0x7fd5774fddf5]

 10: (clone()+0x6d) [0x7fd5763e61ad]


But when I mount CephFS with "ceph-fuse -m 10.222.5.229:6789 /mnt/test" (i.e.
mounting the root of the tree), the ceph-fuse client runs fine for days.
I do not think the problem is related to the 0.94.3 OSDs.
Has anyone encountered the same problem with CephFS 10.2.2?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com