[ceph-users] osd exits with common/Thread.cc: 160: FAILED assert(ret == 0) -- 10.2.10

2019-02-27 Thread hnuzhoulin2
Hi, guys. So far, 10 OSD services have exited because of this error; the error messages are all the same. 2019-02-27 17:14:59.757146 7f89925ff700 0 -- 10.191.175.15:6886/192803 >> 10.191.175.49:6833/188731 pipe(0x55ebba819400 sd=741 :6886 s=0 pgs=0 cs=0 l=0 c=0x55ebb
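
In 10.2.x this assert fires in Thread::create() when pthread_create() fails, which on dense OSD nodes is commonly thread/PID exhaustion. A minimal check, assuming the node is hitting the kernel task limits (the value below is illustrative, not a recommendation):

    # inspect the current limits and the live thread count
    sysctl kernel.pid_max kernel.threads-max
    ps -eLf | wc -l
    # if the count is near the limit, raise pid_max persistently
    echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
    sysctl -p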

Re: [ceph-users] jewel 10.2.11 EC pool: out an osd, its PGs remap to the osds in the same host

2019-02-14 Thread hnuzhoulin2
Farnum wrote: Your CRUSH rule for EC pools is forcing that behavior with the line step chooseleaf indep 1 type ctnr. If you want different behavior, you’ll need a different CRUSH rule. On Tue, Feb 12, 2019 at 5:18 PM hnuzhoulin2 <hnuzhoul...@gmail.com> wrote: Hi, cephers, I a
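
Farnum's point in rule form: the leaf type in the rule decides where a replacement shard may land. A minimal sketch of a rule whose failure domain is host rather than ctnr, assuming a hypothetical root name default and a 4+2 pool; the original rule's ctnr type is site-specific:

    rule ec_by_host {
        ruleset 1
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step take default
        step chooseleaf indep 0 type host
        step emit
    }

With chooseleaf indep 0 type host the rule picks one OSD under each of k+m distinct hosts, so an outed OSD's shard can remap to another host instead of staying inside its own.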

[ceph-users] jewel 10.2.11 EC pool: out an osd, its PGs remap to the osds in the same host

2019-02-13 Thread hnuzhoulin2
Hi, cephers. I am building a Ceph EC cluster. When a disk fails, I out it, but all of its PGs remap to OSDs in the same host; I think they should remap to other hosts in the same rack. The test process is: ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2 si
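
One way to confirm which rule the pool actually uses and where a PG's shards land (pool name from the thread; crush_ruleset is the Jewel-era name of the pool attribute):

    ceph osd pool get .rgw.buckets.data crush_ruleset
    ceph osd crush rule dump
    ceph pg map <pgid>    # compare up/acting sets before and after the out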

[ceph-users] jewel 10.2.11 EC pool: out an osd, its PGs remap to the osds in the same host

2019-02-12 Thread hnuzhoulin2
Hi, cephers. I am building a Ceph EC cluster. When a disk fails, I out it, but all of its PGs remap to OSDs in the same host; I think they should remap to other hosts in the same rack. The test process is: ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2 site1_sata_
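
The failure domain an EC pool inherits is set on its erasure-code profile at creation time. A sketch, assuming the ISA-4-2 profile named in the thread and the Jewel-era ruleset-* parameter names (Luminous renamed them crush-*); host as failure domain is the illustrative choice here:

    ceph osd erasure-code-profile set ISA-4-2 \
        plugin=isa k=4 m=2 \
        ruleset-failure-domain=host
    ceph osd erasure-code-profile get ISA-4-2

Note that editing a profile does not rewrite the rule of an existing pool; the pool's rule must be replaced for a new failure domain to take effect.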

[ceph-users] repair does not work for an inconsistent pg whose three replicas are the same

2019-01-09 Thread hnuzhoulin2
Hi, cephers. I have two inconsistent PGs. I tried to list the inconsistent objects and got nothing: rados list-inconsistent-obj 388.c29 returns "No scrub information available for pg 388.c29, error 2: (2) No such file or directory". So I se
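
list-inconsistent-obj can only report findings recorded by a recent deep scrub; "No scrub information available" usually means none is on file. A minimal sequence, using the PG id from the thread:

    ceph pg deep-scrub 388.c29
    # wait until the deep scrub has finished, then:
    rados list-inconsistent-obj 388.c29 --format=json-pretty
    ceph pg repair 388.c29

When all three replicas match but are all wrong, repair may have no authoritative copy to recover from, which matches the subject of this thread.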

[ceph-users] pre-split causing slow requests when rebuilding an osd?

2018-11-26 Thread hnuzhoulin2
Hi, guys. I have a 42-node cluster, and I created the pool using expected_num_objects to pre-split the filestore dirs. Today I rebuilt an OSD because of a disk error, and it caused many slow requests; the filestore logs look like: 2018-11-26 16:49:41.003336 7f2dad075700 10 filestore(/home/ceph/var/lib/osd
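
expected_num_objects only pre-splits directories at pool creation, so a freshly rebuilt filestore OSD re-splits them during backfill, a classic source of slow requests. A sketch of the usual mitigations, with illustrative values to be tuned to actual object counts:

    # ceph.conf on the OSDs: split later, never merge back
    [osd]
    filestore merge threshold = -10
    filestore split multiple = 16

    # or pre-split the rebuilt OSD offline before it backfills
    # (the op exists in later Jewel/Luminous ceph-objectstore-tool builds)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
        --op apply-layout-settings --pool .rgw.buckets.data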

Re: [ceph-users] can we get the create time of a snap

2018-08-16 Thread hnuzhoulin2
Sorry for the late reply. I mean the RBD snap, and the timestamp feature in Luminous is: https://github.com/ceph/ceph/pull/12817 On 08/16/2018 22:54, Gregory Farnum wrote: On Mon, Aug 13, 2018 at 11:18 PM, hnuzhoulin2 wrote: hi, guys, we have many snaps, and we want to clear them. but can not
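
On Luminous and later, where the PR above landed, the timestamp is stored with the snapshot and exposed through the rbd CLI; exactly which columns and fields show it varies by release, so the JSON output is the safer check (image name is illustrative):

    rbd snap ls rbd/myimage --format=json --pretty-format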

[ceph-users] can we get the create time of a snap

2018-08-13 Thread hnuzhoulin2
Hi, guys. We have many snaps and we want to clear them, but we cannot find the create time of these snaps. I know Luminous has this feature by default, but for Jewel and Hammer, is there some hack way to get the create time, or is it just impossible? Thanks
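
One distinction worth noting: pool-level snapshots have always carried a timestamp, which even Jewel and Hammer print, whereas RBD self-managed snapshots store no creation time before Luminous, so for those there is nothing on disk to recover. The pool-snapshot check (pool name illustrative):

    rados -p rbd lssnap    # lists snapid, name, and creation date/time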