Re: [ceph-users] Decreasing pg_num

2019-04-14 Thread Feng Zhang
You might have some clients on an older version? Or do you need to run: ceph osd require-osd-release ***? Best, Feng On Sun, Apr 14, 2019 at 1:31 PM Alfredo Daniel Rezinovsky wrote: > > autoscale-status reports some of my PG_NUMs are way too big > > I have 256 and need 32 > > POOL
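
Note: decreasing pg_num only works once every client and daemon is on Nautilus or later and the cluster has been told to require that release. A minimal sketch of the checks involved, assuming the target release is Nautilus and using a placeholder pool name (both assumptions, not taken from the thread):

    # Check which release the cluster requires and what features the clients report
    ceph osd dump | grep require_osd_release
    ceph features

    # Allow PG merging (only safe once everything runs Nautilus or newer)
    ceph osd require-osd-release nautilus

    # Either let the autoscaler converge on its recommendation...
    ceph osd pool set <pool> pg_autoscale_mode on
    # ...or set the smaller pg_num directly
    ceph osd pool set <pool> pg_num 32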

Re: [ceph-users] Tip for erasure code profile?

2019-05-03 Thread Feng Zhang
Will m=6 cause huge CPU usage? Best, Feng On Fri, May 3, 2019 at 11:57 AM Ashley Merrick wrote: > > I may be wrong, but you're correct with your m=6 statement. > > You need at least K shards available. If you had k=8 and m=2 split > equally across 2 rooms (5 each), a failure in either room
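
For context: k=8, m=2 gives 10 shards, so an even split is 5 per room, and losing a room leaves only 5 shards, fewer than the k=8 needed to read the data. With just two rooms, surviving a whole-room failure therefore requires m >= k. A minimal sketch of such a profile, with hypothetical names (ec-2room, ecpool) and a simplified failure domain; a real two-room layout would also need a CRUSH rule that pins half the shards to each room:

    # With two rooms, m >= k is required to survive a room failure, e.g. k=4, m=4
    ceph osd erasure-code-profile set ec-2room k=4 m=4 crush-failure-domain=host
    ceph osd erasure-code-profile get ec-2room

    # Create an erasure-coded pool that uses the profile
    ceph osd pool create ecpool 64 64 erasure ec-2room

A larger m does mean more parity chunks to compute on every write, so some extra CPU, but in practice disk and network I/O during recovery tend to dominate.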

[ceph-users] maximum rebuild speed for erasure coding pool

2019-05-09 Thread Feng Zhang
Hello all, I have a naive question about how rebuilds work and what the maximum rebuild speed is for an erasure-coded pool. I did some searching but could not find any formal, detailed information about this. For pool recovery, the way Ceph works (to my understanding) is: each active OSD scrubs the drive, and if i

Re: [ceph-users] maximum rebuild speed for erasure coding pool

2019-05-09 Thread Feng Zhang
Thanks, guys. I forgot the IOPS. So since I have 100 disks, the total IOPS = 100 x 100 = 10K. For the 4+2 erasure code, when one disk fails, it needs to read 5 and write 1 objects. Then the whole 100 disks can do 10K/6 ~ 2K rebuild operations per second. While for the 100 x 6TB disks, suppose the object size is
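
Restating that back-of-envelope estimate as a formula (an idealized best case that ignores recovery throttles such as osd_max_backfills and osd_recovery_max_active; the 4 MB chunk size below is purely an illustrative assumption, since the object size in the original message is cut off):

    \[
      R \;\approx\; \frac{N_{\text{disks}} \times \text{IOPS}_{\text{disk}}}{\text{reads} + \text{writes}}
        \;=\; \frac{100 \times 100}{5 + 1} \;\approx\; 1.7\text{k objects/s}
    \]
    \[
      T \;\approx\; \frac{\text{failed-disk capacity} / s}{R}
        \;=\; \frac{6\,\text{TB} / 4\,\text{MB}}{1.7 \times 10^{3}\,/\text{s}}
        \;\approx\; \frac{1.5 \times 10^{6}}{1.7 \times 10^{3}}\;\text{s}
        \;\approx\; 900\,\text{s}
    \]

where s is the chunk stored on the failed disk for each affected object.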

Re: [ceph-users] maximum rebuild speed for erasure coding pool

2019-05-10 Thread Feng Zhang
:53 AM Janne Johansson wrote: > > > > On Thu, 9 May 2019 at 17:46, Feng Zhang wrote: >> >> Thanks, guys. >> >> I forgot the IOPS. So since I have 100 disks, the total >> IOPS = 100 x 100 = 10K. For the 4+2 erasure code, when one disk fails, it needs to >> read 5

Re: [ceph-users] Ceph crush map randomly changes for one host

2019-06-19 Thread Feng Zhang
Could it be because all the OSDs in it are set to reweight = 0?

-7        5.45695    host ceph-osd3
26   hdd  1.81898        osd.26    down  0  1.0
27   hdd  1.81898        osd.27    down  0  1.0
30   hdd  1.81898        osd.30    down
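
The REWEIGHT column (the 0 after each "down") controls whether an OSD receives data at all and is separate from the CRUSH weight. A minimal sketch of how one might check and restore it, using osd.26 from the excerpt purely as an example:

    # Show the tree; REWEIGHT 0 means the OSD has been weighted out
    ceph osd tree

    # Restore the override reweight so the OSD takes data again
    ceph osd reweight 26 1.0

    # The CRUSH weight is a different knob, normally sized to the device capacity
    ceph osd crush reweight osd.26 1.81898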

Re: [ceph-users] MDS fails repeatedly while handling many concurrent meta data operations

2019-07-24 Thread Feng Zhang
Does Ceph-fuse mount also has the same issue? On Wed, Jul 24, 2019 at 3:35 AM Janek Bevendorff wrote: > > > I mean kernel version > > Oh, of course. 4.15.0-54 on Ubuntu 18.04 LTS. > > Right now I am also experiencing a different phenomenon. Since I wrapped it > up yesterday, the MDS machines hav