Might you have some clients running an older version?
Or do you need to run: ceph osd require-osd-release ***?
Best,
Feng
On Sun, Apr 14, 2019 at 1:31 PM Alfredo Daniel Rezinovsky wrote:
>
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL
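For context, a rough Python sketch of the kind of target the pg_autoscaler works towards, assuming the default mon_target_pg_per_osd of 100 and the usual "only flag if off by more than ~3x" rule; this is not the real implementation, and the capacity ratio, OSD count and pool size below are made-up inputs, not taken from this cluster:

    def suggested_pg_num(capacity_ratio, osd_count, pool_size, target_pg_per_osd=100):
        # capacity_ratio: fraction of the cluster's data expected in this pool
        # pool_size: replica count, or k+m for an erasure-coded pool
        raw = capacity_ratio * osd_count * target_pg_per_osd / pool_size
        pg = 1
        while pg * 2 <= max(raw, 1):       # round down to a power of two...
            pg *= 2
        if raw / pg > 1.5:                 # ...then up if the next power is closer
            pg *= 2
        return pg

    current_pg_num = 256
    target = suggested_pg_num(capacity_ratio=0.02, osd_count=48, pool_size=3)
    print(target)                          # 32 with these made-up inputs
    print(current_pg_num / target >= 3)    # True -> flagged as "way too big"

If the suggested value looks right for the pool, pg_num can be lowered (PG merging is supported from Nautilus onwards) or the autoscaler can be switched to "on" for that pool.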
Will m=6 cause huge CPU usage?
Best,
Feng
On Fri, May 3, 2019 at 11:57 AM Ashley Merrick wrote:
>
> I may be wrong, but you're correct with your m=6 statement.
>
> You need at least k shards available. If you had k=8 and m=2 split
> equally across 2 rooms (5 each), a failure in either room
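To make the shard arithmetic in the quoted reply concrete, a minimal sketch using the k=8/m=2 example from the quote; the even 5-per-room placement is the quote's assumption:

    def survives_room_failure(k, m, rooms=2):
        # k data + m coding shards; any k of them are enough to serve reads
        shards = k + m
        per_room = shards // rooms          # assumes an even split, e.g. 5 + 5
        surviving = shards - per_room       # shards left after losing one room
        return surviving >= k

    print(survives_room_failure(k=8, m=2))  # False: only 5 of 10 shards survive, < k=8
    # With an even split across two rooms, surviving a whole-room loss needs m >= k.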
Hello all,
I have a naive question about how rebuilds work and the maximum rebuild speed
for an erasure-coded pool. I did some searching, but could not find any
formal, detailed information about this.
For pool recovery, the way Ceph works (to my understanding) is: each
active OSD scrubs the drive, and if i
Thanks, guys.
I forgot the IOPS. So since I have 100 disks, the total
IOPS = 100x100 = 10K. For the 4+2 erasure code, when one disk fails, it needs to
read 5 objects and write 1. Then the whole 100 disks can do 10K/6 ~ 2K
rebuilding actions per second.
While for the 100x6TB disks, suppose the object size is
:53 AM Janne Johansson wrote:
>
>
>
> On Thu, 9 May 2019 at 17:46, Feng Zhang wrote:
>>
>> Thanks, guys.
>>
>> I forgot the IOPS. So since I have 100 disks, the total
>> IOPS = 100x100 = 10K. For the 4+2 erasure code, when one disk fails, it needs to
>> read 5
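As a quick sanity check on the arithmetic quoted above, a small sketch; the 100-disk, 100-IOPS-per-disk and 4+2 figures come from the quoted message, while the object size is purely an assumption:

    disks = 100
    iops_per_disk = 100                        # rough HDD figure from the thread
    k, m = 4, 2                                # EC 4+2 as in the quoted example

    total_iops = disks * iops_per_disk         # ~10,000 IOPS cluster-wide
    ios_per_object = (k + m - 1) + 1           # read the 5 surviving shards + write 1 rebuilt shard

    objects_per_second = total_iops / ios_per_object
    print(objects_per_second)                  # ~1,666 objects/s, i.e. the "~2K" above

    object_size_mb = 4                         # assumed object size; not given in the thread
    print(objects_per_second * object_size_mb, "MB/s of reconstructed data")

This only bounds the rebuild rate by per-shard I/O operations; in practice sequential bandwidth, backfill throttles and client load will lower it.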
Could it be because all the OSDs in it have reweight set to 0?
-7       5.45695     host ceph-osd3
26   hdd 1.81898         osd.26   down        0 1.0
27   hdd 1.81898         osd.27   down        0 1.0
30   hdd 1.81898         osd.30   down
Does a ceph-fuse mount also have the same issue?
On Wed, Jul 24, 2019 at 3:35 AM Janek Bevendorff wrote:
>
>
> I mean kernel version
>
> Oh, of course. 4.15.0-54 on Ubuntu 18.04 LTS.
>
> Right now I am also experiencing a different phenomenon. Since I wrapped it
> up yesterday, the MDS machines hav