For replicated pools (without rounding to the nearest power of two) the
overall number of PGs is calculated as:
Pools_PGs = 100 * (OSDs / Pool_Size),
where
100 -- target number of PGs per single OSD for that pool,
Pool_Size -- factor showing how much raw storage would in fact be
used to store the data.
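For example (hypothetical numbers, just plugging into the formula above): 40
OSDs serving a replicated pool with size 3 gives roughly 1333 PGs before
rounding to a power of two such as 1024 or 2048:

# echo $((100 * 40 / 3))    # 40 OSDs, replicated size 3
1333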
OK. Thanks.
Once I thought restarting OSD could make it work.
From: Frédéric Nass
Sent: April 23, 2019 14:05
To: 刘 俊
Cc: ceph-users
Subject: Re: [ceph-users] Bluestore with so many small files
Hi,
You probably forgot to recreate the OSD after changing bluestore_min_alloc_size.
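If it helps, a hedged way to check what the config currently says (osd.0 is
just a placeholder; keep in mind that the value a BlueStore OSD actually uses
is fixed when the OSD is created, so a config change only affects OSDs
recreated afterwards):

# ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
# ceph daemon osd.0 config get bluestore_min_alloc_size_ssd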
Use k+m for PG calculation; that value also shows up as "erasure size"
in ceph osd pool ls detail.
The important thing here is how many OSDs each PG shows up on, and an
EC PG shows up on all k+m OSDs.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
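For example (pool name is a placeholder), the per-PG fan-out of an EC pool
can be read straight from the pool listing:

# ceph osd pool ls detail | grep <ec_pool_name>    # a 4+2 pool is listed as erasure with size 6
# ceph osd pool get <ec_pool_name> size            # should likewise report 6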
Looks like you got lots of tiny objects. By default the recovery speed
on HDDs is limited to 10 objects per second (40 with the DB on an SSD) per
thread.
Decrease osd_recovery_sleep_hdd (default 0.1) to increase
recovery/backfill speed.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
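A hedged example of lowering that sleep (the 0.05 value is illustrative;
ceph config set needs Mimic or later, injectargs works on older releases):

# ceph config set osd osd_recovery_sleep_hdd 0.05
# ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.05'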
On Sun, 28 Apr 2019 at 16:14, Paul Emmerich wrote:
> Use k+m for PG calculation, that value also shows up as "erasure size"
> in ceph osd pool ls detail
So does it mean that for PG calculation those 2 pools are equivalent:
1) EC(4, 2)
2) replicated, size 6
? Sounds weird to be honest. Replicate
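Per Paul's advice above, for the PG math they do come out the same: both
pools put each PG on 6 OSDs, so on a hypothetical 60-OSD cluster the formula
from earlier gives the same count for either one, even though the raw-space
overhead differs (1.5x for EC 4+2 vs 6x for replication):

# echo $((100 * 60 / 6))    # size 6 replicated, or k+m = 6 for EC 4+2
1000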
Hello Everyone,
I have a CephFS cluster which has 4 nodes; every node has 5 HDDs and 1 SSD.
I use BlueStore and place the WAL and DB on the SSD; we also set aside 50 GB
on each SSD for a metadata pool.
My workload is writing 10 million files into 200 dirs from 200 clients.
When I use 1 MDS I get 4k ops and everything
In this thread [1] it is suggested to bump up
mds log max segments = 200
mds log max expiring = 150
1- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html
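A hedged example of applying that at runtime (the MDS name is a placeholder;
note the reply below that mds log max expiring no longer exists in 13.2.5+):

# ceph config set mds mds_log_max_segments 200
# ceph tell mds.<name> injectargs '--mds_log_max_segments 200'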
On Sun, Apr 28, 2019 at 2:58 PM Winger Cheng wrote:
>
> Hello Everyone,
>
> I have a CephFS cluster which has 4 no
I have already set mds log max segments to 256, and in 13.2.5 mds log max
expiring is no longer needed since https://github.com/ceph/ceph/pull/18624
Serkan Çoban wrote on Sun, Apr 28, 2019 at 9:03 PM:
> In this thread [1] it is suggested to bump up
> mds log max segments = 200
> mds log max expiring = 150
>
> 1- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html
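For completeness, a hedged way to confirm what a running MDS actually uses
(the daemon name is a placeholder):

# ceph daemon mds.<name> config get mds_log_max_segments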
Thanks Paul,
Coming back to my question: is it a good idea to add SSD journals for the
HDDs on a new node in an existing cluster whose OSDs use HDD journals?
On Sun, Apr 28, 2019 at 2:49 PM Paul Emmerich
wrote:
> Looks like you got lots of tiny objects. By default the recovery speed
> on HDDs is limited t
It will mean you have some OSDs that will perform better than others, but
it won't cause any issues within Ceph.
It may help you expand your cluster at the speed you need to fix the MAX
AVAIL issue; however, you're only going to be able to backfill as fast as the
source OSDs can handle, but writing
Thanks Ashley.
Is there a way we could stop writes to the old OSDs and write only to the
new OSDs?
On Sun, 28 Apr 2019 at 19:21, Ashley Merrick
wrote:
> It will mean you have some OSDs that will perform better than others, but
> it won't cause any issues within Ceph.
>
> It may help you expa
Hey,
What controls/determines the object size of a purely CephFS EC (6,3) pool? I
have large files but seemingly small objects.
Daniel
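For what it's worth, object size for CephFS data normally comes from the
file layout (default object_size is 4 MiB, regardless of the pool type); a
hedged way to inspect it on a mounted filesystem (paths are placeholders,
and a directory only reports a layout if one was set on it explicitly):

# getfattr -n ceph.file.layout /mnt/cephfs/path/to/file
# getfattr -n ceph.dir.layout /mnt/cephfs/path/to/dir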
Hi,
I just upgraded from mimic to nautilus(14.2.0) and stumbled upon a strange
"feature".
I tried to increase pg_num for a pool. There were no errors but also no
visible effect:
# ceph osd pool get foo_pool01 pg_num
pg_num: 256
# ceph osd pool set foo_pool01 pg_num 512
set pool 11 pg_num to 512
#
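A hedged guess at what is going on: starting with Nautilus, pg_num changes
are applied gradually by the mon/mgr (the requested value is tracked as a
target), so the pool may take a while to reach 512; and if the post-upgrade
step of setting require-osd-release has not been run yet, that can also hold
the change back. Something like:

# ceph osd pool ls detail | grep foo_pool01    # check pg_num and, if shown, the pg_num target
# ceph osd require-osd-release nautilus        # only if this post-upgrade step was skipped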
Say some nodes have OSDs that are 1.5 times bigger than those on other
nodes, while the weights of all the nodes in question are almost equal
(due to them having different numbers of OSDs, obviously)
--