Are there any rules for computing RAM requirements in terms of the number of
PGs?
Just curious about what the fundamental limitations are on the number of PGs
per OSD for bigger-capacity HDDs.
best regards,
Samuel
huxia...@horebdata.cn
From: Anthony D'Atri
Date: 2020-09-05 20:00
To: huxia..
In our unofficial testing, under heavy random 4KB write workloads with large
PGs, we observed high latency, such as 100ms or above. On the other hand, when
looking at the source code, it seems that the PG lock could impact performance
as PGs grow larger.
That is why I am wondering
I think there are multiple variables there.
My advice for HDDs is to aim for an average of 150-200 PGs per OSD, as I wrote
before. The limitation is the speed of the device: throw a thousand PGs on
there and you won’t get any more out of it, you’ll just have more peering and
more RAM used.
NVMe is a diffe
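To put rough numbers on that (the pool layout and OSD count below are made-up
purely to illustrate the arithmetic, not measurements): a single replicated
pool with pg_num 1024 and size 3 spread over 20 OSDs works out to
1024 × 3 / 20 ≈ 154 PGs per OSD, i.e. inside that 150-200 band, and with
osd_memory_target left at its 4 GiB default that leaves only on the order of
25-30 MiB of overall memory budget per PG.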
On Sat, 2020-09-05 at 08:10 +, Magnus HAGDORN wrote:
> > I don't have any recent data on how long it could take but you might
> > try using at least 8 workers.
>
> We are using 4 workers and the first stage hasn't completed yet. Is it
> safe to interrupt and restart the procedure with
Hello, list.
Has anybody been in the situation where, after "ceph fs reset", the filesystem
becomes blank (it mounts OK, ls shows no files/directories), but the data and
metadata pools still hold something (698G and 400M respectively, according to
"ceph fs status")?
Would be grateful for documentation pointers and/or
I have been inserting 10790 copies of exactly the same 64KB text message into a
pool with passive compression enabled. I am still counting, but it looks like
only half of the objects are compressed.
mail/b08c3218dbf1545ff43052412a8e mtime 2020-09-06 16:27:39.00,
size 63580
mail/00f6043775f1545ff43
FWIW, a handful of years back there was a bug in at least some LSI firmware
where the setting “Disk Default” silently turned the volatile cache *on*
instead of the documented behavior, which was to leave it alone.
> On Sep 3, 2020, at 8:13 AM, Reed Dier wrote:
>
> It looks like I ran into the same
The hints have to be given from the client side, as far as I understand; can
you share the client code too?
Also, it seems that there is no guarantee that it will actually do anything
(best effort, I guess):
https://docs.ceph.com/docs/mimic/rados/api/librados/#c.rados_set_alloc_hint
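For reference, a minimal sketch of what giving that hint from a C client could
look like, using the librados call from that page. This is not the plugin's
actual code; the pool name "mail" and the object id are assumptions for
illustration, and error handling is trimmed:

#include <string.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    const char oid[] = "rbox-test-object";      /* hypothetical object id */
    const char data[] = "the 64KB message body would go here";

    if (rados_create(&cluster, NULL) < 0)        /* connect as client.admin */
        return 1;
    rados_conf_read_file(cluster, NULL);         /* default ceph.conf search path */
    if (rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "mail", &io) < 0)  /* assumed pool name */
        return 1;

    /* Advise the OSD that objects are ~64 KiB and written in one go.
     * Best effort only: the OSD is free to ignore the hint. */
    rados_set_alloc_hint(io, oid, 64 * 1024, 64 * 1024);

    rados_write_full(io, oid, data, strlen(data));

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

If I remember correctly there is also rados_set_alloc_hint2(), which takes an
additional flags argument (including a compressible flag) that may matter for
passive compression; worth double-checking in the librados header.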
Cheers
On 6
Hi David,
I suppose it is this part
https://github.com/ceph-dovecot/dovecot-ceph-plugin/tree/master/src/storage-rbox
-Original Message-
To: ceph-users@ceph.io;
Subject: Re: [ceph-users] ceph rbox test on passive compressed pool
The hints have to be given from the client side as far as
Hi guys,
When I updated the pg_num of a pool, I found that it did not work (no
rebalancing happened); does anyone know the reason? The pool's info:
pool 21 'openstack-volumes-rs' replicated size 3 min_size 2 crush_rule
21 object_hash rjenkins pg_num 1024 pgp_num 512 pgp_num_target 1024
autoscale_mode warn last_change 85103
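For what it is worth, the dump itself may explain it: pg_num is already 1024
but pgp_num is still 512 (with pgp_num_target 1024), and data only starts
moving once pgp_num catches up with pg_num, so the missing rebalance may simply
be the gradual pgp_num increase still in progress rather than anything broken.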
What are the current OSDs' sizes and pg_num? Are you using different-sized OSDs?
On 6/9/2020 at 1:34 AM, huxia...@horebdata.cn wrote:
Dear Ceph folks,
As the capacity of one HDD (OSD) grows bigger and bigger, e.g. from 6TB up
to 18TB or even more, should the number of PGs per OSD increase as well, e.
Hi all,
I have created a new Ceph cluster using the cephadm command and it
appears to work very well. I tried to specify using an SSD for the
journals but it doesn't appear to have worked. My yaml file is:
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'ceph-o
Instead of journal_devices you should specify db_devices:

db_devices:
  rotational: 0

I think journal_devices is for FileStore OSDs, which is not the default.
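In other words, something along the lines of a data_devices section with
rotational: 1 for the spinners next to a db_devices section with rotational: 0
for the SSD should make cephadm put the BlueStore DB/WAL on the SSD and the
data on the HDDs; that assumes your devices report the rotational flag
correctly.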
Quoting Darrin Hodges:
Hi all,
I have created a new Ceph cluster using the cephadm command and it
appears to work very well. I tr