Hi All,
I'm currently running some tests and have run with up to 2048 threads without any problem.
As per the code, here is what it says:
#ifndef MAX_WORKER_THREADS
#define MAX_WORKER_THREADS (1024 * 64)
#endif
This value was introduced via
https://github.com/ceph/civetweb/commit/8a07012185851b8e8be18039
We've been running with 2000, FWIW.
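So the compile-time ceiling is 1024 * 64 = 65536 worker threads, well above anything discussed here. Because the value sits behind an #ifndef guard, it could also be raised at build time with a -D flag instead of patching the file; a sketch only, since Ceph bundles its own copy of civetweb, so in practice this would have to go through the Ceph build rather than a standalone compile:

  cc -DMAX_WORKER_THREADS=131072 -c civetweb.c -o civetweb.o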
> On Oct 11, 2019, at 2:02 PM, Paul Emmerich wrote:
>
> Which defaults to rgw_thread_pool_size, so yeah, you can adjust that option.
>
> To answer your actual question: we've run civetweb with 1024 threads
> with no problems related to the number of threads.
>
>
Which defaults to rgw_thread_pool_size, so yeah, you can adjust that option.
To answer your actual question: we've run civetweb with 1024 threads
with no problems related to the number of threads.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
You probably want to increase the number of civetweb threads; that's a
parameter for civetweb in the rgw_frontends configuration (IIRC it's
threads=xyz).
Also, consider upgrading and using Beast, it's so much better for RGW
setups that get lots of requests.
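For reference, a minimal ceph.conf sketch; the section name "client.rgw.gateway1" is just an example, and option spellings should be checked against the docs for your release (if I remember right, the civetweb option is num_threads, which defaults to rgw_thread_pool_size):

  [client.rgw.gateway1]
  # civetweb frontend with an explicit thread count
  rgw_frontends = civetweb port=7480 num_threads=1024
  # or, on releases that ship it, the Beast frontend:
  # rgw_frontends = beast port=7480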
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
Hello all,
Looking for guidance on the recommended highest setting (or input on
experiences from users who have a high setting) for rgw_thread_pool_size. We
are running multiple Luminous 12.2.11 clusters with usually 3-4 RGW daemons in
front of them. We set our rgw_thread_pool_size at 512 out o
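For what it's worth, the value a running gateway actually uses can be checked over the admin socket; the socket path and instance name below are only examples, adjust for your hosts:

  ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config get rgw_thread_pool_size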
Very large omaps can take quite a while.
>
>> Your metadata PGs *are* backfilling. It is the "61 keys/s" statement in the
>> ceph status output in the recovery I/O line. If this is too slow, increase
>> osd_max_backfills and osd_recovery_max_active.
>>
>> Or just have some coffee ...
>
>
> I
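For anyone following along: both settings can be raised at runtime across all OSDs, for example as below (the numbers are only illustrative, and they add load, so turn them back down once backfill has finished):

  ceph tell osd.* injectargs '--osd_max_backfills 4 --osd_recovery_max_active 8'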
Parallelism. The backfill/recovery tunables control how many recovery ops a
given OSD will perform. If you’re adding a new OSD, naturally it is the
bottleneck. For other forms of data movement, early on one has multiple OSDs
reading and writing independently. Toward the end, increasingly few
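One way to watch that narrowing is to list the PGs still in a backfill state and see how few distinct OSDs remain in their up/acting sets, e.g. (output columns vary a bit between releases):

  ceph pg dump pgs_brief | grep backfill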
Yeah, we also noticed decreasing recovery speed when it comes to the last
PGs, but we never came up with a theory. I think your explanation makes
sense. Next time I'll try with much higher values; thanks for sharing
that.
Regards,
Eugen
Quoting Frank Schilder:
I did a lot of data movement lately and my observation is that backfill is
very fast (high bandwidth and many thousand keys/s) as long as it is
many-to-many OSDs. The number of OSDs participating slowly decreases over time
until there is only 1 disk left that is written to. This becomes really
Your metadata PGs *are* backfilling. It is the "61 keys/s" statement
in the ceph status output in the recovery I/O line. If this is too
slow, increase osd_max_backfills and osd_recovery_max_active.
Or just have some coffee ...
I already had increased osd_max_backfills and osd_recovery_max_
Your metadata PGs *are* backfilling. It is the "61 keys/s" statement in the
ceph status output in the recovery I/O line. If this is too slow, increase
osd_max_backfills and osd_recovery_max_active.
Or just have some coffee ...
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygn
Hi list,
Had a power outage that killed the whole cluster. CephFS will not start at all, but
RBD works just fine.
I did have 4 unfound objects that I eventually had to roll back or delete, which
I don't really understand, as I should have had a copy of those objects on the
other drives?
2/3 mons and
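For reference, the rollback/delete on unfound objects is done per PG with mark_unfound_lost; the PG id below is just a placeholder:

  ceph pg 2.5 mark_unfound_lost revert
  # or, if no usable previous copy exists:
  ceph pg 2.5 mark_unfound_lost delete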