[ceph-users] Re: RadosGW max worker threads

2019-10-11 Thread JC Lopez
Hi All, I'm currently running some tests and have run with up to 2048 without any problem. As per the code, here is what it says: #ifndef MAX_WORKER_THREADS #define MAX_WORKER_THREADS (1024 * 64) #endif This value was introduced via https://github.com/ceph/civetweb/commit/8a07012185851b8e8be18039

[ceph-users] Re: RadosGW max worker threads

2019-10-11 Thread Anthony D'Atri
We’ve been running with 2000, FWIW. > On Oct 11, 2019, at 2:02 PM, Paul Emmerich wrote: > > Which defaults to rgw_thread_pool_size, so yeah, you can adjust that option. > > To answer your actual question: we've run civetweb with 1024 threads > with no problems related to the number of threads. > >

[ceph-users] Re: RadosGW max worker threads

2019-10-11 Thread Paul Emmerich
Which defaults to rgw_thread_pool_size, so yeah, you can adjust that option. To answer your actual question: we've run civetweb with 1024 threads with no problems related to the number of threads. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io c
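For reference, a minimal sketch of how that option could be raised on a Luminous cluster (the client section name below is a placeholder; the daemon needs a restart for the change to take effect):

    # ceph.conf on the RGW host; 'client.rgw.gateway1' is a placeholder section name
    [client.rgw.gateway1]
    rgw_thread_pool_size = 1024

    # restart the gateway so the new thread pool size is picked up
    systemctl restart ceph-radosgw@rgw.gateway1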

[ceph-users] Re: RadosGW max worker threads

2019-10-11 Thread Paul Emmerich
You probably want to increase the number of civetweb threads; that's a parameter for civetweb in the rgw_frontends configuration (IIRC it's threads=xyz). Also, consider upgrading and using Beast; it's so much better for RGW setups that get lots of requests. Paul -- Paul Emmerich Looking for help
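A hedged sketch of what those frontend settings can look like (the port and thread values are illustrative; depending on the release, the civetweb option may be spelled num_threads and defaults to rgw_thread_pool_size):

    # civetweb frontend with an explicit worker thread count (option spelling varies by release)
    rgw_frontends = civetweb port=7480 num_threads=1024

    # Beast frontend; it sizes its worker pool from rgw_thread_pool_size
    rgw_frontends = beast port=7480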

[ceph-users] RadosGW max worker threads

2019-10-11 Thread Benjamin . Zieglmeier
Hello all, Looking for guidance on the recommended highest setting (or input on experiences from users who have a high setting) for rgw_thread_pool_size. We are running multiple Luminous 12.2.11 clusters with usually 3-4 RGW daemons in front of them. We set our rgw_thread_pool_size at 512 out o
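One way to confirm what a running gateway is actually using is the admin socket on the RGW host (the daemon name below is a placeholder):

    # query the live value; 'client.rgw.gateway1' is a placeholder daemon name
    ceph daemon client.rgw.gateway1 config get rgw_thread_pool_size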

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Anthony D'Atri
Very large omaps can take quite a while. > >> Your metadata PGs *are* backfilling. It is the "61 keys/s" statement in the >> ceph status output in the recovery I/O line. If this is too slow, increase >> osd_max_backfills and osd_recovery_max_active. >> >> Or just have some coffee ... > > > I

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Anthony D'Atri
Parallelism. The backfill/recovery tunables control how many recovery ops a given OSD will perform. If you’re adding a new OSD, naturally it is the bottleneck. For other forms of data movement, early on one has multiple OSDs reading and writing independently. Toward the end, increasingly few

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Eugen Block
Yeah, we also noticed decreasing recovery speed when it comes to the last PGs, but we never came up with a theory. I think your explanation makes sense. Next time I'll try with much higher values, thanks for sharing that. Regards, Eugen Quoting Frank Schilder: I did a lot of data movement late

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Frank Schilder
I did a lot of data movement lately, and my observation is that backfill is very fast (high bandwidth and many thousands of keys/s) as long as it is many-to-many OSDs. The number of OSDs participating slowly decreases over time until there is only 1 disk left that is written to. This becomes really
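A rough way to watch this tail effect is to list the PGs that are still backfilling together with the OSD sets they map to (a sketch; the exact output columns differ slightly between releases):

    # show remaining backfilling PGs and their up/acting OSDs
    ceph pg dump pgs_brief 2>/dev/null | grep -i backfill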

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Eugen Block
Your metadata PGs *are* backfilling. It is the "61 keys/s" statement in the ceph status output in the recovery I/O line. If this is too slow, increase osd_max_backfills and osd_recovery_max_active. Or just have some coffee ... I already had increased osd_max_backfills and osd_recovery_max_
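To double-check the values the OSDs are actually running with, each daemon can be queried over its admin socket (osd.0 below is a placeholder):

    # show the effective settings on one OSD; repeat for others as needed
    ceph daemon osd.0 config get osd_max_backfills
    ceph daemon osd.0 config get osd_recovery_max_active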

[ceph-users] Re: Nautilus: PGs stuck remapped+backfilling

2019-10-11 Thread Frank Schilder
Your metadata PGs *are* backfilling. It is the "61 keys/s" statement in the ceph status output in the recovery I/O line. If this is too slow, increase osd_max_backfills and osd_recovery_max_active. Or just have some coffee ... Best regards, = Frank Schilder AIT Risø Campus Bygn
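A minimal sketch of bumping those two options at runtime across all OSDs (the values 4 and 8 are illustrative, not recommendations from this thread):

    # raise backfill/recovery concurrency on every OSD at runtime
    ceph tell osd.* injectargs '--osd_max_backfills=4 --osd_recovery_max_active=8'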

[ceph-users] help

2019-10-11 Thread Jörg Kastning
On 11.10.2019 at 09:21, ceph-users-requ...@ceph.io wrote: Send ceph-users mailing list submissions to ceph-users@ceph.io To subscribe or unsubscribe via email, send a message with subject or body 'help' to ceph-users-requ...@ceph.io You can reach the person managing the list at

[ceph-users] Nautilus power outage - 2/3 mons and mgrs dead and no cephfs

2019-10-11 Thread Alex L
Hi list, Had a power outage killing the whole cluster. CephFS will not start at all, but RBD works just fine. I did have 4 unfound objects that I eventually had to roll back or delete, which I don't really understand, as I should've had a copy of those objects on the other drives? 2/3 mons and
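For context on the rollback/delete step mentioned above, the usual sequence looks roughly like this (the PG id 1.2f is a placeholder; revert rolls back to an older version of the object where one exists, delete gives it up entirely):

    # find the PGs that report unfound objects
    ceph health detail | grep unfound

    # then, per affected PG (1.2f is a placeholder PG id)
    ceph pg 1.2f mark_unfound_lost revert
    # or, if there is no older version to roll back to:
    ceph pg 1.2f mark_unfound_lost delete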