num_threads defaults to rgw_thread_pool_size, so yes, you can just adjust that option.

To answer your actual question: we've run civetweb with 1024 threads
with no problems related to the number of threads.
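For reference, a minimal ceph.conf sketch showing both ways to raise the thread count (the section name and port below are placeholders; use your actual rgw daemon id and restart the daemon afterwards):

```ini
# Hypothetical rgw daemon section; substitute your own daemon name.
[client.rgw.gateway1]
# Option 1: set the civetweb frontend thread count directly.
rgw_frontends = civetweb port=7480 num_threads=1024
# Option 2: raise rgw_thread_pool_size; civetweb's num_threads
# falls back to this value when not set explicitly.
rgw_thread_pool_size = 1024
```

Either one works; setting both to the same value just makes the intent explicit.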


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Oct 11, 2019 at 10:50 PM Paul Emmerich <paul.emmer...@croit.io> wrote:
>
> you probably want to increase the number of civetweb threads; that's a
> parameter for civetweb in the rgw_frontends configuration (the option
> is num_threads=xyz)
>
> Also, consider upgrading and using Beast; it's much better for rgw
> setups that get lots of requests.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Oct 11, 2019 at 10:02 PM Benjamin.Zieglmeier
> <benjamin.zieglme...@target.com> wrote:
> >
> > Hello all,
> >
> >
> >
> > Looking for guidance on the recommended highest setting (or input on 
> > experiences from users who have a high setting) for rgw_thread_pool_size. 
> > We are running multiple Luminous 12.2.11 clusters with usually 3-4 RGW 
> > daemons in front of them. We set our rgw_thread_pool_size at 512 out of the 
> > gate, and run civetweb. We had occasional service outages in one of our 
> > clusters this week and determined the rgws were running out of available 
> > threads to handle requests. We doubled our thread pool size to 1024 on each 
> > rgw and everything has been ok so far.
> >
> >
> >
> > What, if any, would be the high-end limit to set for rgw_thread_pool_size?
> > I’ve been unable to find anything in the documentation or on the user list
> > that mentions anything higher than the default of 100 threads.
> >
> >
> >
> > Thanks,
> >
> > Ben
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io