Your operating system will schedule each thread as it sees fit; nothing bad will "happen" just because threads outnumber cores. If your threads spend their time waiting on I/O completion, having more threads than cores is a reasonable way to handle more concurrent requests.
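To make the I/O-bound case concrete, here is a minimal sketch (plain Python threads standing in for waitress's worker threads, with time.sleep standing in for a blocked downstream call such as a database query). Eight threads that each "wait on I/O" for 200 ms finish in roughly 200 ms of wall time, regardless of core count:

```python
import threading
import time

def fake_io_request(results, i):
    # Simulate a request that spends its time blocked on I/O
    # (socket read, database query, etc.). The GIL is released
    # while blocked, so these threads overlap freely.
    time.sleep(0.2)
    results[i] = i * 2

results = [None] * 8
threads = [threading.Thread(target=fake_io_request, args=(results, i))
           for i in range(8)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Eight 200 ms "requests" complete in roughly 200 ms of wall time,
# even on a single core, because the threads overlap while blocked.
print(f"elapsed: {elapsed:.2f}s, results: {results}")
```

This is why a thread count well above the core count is common for I/O-bound WSGI apps; the downside only appears when the work is CPU-bound and the threads contend for the GIL.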
Bert

> On Apr 22, 2016, at 19:09, [email protected] wrote:
>
> What can happen if we increase the number of waitress threads beyond the
> number of CPU cores?
>
> On Saturday, March 12, 2016 at 4:02:35 PM UTC-8, Tom Wiltzius wrote:
> Thank you both for the information!
>
> It sounds like there isn't any significant downside to increasing the number
> of waitress threads beyond the number of available CPU cores if we expect
> them to be I/O bound rather than CPU bound. Is that true?
>
> I will investigate our nginx configuration; perhaps it's limiting the number
> of requests per client to the upstream server. Thanks for that tip. We're
> using SPDY 3.1 and I'm testing in Chrome, so I don't think the number of
> requests should be throttled by the client or by nginx on the WAN side (it
> should be one, persistent TCP connection).
>
> I haven't tried uWSGI, but I did try gunicorn and switched to using multiple
> processes instead of multiple threads. That doesn't seem to have changed the
> timings much, so I don't think we're blocking on the GIL.
>
> The last option is the database or SQLAlchemy; I have not ruled that out yet
> but I can write a script completely outside the context of the web server
> that makes similar requests and see how it performs.
>
> Thank you both again for the help.
>
> On Friday, March 11, 2016 at 2:44:24 PM UTC-8, Jonathan Vanasco wrote:
> > My theory is that if the threads get tied up with a few slow requests, the
> > server can no longer service the faster ones.
>
> That's usually the issue. It's compounded more when you don't pipe things
> through something like nginx, which can block resources on slow/dropped
> connections.
>
> A few ideas come to mind:
>
> i'd take a look at your nginx config. there are options to throttle the
> number of connections per client. (upstream and WAN)
> your browser could also have a limit on requests as well, and the keepalive
> implementation (if enabled on nginx) could be a factor.
> are you sure
> they're being sent in parallel and not serial?
>
> it's possible that you're having issues with database blocking.
>
> it's also possible, though i doubt it, that you're running into issues with
> the GIL. you could try using uwsgi to see if there is any difference.
>
> --
> You received this message because you are subscribed to the Google Groups
> "pylons-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/pylons-discuss/c8156435-bc76-40d1-8e11-c70a6016b909%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
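A standalone timing script like the one Tom describes (running the same queries with no web server or WSGI layer in the way) might look like this sketch. It uses the stdlib sqlite3 module with an invented `item` table as a stand-in, since the actual database, SQLAlchemy models, and queries aren't shown in the thread:

```python
import sqlite3
import time

# Stand-in schema and data; swap in your real engine, session,
# and the queries your slow views actually run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO item (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1000)])
conn.commit()

def run_query():
    # The work a slow view would do, minus the web stack.
    return conn.execute("SELECT COUNT(*) FROM item").fetchone()[0]

timings = []
for _ in range(50):
    start = time.perf_counter()
    count = run_query()
    timings.append(time.perf_counter() - start)

avg_ms = sum(timings) / len(timings) * 1000
print(f"rows: {count}, avg query time: {avg_ms:.3f} ms")
# If these numbers are fast but the views are slow, the bottleneck
# is in the web/ORM layer rather than the database itself.
```

Comparing these timings against the per-request timings seen through nginx and waitress should show whether the database (or the ORM) is where the time goes.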
