Let's not spread misinformation... Python has the GIL, true, and 15 
threads in a single process won't use any more CPU than a single thread 
can, nor spread across multiple CPUs. But for heaven's sake, we're 
deploying a webapp in PRODUCTION, whose purpose should be - at the very 
least - to serve requests concurrently. If we force 1 process and a single 
thread, we can give ourselves a round of applause and then bang our heads 
against the wall: we're serializing everything! 

tl;dr: I understand someone may think that 1 process with 15 threads (the 
defaults of the Apache directive) slows things down when you have 4 CPUs 
and 3 of them sit underutilized while serving 15 concurrent requests (boo 
hoo, the GIL - though I'd like to see test results), and that maybe 4 
processes with 5 threads each would be a smarter choice (really?!), but 1 
process and 1 thread is a complete joke. 
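
For the record, on a 4-core box the kind of directive I have in mind would 
look something like the line below (a sketch, not gospel - the 4x5 split is 
only an assumption based on the 4-CPU example above; benchmark it under 
real concurrent load before settling on anything):

WSGIDaemonProcess web2py user=www-data group=www-data processes=4 threads=5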

On Friday, July 31, 2015 at 8:18:23 AM UTC+2, Derek wrote:
>
> I believe that's what Massimo is trying to say. Each process has its own 
> GIL.
>
> On Thursday, July 30, 2015 at 6:31:57 PM UTC-7, Dave wrote:
>>
>> Thanks for tracking that down!  So based on that post, and this one, it 
>> looks like I should be setting processes = # of cores, and threads = 1?
>>
>> https://groups.google.com/forum/#!topic/web2py/mPdn1ClxLTI
>>
>> Massimo DI Pierro:
>> There are pros and cons. If you use threads in a Python program, the more 
>> computing cores you have, the slower - not faster - the program gets. This 
>> is a Python feature, because even if you have threads there is only one 
>> interpreter and therefore execution is serialized anyway. For scalability 
>> you should have processes (not threads), one per core.
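
(If you do go the one-process-per-core route, a quick way to check the core 
count on the box with a stock Python install is:

python -c "import multiprocessing; print(multiprocessing.cpu_count())"

and plug that number into processes=.)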
>>
>>
>>
>> On Thursday, 30 July 2015 14:15:36 UTC-6, Derek wrote:
>>>
>>> looks like that change came from here:
>>>
>>>
>>> https://groups.google.com/forum/#!searchin/web2py/processes$3D1$20threads$3D1|sort:date/web2py/jumZFKX2614/pu-NNXSMKHgJ
>>>
>>> based on the post from Thomas...
>>>
>>> Thomas J. 
>>> 10/3/13
>>> I've recently been comparing Web2py and PHP; this is what I found, maybe 
>>> it helps:
>>>
>>> 1. PHP is faster wherever it can do stuff within the interpreter, which 
>>> includes handling POST/GET data or templates. The PHP interpreter is 
>>> written in C, so these things are really fast, whereas Web2py has to do 
>>> them in Python.
>>> 2. DB access is heavily dependent on how many records you retrieve. 
>>> Translating a query from DAL to SQL is basically free (so using the DAL 
>>> syntax for DB access isn't an issue); putting the data into Python objects 
>>> is quite expensive, however. Only query what you really need. Also, using 
>>> executesql() instead of the regular DAL syntax may help there, as it 
>>> returns tuples, not complex data structures.
>>> 3. I've found the default config for Apache2 to be somewhat broken as 
>>> far as concurrency is concerned. Check your Apache/WSGI config; you 
>>> probably have a line like "WSGIDaemonProcess web2py user=www-data 
>>> group=www-data" in there. This actually defaults to 1 process with 15 
>>> threads, which -- on my machine -- completely kills performance (Python 
>>> doesn't do threads well because of the global interpreter lock). I've found 
>>> that even something like "WSGIDaemonProcess web2py user=www-data 
>>> group=www-data processes=1 threads=1" improves performance for concurrent 
>>> requests dramatically. Try tinkering with the processes value to find the 
>>> best config for your machine. 
>>>
>>> You should probably change processes before you change threads.
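
Re point 2 above: a rough sketch of the two access styles Thomas compares, 
assuming the usual db = DAL(...) from the model and a made-up 'article' 
table:

    # DAL syntax: builds Row objects; ask only for the columns you need
    rows = db(db.article.id > 0).select(db.article.title)
    # raw SQL via executesql(): returns plain tuples, cheaper to construct
    raw = db.executesql("SELECT title FROM article;")

The cost is in building the Row objects, not in translating the DAL query 
to SQL.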
>>>
>>> On Wednesday, July 29, 2015 at 3:40:30 PM UTC-7, Dave wrote:
>>>>
>>>> That would be good to know.  Nothing in the web2py docs suggests that 
>>>> Apache should not be used in production; it actually seems more like the 
>>>> default, since it is the first option discussed in the documentation and 
>>>> it's used in the one-step production deployment. 
>>>>
>>>> The line of code in question is also in the one-step deployment script 
>>>> here:
>>>> http://web2py.googlecode.com/hg/scripts/setup-web2py-ubuntu.sh
>>>>
>>>>
>>>> On Wednesday, 29 July 2015 14:32:19 UTC-6, Niphlod wrote:
>>>>>
>>>>> I dropped off the Apache train too soon to have any issues with it, 
>>>>> but frankly, given the total sum of issues encountered so far on the 
>>>>> forums, I'm starting to think that we'd need to "officially discontinue" 
>>>>> our Apache support... It may be a total lack of luck in setting it up, a 
>>>>> very biased perspective, or a total lack of internal knowledge, but it 
>>>>> seems that every problem that pops up with deployments has Apache as the 
>>>>> common ground.
>>>>>
>>>>> Looks like this is the commit to blame:
>>>>>
>>>>>
>>>>> https://github.com/web2py/web2py/commit/2a062a2ff5aa1e07e7bfcfdbf36b7f72e8aac5b4
>>>>>
>>>>> I don't know the specifics around it, but if it does what it suggests, 
>>>>> a grand total of 1 thread and 1 process isn't really worthy of a 
>>>>> production deployment.
>>>>>
>>>>>
>>>>> On Wednesday, July 29, 2015 at 6:00:47 PM UTC+2, Dave wrote:
>>>>>>
>>>>>> Actually, it looks like I was chasing the wrong issue... It wasn't 
>>>>>> HTTPS after all.
>>>>>>
>>>>>> Everything seems to be working after changing this line in Apache's 
>>>>>> default.conf:
>>>>>>
>>>>>> WSGIDaemonProcess web2py user=www-data group=www-data processes=1 
>>>>>> threads=1
>>>>>>
>>>>>>
>>>>>> to:
>>>>>>
>>>>>> WSGIDaemonProcess web2py user=www-data group=www-data processes=5 
>>>>>> threads=15
>>>>>>
>>>>>>
>>>>>>
>>>>>> Is there any reason not to change this default setting from the 
>>>>>> one-step deployment?  Can I set these values higher based on my hardware?
>>>>>>
>>>>>> Thanks again,
>>>>>> Dave
>>>>>>
>>>>>> On Wednesday, 29 July 2015 02:52:20 UTC-6, Niphlod wrote:
>>>>>>>
>>>>>>> Uhm, you left out some pretty specific details... What resources does 
>>>>>>> the server web2py is deployed on have? Moreover, what's the size of the 
>>>>>>> file, and what code are you using to handle the upload? Are you using 
>>>>>>> the default 'upload' Field, or is it used in conjunction with a 'blob' 
>>>>>>> field to store the file in the database?
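
(To clarify what I mean by that last question - a minimal sketch, with 
made-up table names, assuming the usual db = DAL(...) in the model:

    # plain 'upload': the file lands on the filesystem, only its name in the DB
    db.define_table('doc_on_disk',
        Field('attachment', 'upload'))

    # 'upload' + 'blob': the file bytes themselves end up inside the database
    db.define_table('doc_in_db',
        Field('attachment', 'upload', uploadfield='attachment_blob'),
        Field('attachment_blob', 'blob'))

The second variant pushes the whole file through the DB layer, which is a 
very different beast for large uploads.)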
>>>>>>>
>>>>>>> On Wednesday, July 29, 2015 at 4:59:29 AM UTC+2, Dave wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> I have this same behavior on multiple web2py servers.  If a large 
>>>>>>>> file is being uploaded using a SQLFORM or downloaded using the default 
>>>>>>>> download controller, over HTTPS, the entire web server becomes 
>>>>>>>> unresponsive 
>>>>>>>> until the transfer is completed or cancelled.  Over HTTP, however, I 
>>>>>>>> have no issues uploading/downloading the same file: the transfer can 
>>>>>>>> also take several minutes to complete, but the web server remains 
>>>>>>>> responsive throughout.
>>>>>>>>
>>>>>>>> I am using the one-step deployment with Apache and a wildcard 
>>>>>>>> certificate (RapidSSL).  Would switching to nginx or Cherokee give 
>>>>>>>> better performance for HTTPS file transfers, or is this likely an 
>>>>>>>> issue with the SSL certificate format?  Or, if file transfers over 
>>>>>>>> HTTPS are simply too CPU intensive, am I better off setting up 
>>>>>>>> multiple servers and a load balancer?
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>> Dave R
>>>>>>>>
>>>>>>>>
>>>>>>>>
