Yes, right, it could no longer be included in a single zip file that you
just unpack and are good to go.

About PyPy, I read a long time ago from someone who got web2py working
with it that there was not much of a speed improvement.

Richard

On Mon, May 14, 2012 at 12:06 PM, Ross Peoples <ross.peop...@gmail.com> wrote:

> The problem with doing it as a C module is that it would have to be
> compiled. I know that this is something that has been mentioned before but
> was shot down because web2py wouldn't be easily accessible, modifiable, etc.,
> which is the goal of web2py. Alternatively, maybe running web2py in a PyPy
> environment could improve the performance without taking away from the
> flexibility of web2py.
>
>
> On Monday, May 14, 2012 11:57:49 AM UTC-4, Richard wrote:
>>
>> Hello,
>>
>> I wonder if some of the speed problems that web2py has could be addressed
>> by translating some of the web2py modules into C?
>>
>> There are already many workarounds for speed problems, but could there be
>> some major speed improvement from rewriting some of web2py's foundation in C?
>>
>> I am just curious about that possibility, since I haven't seen it pop up
>> as a possible option for improving app speed when scaling up.
>>
>> Thanks
>>
>> Richard
>>
>> On Mon, May 14, 2012 at 11:50 AM, Anthony <abasta...@gmail.com> wrote:
>>
>>>
>>>>    1. Your "adviewer/viewads" makes 10 calls to the database. Try to
>>>>    optimize this either by writing fewer queries, creating a view,
>>>>    and/or only selecting fields that you need. Also make sure you get
>>>>    the criteria right so that you (ideally) don't have any extra,
>>>>    unneeded rows.
>>>>
>>> If it's feasible, also consider caching the queries for some amount of
>>> time (assuming the results don't change too frequently).
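>>>
>>> For example, a rough sketch (the "ads" table and its fields here are
>>> hypothetical, and cacheable= requires a recent web2py):
>>>
>>> # Select only the fields you need, and cache the result set in RAM
>>> # for 60 seconds; cacheable=True also skips some per-row overhead.
>>> rows = db(db.ads.active == True).select(
>>>     db.ads.id, db.ads.title, db.ads.url,
>>>     cache=(cache.ram, 60), cacheable=True)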
>>>
>>>>
>>>>    2. When you absolutely have to load a few thousand rows (or more)
>>>>    in a query (you should avoid this whenever possible), then try
>>>>    using "db.executesql(query)" to manually execute a hand-crafted SQL
>>>>    query. This will always be faster than using the DAL directly.
>>>>
>>> Note, the difference in speed is due to the fact that the DAL won't be
>>> converting the result set to a Rows object -- so you won't have the
>>> convenience of dealing with DAL Rows and Row objects. If you do
>>> db.executesql(query, as_dict=True), it will convert to a list of
>>> dictionaries (which is still faster than converting to a Rows object).
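>>>
>>> A minimal sketch (the table and column names are made up; the %s
>>> placeholder style assumes the psycopg2 driver):
>>>
>>> # Returns a list of dicts instead of a Rows object -- faster for
>>> # large result sets, but without the Row conveniences.
>>> results = db.executesql(
>>>     'SELECT id, title FROM ads WHERE active = %s;', (True,),
>>>     as_dict=True)
>>> for r in results:
>>>     print r['id'], r['title']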
>>>
>>>>
>>>>    3. Another point about executesql: The obvious issue is reduced
>>>>    portability, but if you are only planning on using PostgreSQL, then
>>>>    you can hand-craft a SQL query and profile it against PostgreSQL
>>>>    for maximum performance. Once you've got it giving only the data
>>>>    you want, then you can copy and paste that query into executesql.
>>>>
>>> If you want to use db.executesql() but remain portable, you can still
>>> have the DAL generate the SQL for you by using the ._select() method:
>>>
>>> db.executesql(db(query)._select(...))
>>>
>>> Obviously in that case you don't get to hand-optimize the SQL, but you
>>> still get the speed advantage of not converting the results to a Rows
>>> object (which is only significant for large result sets).
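>>>
>>> A quick sketch of that pattern (again with a hypothetical "ads" table):
>>>
>>> # The DAL generates the dialect-appropriate SQL string...
>>> sql = db(db.ads.active == True)._select(db.ads.id, db.ads.title)
>>> # ...and executesql runs it without building a Rows object.
>>> results = db.executesql(sql, as_dict=True)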
>>>
>>> Anthony
>>>
>>>
>>
