I am puzzled because I have never seen this. Has anybody else experienced this problem?
There are a few things you can try. First, connection pooling:

    db = SQLDB(...., pools=10)

Does it make things any faster? (It will not affect the first request, only the successive ones.)

Is it the actual insert that takes time, or the entire controller? Could you profile your code? You could use

    import time
    t0 = time.time()
    ... code ...
    print 'checkpoint', time.time() - t0

to identify the slow part.

You can also try from the shell:

    python web2py.py -S yourapp -M
    >>> db.yourtable.insert(yourfield=yourvalue)
    >>> db.commit()  # required in the shell

Is the insert slow, or is the commit slow?

Massimo

On Nov 2, 6:39 am, NoviceSortOf <[EMAIL PROTECTED]> wrote:
> Thanks for your response.
>
> PostgreSQL access via web2py remains very slow at this point.
>
> Adding a record, for instance, takes 30-60 seconds.
> Everything that accesses the DB seems to have a 30-second overhead.
>
> * Hardware: dual-core 2 GHz, hard drive with a 32 MB cache, 1 GB RAM.
>
> * Having tested Postgres access from plain Python (psycopg2) in the
>   same environment, a SELECT is instant on both large and small
>   tables, as one would expect.
>
> * I've converted the Cookbook application to access Postgres, so I
>   can use that as a test example.
>
> Any tips on where to look in solving this slow web2py/Postgres
> performance would be appreciated.
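For reference, a minimal sketch of that shell test with timing wrapped around each step, so you can see which one is slow (yourapp, yourtable, and yourfield are placeholders for your own app's objects, as above):

    import time

    # run this inside a web2py shell session started with:
    #   python web2py.py -S yourapp -M
    # so that db and the table definitions are already loaded
    t0 = time.time()
    db.yourtable.insert(yourfield='yourvalue')
    t1 = time.time()
    db.commit()  # required in the shell; controllers commit automatically
    t2 = time.time()

    print 'insert took %.3f seconds' % (t1 - t0)
    print 'commit took %.3f seconds' % (t2 - t1)

If the insert returns quickly but the commit hangs, that would point at the connection or the round trip to PostgreSQL rather than at web2py's own code path.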