Web2py can be used fairly effortlessly to build either complex applications with *small* numbers of concurrent users (*small* here is relative and subjective) or simple applications with *large* numbers of concurrent users. But if you want to build complex applications with large numbers of concurrent users (again, "large" is relative), there are a few things that might hurt you.
First, models and controllers are executed at every request. You can divide big controllers into many small ones, but once your app is complex enough that becomes difficult. With large numbers of concurrent users and complex controllers and models, the delays can be noticeable. (See the conditional-models sketch after the quoted message below.)

Second, session handling. The session is locked at the beginning of a request and released only when the request is finished. You can call session.forget(), but when your app is complex this is not always feasible or natural (see the second sketch below). When your controllers are even a little compute-intensive, delays will be noticeable. Further, if you have a sudden traffic peak, session files will quickly accumulate and slow things down, and I think web2py does not yet have an effective and efficient way to delete session files.

If you want to build complex apps with many concurrent users, I think you need a very flexible micro-framework with minimal assumptions. The trade-off is that you will have to do a lot more yourself, and you had better know very well what you are doing when you put all of those components together. In other words, something has got to give.

On Saturday, September 1, 2012 11:48:01 AM UTC-5, Webtechie wrote:
> I would like to use web2py for a web application which has large databases
> (really large) and expects a high volume of traffic. Are there any ways to
> make web2py apps run faster? (like really faster), (looking for solutions
> apart from pooling more hardware and replacing CPython with PyPy, running
> on a non-blocking server like tornado). How can I optimise web2py for my
> needs? Are web2py applications scalable?
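As a rough illustration of the first point: as I recall from the web2py book, model files placed in subfolders of models/ named after a controller (and optionally a function) are executed only for requests to that controller/function, so heavy table definitions do not have to run on every request. The file names and table below are hypothetical, just a minimal sketch of that layout:

    # models/db.py -- runs on every request, so keep it small
    db = DAL('sqlite://storage.sqlite', pool_size=10)

    # Placing the following in models/tables.py would make it run on every
    # request; moving it to models/reports/tables.py (hypothetical names)
    # means it is only executed when the "reports" controller is requested.
    #
    # models/reports/tables.py
    db.define_table('report',
        Field('title'),
        Field('body', 'text'),
        Field('created_on', 'datetime'))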
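And a minimal sketch of the session.forget() route mentioned above, for an action that never writes to the session (the controller and action names are made up):

    # controllers/api.py
    def status():
        # Release the session file lock and do not save the session at the
        # end of this request; otherwise concurrent requests from the same
        # client are serialized on the session lock for the whole action.
        session.forget(response)
        # ... session-free work goes here ...
        return dict(status='ok')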