>
>
> When you face the problem of scaling to serve more concurrent requests, 
> you either spawn more processes or add servers.
> Adding frontend servers is easy: the data stays transactionally consistent 
> as long as you have a single database instance. You put a load balancer in 
> front of the frontends (it's relatively inexpensive) and carry on.
>
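If I'm reading the quoted advice right, "adding frontend servers" just means 
running several identical copies of the app behind a load balancer, all 
pointing at the same database. A rough sketch of what I think that looks like 
in a web2py models/db.py (the host name and credentials below are made up, 
and DAL is already defined in the model environment):

    # models/db.py: every frontend server/process uses the same connection
    # string, so they all hit the single database instance that keeps the
    # data transactionally consistent.
    db = DAL('postgres://appuser:secret@db.internal:5432/appdb',  # assumed host/credentials
             pool_size=10,           # reuse connections within each process
             migrate_enabled=False)  # let only one node run migrations

Is that roughly it?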

How am I supposed to *spawn more processes*? PythonAnywhere has this concept 
of *workers*: the more workers you have, the more you pay. Is spawning more 
processes the same thing as buying more workers? And what kind of hosting 
service/company is good at dealing with scaling and caching? I don't think 
PythonAnywhere allows Redis or Memcache.
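(For what it's worth, if the host did allow memcache, my understanding from 
the web2py book is that the setup in a model file would look something like 
the sketch below; the server address is made up, and 'request' and 'cache' 
are already defined in the model environment. The point would be that all 
worker processes then share one cache instead of each keeping its own 
cache.ram.)

    # models/0_cache.py (hypothetical file name)
    from gluon.contrib.memcache import MemcacheClient

    memcache_servers = ['127.0.0.1:11211']   # assumed memcache address
    cache.memcache = MemcacheClient(request, memcache_servers)

    # route cache.ram and cache.disk through memcache so every worker
    # process reads and writes the same shared cache
    cache.ram = cache.disk = cache.memcache

Then something like cache.ram('recent_rows', lambda: db(db.thing).select(), 
time_expire=60) would be shared across workers instead of duplicated in each 
process.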


thanks guys

10 seconds of silence isn't enough... but silence, like 'workers', is 
expensive...

