On Jan 25, 12:46 am, "Prateek" <[EMAIL PROTECTED]> wrote:
> Concurrency Level:      1
> Time taken for tests:   18.664 seconds
> Complete requests:      1000
> Failed requests:        0
> Broken pipe errors:     0
> Total transferred:      14680000 bytes
> HTML transferred:       14417000 bytes
> Requests per second:    53.58 [#/sec] (mean)
> Time per request:       18.66 [ms] (mean)
> Time per request:       18.66 [ms] (mean, across all concurrent requests)
> Transfer rate:          786.54 [Kbytes/sec] received
>
> FYI: This request returns a PNG image (image/png) and not html
>
> My understanding is that the problem is either with the CherryPy setup
> (which is likely, because even in other cases I don't get much more
> than 65 requests per second) or PIL itself (even though I'm caching the
> background images and source images).
>
> Does anyone have a better solution? Is there a faster replacement for
> PIL?

So you have some gross-level statistics on how long a request takes to process; that's good. But before you start replacing PIL, optimizing CherryPy, or undertaking other performance-improving efforts, you should profile the within-request processing, find the bottleneck, and take care of that first. Without profiling info, you can easily spend days optimizing away 90% of a task that takes only 2% of the total processing time, for little net gain.
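As a rough sketch of what I mean by profiling the within-request processing: wrap the handler's image-generation call in the standard library's cProfile and look at where the cumulative time actually goes. The `make_thumbnail` function below is just a hypothetical stand-in for your PIL compositing code; substitute your real handler.

```python
import cProfile
import io
import pstats

def make_thumbnail():
    # Hypothetical stand-in for the real PIL image-composition work.
    total = 0
    for i in range(100000):
        total += i * i
    return total

def profile_request(func):
    """Profile a single call and return (result, stats report as text)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func)          # run the handler under the profiler
    buf = io.StringIO()
    stats = pstats.Stats(profiler, stream=buf)
    stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time
    return result, buf.getvalue()

result, report = profile_request(make_thumbnail)
print(report)
```

If the report shows most of the 18 ms going to PNG encoding, say, then swapping out CherryPy would buy you nothing; if it's spent in the framework's request plumbing, then tuning PIL is the wasted effort. The numbers tell you which.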
-- Paul -- http://mail.python.org/mailman/listinfo/python-list