This is a complete rewrite of wsgiserver; all the functions are different. It is trivial to port to Python 3 and/or to lightweight Stackless Python threads (although the pros and cons of the latter are not yet clear to me).
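The "asynchronous accepting mechanism" mentioned could, for example, be readiness-based rather than one blocking accept() per thread. Below is a minimal sketch using the Python 3 `selectors` module; the function name, arguments, and structure are my assumptions, not the actual wsgiserver code:

```python
import selectors
import socket
import threading


def accept_loop(server_sock, handle, stop):
    """Accept connections when the listening socket is readable.

    Sketch of a readiness-based accept mechanism (an assumed design,
    not the actual wsgiserver code): instead of blocking in accept(),
    wait for EVENT_READ so the loop can also poll a stop flag.
    """
    sel = selectors.DefaultSelector()
    server_sock.setblocking(False)
    sel.register(server_sock, selectors.EVENT_READ)
    try:
        while not stop.is_set():
            for key, _ in sel.select(timeout=0.1):
                conn, addr = key.fileobj.accept()
                handle(conn, addr)  # a real server hands this to a worker
    finally:
        sel.close()
```

A real server would hand `conn` off to a thread pool (or a Stackless tasklet) instead of calling the handler inline.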
I am also thinking about rewriting the accepting mechanism to include some of the features of asynchronous servers.

Massimo

On Sep 18, 7:44 am, Timbo <tfarr...@swgen.com> wrote:
> Line 215:
>
>     self.socket = ssl_interface(self.socket)
>
> should be:
>
>     self.socket = self.ssl_interface(self.socket)
>
> I presume.
>
> What version of wsgiserver are you basing this off of? I remember
> that a previous version of wsgiserver was using deprecated socket APIs
> and could not be easily ported to Python 3. Do you know if yours is
> Python 3 compatible?
>
> Thanks,
> -tim
>
> On Sep 17, 11:38 pm, Graham Dumpleton <graham.dumple...@gmail.com> wrote:
> > On Sep 18, 2:04 pm, mdipierro <mdipie...@cs.depaul.edu> wrote:
> > > Here are some "hello world" benchmarks, not using web2py but the
> > > barebone WSGI hello world:
> > >
> > > benchmark web2py WSGIServer
> > > ===========================
> > >
> > > massimo-di-pierros-macbook:gluon mdipierro$ ab -n 10000 http://127.0.0.1:8002/
> > >
> > > Concurrency Level:      1
> > > Time taken for tests:   5.609 seconds
> > > Complete requests:      10000
> > > Failed requests:        0
> > > Write errors:           0
> > > Total transferred:      1280000 bytes
> > > HTML transferred:       130000 bytes
> > > Requests per second:    1782.88 [#/sec] (mean)
> > > Time per request:       0.561 [ms] (mean)
> > > Time per request:       0.561 [ms] (mean, across all concurrent requests)
> > > Transfer rate:          222.86 [Kbytes/sec] received
> > >
> > > Connection Times (ms)
> > >               min  mean[+/-sd] median   max
> > > Connect:        0    0   0.0      0       2
> > > Processing:     0    0   0.1      0       2
> > > Waiting:        0    0   0.1      0       2
> > > Total:          0    1   0.1      0       3
> > >
> > > Percentage of the requests served within a certain time (ms)
> > >   50%      0
> > >   66%      1
> > >   75%      1
> > >   80%      1
> > >   90%      1
> > >   95%      1
> > >   98%      1
> > >   99%      1
> > >  100%      3 (longest request)
> > >
> > > benchmark CherryPy
> > > ==================
> > >
> > > massimo-di-pierros-macbook:gluon mdipierro$ ab -n 10000 http://127.0.0.1:8002/
> > >
> > > Concurrency Level:      1
> > > Time taken for tests:   7.247 seconds
> > > Complete requests:      10000
> > > Failed requests:        0
> > > Write errors:           0
> > > Total transferred:      1350000 bytes
> > > HTML transferred:       130000 bytes
> > > Requests per second:    1379.87 [#/sec] (mean)
> > > Time per request:       0.725 [ms] (mean)
> > > Time per request:       0.725 [ms] (mean, across all concurrent requests)
> > > Transfer rate:          181.92 [Kbytes/sec] received
> > >
> > > Connection Times (ms)
> > >               min  mean[+/-sd] median   max
> > > Connect:        0    0   0.0      0       1
> > > Processing:     0    1   0.1      1       3
> > > Waiting:        0    0   0.1      0       3
> > > Total:          1    1   0.1      1       3
> > >
> > > Percentage of the requests served within a certain time (ms)
> > >   50%      1
> > >   66%      1
> > >   75%      1
> > >   80%      1
> > >   90%      1
> > >   95%      1
> > >   98%      1
> > >   99%      1
> > >  100%      3 (longest request)
> >
> > For reference, care to provide results for a static file on Apache on
> > the same system, as well as a WSGI hello world program under
> > Apache/mod_wsgi? It will be interesting to see the comparison. I have
> > the latest 13 inch MacBook Pro and am running 64 bit Apache/Python
> > under Snow Leopard, so it may not be comparable to your MacBook, but I
> > get the results below.
> >
> > BTW, in the past (I don't know how things are now) I have found the
> > performance of the CherryPy WSGI server to not be as good on Mac OS X,
> > compared to Apache/mod_wsgi, as it is on Linux systems. On Linux the
> > results were quite close, but on Mac OS X the CherryPy WSGI server
> > lagged somewhat. That was with a much older version of the CherryPy
> > WSGI server, and also when running Tiger/Leopard. Using Mac OS X as
> > your test platform may not be the best idea.
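Stepping back to Timbo's one-line fix quoted above: the point is that `ssl_interface` is (presumably) an instance attribute, so referring to it as a bare name would raise NameError. A stripped-down sketch of the pattern, where the class shape and attribute names are my assumptions rather than the actual wsgiserver code:

```python
# Stripped-down sketch of the bug Timbo points out; the real wsgiserver
# class is more involved. `ssl_interface`, when set, is a callable that
# wraps a plain socket in an SSL layer.

class Server(object):
    def __init__(self, sock, ssl_interface=None):
        self.socket = sock
        self.ssl_interface = ssl_interface

    def start(self):
        if self.ssl_interface:
            # Must be self.ssl_interface: a bare `ssl_interface` here
            # would raise NameError, since it is an instance attribute,
            # not a local or global name.
            self.socket = self.ssl_interface(self.socket)
        return self.socket
```

With a real SSL setup, `ssl_interface` would be something like a configured `ssl.SSLContext`'s `wrap_socket` method.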
> > For the static file:
> >
> > Server Software:        Apache/2.2.11
> > Server Hostname:        tests.example.com
> > Server Port:            80
> >
> > Document Path:          /hello.txt
> > Document Length:        13 bytes
> >
> > Concurrency Level:      1
> > Time taken for tests:   2.373 seconds
> > Complete requests:      10000
> > Failed requests:        0
> > Write errors:           0
> > Total transferred:      3340000 bytes
> > HTML transferred:       130000 bytes
> > Requests per second:    4213.62 [#/sec] (mean)
> > Time per request:       0.237 [ms] (mean)
> > Time per request:       0.237 [ms] (mean, across all concurrent requests)
> > Transfer rate:          1374.36 [Kbytes/sec] received
> >
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0       2
> > Processing:     0    0   0.1      0       3
> > Waiting:        0    0   0.0      0       3
> > Total:          0    0   0.1      0       3
> >
> > Percentage of the requests served within a certain time (ms)
> >   50%      0
> >   66%      0
> >   75%      0
> >   80%      0
> >   90%      0
> >   95%      0
> >   98%      0
> >   99%      0
> >  100%      3 (longest request)
> >
> > For the WSGI hello world program:
> >
> > Server Software:        Apache/2.2.11
> > Server Hostname:        tests.example.com
> > Server Port:            80
> >
> > Document Path:          /hello.wsgi
> > Document Length:        12 bytes
> >
> > Concurrency Level:      1
> > Time taken for tests:   3.785 seconds
> > Complete requests:      10000
> > Failed requests:        0
> > Write errors:           0
> > Total transferred:      2330000 bytes
> > HTML transferred:       120000 bytes
> > Requests per second:    2641.73 [#/sec] (mean)
> > Time per request:       0.379 [ms] (mean)
> > Time per request:       0.379 [ms] (mean, across all concurrent requests)
> > Transfer rate:          601.10 [Kbytes/sec] received
> >
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0       2
> > Processing:     0    0   0.1      0       2
> > Waiting:        0    0   0.1      0       2
> > Total:          0    0   0.1      0       3
> >
> > Percentage of the requests served within a certain time (ms)
> >   50%      0
> >   66%      0
> >   75%      0
> >   80%      0
> >   90%      0
> >   95%      0
> >   98%      1
> >   99%      1
> >  100%      3 (longest request)
> >
> > Graham

--
You received this message because you are subscribed to the Google Groups "web2py-users" group.
To post to this group, send email to web2py@googlegroups.com
To unsubscribe from this group, send email to web2py+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/web2py?hl=en
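For context, a mod_wsgi script such as Graham's hello.wsgi usually has the shape below. This is a sketch, not the actual script from the thread; the 12-byte Document Length reported by `ab` is consistent with a body like `b'Hello World!'`, but that exact body is a guess.

```python
# Sketch of a typical mod_wsgi script like hello.wsgi (the actual script
# is not shown in the thread). mod_wsgi invokes the module-level callable
# named `application` for each request.

def application(environ, start_response):
    # 12 bytes, consistent with the Document Length reported by ab;
    # the exact body is an assumption.
    output = b'Hello World!'
    status = '200 OK'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(output)))]
    start_response(status, headers)
    return [output]
```

Under Apache this file would be mapped to a URL with a `WSGIScriptAlias` directive; the same callable also runs unchanged under any other WSGI server.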