On Oct 13, 2:36 pm, Stefan Behnel <stefan...@behnel.de> wrote:
> Ashish Vyas, 12.10.2010 14:40:
>
> > When I send request using HTTP, I am able to reach 1 transaction (request
> > sent, response rcvd and validated.) per second from 20 parallel connections
> > easily. Average response time shown is about 0.15 seconds.
> > However, when I send request using HTTPS, I am seeing that the response time
> > shown by tool goes to 1.1 seconds for same 20 parallel connections, each
> > trying 1 transaction per second.
>
> You shouldn't overestimate the performance requirements for SSL/TLS support
> inside of the server application itself, simply because it's not used that
> much in real world deployments.
>
> It's actually very common to use a proxy to handle HTTPS traffic, and to
> forward the requests as plain HTTP to the "real" server. Separating the two
> gives you more freedom in your deployment (e.g. you can deploy the HTTPS
> proxy locally or on an entirely different machine at a suitable place in
> the network architecture), and makes your server generally more scalable.
> You can additionally use the HTTPS proxy machine to distribute the normal
> HTTP load over multiple server instances. There's even dedicated networking
> hardware for SSL/TLS proxying if you need it.
>
> Stefan
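
For reference, here is roughly how I understand the suggested setup: a minimal,
purely illustrative sketch of an SSL-terminating proxy in stdlib Python 3. The
backend address, listening port and cert/key file names are placeholders I made
up, and in a real deployment this role would go to a dedicated proxy or the
networking hardware Stefan mentions rather than a toy like this; the sketch only
shows where TLS starts and stops.

    # Hypothetical sketch: a tiny SSL-terminating proxy using only the stdlib
    # (Python 3).  Clients speak HTTPS to this process; each request is
    # forwarded as plain HTTP to a backend server.  The backend address,
    # listening port and cert/key file names below are made-up placeholders.
    import ssl
    import http.client
    from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

    BACKEND_HOST, BACKEND_PORT = "localhost", 8080   # the "real" HTTP server

    class TerminatingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Re-issue the request over plain HTTP to the backend.
            backend = http.client.HTTPConnection(BACKEND_HOST, BACKEND_PORT)
            backend.request("GET", self.path, headers=dict(self.headers))
            resp = backend.getresponse()
            body = resp.read()
            backend.close()
            # Relay status, headers and body back to the HTTPS client.
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("content-length", "transfer-encoding",
                                        "connection"):
                    self.send_header(name, value)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        server = ThreadingHTTPServer(("0.0.0.0", 8443), TerminatingProxy)
        # Only this listening socket does TLS; the backend never sees it.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("cert.pem", "key.pem")   # placeholder file names
        server.socket = ctx.wrap_socket(server.socket, server_side=True)
        server.serve_forever()
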
Yes, I agree that the server will see a similar overhead when HTTPS is used in
place of HTTP, and thanks for suggesting the HTTPS proxy box. However, my problem
here is on the client side:

  client on Xeon machine sending requests over HTTPS: average response time ~= 0.2 secs
  client on P4 machine sending requests over HTTP:    average response time ~= 0.15 secs
  client on P4 machine sending requests over HTTPS:   average response time ~= 1.1 secs

My understanding is that until the feature discussed in issue8106 (pointed out by
Antoine) is implemented, I will see no further improvement on the 1.1 seconds
(or 0.97 seconds with 3.2a2) that I measure now. Kindly confirm whether this
conclusion is correct.

Thanks,
Ashish

--
http://mail.python.org/mailman/listinfo/python-list