On Wed, Jul 10, 2013 at 6:57 PM, Ben Reser <b...@reser.org> wrote:
>...
> I have about 160ms of latency to this server. Mostly because I'm on
> the west coast of the US and it's in the UK someplace based on it's
> domain name.
Human perception of delay kicks in right around the 250ms mark, and user
interface designers use that figure as their response-time budget.

[ Google shoots for some huge percentile of searches to come in under
that. Once you cross the 250ms mark with *no response*, the user
perceives a delay. In Google's case, that delay means a user goes
elsewhere, and Google loses money. They've demonstrated a specific
correlation between response time and revenue. It's a bit frightening
when you file an SEC annual report that says "our revenue is tied to
our HTTP response time". (okay, they didn't file that, but they *could*) ]

Of course, you can play certain tricks to give the user *something*
before the full answer arrives, like:

$ svn ls $URL
<... 150ms ...>
Listing of $URL:
<... 200ms ...>
$FILES

In any case: in your example, with 160ms of latency, RTT*2 is 320ms,
which is already past that barrier, so the user will notice the lag.
That said, there may already be other lag, so that 320ms might be
buried in a 5-second response time.

>...

Thanks for running these numbers. I can see that it might also be
constructive to add some kind of connection/request profiling into
ra_serf that we could use to direct optimizations in the future.

Cheers,
-g
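
p.s. To make the profiling idea a little more concrete, here is a rough
sketch of what I mean -- note this is NOT the real serf/ra_serf API, and
the struct, field names, and example path are all made up for
illustration. The point is simply to timestamp each request when it is
queued and when its response completes, then log the delta:

  /* Hypothetical per-request timing sketch. Standard POSIX
     clock_gettime(), nothing serf-specific. */
  #include <stdio.h>
  #include <time.h>

  /* made-up record for one outstanding request */
  typedef struct {
    const char *path;          /* what we asked for */
    struct timespec queued;    /* when the request was handed off */
    struct timespec completed; /* when the last response byte arrived */
  } request_timing_t;

  static void mark(struct timespec *ts)
  {
    clock_gettime(CLOCK_MONOTONIC, ts);
  }

  static double elapsed_ms(const request_timing_t *rt)
  {
    return (rt->completed.tv_sec - rt->queued.tv_sec) * 1000.0
           + (rt->completed.tv_nsec - rt->queued.tv_nsec) / 1e6;
  }

  int main(void)
  {
    request_timing_t rt = { "/svn/!svn/example/trunk", {0, 0}, {0, 0} };

    mark(&rt.queued);
    /* ... issue the request and drive the response here ... */
    mark(&rt.completed);

    fprintf(stderr, "PROFILE: %s took %.1f ms\n",
            rt.path, elapsed_ms(&rt));
    return 0;
  }

Aggregating those deltas per connection would tell us where the RTTs
are actually going, and which requests are worth optimizing first.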