On Sat, Jun 15, 2013 at 1:18 AM, Wolfgang Keller <felip...@gmx.net> wrote:
> Server-roundtrips required for simple user interaction are an absolute
> non-starter for productivity applications. No matter whether in a LAN
> or WAN. If you want a responsive application you have to de-centralise
> as much as possible.
Okay... how much does a round trip cost? Considering that usability guidelines generally permit ~100ms for direct interaction, and considering that computers on a LAN can easily have sub-millisecond ping times, it seems to me you're allowing a ridiculous amount of time for code to execute on the server.

Now, granted, there are systems that are suboptimal. (Magento, a PHP-based online shopping cart system, took the better part of a second - something in the order of 700-800ms - to add a single item. And that was on reasonable hardware; not a dedicated server, but my test box was certainly not trash.)

For a real-world example of a LAN system that uses a web browser as its UI, I'm using the Yosemite Project here. It consists of a single-threaded Python script, no scaling assistance from Apache, just the simplest it can possibly be. It is running on three computers: yosemite [the one after whom the project was named], huix, and sikorsky. I used 'time wget http://hostname:3003/airshow' for my testing, which involves:

* A DNS lookup from the local DNS server (on the LAN)
* An HTTP query to the specified host
* A directory listing, usually remote
* Building a response (in Python)
* Returning that via HTTP
* Saving the resulting page to disk

Since I'm using the bash 'time' builtin, all of this is counted. (I'm using the 'real' figure here; the 'user' and 'sys' figures are of course zero, or as close as makes no odds - it takes no CPU to do this.)

The files in question are actually stored on Huix. Queries to that server therefore require a local directory listing; queries to sikorsky involve an sshfs directory listing, and those to yosemite use NetBIOS. (Yosemite runs Windows XP; Huix and Sikorsky are running Linux.)

My figures do have one peculiar outlier. Queries from Sikorsky to Yosemite were consistently taking 4-5 seconds; identical queries from Huix to Yosemite were in line with the rest of the data. I have no idea why this is.

So, the figures! Every figure I could get for talking to a Linux server (either Huix or Sikorsky) was between 7ms and 16ms. (Any particular combination of client and server is fairly stable, eg sikorsky -> sikorsky is consistently 8ms.) Talking to the Windows server, aside from the crazy outlier, varied from 22ms to 29ms. Considering that the Windows lookups involve NetBIOS, I'm not particularly surprised; there's a bit of cost there.

That's the entire round-trip cost. A query from Sikorsky to Yosemite involves three computers (the client, the server, and the file server), and - the unexplained outlier aside - it completes in under 30 milliseconds. That still leaves you 70 milliseconds to render the page to the user and remain within the ~100ms usually allowed for an immediate action. In the localhost case mentioned above (sikorsky -> sikorsky), that figure comes down to just 8ms - just *eight milliseconds* for a query involving two servers - so I have to conclude that HTTP is plenty fast enough for a UI. I have seen a number of desktop applications that can't beat that kind of response time.

There, thou hast it all, Master Wilfred. Make the most of it. :)

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list
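
For anyone who wants to reproduce this sort of measurement without wget, below is a rough, self-contained sketch in Python 3 (standard library only). It is not the Yosemite code itself, just the same shape of test: a single-threaded HTTP server producing a directory listing, plus a client timing a single localhost GET. The port (3003) and the idea come from the test described above; everything else is illustrative. It leaves out the DNS lookup, the remote file system, and wget's own process start-up, so treat the number it prints as a lower bound on the moving parts.

    # Not the actual Yosemite script - just a sketch of the same kind of test:
    # a single-threaded HTTP server serving a directory listing, and a client
    # timing one round trip over localhost.
    import threading
    import time
    import urllib.request
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    PORT = 3003  # same port as in the test above; any free port will do

    # SimpleHTTPRequestHandler serves files from the current directory and
    # generates a directory listing when the requested path is a directory.
    server = HTTPServer(("127.0.0.1", PORT), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    start = time.time()
    with urllib.request.urlopen("http://127.0.0.1:%d/" % PORT) as response:
        body = response.read()  # pull down the whole listing, as wget would
    elapsed = time.time() - start

    print("%d bytes in %.2f ms" % (len(body), elapsed * 1000.0))
    server.shutdown()

The server runs in a background thread only so that one script can play both roles; the request handling itself is still a single thread, as in the test above.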