On Mon, Mar 2, 2015 at 8:21 PM, Richard Gaskin <ambassa...@fourthworld.com> wrote:
> Local DB performance like that makes a good case for working with text
> files. :)
>
> How many records are in there? Complex indices? What could account for
> so much time to connect locally?

It *is* entirely text. The "remote" Postgres database has a single table, full of insertion commands into three tables. One has ~500 rows with about 7 or 8 columns, another has a few dozen rows with about 6 columns, and another typically a couple of dozen rows with something like 50 columns.

I've seriously reduced the number of transactions, but there are still several separate DB queries to get a file open and sanity-checked, and to bring in a few small tables. It's the latency that's expensive for me; the queries are cheap. For a fairly complicated debtor, the gzipped list of commands is 24 KB in my backup file; 16 KB is more typical.

The remote query, however, going from my N wireless to the Airport Express to the cable box, then across town on Cox to the office, into a cable box, and wirelessly through an Airport Express to my iMac, seems to typically take 200-250 ms. Now, when I was doing that on exiting every field, over clear wireless internet to the RunRev servers, it was more like 1-2 *seconds* per transaction . . .

Thus my interest in going asynchronous with sockets . . .

-- 
Dr. Richard E. Hawkins, Esq.
(702) 508-8462

_______________________________________________
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode
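The "latency is expensive, queries are cheap" point above can be illustrated with a quick back-of-the-envelope sketch. The round-trip time is the 200-250 ms figure from the post; the per-query server-side cost is an assumed placeholder, since the post only says the queries are "cheap":

```python
# Back-of-the-envelope: when each query waits for its own round trip,
# total time grows with the number of round trips, not with query cost.
# rtt_ms is taken from the 200-250 ms figure in the post; query_ms is
# an assumed placeholder for the "cheap" server-side query time.
rtt_ms = 225
query_ms = 2

for n_queries in (1, 5, 10):
    total_ms = n_queries * (rtt_ms + query_ms)
    print(f"{n_queries} sequential queries: ~{total_ms} ms")
```

This is why batching the queries, or issuing them asynchronously over a socket so the round trips overlap, helps far more here than making any individual query faster.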