On 27Aug2012 13:41, bruceg113...@gmail.com <bruceg113...@gmail.com> wrote:
| When using the database on my C Drive, Sqlite performance is great! (<1S)
| When using the database on a network, Sqlite performance is terrible! (17S)

Let me first echo everyone saying not to use SQLite on a network file.

| I like your idea of trying Python 2.7

I doubt it will change anything.

| Finally, the way my program is written is:
|
|   loop for all database records:
|     read a database record
|     process data
|     display data (via wxPython)
|
| Perhaps, this is a better approach:
|
|   read all database records
|   loop for all records:
|     process data
|     display data (via wxPython)

Yes, provided the "read all database records" is a single SELECT
statement.

In general, with any kind of remote resource you want to minimise the
number of transactions - the to-and-fro part - because each exchange
incurs latency while something is sent to, and received back from, the
far end. So if you can say "give me all the records" you pay one "unit"
of latency at the start and end, versus latency around each individual
record fetch. There's a small sketch of that shape just below.
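To make it concrete, here's a minimal sketch of the single-SELECT
approach. The table name, column names, database path and the
process/display stubs are all made up - adapt them to your schema and
your wxPython code:

    import sqlite3

    def process(name, value):
        # stand-in for your real processing step
        return "%s=%s" % (name, value)

    def display(text):
        # stand-in for your wxPython display step
        print(text)

    conn = sqlite3.connect("/path/to/your.db")   # hypothetical path
    try:
        cursor = conn.cursor()
        # one statement, one bulk fetch: a single burst of file/network I/O
        cursor.execute("SELECT name, value FROM records")
        rows = cursor.fetchall()
    finally:
        conn.close()

    # from here on everything is in memory - no per-record fetches
    for name, value in rows:
        display(process(name, value))

The point is that fetchall() pulls everything in one go, so the
process/display loop never touches the database at all.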
Having said all that: because SQLite works directly against the file,
if you say to it "give me all the records" and the file is remote,
SQLite will probably _still_ fetch each record individually internally,
gaining you little.

This is why people are suggesting a database "server": then you can say
"get me all the records" over the net, and the server does
local-to-the-server file access to obtain the data. So all the "per
record" latency is at its end, and very small. Not to mention any
caching it may do.

Of course, if your requirements are very simple you might be better off
with a flat text file, possibly in CSV format, and avoid SQLite
altogether. There's a small sketch of that at the end of this message.

Cheers,
--
Cameron Simpson <c...@zip.com.au>

I do not trust thee, Cage from Hell, / The reason why I cannot tell, /
But this I know, and know full well: / I do not trust thee, Cage from
Hell.
- Leigh Ann Hussey, leigh...@sybase.com, DoD#5913
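P.S.: the flat-file idea, equally minimal. This assumes a made-up
"records.csv" with a header line naming the columns; the whole file is
read in one sequential pass, which is about as network-friendly as file
access gets:

    import csv

    def process_and_display(row):
        # stand-in for your real processing/display steps
        print("%(name)s=%(value)s" % row)

    with open("records.csv") as f:
        rows = list(csv.DictReader(f))   # one sequential read of the file

    for row in rows:
        process_and_display(row)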